DPDK patches and discussions
* [RFC PATCH v1 0/4] dts: add dts api docs
@ 2023-03-23 10:40 Juraj Linkeš
  2023-03-23 10:40 ` [RFC PATCH v1 1/4] dts: code adjustments for sphinx Juraj Linkeš
From: Juraj Linkeš @ 2023-03-23 10:40 UTC
  To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	wathsala.vithanage, jspewock
  Cc: dev, Juraj Linkeš

Augment the Meson build system with DTS API doc generation. The API
docs are generated from the Python docstrings in DTS using Sphinx. The
format of choice is the Google format [0].

The Sphinx configuration of the guides HTML build is reused to
preserve the same style.

The build requires the same Python version and dependencies as DTS,
because Sphinx imports the Python modules. Dependencies are installed
using Poetry from the dts directory:

poetry install --with docs

After installing, enter the Poetry shell:

poetry shell

And then run the build:

ninja -C <meson_build_dir> doc

There's only one properly documented module that serves as a
demonstration of the style: framework.testbed_model.node.
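
For illustration, a docstring in that format looks roughly like this
(a made-up example, not an excerpt from node.py):

    def attach_device(self, name: str) -> None:
        """Attach a device to the node.

        Args:
            name: The name of the device to attach.

        Raises:
            ConfigurationError: If the device isn't available on the
                node.
        """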

I haven't figured out how to separate the DTS docs build from the rest
of the docs, which I think is required because of the different
dependencies. I thought the enable_docs option would do this, so I
added enable_dts_docs, but it doesn't seem to be working. Requesting
comments on this.
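
For context, the new option in meson_options.txt would look along
these lines (a sketch; the exact wording in the patch may differ):

    option('enable_dts_docs', type: 'boolean', value: false,
        description: 'build DTS API documentation')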

[0] https://google.github.io/styleguide/pyguide.html#s3.8.4-comments-in-classes

Juraj Linkeš (4):
  dts: code adjustments for sphinx
  dts: add doc generation dependencies
  dts: add doc generation
  dts: format docstrings to google format

 doc/api/meson.build                           |   1 +
 doc/guides/conf.py                            |  22 +-
 doc/guides/meson.build                        |   1 +
 doc/guides/tools/dts.rst                      |  29 +
 doc/meson.build                               |   5 -
 dts/doc-index.rst                             |  20 +
 dts/framework/config/__init__.py              |  11 +
 .../{testbed_model/hw => config}/cpu.py       |  13 +
 dts/framework/dts.py                          |   8 +-
 dts/framework/remote_session/__init__.py      |   3 +-
 dts/framework/remote_session/linux_session.py |   2 +-
 dts/framework/remote_session/os_session.py    |  12 +-
 .../remote_session/remote/__init__.py         |  16 -
 .../{remote => }/remote_session.py            |   0
 .../{remote => }/ssh_session.py               |   0
 dts/framework/settings.py                     |  55 +-
 dts/framework/testbed_model/__init__.py       |  10 +-
 dts/framework/testbed_model/hw/__init__.py    |  27 -
 dts/framework/testbed_model/node.py           | 164 ++--
 dts/framework/testbed_model/sut_node.py       |   9 +-
 .../testbed_model/{hw => }/virtual_device.py  |   0
 dts/main.py                                   |   3 +-
 dts/meson.build                               |  50 ++
 dts/poetry.lock                               | 770 ++++++++++++++++--
 dts/pyproject.toml                            |   7 +
 dts/tests/TestSuite_hello_world.py            |   6 +-
 meson.build                                   |   6 +
 meson_options.txt                             |   2 +
 28 files changed, 1027 insertions(+), 225 deletions(-)
 create mode 100644 dts/doc-index.rst
 rename dts/framework/{testbed_model/hw => config}/cpu.py (95%)
 delete mode 100644 dts/framework/remote_session/remote/__init__.py
 rename dts/framework/remote_session/{remote => }/remote_session.py (100%)
 rename dts/framework/remote_session/{remote => }/ssh_session.py (100%)
 delete mode 100644 dts/framework/testbed_model/hw/__init__.py
 rename dts/framework/testbed_model/{hw => }/virtual_device.py (100%)
 create mode 100644 dts/meson.build

-- 
2.30.2



* [RFC PATCH v1 1/4] dts: code adjustments for sphinx
  2023-03-23 10:40 [RFC PATCH v1 0/4] dts: add dts api docs Juraj Linkeš
@ 2023-03-23 10:40 ` Juraj Linkeš
  2023-03-23 10:40 ` [RFC PATCH v1 2/4] dts: add doc generation dependencies Juraj Linkeš
From: Juraj Linkeš @ 2023-03-23 10:40 UTC
  To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	wathsala.vithanage, jspewock
  Cc: dev, Juraj Linkeš

sphinx-build only imports the Python modules when building the
documentation; it doesn't run DTS. This requires changes that make the
code importable without running it:
* Argument parsing is properly guarded in the if __name__ == '__main__'
  block.
* The logger used by the DTS runner underwent the same treatment so
  that it doesn't create unnecessary log files on import.
* DTS uses the parsed arguments to construct an object holding global
  variables, so the defaults for those variables had to be moved out of
  argument parsing and into the settings object itself.
* Importing the remote_session module from framework resulted in
  circular imports because one module tried to import another within
  the package. This is fixed by more granular imports.
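
A condensed sketch of the resulting entry point (the __main__ guard
itself lies outside the diff hunks below, so this is illustrative
rather than a verbatim excerpt):

    # dts/main.py
    from framework import dts, settings

    def main() -> None:
        # parse arguments only when DTS is actually run, so that
        # sphinx-build can import the framework without side effects
        settings.set_settings()
        dts.run_all()

    if __name__ == "__main__":
        main()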

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/config/__init__.py              | 11 ++++
 .../{testbed_model/hw => config}/cpu.py       | 13 +++++
 dts/framework/dts.py                          |  8 ++-
 dts/framework/remote_session/__init__.py      |  3 +-
 dts/framework/remote_session/linux_session.py |  2 +-
 dts/framework/remote_session/os_session.py    | 12 +++-
 .../remote_session/remote/__init__.py         | 16 ------
 .../{remote => }/remote_session.py            |  0
 .../{remote => }/ssh_session.py               |  0
 dts/framework/settings.py                     | 55 ++++++++++---------
 dts/framework/testbed_model/__init__.py       | 10 +---
 dts/framework/testbed_model/hw/__init__.py    | 27 ---------
 dts/framework/testbed_model/node.py           | 12 ++--
 dts/framework/testbed_model/sut_node.py       |  9 ++-
 .../testbed_model/{hw => }/virtual_device.py  |  0
 dts/main.py                                   |  3 +-
 dts/tests/TestSuite_hello_world.py            |  6 +-
 17 files changed, 88 insertions(+), 99 deletions(-)
 rename dts/framework/{testbed_model/hw => config}/cpu.py (95%)
 delete mode 100644 dts/framework/remote_session/remote/__init__.py
 rename dts/framework/remote_session/{remote => }/remote_session.py (100%)
 rename dts/framework/remote_session/{remote => }/ssh_session.py (100%)
 delete mode 100644 dts/framework/testbed_model/hw/__init__.py
 rename dts/framework/testbed_model/{hw => }/virtual_device.py (100%)

diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index ebb0823ff5..293c4cb15b 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -7,6 +7,8 @@
 Yaml config parsing methods
 """
 
+# pylama:ignore=W0611
+
 import json
 import os.path
 import pathlib
@@ -19,6 +21,15 @@
 
 from framework.settings import SETTINGS
 
+from .cpu import (
+    LogicalCore,
+    LogicalCoreCount,
+    LogicalCoreCountFilter,
+    LogicalCoreList,
+    LogicalCoreListFilter,
+    lcore_filter,
+)
+
 
 class StrEnum(Enum):
     @staticmethod
diff --git a/dts/framework/testbed_model/hw/cpu.py b/dts/framework/config/cpu.py
similarity index 95%
rename from dts/framework/testbed_model/hw/cpu.py
rename to dts/framework/config/cpu.py
index d1918a12dc..8fe785dfe4 100644
--- a/dts/framework/testbed_model/hw/cpu.py
+++ b/dts/framework/config/cpu.py
@@ -272,3 +272,16 @@ def filter(self) -> list[LogicalCore]:
             )
 
         return filtered_lcores
+
+
+def lcore_filter(
+    core_list: list[LogicalCore],
+    filter_specifier: LogicalCoreCount | LogicalCoreList,
+    ascending: bool,
+) -> LogicalCoreFilter:
+    if isinstance(filter_specifier, LogicalCoreList):
+        return LogicalCoreListFilter(core_list, filter_specifier, ascending)
+    elif isinstance(filter_specifier, LogicalCoreCount):
+        return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
+    else:
+        raise ValueError(f"Unsupported filter {filter_specifier!r}")
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index 0502284580..22a09b7e34 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -3,6 +3,7 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
+import logging
 import sys
 
 from .config import CONFIGURATION, BuildTargetConfiguration, ExecutionConfiguration
@@ -12,7 +13,8 @@
 from .testbed_model import SutNode
 from .utils import check_dts_python_version
 
-dts_logger: DTSLOG = getLogger("DTSRunner")
+# dummy defaults to satisfy linters
+dts_logger: DTSLOG | logging.Logger = logging.getLogger("DTSRunner")
 result: DTSResult = DTSResult(dts_logger)
 
 
@@ -24,6 +26,10 @@ def run_all() -> None:
     global dts_logger
     global result
 
+    # create a regular DTS logger and create a new result with it
+    dts_logger = getLogger("DTSRunner")
+    result = DTSResult(dts_logger)
+
     # check the python version of the server that run dts
     check_dts_python_version()
 
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index ee221503df..17ca1459f7 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -17,7 +17,8 @@
 
 from .linux_session import LinuxSession
 from .os_session import OSSession
-from .remote import CommandResult, RemoteSession, SSHSession
+from .remote_session import CommandResult, RemoteSession
+from .ssh_session import SSHSession
 
 
 def create_session(
diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/remote_session/linux_session.py
index a1e3bc3a92..c8ce5fe6da 100644
--- a/dts/framework/remote_session/linux_session.py
+++ b/dts/framework/remote_session/linux_session.py
@@ -2,8 +2,8 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+from framework.config import LogicalCore
 from framework.exception import RemoteCommandExecutionError
-from framework.testbed_model import LogicalCore
 from framework.utils import expand_range
 
 from .posix_session import PosixSession
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
index 4c48ae2567..246f0358ea 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/remote_session/os_session.py
@@ -6,13 +6,13 @@
 from collections.abc import Iterable
 from pathlib import PurePath
 
-from framework.config import Architecture, NodeConfiguration
+from framework.config import Architecture, LogicalCore, NodeConfiguration
 from framework.logger import DTSLOG
 from framework.settings import SETTINGS
-from framework.testbed_model import LogicalCore
 from framework.utils import EnvVarsDict, MesonArgs
 
-from .remote import CommandResult, RemoteSession, create_remote_session
+from .remote_session import CommandResult, RemoteSession
+from .ssh_session import SSHSession
 
 
 class OSSession(ABC):
@@ -173,3 +173,9 @@ def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
         if needed and mount the hugepages if needed.
         If force_first_numa is True, configure hugepages just on the first socket.
         """
+
+
+def create_remote_session(
+    node_config: NodeConfiguration, name: str, logger: DTSLOG
+) -> RemoteSession:
+    return SSHSession(node_config, name, logger)
diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
deleted file mode 100644
index 8a1512210a..0000000000
--- a/dts/framework/remote_session/remote/__init__.py
+++ /dev/null
@@ -1,16 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2023 PANTHEON.tech s.r.o.
-
-# pylama:ignore=W0611
-
-from framework.config import NodeConfiguration
-from framework.logger import DTSLOG
-
-from .remote_session import CommandResult, RemoteSession
-from .ssh_session import SSHSession
-
-
-def create_remote_session(
-    node_config: NodeConfiguration, name: str, logger: DTSLOG
-) -> RemoteSession:
-    return SSHSession(node_config, name, logger)
diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote_session.py
similarity index 100%
rename from dts/framework/remote_session/remote/remote_session.py
rename to dts/framework/remote_session/remote_session.py
diff --git a/dts/framework/remote_session/remote/ssh_session.py b/dts/framework/remote_session/ssh_session.py
similarity index 100%
rename from dts/framework/remote_session/remote/ssh_session.py
rename to dts/framework/remote_session/ssh_session.py
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 71955f4581..144f9dea62 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -6,7 +6,7 @@
 import argparse
 import os
 from collections.abc import Callable, Iterable, Sequence
-from dataclasses import dataclass
+from dataclasses import dataclass, field
 from pathlib import Path
 from typing import Any, TypeVar
 
@@ -59,15 +59,18 @@ def __call__(
 
 @dataclass(slots=True, frozen=True)
 class _Settings:
-    config_file_path: str
-    output_dir: str
-    timeout: float
-    verbose: bool
-    skip_setup: bool
-    dpdk_tarball_path: Path
-    compile_timeout: float
-    test_cases: list
-    re_run: int
+    config_file_path: Path = Path(Path(__file__).parent.parent, "conf.yaml")
+    output_dir: str = "output"
+    timeout: float = 15
+    verbose: bool = False
+    skip_setup: bool = False
+    dpdk_tarball_path: Path | str = "dpdk.tar.xz"
+    compile_timeout: float = 1200
+    test_cases: list[str] = field(default_factory=list)
+    re_run: int = 0
+
+
+SETTINGS: _Settings = _Settings()
 
 
 def _get_parser() -> argparse.ArgumentParser:
@@ -81,7 +84,8 @@ def _get_parser() -> argparse.ArgumentParser:
     parser.add_argument(
         "--config-file",
         action=_env_arg("DTS_CFG_FILE"),
-        default="conf.yaml",
+        default=SETTINGS.config_file_path,
+        type=Path,
         help="[DTS_CFG_FILE] configuration file that describes the test cases, SUTs "
         "and targets.",
     )
@@ -90,7 +94,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--output-dir",
         "--output",
         action=_env_arg("DTS_OUTPUT_DIR"),
-        default="output",
+        default=SETTINGS.output_dir,
         help="[DTS_OUTPUT_DIR] Output directory where dts logs and results are saved.",
     )
 
@@ -98,7 +102,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "-t",
         "--timeout",
         action=_env_arg("DTS_TIMEOUT"),
-        default=15,
+        default=SETTINGS.timeout,
         type=float,
         help="[DTS_TIMEOUT] The default timeout for all DTS operations except for "
         "compiling DPDK.",
@@ -108,7 +112,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "-v",
         "--verbose",
         action=_env_arg("DTS_VERBOSE"),
-        default="N",
+        default=SETTINGS.verbose,
         help="[DTS_VERBOSE] Set to 'Y' to enable verbose output, logging all messages "
         "to the console.",
     )
@@ -117,7 +121,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "-s",
         "--skip-setup",
         action=_env_arg("DTS_SKIP_SETUP"),
-        default="N",
+        default=SETTINGS.skip_setup,
         help="[DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT and TG nodes.",
     )
 
@@ -125,7 +129,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--tarball",
         "--snapshot",
         action=_env_arg("DTS_DPDK_TARBALL"),
-        default="dpdk.tar.xz",
+        default=SETTINGS.dpdk_tarball_path,
         type=Path,
         help="[DTS_DPDK_TARBALL] Path to DPDK source code tarball "
         "which will be used in testing.",
@@ -134,7 +138,7 @@ def _get_parser() -> argparse.ArgumentParser:
     parser.add_argument(
         "--compile-timeout",
         action=_env_arg("DTS_COMPILE_TIMEOUT"),
-        default=1200,
+        default=SETTINGS.compile_timeout,
         type=float,
         help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
     )
@@ -142,8 +146,9 @@ def _get_parser() -> argparse.ArgumentParser:
     parser.add_argument(
         "--test-cases",
         action=_env_arg("DTS_TESTCASES"),
-        default="",
-        help="[DTS_TESTCASES] Comma-separated list of test cases to execute. "
+        nargs="*",
+        default=SETTINGS.test_cases,
+        help="[DTS_TESTCASES] A list of test cases to execute. "
         "Unknown test cases will be silently ignored.",
     )
 
@@ -151,7 +156,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--re-run",
         "--re_run",
         action=_env_arg("DTS_RERUN"),
-        default=0,
+        default=SETTINGS.re_run,
         type=int,
         help="[DTS_RERUN] Re-run each test case the specified amount of times "
         "if a test failure occurs",
@@ -165,10 +170,11 @@ def _check_tarball_path(parsed_args: argparse.Namespace) -> None:
         raise ConfigurationError(f"DPDK tarball '{parsed_args.tarball}' doesn't exist.")
 
 
-def _get_settings() -> _Settings:
+def set_settings() -> None:
+    global SETTINGS
     parsed_args = _get_parser().parse_args()
     _check_tarball_path(parsed_args)
-    return _Settings(
+    SETTINGS = _Settings(
         config_file_path=parsed_args.config_file,
         output_dir=parsed_args.output_dir,
         timeout=parsed_args.timeout,
@@ -176,9 +182,6 @@ def _get_settings() -> _Settings:
         skip_setup=(parsed_args.skip_setup == "Y"),
         dpdk_tarball_path=parsed_args.tarball,
         compile_timeout=parsed_args.compile_timeout,
-        test_cases=parsed_args.test_cases.split(",") if parsed_args.test_cases else [],
+        test_cases=parsed_args.test_cases,
         re_run=parsed_args.re_run,
     )
-
-
-SETTINGS: _Settings = _get_settings()
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index f54a947051..148f81993d 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -9,14 +9,6 @@
 
 # pylama:ignore=W0611
 
-from .hw import (
-    LogicalCore,
-    LogicalCoreCount,
-    LogicalCoreCountFilter,
-    LogicalCoreList,
-    LogicalCoreListFilter,
-    VirtualDevice,
-    lcore_filter,
-)
 from .node import Node
 from .sut_node import SutNode
+from .virtual_device import VirtualDevice
diff --git a/dts/framework/testbed_model/hw/__init__.py b/dts/framework/testbed_model/hw/__init__.py
deleted file mode 100644
index 88ccac0b0e..0000000000
--- a/dts/framework/testbed_model/hw/__init__.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2023 PANTHEON.tech s.r.o.
-
-# pylama:ignore=W0611
-
-from .cpu import (
-    LogicalCore,
-    LogicalCoreCount,
-    LogicalCoreCountFilter,
-    LogicalCoreFilter,
-    LogicalCoreList,
-    LogicalCoreListFilter,
-)
-from .virtual_device import VirtualDevice
-
-
-def lcore_filter(
-    core_list: list[LogicalCore],
-    filter_specifier: LogicalCoreCount | LogicalCoreList,
-    ascending: bool,
-) -> LogicalCoreFilter:
-    if isinstance(filter_specifier, LogicalCoreList):
-        return LogicalCoreListFilter(core_list, filter_specifier, ascending)
-    elif isinstance(filter_specifier, LogicalCoreCount):
-        return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
-    else:
-        raise ValueError(f"Unsupported filter r{filter_specifier}")
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index d48fafe65d..90467981c3 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -12,19 +12,17 @@
 from framework.config import (
     BuildTargetConfiguration,
     ExecutionConfiguration,
-    NodeConfiguration,
-)
-from framework.logger import DTSLOG, getLogger
-from framework.remote_session import OSSession, create_session
-from framework.settings import SETTINGS
-
-from .hw import (
     LogicalCore,
     LogicalCoreCount,
     LogicalCoreList,
     LogicalCoreListFilter,
+    NodeConfiguration,
     lcore_filter,
 )
+from framework.logger import DTSLOG, getLogger
+from framework.remote_session import create_session
+from framework.remote_session.os_session import OSSession
+from framework.settings import SETTINGS
 
 
 class Node(object):
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 2b2b50d982..6db4a505bb 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -7,13 +7,18 @@
 import time
 from pathlib import PurePath
 
-from framework.config import BuildTargetConfiguration, NodeConfiguration
+from framework.config import (
+    BuildTargetConfiguration,
+    LogicalCoreCount,
+    LogicalCoreList,
+    NodeConfiguration,
+)
 from framework.remote_session import CommandResult, OSSession
 from framework.settings import SETTINGS
 from framework.utils import EnvVarsDict, MesonArgs
 
-from .hw import LogicalCoreCount, LogicalCoreList, VirtualDevice
 from .node import Node
+from .virtual_device import VirtualDevice
 
 
 class SutNode(Node):
diff --git a/dts/framework/testbed_model/hw/virtual_device.py b/dts/framework/testbed_model/virtual_device.py
similarity index 100%
rename from dts/framework/testbed_model/hw/virtual_device.py
rename to dts/framework/testbed_model/virtual_device.py
diff --git a/dts/main.py b/dts/main.py
index 43311fa847..060ff1b19a 100755
--- a/dts/main.py
+++ b/dts/main.py
@@ -10,10 +10,11 @@
 
 import logging
 
-from framework import dts
+from framework import dts, settings
 
 
 def main() -> None:
+    settings.set_settings()
     dts.run_all()
 
 
diff --git a/dts/tests/TestSuite_hello_world.py b/dts/tests/TestSuite_hello_world.py
index 7e3d95c0cf..96c31a6c8c 100644
--- a/dts/tests/TestSuite_hello_world.py
+++ b/dts/tests/TestSuite_hello_world.py
@@ -6,12 +6,8 @@
 No other EAL parameters apart from cores are used.
 """
 
+from framework.config import LogicalCoreCount, LogicalCoreCountFilter, LogicalCoreList
 from framework.test_suite import TestSuite
-from framework.testbed_model import (
-    LogicalCoreCount,
-    LogicalCoreCountFilter,
-    LogicalCoreList,
-)
 
 
 class TestHelloWorld(TestSuite):
-- 
2.30.2



* [RFC PATCH v1 2/4] dts: add doc generation dependencies
  2023-03-23 10:40 [RFC PATCH v1 0/4] dts: add dts api docs Juraj Linkeš
  2023-03-23 10:40 ` [RFC PATCH v1 1/4] dts: code adjustments for sphinx Juraj Linkeš
@ 2023-03-23 10:40 ` Juraj Linkeš
  2023-03-23 10:40 ` [RFC PATCH v1 3/4] dts: add doc generation Juraj Linkeš
From: Juraj Linkeš @ 2023-03-23 10:40 UTC
  To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	wathsala.vithanage, jspewock
  Cc: dev, Juraj Linkeš

Sphinx imports every Python module when generating documentation from
docstrings, meaning all DTS dependencies, including the Python version,
must be satisfied.
Adding Sphinx to the DTS dependencies ensures that the proper Python
version and dependencies are used when Sphinx is executed.
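
For reference, an optional Poetry dependency group looks roughly like
this (a sketch; the exact packages and version pins used by the patch
may differ):

    [tool.poetry.group.docs]
    optional = true

    [tool.poetry.group.docs.dependencies]
    sphinx = "*"

It is then installed on demand with poetry install --with docs, as
described in the cover letter.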

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/poetry.lock    | 770 +++++++++++++++++++++++++++++++++++++++++----
 dts/pyproject.toml |   7 +
 2 files changed, 710 insertions(+), 67 deletions(-)

diff --git a/dts/poetry.lock b/dts/poetry.lock
index 0b2a007d4d..500f89dac1 100644
--- a/dts/poetry.lock
+++ b/dts/poetry.lock
@@ -1,24 +1,69 @@
+# This file is automatically @generated by Poetry and should not be changed by hand.
+
+[[package]]
+name = "alabaster"
+version = "0.7.13"
+description = "A configurable sidebar-enabled Sphinx theme"
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+    {file = "alabaster-0.7.13-py3-none-any.whl", hash = "sha256:1ee19aca801bbabb5ba3f5f258e4422dfa86f82f3e9cefb0859b283cdd7f62a3"},
+    {file = "alabaster-0.7.13.tar.gz", hash = "sha256:a27a4a084d5e690e16e01e03ad2b2e552c61a65469419b907243193de1a84ae2"},
+]
+
 [[package]]
 name = "attrs"
-version = "22.1.0"
+version = "22.2.0"
 description = "Classes Without Boilerplate"
 category = "main"
 optional = false
-python-versions = ">=3.5"
+python-versions = ">=3.6"
+files = [
+    {file = "attrs-22.2.0-py3-none-any.whl", hash = "sha256:29e95c7f6778868dbd49170f98f8818f78f3dc5e0e37c0b1f474e3561b240836"},
+    {file = "attrs-22.2.0.tar.gz", hash = "sha256:c9227bfc2f01993c03f68db37d1d15c9690188323c067c641f1a35ca58185f99"},
+]
 
 [package.extras]
-dev = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "zope.interface", "furo", "sphinx", "sphinx-notfound-page", "pre-commit", "cloudpickle"]
-docs = ["furo", "sphinx", "zope.interface", "sphinx-notfound-page"]
-tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "zope.interface", "cloudpickle"]
-tests_no_zope = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "cloudpickle"]
+cov = ["attrs[tests]", "coverage-enable-subprocess", "coverage[toml] (>=5.3)"]
+dev = ["attrs[docs,tests]"]
+docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-towncrier", "towncrier", "zope.interface"]
+tests = ["attrs[tests-no-zope]", "zope.interface"]
+tests-no-zope = ["cloudpickle", "cloudpickle", "hypothesis", "hypothesis", "mypy (>=0.971,<0.990)", "mypy (>=0.971,<0.990)", "pympler", "pympler", "pytest (>=4.3.0)", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-mypy-plugins", "pytest-xdist[psutil]", "pytest-xdist[psutil]"]
+
+[[package]]
+name = "babel"
+version = "2.12.1"
+description = "Internationalization utilities"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "Babel-2.12.1-py3-none-any.whl", hash = "sha256:b4246fb7677d3b98f501a39d43396d3cafdc8eadb045f4a31be01863f655c610"},
+    {file = "Babel-2.12.1.tar.gz", hash = "sha256:cc2d99999cd01d44420ae725a21c9e3711b3aadc7976d6147f622d8581963455"},
+]
 
 [[package]]
 name = "black"
-version = "22.10.0"
+version = "22.12.0"
 description = "The uncompromising code formatter."
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "black-22.12.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9eedd20838bd5d75b80c9f5487dbcb06836a43833a37846cf1d8c1cc01cef59d"},
+    {file = "black-22.12.0-cp310-cp310-win_amd64.whl", hash = "sha256:159a46a4947f73387b4d83e87ea006dbb2337eab6c879620a3ba52699b1f4351"},
+    {file = "black-22.12.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d30b212bffeb1e252b31dd269dfae69dd17e06d92b87ad26e23890f3efea366f"},
+    {file = "black-22.12.0-cp311-cp311-win_amd64.whl", hash = "sha256:7412e75863aa5c5411886804678b7d083c7c28421210180d67dfd8cf1221e1f4"},
+    {file = "black-22.12.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c116eed0efb9ff870ded8b62fe9f28dd61ef6e9ddd28d83d7d264a38417dcee2"},
+    {file = "black-22.12.0-cp37-cp37m-win_amd64.whl", hash = "sha256:1f58cbe16dfe8c12b7434e50ff889fa479072096d79f0a7f25e4ab8e94cd8350"},
+    {file = "black-22.12.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:77d86c9f3db9b1bf6761244bc0b3572a546f5fe37917a044e02f3166d5aafa7d"},
+    {file = "black-22.12.0-cp38-cp38-win_amd64.whl", hash = "sha256:82d9fe8fee3401e02e79767016b4907820a7dc28d70d137eb397b92ef3cc5bfc"},
+    {file = "black-22.12.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:101c69b23df9b44247bd88e1d7e90154336ac4992502d4197bdac35dd7ee3320"},
+    {file = "black-22.12.0-cp39-cp39-win_amd64.whl", hash = "sha256:559c7a1ba9a006226f09e4916060982fd27334ae1998e7a38b3f33a37f7a2148"},
+    {file = "black-22.12.0-py3-none-any.whl", hash = "sha256:436cc9167dd28040ad90d3b404aec22cedf24a6e4d7de221bec2730ec0c97bcf"},
+    {file = "black-22.12.0.tar.gz", hash = "sha256:229351e5a18ca30f447bf724d007f890f97e13af070bb6ad4c0a441cd7596a2f"},
+]
 
 [package.dependencies]
 click = ">=8.0.0"
@@ -33,6 +78,103 @@ d = ["aiohttp (>=3.7.4)"]
 jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
 uvloop = ["uvloop (>=0.15.2)"]
 
+[[package]]
+name = "certifi"
+version = "2022.12.7"
+description = "Python package for providing Mozilla's CA Bundle."
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+    {file = "certifi-2022.12.7-py3-none-any.whl", hash = "sha256:4ad3232f5e926d6718ec31cfc1fcadfde020920e278684144551c91769c7bc18"},
+    {file = "certifi-2022.12.7.tar.gz", hash = "sha256:35824b4c3a97115964b408844d64aa14db1cc518f6562e8d7261699d1350a9e3"},
+]
+
+[[package]]
+name = "charset-normalizer"
+version = "3.1.0"
+description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
+category = "dev"
+optional = false
+python-versions = ">=3.7.0"
+files = [
+    {file = "charset-normalizer-3.1.0.tar.gz", hash = "sha256:34e0a2f9c370eb95597aae63bf85eb5e96826d81e3dcf88b8886012906f509b5"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:e0ac8959c929593fee38da1c2b64ee9778733cdf03c482c9ff1d508b6b593b2b"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d7fc3fca01da18fbabe4625d64bb612b533533ed10045a2ac3dd194bfa656b60"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:04eefcee095f58eaabe6dc3cc2262f3bcd776d2c67005880894f447b3f2cb9c1"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:20064ead0717cf9a73a6d1e779b23d149b53daf971169289ed2ed43a71e8d3b0"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1435ae15108b1cb6fffbcea2af3d468683b7afed0169ad718451f8db5d1aff6f"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c84132a54c750fda57729d1e2599bb598f5fa0344085dbde5003ba429a4798c0"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:75f2568b4189dda1c567339b48cba4ac7384accb9c2a7ed655cd86b04055c795"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:11d3bcb7be35e7b1bba2c23beedac81ee893ac9871d0ba79effc7fc01167db6c"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:891cf9b48776b5c61c700b55a598621fdb7b1e301a550365571e9624f270c203"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:5f008525e02908b20e04707a4f704cd286d94718f48bb33edddc7d7b584dddc1"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:b06f0d3bf045158d2fb8837c5785fe9ff9b8c93358be64461a1089f5da983137"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:49919f8400b5e49e961f320c735388ee686a62327e773fa5b3ce6721f7e785ce"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:22908891a380d50738e1f978667536f6c6b526a2064156203d418f4856d6e86a"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-win32.whl", hash = "sha256:12d1a39aa6b8c6f6248bb54550efcc1c38ce0d8096a146638fd4738e42284448"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-win_amd64.whl", hash = "sha256:65ed923f84a6844de5fd29726b888e58c62820e0769b76565480e1fdc3d062f8"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:9a3267620866c9d17b959a84dd0bd2d45719b817245e49371ead79ed4f710d19"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6734e606355834f13445b6adc38b53c0fd45f1a56a9ba06c2058f86893ae8017"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:f8303414c7b03f794347ad062c0516cee0e15f7a612abd0ce1e25caf6ceb47df"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:aaf53a6cebad0eae578f062c7d462155eada9c172bd8c4d250b8c1d8eb7f916a"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3dc5b6a8ecfdc5748a7e429782598e4f17ef378e3e272eeb1340ea57c9109f41"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e1b25e3ad6c909f398df8921780d6a3d120d8c09466720226fc621605b6f92b1"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0ca564606d2caafb0abe6d1b5311c2649e8071eb241b2d64e75a0d0065107e62"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b82fab78e0b1329e183a65260581de4375f619167478dddab510c6c6fb04d9b6"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:bd7163182133c0c7701b25e604cf1611c0d87712e56e88e7ee5d72deab3e76b5"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:11d117e6c63e8f495412d37e7dc2e2fff09c34b2d09dbe2bee3c6229577818be"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:cf6511efa4801b9b38dc5546d7547d5b5c6ef4b081c60b23e4d941d0eba9cbeb"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:abc1185d79f47c0a7aaf7e2412a0eb2c03b724581139193d2d82b3ad8cbb00ac"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cb7b2ab0188829593b9de646545175547a70d9a6e2b63bf2cd87a0a391599324"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-win32.whl", hash = "sha256:c36bcbc0d5174a80d6cccf43a0ecaca44e81d25be4b7f90f0ed7bcfbb5a00909"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-win_amd64.whl", hash = "sha256:cca4def576f47a09a943666b8f829606bcb17e2bc2d5911a46c8f8da45f56755"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:0c95f12b74681e9ae127728f7e5409cbbef9cd914d5896ef238cc779b8152373"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fca62a8301b605b954ad2e9c3666f9d97f63872aa4efcae5492baca2056b74ab"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ac0aa6cd53ab9a31d397f8303f92c42f534693528fafbdb997c82bae6e477ad9"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c3af8e0f07399d3176b179f2e2634c3ce9c1301379a6b8c9c9aeecd481da494f"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3a5fc78f9e3f501a1614a98f7c54d3969f3ad9bba8ba3d9b438c3bc5d047dd28"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:628c985afb2c7d27a4800bfb609e03985aaecb42f955049957814e0491d4006d"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:74db0052d985cf37fa111828d0dd230776ac99c740e1a758ad99094be4f1803d"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:1e8fcdd8f672a1c4fc8d0bd3a2b576b152d2a349782d1eb0f6b8e52e9954731d"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:04afa6387e2b282cf78ff3dbce20f0cc071c12dc8f685bd40960cc68644cfea6"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:dd5653e67b149503c68c4018bf07e42eeed6b4e956b24c00ccdf93ac79cdff84"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:d2686f91611f9e17f4548dbf050e75b079bbc2a82be565832bc8ea9047b61c8c"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-win32.whl", hash = "sha256:4155b51ae05ed47199dc5b2a4e62abccb274cee6b01da5b895099b61b1982974"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-win_amd64.whl", hash = "sha256:322102cdf1ab682ecc7d9b1c5eed4ec59657a65e1c146a0da342b78f4112db23"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:e633940f28c1e913615fd624fcdd72fdba807bf53ea6925d6a588e84e1151531"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:3a06f32c9634a8705f4ca9946d667609f52cf130d5548881401f1eb2c39b1e2c"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:7381c66e0561c5757ffe616af869b916c8b4e42b367ab29fedc98481d1e74e14"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3573d376454d956553c356df45bb824262c397c6e26ce43e8203c4c540ee0acb"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e89df2958e5159b811af9ff0f92614dabf4ff617c03a4c1c6ff53bf1c399e0e1"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:78cacd03e79d009d95635e7d6ff12c21eb89b894c354bd2b2ed0b4763373693b"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:de5695a6f1d8340b12a5d6d4484290ee74d61e467c39ff03b39e30df62cf83a0"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1c60b9c202d00052183c9be85e5eaf18a4ada0a47d188a83c8f5c5b23252f649"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:f645caaf0008bacf349875a974220f1f1da349c5dbe7c4ec93048cdc785a3326"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:ea9f9c6034ea2d93d9147818f17c2a0860d41b71c38b9ce4d55f21b6f9165a11"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:80d1543d58bd3d6c271b66abf454d437a438dff01c3e62fdbcd68f2a11310d4b"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:73dc03a6a7e30b7edc5b01b601e53e7fc924b04e1835e8e407c12c037e81adbd"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:6f5c2e7bc8a4bf7c426599765b1bd33217ec84023033672c1e9a8b35eaeaaaf8"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-win32.whl", hash = "sha256:12a2b561af122e3d94cdb97fe6fb2bb2b82cef0cdca131646fdb940a1eda04f0"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-win_amd64.whl", hash = "sha256:3160a0fd9754aab7d47f95a6b63ab355388d890163eb03b2d2b87ab0a30cfa59"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:38e812a197bf8e71a59fe55b757a84c1f946d0ac114acafaafaf21667a7e169e"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6baf0baf0d5d265fa7944feb9f7451cc316bfe30e8df1a61b1bb08577c554f31"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:8f25e17ab3039b05f762b0a55ae0b3632b2e073d9c8fc88e89aca31a6198e88f"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3747443b6a904001473370d7810aa19c3a180ccd52a7157aacc264a5ac79265e"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b116502087ce8a6b7a5f1814568ccbd0e9f6cfd99948aa59b0e241dc57cf739f"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d16fd5252f883eb074ca55cb622bc0bee49b979ae4e8639fff6ca3ff44f9f854"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21fa558996782fc226b529fdd2ed7866c2c6ec91cee82735c98a197fae39f706"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6f6c7a8a57e9405cad7485f4c9d3172ae486cfef1344b5ddd8e5239582d7355e"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:ac3775e3311661d4adace3697a52ac0bab17edd166087d493b52d4f4f553f9f0"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:10c93628d7497c81686e8e5e557aafa78f230cd9e77dd0c40032ef90c18f2230"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:6f4f4668e1831850ebcc2fd0b1cd11721947b6dc7c00bf1c6bd3c929ae14f2c7"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:0be65ccf618c1e7ac9b849c315cc2e8a8751d9cfdaa43027d4f6624bd587ab7e"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:53d0a3fa5f8af98a1e261de6a3943ca631c526635eb5817a87a59d9a57ebf48f"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-win32.whl", hash = "sha256:a04f86f41a8916fe45ac5024ec477f41f886b3c435da2d4e3d2709b22ab02af1"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-win_amd64.whl", hash = "sha256:830d2948a5ec37c386d3170c483063798d7879037492540f10a475e3fd6f244b"},
+    {file = "charset_normalizer-3.1.0-py3-none-any.whl", hash = "sha256:3d9098b479e78c85080c98e1e35ff40b4a31d8953102bb0fd7d1b6f8a2111a3d"},
+]
+
 [[package]]
 name = "click"
 version = "8.1.3"
@@ -40,6 +182,10 @@ description = "Composable command line interface toolkit"
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "click-8.1.3-py3-none-any.whl", hash = "sha256:bb4d8133cb15a609f44e8213d9b391b0809795062913b383c62be0ee95b1db48"},
+    {file = "click-8.1.3.tar.gz", hash = "sha256:7682dc8afb30297001674575ea00d1814d808d6a36af415a82bd481d37ba7b8e"},
+]
 
 [package.dependencies]
 colorama = {version = "*", markers = "platform_system == \"Windows\""}
@@ -51,20 +197,82 @@ description = "Cross-platform colored terminal text."
 category = "dev"
 optional = false
 python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7"
+files = [
+    {file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"},
+    {file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"},
+]
+
+[[package]]
+name = "docutils"
+version = "0.18.1"
+description = "Docutils -- Python Documentation Utilities"
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
+files = [
+    {file = "docutils-0.18.1-py2.py3-none-any.whl", hash = "sha256:23010f129180089fbcd3bc08cfefccb3b890b0050e1ca00c867036e9d161b98c"},
+    {file = "docutils-0.18.1.tar.gz", hash = "sha256:679987caf361a7539d76e584cbeddc311e3aee937877c87346f31debc63e9d06"},
+]
+
+[[package]]
+name = "idna"
+version = "3.4"
+description = "Internationalized Domain Names in Applications (IDNA)"
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "idna-3.4-py3-none-any.whl", hash = "sha256:90b77e79eaa3eba6de819a0c442c0b4ceefc341a7a2ab77d7562bf49f425c5c2"},
+    {file = "idna-3.4.tar.gz", hash = "sha256:814f528e8dead7d329833b91c5faa87d60bf71824cd12a7530b5526063d02cb4"},
+]
+
+[[package]]
+name = "imagesize"
+version = "1.4.1"
+description = "Getting image size from png/jpeg/jpeg2000/gif file"
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+files = [
+    {file = "imagesize-1.4.1-py2.py3-none-any.whl", hash = "sha256:0d8d18d08f840c19d0ee7ca1fd82490fdc3729b7ac93f49870406ddde8ef8d8b"},
+    {file = "imagesize-1.4.1.tar.gz", hash = "sha256:69150444affb9cb0d5cc5a92b3676f0b2fb7cd9ae39e947a5e11a36b4497cd4a"},
+]
 
 [[package]]
 name = "isort"
-version = "5.10.1"
+version = "5.12.0"
 description = "A Python utility / library to sort Python imports."
 category = "dev"
 optional = false
-python-versions = ">=3.6.1,<4.0"
+python-versions = ">=3.8.0"
+files = [
+    {file = "isort-5.12.0-py3-none-any.whl", hash = "sha256:f84c2818376e66cf843d497486ea8fed8700b340f308f076c6fb1229dff318b6"},
+    {file = "isort-5.12.0.tar.gz", hash = "sha256:8bef7dde241278824a6d83f44a544709b065191b95b6e50894bdc722fcba0504"},
+]
 
 [package.extras]
-pipfile_deprecated_finder = ["pipreqs", "requirementslib"]
-requirements_deprecated_finder = ["pipreqs", "pip-api"]
-colors = ["colorama (>=0.4.3,<0.5.0)"]
+colors = ["colorama (>=0.4.3)"]
+pipfile-deprecated-finder = ["pip-shims (>=0.5.2)", "pipreqs", "requirementslib"]
 plugins = ["setuptools"]
+requirements-deprecated-finder = ["pip-api", "pipreqs"]
+
+[[package]]
+name = "jinja2"
+version = "3.1.2"
+description = "A very fast and expressive template engine."
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"},
+    {file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"},
+]
+
+[package.dependencies]
+MarkupSafe = ">=2.0"
+
+[package.extras]
+i18n = ["Babel (>=2.7)"]
 
 [[package]]
 name = "jsonpatch"
@@ -73,6 +281,10 @@ description = "Apply JSON-Patches (RFC 6902)"
 category = "main"
 optional = false
 python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
+files = [
+    {file = "jsonpatch-1.32-py2.py3-none-any.whl", hash = "sha256:26ac385719ac9f54df8a2f0827bb8253aa3ea8ab7b3368457bcdb8c14595a397"},
+    {file = "jsonpatch-1.32.tar.gz", hash = "sha256:b6ddfe6c3db30d81a96aaeceb6baf916094ffa23d7dd5fa2c13e13f8b6e600c2"},
+]
 
 [package.dependencies]
 jsonpointer = ">=1.9"
@@ -84,14 +296,22 @@ description = "Identify specific nodes in a JSON document (RFC 6901)"
 category = "main"
 optional = false
 python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+files = [
+    {file = "jsonpointer-2.3-py2.py3-none-any.whl", hash = "sha256:51801e558539b4e9cd268638c078c6c5746c9ac96bc38152d443400e4f3793e9"},
+    {file = "jsonpointer-2.3.tar.gz", hash = "sha256:97cba51526c829282218feb99dab1b1e6bdf8efd1c43dc9d57be093c0d69c99a"},
+]
 
 [[package]]
 name = "jsonschema"
-version = "4.17.0"
+version = "4.17.3"
 description = "An implementation of JSON Schema validation for Python"
 category = "main"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "jsonschema-4.17.3-py3-none-any.whl", hash = "sha256:a870ad254da1a8ca84b6a2905cac29d265f805acc57af304784962a2aa6508f6"},
+    {file = "jsonschema-4.17.3.tar.gz", hash = "sha256:0f864437ab8b6076ba6707453ef8f98a6a0d512a80e93f8abdb676f737ecb60d"},
+]
 
 [package.dependencies]
 attrs = ">=17.4.0"
@@ -101,6 +321,66 @@ pyrsistent = ">=0.14.0,<0.17.0 || >0.17.0,<0.17.1 || >0.17.1,<0.17.2 || >0.17.2"
 format = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3987", "uri-template", "webcolors (>=1.11)"]
 format-nongpl = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3986-validator (>0.1.0)", "uri-template", "webcolors (>=1.11)"]
 
+[[package]]
+name = "markupsafe"
+version = "2.1.2"
+description = "Safely add untrusted strings to HTML/XML markup."
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "MarkupSafe-2.1.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:665a36ae6f8f20a4676b53224e33d456a6f5a72657d9c83c2aa00765072f31f7"},
+    {file = "MarkupSafe-2.1.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:340bea174e9761308703ae988e982005aedf427de816d1afe98147668cc03036"},
+    {file = "MarkupSafe-2.1.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:22152d00bf4a9c7c83960521fc558f55a1adbc0631fbb00a9471e097b19d72e1"},
+    {file = "MarkupSafe-2.1.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:28057e985dace2f478e042eaa15606c7efccb700797660629da387eb289b9323"},
+    {file = "MarkupSafe-2.1.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ca244fa73f50a800cf8c3ebf7fd93149ec37f5cb9596aa8873ae2c1d23498601"},
+    {file = "MarkupSafe-2.1.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:d9d971ec1e79906046aa3ca266de79eac42f1dbf3612a05dc9368125952bd1a1"},
+    {file = "MarkupSafe-2.1.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:7e007132af78ea9df29495dbf7b5824cb71648d7133cf7848a2a5dd00d36f9ff"},
+    {file = "MarkupSafe-2.1.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:7313ce6a199651c4ed9d7e4cfb4aa56fe923b1adf9af3b420ee14e6d9a73df65"},
+    {file = "MarkupSafe-2.1.2-cp310-cp310-win32.whl", hash = "sha256:c4a549890a45f57f1ebf99c067a4ad0cb423a05544accaf2b065246827ed9603"},
+    {file = "MarkupSafe-2.1.2-cp310-cp310-win_amd64.whl", hash = "sha256:835fb5e38fd89328e9c81067fd642b3593c33e1e17e2fdbf77f5676abb14a156"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:2ec4f2d48ae59bbb9d1f9d7efb9236ab81429a764dedca114f5fdabbc3788013"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:608e7073dfa9e38a85d38474c082d4281f4ce276ac0010224eaba11e929dd53a"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:65608c35bfb8a76763f37036547f7adfd09270fbdbf96608be2bead319728fcd"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2bfb563d0211ce16b63c7cb9395d2c682a23187f54c3d79bfec33e6705473c6"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:da25303d91526aac3672ee6d49a2f3db2d9502a4a60b55519feb1a4c7714e07d"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:9cad97ab29dfc3f0249b483412c85c8ef4766d96cdf9dcf5a1e3caa3f3661cf1"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:085fd3201e7b12809f9e6e9bc1e5c96a368c8523fad5afb02afe3c051ae4afcc"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:1bea30e9bf331f3fef67e0a3877b2288593c98a21ccb2cf29b74c581a4eb3af0"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-win32.whl", hash = "sha256:7df70907e00c970c60b9ef2938d894a9381f38e6b9db73c5be35e59d92e06625"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-win_amd64.whl", hash = "sha256:e55e40ff0cc8cc5c07996915ad367fa47da6b3fc091fdadca7f5403239c5fec3"},
+    {file = "MarkupSafe-2.1.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:a6e40afa7f45939ca356f348c8e23048e02cb109ced1eb8420961b2f40fb373a"},
+    {file = "MarkupSafe-2.1.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cf877ab4ed6e302ec1d04952ca358b381a882fbd9d1b07cccbfd61783561f98a"},
+    {file = "MarkupSafe-2.1.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:63ba06c9941e46fa389d389644e2d8225e0e3e5ebcc4ff1ea8506dce646f8c8a"},
+    {file = "MarkupSafe-2.1.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f1cd098434e83e656abf198f103a8207a8187c0fc110306691a2e94a78d0abb2"},
+    {file = "MarkupSafe-2.1.2-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:55f44b440d491028addb3b88f72207d71eeebfb7b5dbf0643f7c023ae1fba619"},
+    {file = "MarkupSafe-2.1.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:a6f2fcca746e8d5910e18782f976489939d54a91f9411c32051b4aab2bd7c513"},
+    {file = "MarkupSafe-2.1.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:0b462104ba25f1ac006fdab8b6a01ebbfbce9ed37fd37fd4acd70c67c973e460"},
+    {file = "MarkupSafe-2.1.2-cp37-cp37m-win32.whl", hash = "sha256:7668b52e102d0ed87cb082380a7e2e1e78737ddecdde129acadb0eccc5423859"},
+    {file = "MarkupSafe-2.1.2-cp37-cp37m-win_amd64.whl", hash = "sha256:6d6607f98fcf17e534162f0709aaad3ab7a96032723d8ac8750ffe17ae5a0666"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:a806db027852538d2ad7555b203300173dd1b77ba116de92da9afbc3a3be3eed"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:a4abaec6ca3ad8660690236d11bfe28dfd707778e2442b45addd2f086d6ef094"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f03a532d7dee1bed20bc4884194a16160a2de9ffc6354b3878ec9682bb623c54"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4cf06cdc1dda95223e9d2d3c58d3b178aa5dacb35ee7e3bbac10e4e1faacb419"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:22731d79ed2eb25059ae3df1dfc9cb1546691cc41f4e3130fe6bfbc3ecbbecfa"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:f8ffb705ffcf5ddd0e80b65ddf7bed7ee4f5a441ea7d3419e861a12eaf41af58"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:8db032bf0ce9022a8e41a22598eefc802314e81b879ae093f36ce9ddf39ab1ba"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2298c859cfc5463f1b64bd55cb3e602528db6fa0f3cfd568d3605c50678f8f03"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-win32.whl", hash = "sha256:50c42830a633fa0cf9e7d27664637532791bfc31c731a87b202d2d8ac40c3ea2"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-win_amd64.whl", hash = "sha256:bb06feb762bade6bf3c8b844462274db0c76acc95c52abe8dbed28ae3d44a147"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:99625a92da8229df6d44335e6fcc558a5037dd0a760e11d84be2260e6f37002f"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:8bca7e26c1dd751236cfb0c6c72d4ad61d986e9a41bbf76cb445f69488b2a2bd"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:40627dcf047dadb22cd25ea7ecfe9cbf3bbbad0482ee5920b582f3809c97654f"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:40dfd3fefbef579ee058f139733ac336312663c6706d1163b82b3003fb1925c4"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:090376d812fb6ac5f171e5938e82e7f2d7adc2b629101cec0db8b267815c85e2"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:2e7821bffe00aa6bd07a23913b7f4e01328c3d5cc0b40b36c0bd81d362faeb65"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:c0a33bc9f02c2b17c3ea382f91b4db0e6cde90b63b296422a939886a7a80de1c"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:b8526c6d437855442cdd3d87eede9c425c4445ea011ca38d937db299382e6fa3"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-win32.whl", hash = "sha256:137678c63c977754abe9086a3ec011e8fd985ab90631145dfb9294ad09c102a7"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-win_amd64.whl", hash = "sha256:0576fe974b40a400449768941d5d0858cc624e3249dfd1e0c33674e5c7ca7aed"},
+    {file = "MarkupSafe-2.1.2.tar.gz", hash = "sha256:abcabc8c2b26036d62d4c746381a6f7cf60aafcc653198ad678306986b09450d"},
+]
+
 [[package]]
 name = "mccabe"
 version = "0.7.0"
@@ -108,6 +388,10 @@ description = "McCabe checker, plugin for flake8"
 category = "dev"
 optional = false
 python-versions = ">=3.6"
+files = [
+    {file = "mccabe-0.7.0-py2.py3-none-any.whl", hash = "sha256:6c2d30ab6be0e4a46919781807b4f0d834ebdd6c6e3dca0bda5a15f863427b6e"},
+    {file = "mccabe-0.7.0.tar.gz", hash = "sha256:348e0240c33b60bbdf4e523192ef919f28cb2c3d7d5c7794f74009290f236325"},
+]
 
 [[package]]
 name = "mypy"
@@ -116,6 +400,31 @@ description = "Optional static typing for Python"
 category = "dev"
 optional = false
 python-versions = ">=3.6"
+files = [
+    {file = "mypy-0.961-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:697540876638ce349b01b6786bc6094ccdaba88af446a9abb967293ce6eaa2b0"},
+    {file = "mypy-0.961-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b117650592e1782819829605a193360a08aa99f1fc23d1d71e1a75a142dc7e15"},
+    {file = "mypy-0.961-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:bdd5ca340beffb8c44cb9dc26697628d1b88c6bddf5c2f6eb308c46f269bb6f3"},
+    {file = "mypy-0.961-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:3e09f1f983a71d0672bbc97ae33ee3709d10c779beb613febc36805a6e28bb4e"},
+    {file = "mypy-0.961-cp310-cp310-win_amd64.whl", hash = "sha256:e999229b9f3198c0c880d5e269f9f8129c8862451ce53a011326cad38b9ccd24"},
+    {file = "mypy-0.961-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:b24be97351084b11582fef18d79004b3e4db572219deee0212078f7cf6352723"},
+    {file = "mypy-0.961-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f4a21d01fc0ba4e31d82f0fff195682e29f9401a8bdb7173891070eb260aeb3b"},
+    {file = "mypy-0.961-cp36-cp36m-win_amd64.whl", hash = "sha256:439c726a3b3da7ca84a0199a8ab444cd8896d95012c4a6c4a0d808e3147abf5d"},
+    {file = "mypy-0.961-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5a0b53747f713f490affdceef835d8f0cb7285187a6a44c33821b6d1f46ed813"},
+    {file = "mypy-0.961-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:0e9f70df36405c25cc530a86eeda1e0867863d9471fe76d1273c783df3d35c2e"},
+    {file = "mypy-0.961-cp37-cp37m-win_amd64.whl", hash = "sha256:b88f784e9e35dcaa075519096dc947a388319cb86811b6af621e3523980f1c8a"},
+    {file = "mypy-0.961-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:d5aaf1edaa7692490f72bdb9fbd941fbf2e201713523bdb3f4038be0af8846c6"},
+    {file = "mypy-0.961-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:9f5f5a74085d9a81a1f9c78081d60a0040c3efb3f28e5c9912b900adf59a16e6"},
+    {file = "mypy-0.961-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:f4b794db44168a4fc886e3450201365c9526a522c46ba089b55e1f11c163750d"},
+    {file = "mypy-0.961-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:64759a273d590040a592e0f4186539858c948302c653c2eac840c7a3cd29e51b"},
+    {file = "mypy-0.961-cp38-cp38-win_amd64.whl", hash = "sha256:63e85a03770ebf403291ec50097954cc5caf2a9205c888ce3a61bd3f82e17569"},
+    {file = "mypy-0.961-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:5f1332964963d4832a94bebc10f13d3279be3ce8f6c64da563d6ee6e2eeda932"},
+    {file = "mypy-0.961-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:006be38474216b833eca29ff6b73e143386f352e10e9c2fbe76aa8549e5554f5"},
+    {file = "mypy-0.961-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:9940e6916ed9371809b35b2154baf1f684acba935cd09928952310fbddaba648"},
+    {file = "mypy-0.961-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:a5ea0875a049de1b63b972456542f04643daf320d27dc592d7c3d9cd5d9bf950"},
+    {file = "mypy-0.961-cp39-cp39-win_amd64.whl", hash = "sha256:1ece702f29270ec6af25db8cf6185c04c02311c6bb21a69f423d40e527b75c56"},
+    {file = "mypy-0.961-py3-none-any.whl", hash = "sha256:03c6cc893e7563e7b2949b969e63f02c000b32502a1b4d1314cabe391aa87d66"},
+    {file = "mypy-0.961.tar.gz", hash = "sha256:f730d56cb924d371c26b8eaddeea3cc07d78ff51c521c6d04899ac6904b75492"},
+]
 
 [package.dependencies]
 mypy-extensions = ">=0.4.3"
@@ -129,19 +438,39 @@ reports = ["lxml"]
 
 [[package]]
 name = "mypy-extensions"
-version = "0.4.3"
-description = "Experimental type system extensions for programs checked with the mypy typechecker."
+version = "1.0.0"
+description = "Type system extensions for programs checked with the mypy type checker."
 category = "dev"
 optional = false
-python-versions = "*"
+python-versions = ">=3.5"
+files = [
+    {file = "mypy_extensions-1.0.0-py3-none-any.whl", hash = "sha256:4392f6c0eb8a5668a69e23d168ffa70f0be9ccfd32b5cc2d26a34ae5b844552d"},
+    {file = "mypy_extensions-1.0.0.tar.gz", hash = "sha256:75dbf8955dc00442a438fc4d0666508a9a97b6bd41aa2f0ffe9d2f2725af0782"},
+]
+
+[[package]]
+name = "packaging"
+version = "23.0"
+description = "Core utilities for Python packages"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "packaging-23.0-py3-none-any.whl", hash = "sha256:714ac14496c3e68c99c29b00845f7a2b85f3bb6f1078fd9f72fd20f0570002b2"},
+    {file = "packaging-23.0.tar.gz", hash = "sha256:b6ad297f8907de0fa2fe1ccbd26fdaf387f5f47c7275fedf8cce89f99446cf97"},
+]
 
 [[package]]
 name = "pathspec"
-version = "0.10.1"
+version = "0.11.1"
 description = "Utility library for gitignore style pattern matching of file paths."
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "pathspec-0.11.1-py3-none-any.whl", hash = "sha256:d8af70af76652554bd134c22b3e8a1cc46ed7d91edcdd721ef1a0c51a84a5293"},
+    {file = "pathspec-0.11.1.tar.gz", hash = "sha256:2798de800fa92780e33acca925945e9a19a133b715067cf165b8866c15a31687"},
+]
 
 [[package]]
 name = "pexpect"
@@ -150,21 +479,29 @@ description = "Pexpect allows easy control of interactive console applications."
 category = "main"
 optional = false
 python-versions = "*"
+files = [
+    {file = "pexpect-4.8.0-py2.py3-none-any.whl", hash = "sha256:0b48a55dcb3c05f3329815901ea4fc1537514d6ba867a152b581d69ae3710937"},
+    {file = "pexpect-4.8.0.tar.gz", hash = "sha256:fc65a43959d153d0114afe13997d439c22823a27cefceb5ff35c2178c6784c0c"},
+]
 
 [package.dependencies]
 ptyprocess = ">=0.5"
 
 [[package]]
 name = "platformdirs"
-version = "2.5.2"
-description = "A small Python module for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
+version = "3.1.1"
+description = "A small Python package for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "platformdirs-3.1.1-py3-none-any.whl", hash = "sha256:e5986afb596e4bb5bde29a79ac9061aa955b94fca2399b7aaac4090860920dd8"},
+    {file = "platformdirs-3.1.1.tar.gz", hash = "sha256:024996549ee88ec1a9aa99ff7f8fc819bb59e2c3477b410d90a16d32d6e707aa"},
+]
 
 [package.extras]
-docs = ["furo (>=2021.7.5b38)", "proselint (>=0.10.2)", "sphinx-autodoc-typehints (>=1.12)", "sphinx (>=4)"]
-test = ["appdirs (==1.4.4)", "pytest-cov (>=2.7)", "pytest-mock (>=3.6)", "pytest (>=6)"]
+docs = ["furo (>=2022.12.7)", "proselint (>=0.13)", "sphinx (>=6.1.3)", "sphinx-autodoc-typehints (>=1.22,!=1.23.4)"]
+test = ["appdirs (==1.4.4)", "covdefaults (>=2.2.2)", "pytest (>=7.2.1)", "pytest-cov (>=4)", "pytest-mock (>=3.10)"]
 
 [[package]]
 name = "ptyprocess"
@@ -173,28 +510,40 @@ description = "Run a subprocess in a pseudo terminal"
 category = "main"
 optional = false
 python-versions = "*"
+files = [
+    {file = "ptyprocess-0.7.0-py2.py3-none-any.whl", hash = "sha256:4b41f3967fce3af57cc7e94b888626c18bf37a083e3651ca8feeb66d492fef35"},
+    {file = "ptyprocess-0.7.0.tar.gz", hash = "sha256:5c5d0a3b48ceee0b48485e0c26037c0acd7d29765ca3fbb5cb3831d347423220"},
+]
 
 [[package]]
 name = "pycodestyle"
-version = "2.9.1"
+version = "2.10.0"
 description = "Python style guide checker"
 category = "dev"
 optional = false
 python-versions = ">=3.6"
+files = [
+    {file = "pycodestyle-2.10.0-py2.py3-none-any.whl", hash = "sha256:8a4eaf0d0495c7395bdab3589ac2db602797d76207242c17d470186815706610"},
+    {file = "pycodestyle-2.10.0.tar.gz", hash = "sha256:347187bdb476329d98f695c213d7295a846d1152ff4fe9bacb8a9590b8ee7053"},
+]
 
 [[package]]
 name = "pydocstyle"
-version = "6.1.1"
+version = "6.3.0"
 description = "Python docstring style checker"
 category = "dev"
 optional = false
 python-versions = ">=3.6"
+files = [
+    {file = "pydocstyle-6.3.0-py3-none-any.whl", hash = "sha256:118762d452a49d6b05e194ef344a55822987a462831ade91ec5c06fd2169d019"},
+    {file = "pydocstyle-6.3.0.tar.gz", hash = "sha256:7ce43f0c0ac87b07494eb9c0b462c0b73e6ff276807f204d6b53edc72b7e44e1"},
+]
 
 [package.dependencies]
-snowballstemmer = "*"
+snowballstemmer = ">=2.2.0"
 
 [package.extras]
-toml = ["toml"]
+toml = ["tomli (>=1.2.3)"]
 
 [[package]]
 name = "pyflakes"
@@ -203,6 +552,25 @@ description = "passive checker of Python programs"
 category = "dev"
 optional = false
 python-versions = ">=3.6"
+files = [
+    {file = "pyflakes-2.5.0-py2.py3-none-any.whl", hash = "sha256:4579f67d887f804e67edb544428f264b7b24f435b263c4614f384135cea553d2"},
+    {file = "pyflakes-2.5.0.tar.gz", hash = "sha256:491feb020dca48ccc562a8c0cbe8df07ee13078df59813b83959cbdada312ea3"},
+]
+
+[[package]]
+name = "pygments"
+version = "2.14.0"
+description = "Pygments is a syntax highlighting package written in Python."
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+    {file = "Pygments-2.14.0-py3-none-any.whl", hash = "sha256:fa7bd7bd2771287c0de303af8bfdfc731f51bd2c6a47ab69d117138893b82717"},
+    {file = "Pygments-2.14.0.tar.gz", hash = "sha256:b3ed06a9e8ac9a9aae5a6f5dbe78a8a58655d17b43b93c078f094ddc476ae297"},
+]
+
+[package.extras]
+plugins = ["importlib-metadata"]
 
 [[package]]
 name = "pylama"
@@ -211,6 +579,10 @@ description = "Code audit tool for python"
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "pylama-8.4.1-py3-none-any.whl", hash = "sha256:5bbdbf5b620aba7206d688ed9fc917ecd3d73e15ec1a89647037a09fa3a86e60"},
+    {file = "pylama-8.4.1.tar.gz", hash = "sha256:2d4f7aecfb5b7466216d48610c7d6bad1c3990c29cdd392ad08259b161e486f6"},
+]
 
 [package.dependencies]
 mccabe = ">=0.7.0"
@@ -219,22 +591,51 @@ pydocstyle = ">=6.1.1"
 pyflakes = ">=2.5.0"
 
 [package.extras]
-all = ["pylint", "eradicate", "radon", "mypy", "vulture"]
+all = ["eradicate", "mypy", "pylint", "radon", "vulture"]
 eradicate = ["eradicate"]
 mypy = ["mypy"]
 pylint = ["pylint"]
 radon = ["radon"]
-tests = ["pytest (>=7.1.2)", "pytest-mypy", "eradicate (>=2.0.0)", "radon (>=5.1.0)", "mypy", "pylint (>=2.11.1)", "pylama-quotes", "toml", "vulture", "types-setuptools", "types-toml"]
+tests = ["eradicate (>=2.0.0)", "mypy", "pylama-quotes", "pylint (>=2.11.1)", "pytest (>=7.1.2)", "pytest-mypy", "radon (>=5.1.0)", "toml", "types-setuptools", "types-toml", "vulture"]
 toml = ["toml (>=0.10.2)"]
 vulture = ["vulture"]
 
 [[package]]
 name = "pyrsistent"
-version = "0.19.1"
+version = "0.19.3"
 description = "Persistent/Functional/Immutable data structures"
 category = "main"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "pyrsistent-0.19.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:20460ac0ea439a3e79caa1dbd560344b64ed75e85d8703943e0b66c2a6150e4a"},
+    {file = "pyrsistent-0.19.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4c18264cb84b5e68e7085a43723f9e4c1fd1d935ab240ce02c0324a8e01ccb64"},
+    {file = "pyrsistent-0.19.3-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4b774f9288dda8d425adb6544e5903f1fb6c273ab3128a355c6b972b7df39dcf"},
+    {file = "pyrsistent-0.19.3-cp310-cp310-win32.whl", hash = "sha256:5a474fb80f5e0d6c9394d8db0fc19e90fa540b82ee52dba7d246a7791712f74a"},
+    {file = "pyrsistent-0.19.3-cp310-cp310-win_amd64.whl", hash = "sha256:49c32f216c17148695ca0e02a5c521e28a4ee6c5089f97e34fe24163113722da"},
+    {file = "pyrsistent-0.19.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:f0774bf48631f3a20471dd7c5989657b639fd2d285b861237ea9e82c36a415a9"},
+    {file = "pyrsistent-0.19.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3ab2204234c0ecd8b9368dbd6a53e83c3d4f3cab10ecaf6d0e772f456c442393"},
+    {file = "pyrsistent-0.19.3-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e42296a09e83028b3476f7073fcb69ffebac0e66dbbfd1bd847d61f74db30f19"},
+    {file = "pyrsistent-0.19.3-cp311-cp311-win32.whl", hash = "sha256:64220c429e42a7150f4bfd280f6f4bb2850f95956bde93c6fda1b70507af6ef3"},
+    {file = "pyrsistent-0.19.3-cp311-cp311-win_amd64.whl", hash = "sha256:016ad1afadf318eb7911baa24b049909f7f3bb2c5b1ed7b6a8f21db21ea3faa8"},
+    {file = "pyrsistent-0.19.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:c4db1bd596fefd66b296a3d5d943c94f4fac5bcd13e99bffe2ba6a759d959a28"},
+    {file = "pyrsistent-0.19.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:aeda827381f5e5d65cced3024126529ddc4289d944f75e090572c77ceb19adbf"},
+    {file = "pyrsistent-0.19.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:42ac0b2f44607eb92ae88609eda931a4f0dfa03038c44c772e07f43e738bcac9"},
+    {file = "pyrsistent-0.19.3-cp37-cp37m-win32.whl", hash = "sha256:e8f2b814a3dc6225964fa03d8582c6e0b6650d68a232df41e3cc1b66a5d2f8d1"},
+    {file = "pyrsistent-0.19.3-cp37-cp37m-win_amd64.whl", hash = "sha256:c9bb60a40a0ab9aba40a59f68214eed5a29c6274c83b2cc206a359c4a89fa41b"},
+    {file = "pyrsistent-0.19.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:a2471f3f8693101975b1ff85ffd19bb7ca7dd7c38f8a81701f67d6b4f97b87d8"},
+    {file = "pyrsistent-0.19.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cc5d149f31706762c1f8bda2e8c4f8fead6e80312e3692619a75301d3dbb819a"},
+    {file = "pyrsistent-0.19.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3311cb4237a341aa52ab8448c27e3a9931e2ee09561ad150ba94e4cfd3fc888c"},
+    {file = "pyrsistent-0.19.3-cp38-cp38-win32.whl", hash = "sha256:f0e7c4b2f77593871e918be000b96c8107da48444d57005b6a6bc61fb4331b2c"},
+    {file = "pyrsistent-0.19.3-cp38-cp38-win_amd64.whl", hash = "sha256:c147257a92374fde8498491f53ffa8f4822cd70c0d85037e09028e478cababb7"},
+    {file = "pyrsistent-0.19.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:b735e538f74ec31378f5a1e3886a26d2ca6351106b4dfde376a26fc32a044edc"},
+    {file = "pyrsistent-0.19.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:99abb85579e2165bd8522f0c0138864da97847875ecbd45f3e7e2af569bfc6f2"},
+    {file = "pyrsistent-0.19.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3a8cb235fa6d3fd7aae6a4f1429bbb1fec1577d978098da1252f0489937786f3"},
+    {file = "pyrsistent-0.19.3-cp39-cp39-win32.whl", hash = "sha256:c74bed51f9b41c48366a286395c67f4e894374306b197e62810e0fdaf2364da2"},
+    {file = "pyrsistent-0.19.3-cp39-cp39-win_amd64.whl", hash = "sha256:878433581fc23e906d947a6814336eee031a00e6defba224234169ae3d3d6a98"},
+    {file = "pyrsistent-0.19.3-py3-none-any.whl", hash = "sha256:ccf0d6bd208f8111179f0c26fdf84ed7c3891982f2edaeae7422575f47e66b64"},
+    {file = "pyrsistent-0.19.3.tar.gz", hash = "sha256:1a2994773706bbb4995c31a97bc94f1418314923bd1048c6d964837040376440"},
+]
 
 [[package]]
 name = "pyyaml"
@@ -243,6 +644,70 @@ description = "YAML parser and emitter for Python"
 category = "main"
 optional = false
 python-versions = ">=3.6"
+files = [
+    {file = "PyYAML-6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d4db7c7aef085872ef65a8fd7d6d09a14ae91f691dec3e87ee5ee0539d516f53"},
+    {file = "PyYAML-6.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9df7ed3b3d2e0ecfe09e14741b857df43adb5a3ddadc919a2d94fbdf78fea53c"},
+    {file = "PyYAML-6.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:77f396e6ef4c73fdc33a9157446466f1cff553d979bd00ecb64385760c6babdc"},
+    {file = "PyYAML-6.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a80a78046a72361de73f8f395f1f1e49f956c6be882eed58505a15f3e430962b"},
+    {file = "PyYAML-6.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f84fbc98b019fef2ee9a1cb3ce93e3187a6df0b2538a651bfb890254ba9f90b5"},
+    {file = "PyYAML-6.0-cp310-cp310-win32.whl", hash = "sha256:2cd5df3de48857ed0544b34e2d40e9fac445930039f3cfe4bcc592a1f836d513"},
+    {file = "PyYAML-6.0-cp310-cp310-win_amd64.whl", hash = "sha256:daf496c58a8c52083df09b80c860005194014c3698698d1a57cbcfa182142a3a"},
+    {file = "PyYAML-6.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:d4b0ba9512519522b118090257be113b9468d804b19d63c71dbcf4a48fa32358"},
+    {file = "PyYAML-6.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:81957921f441d50af23654aa6c5e5eaf9b06aba7f0a19c18a538dc7ef291c5a1"},
+    {file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:afa17f5bc4d1b10afd4466fd3a44dc0e245382deca5b3c353d8b757f9e3ecb8d"},
+    {file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:dbad0e9d368bb989f4515da330b88a057617d16b6a8245084f1b05400f24609f"},
+    {file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:432557aa2c09802be39460360ddffd48156e30721f5e8d917f01d31694216782"},
+    {file = "PyYAML-6.0-cp311-cp311-win32.whl", hash = "sha256:bfaef573a63ba8923503d27530362590ff4f576c626d86a9fed95822a8255fd7"},
+    {file = "PyYAML-6.0-cp311-cp311-win_amd64.whl", hash = "sha256:01b45c0191e6d66c470b6cf1b9531a771a83c1c4208272ead47a3ae4f2f603bf"},
+    {file = "PyYAML-6.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:897b80890765f037df3403d22bab41627ca8811ae55e9a722fd0392850ec4d86"},
+    {file = "PyYAML-6.0-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50602afada6d6cbfad699b0c7bb50d5ccffa7e46a3d738092afddc1f9758427f"},
+    {file = "PyYAML-6.0-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:48c346915c114f5fdb3ead70312bd042a953a8ce5c7106d5bfb1a5254e47da92"},
+    {file = "PyYAML-6.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:98c4d36e99714e55cfbaaee6dd5badbc9a1ec339ebfc3b1f52e293aee6bb71a4"},
+    {file = "PyYAML-6.0-cp36-cp36m-win32.whl", hash = "sha256:0283c35a6a9fbf047493e3a0ce8d79ef5030852c51e9d911a27badfde0605293"},
+    {file = "PyYAML-6.0-cp36-cp36m-win_amd64.whl", hash = "sha256:07751360502caac1c067a8132d150cf3d61339af5691fe9e87803040dbc5db57"},
+    {file = "PyYAML-6.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:819b3830a1543db06c4d4b865e70ded25be52a2e0631ccd2f6a47a2822f2fd7c"},
+    {file = "PyYAML-6.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:473f9edb243cb1935ab5a084eb238d842fb8f404ed2193a915d1784b5a6b5fc0"},
+    {file = "PyYAML-6.0-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0ce82d761c532fe4ec3f87fc45688bdd3a4c1dc5e0b4a19814b9009a29baefd4"},
+    {file = "PyYAML-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:231710d57adfd809ef5d34183b8ed1eeae3f76459c18fb4a0b373ad56bedcdd9"},
+    {file = "PyYAML-6.0-cp37-cp37m-win32.whl", hash = "sha256:c5687b8d43cf58545ade1fe3e055f70eac7a5a1a0bf42824308d868289a95737"},
+    {file = "PyYAML-6.0-cp37-cp37m-win_amd64.whl", hash = "sha256:d15a181d1ecd0d4270dc32edb46f7cb7733c7c508857278d3d378d14d606db2d"},
+    {file = "PyYAML-6.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0b4624f379dab24d3725ffde76559cff63d9ec94e1736b556dacdfebe5ab6d4b"},
+    {file = "PyYAML-6.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:213c60cd50106436cc818accf5baa1aba61c0189ff610f64f4a3e8c6726218ba"},
+    {file = "PyYAML-6.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9fa600030013c4de8165339db93d182b9431076eb98eb40ee068700c9c813e34"},
+    {file = "PyYAML-6.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:277a0ef2981ca40581a47093e9e2d13b3f1fbbeffae064c1d21bfceba2030287"},
+    {file = "PyYAML-6.0-cp38-cp38-win32.whl", hash = "sha256:d4eccecf9adf6fbcc6861a38015c2a64f38b9d94838ac1810a9023a0609e1b78"},
+    {file = "PyYAML-6.0-cp38-cp38-win_amd64.whl", hash = "sha256:1e4747bc279b4f613a09eb64bba2ba602d8a6664c6ce6396a4d0cd413a50ce07"},
+    {file = "PyYAML-6.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:055d937d65826939cb044fc8c9b08889e8c743fdc6a32b33e2390f66013e449b"},
+    {file = "PyYAML-6.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e61ceaab6f49fb8bdfaa0f92c4b57bcfbea54c09277b1b4f7ac376bfb7a7c174"},
+    {file = "PyYAML-6.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d67d839ede4ed1b28a4e8909735fc992a923cdb84e618544973d7dfc71540803"},
+    {file = "PyYAML-6.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cba8c411ef271aa037d7357a2bc8f9ee8b58b9965831d9e51baf703280dc73d3"},
+    {file = "PyYAML-6.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:40527857252b61eacd1d9af500c3337ba8deb8fc298940291486c465c8b46ec0"},
+    {file = "PyYAML-6.0-cp39-cp39-win32.whl", hash = "sha256:b5b9eccad747aabaaffbc6064800670f0c297e52c12754eb1d976c57e4f74dcb"},
+    {file = "PyYAML-6.0-cp39-cp39-win_amd64.whl", hash = "sha256:b3d267842bf12586ba6c734f89d1f5b871df0273157918b0ccefa29deb05c21c"},
+    {file = "PyYAML-6.0.tar.gz", hash = "sha256:68fb519c14306fec9720a2a5b45bc9f0c8d1b9c72adf45c37baedfcd949c35a2"},
+]
+
+[[package]]
+name = "requests"
+version = "2.28.2"
+description = "Python HTTP for Humans."
+category = "dev"
+optional = false
+python-versions = ">=3.7, <4"
+files = [
+    {file = "requests-2.28.2-py3-none-any.whl", hash = "sha256:64299f4909223da747622c030b781c0d7811e359c37124b4bd368fb8c6518baa"},
+    {file = "requests-2.28.2.tar.gz", hash = "sha256:98b1b2782e3c6c4904938b84c0eb932721069dfdb9134313beff7c83c2df24bf"},
+]
+
+[package.dependencies]
+certifi = ">=2017.4.17"
+charset-normalizer = ">=2,<4"
+idna = ">=2.5,<4"
+urllib3 = ">=1.21.1,<1.27"
+
+[package.extras]
+socks = ["PySocks (>=1.5.6,!=1.5.7)"]
+use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"]
 
 [[package]]
 name = "snowballstemmer"
@@ -251,6 +716,175 @@ description = "This package provides 29 stemmers for 28 languages generated from
 category = "dev"
 optional = false
 python-versions = "*"
+files = [
+    {file = "snowballstemmer-2.2.0-py2.py3-none-any.whl", hash = "sha256:c8e1716e83cc398ae16824e5572ae04e0d9fc2c6b985fb0f900f5f0c96ecba1a"},
+    {file = "snowballstemmer-2.2.0.tar.gz", hash = "sha256:09b16deb8547d3412ad7b590689584cd0fe25ec8db3be37788be3810cbf19cb1"},
+]
+
+[[package]]
+name = "sphinx"
+version = "6.1.3"
+description = "Python documentation generator"
+category = "dev"
+optional = false
+python-versions = ">=3.8"
+files = [
+    {file = "Sphinx-6.1.3.tar.gz", hash = "sha256:0dac3b698538ffef41716cf97ba26c1c7788dba73ce6f150c1ff5b4720786dd2"},
+    {file = "sphinx-6.1.3-py3-none-any.whl", hash = "sha256:807d1cb3d6be87eb78a381c3e70ebd8d346b9a25f3753e9947e866b2786865fc"},
+]
+
+[package.dependencies]
+alabaster = ">=0.7,<0.8"
+babel = ">=2.9"
+colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""}
+docutils = ">=0.18,<0.20"
+imagesize = ">=1.3"
+Jinja2 = ">=3.0"
+packaging = ">=21.0"
+Pygments = ">=2.13"
+requests = ">=2.25.0"
+snowballstemmer = ">=2.0"
+sphinxcontrib-applehelp = "*"
+sphinxcontrib-devhelp = "*"
+sphinxcontrib-htmlhelp = ">=2.0.0"
+sphinxcontrib-jsmath = "*"
+sphinxcontrib-qthelp = "*"
+sphinxcontrib-serializinghtml = ">=1.1.5"
+
+[package.extras]
+docs = ["sphinxcontrib-websupport"]
+lint = ["docutils-stubs", "flake8 (>=3.5.0)", "flake8-simplify", "isort", "mypy (>=0.990)", "ruff", "sphinx-lint", "types-requests"]
+test = ["cython", "html5lib", "pytest (>=4.6)"]
+
+[[package]]
+name = "sphinx-rtd-theme"
+version = "1.2.0"
+description = "Read the Docs theme for Sphinx"
+category = "dev"
+optional = false
+python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7"
+files = [
+    {file = "sphinx_rtd_theme-1.2.0-py2.py3-none-any.whl", hash = "sha256:f823f7e71890abe0ac6aaa6013361ea2696fc8d3e1fa798f463e82bdb77eeff2"},
+    {file = "sphinx_rtd_theme-1.2.0.tar.gz", hash = "sha256:a0d8bd1a2ed52e0b338cbe19c4b2eef3c5e7a048769753dac6a9f059c7b641b8"},
+]
+
+[package.dependencies]
+docutils = "<0.19"
+sphinx = ">=1.6,<7"
+sphinxcontrib-jquery = {version = ">=2.0.0,<3.0.0 || >3.0.0", markers = "python_version > \"3\""}
+
+[package.extras]
+dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client", "wheel"]
+
+[[package]]
+name = "sphinxcontrib-applehelp"
+version = "1.0.4"
+description = "sphinxcontrib-applehelp is a Sphinx extension which outputs Apple help books"
+category = "dev"
+optional = false
+python-versions = ">=3.8"
+files = [
+    {file = "sphinxcontrib-applehelp-1.0.4.tar.gz", hash = "sha256:828f867945bbe39817c210a1abfd1bc4895c8b73fcaade56d45357a348a07d7e"},
+    {file = "sphinxcontrib_applehelp-1.0.4-py3-none-any.whl", hash = "sha256:29d341f67fb0f6f586b23ad80e072c8e6ad0b48417db2bde114a4c9746feb228"},
+]
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-devhelp"
+version = "1.0.2"
+description = "sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp document."
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "sphinxcontrib-devhelp-1.0.2.tar.gz", hash = "sha256:ff7f1afa7b9642e7060379360a67e9c41e8f3121f2ce9164266f61b9f4b338e4"},
+    {file = "sphinxcontrib_devhelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:8165223f9a335cc1af7ffe1ed31d2871f325254c0423bc0c4c7cd1c1e4734a2e"},
+]
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-htmlhelp"
+version = "2.0.1"
+description = "sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files"
+category = "dev"
+optional = false
+python-versions = ">=3.8"
+files = [
+    {file = "sphinxcontrib-htmlhelp-2.0.1.tar.gz", hash = "sha256:0cbdd302815330058422b98a113195c9249825d681e18f11e8b1f78a2f11efff"},
+    {file = "sphinxcontrib_htmlhelp-2.0.1-py3-none-any.whl", hash = "sha256:c38cb46dccf316c79de6e5515e1770414b797162b23cd3d06e67020e1d2a6903"},
+]
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["html5lib", "pytest"]
+
+[[package]]
+name = "sphinxcontrib-jquery"
+version = "4.1"
+description = "Extension to include jQuery on newer Sphinx releases"
+category = "dev"
+optional = false
+python-versions = ">=2.7"
+files = [
+    {file = "sphinxcontrib-jquery-4.1.tar.gz", hash = "sha256:1620739f04e36a2c779f1a131a2dfd49b2fd07351bf1968ced074365933abc7a"},
+    {file = "sphinxcontrib_jquery-4.1-py2.py3-none-any.whl", hash = "sha256:f936030d7d0147dd026a4f2b5a57343d233f1fc7b363f68b3d4f1cb0993878ae"},
+]
+
+[package.dependencies]
+Sphinx = ">=1.8"
+
+[[package]]
+name = "sphinxcontrib-jsmath"
+version = "1.0.1"
+description = "A sphinx extension which renders display math in HTML via JavaScript"
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "sphinxcontrib-jsmath-1.0.1.tar.gz", hash = "sha256:a9925e4a4587247ed2191a22df5f6970656cb8ca2bd6284309578f2153e0c4b8"},
+    {file = "sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl", hash = "sha256:2ec2eaebfb78f3f2078e73666b1415417a116cc848b72e5172e596c871103178"},
+]
+
+[package.extras]
+test = ["flake8", "mypy", "pytest"]
+
+[[package]]
+name = "sphinxcontrib-qthelp"
+version = "1.0.3"
+description = "sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp document."
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "sphinxcontrib-qthelp-1.0.3.tar.gz", hash = "sha256:4c33767ee058b70dba89a6fc5c1892c0d57a54be67ddd3e7875a18d14cba5a72"},
+    {file = "sphinxcontrib_qthelp-1.0.3-py2.py3-none-any.whl", hash = "sha256:bd9fc24bcb748a8d51fd4ecaade681350aa63009a347a8c14e637895444dfab6"},
+]
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-serializinghtml"
+version = "1.1.5"
+description = "sphinxcontrib-serializinghtml is a sphinx extension which outputs \"serialized\" HTML files (json and pickle)."
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "sphinxcontrib-serializinghtml-1.1.5.tar.gz", hash = "sha256:aa5f6de5dfdf809ef505c4895e51ef5c9eac17d0f287933eb49ec495280b6952"},
+    {file = "sphinxcontrib_serializinghtml-1.1.5-py2.py3-none-any.whl", hash = "sha256:352a9a00ae864471d3a7ead8d7d79f5fc0b57e8b3f95e9867eb9eb28999b92fd"},
+]
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
 
 [[package]]
 name = "toml"
@@ -259,6 +893,10 @@ description = "Python Library for Tom's Obvious, Minimal Language"
 category = "dev"
 optional = false
 python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
+files = [
+    {file = "toml-0.10.2-py2.py3-none-any.whl", hash = "sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b"},
+    {file = "toml-0.10.2.tar.gz", hash = "sha256:b3bda1d108d5dd99f4a20d24d9c348e91c4db7ab1b749200bded2f839ccbe68f"},
+]
 
 [[package]]
 name = "tomli"
@@ -267,22 +905,51 @@ description = "A lil' TOML parser"
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "tomli-2.0.1-py3-none-any.whl", hash = "sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc"},
+    {file = "tomli-2.0.1.tar.gz", hash = "sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f"},
+]
 
 [[package]]
 name = "types-pyyaml"
-version = "6.0.12.1"
+version = "6.0.12.8"
 description = "Typing stubs for PyYAML"
 category = "main"
 optional = false
 python-versions = "*"
+files = [
+    {file = "types-PyYAML-6.0.12.8.tar.gz", hash = "sha256:19304869a89d49af00be681e7b267414df213f4eb89634c4495fa62e8f942b9f"},
+    {file = "types_PyYAML-6.0.12.8-py3-none-any.whl", hash = "sha256:5314a4b2580999b2ea06b2e5f9a7763d860d6e09cdf21c0e9561daa9cbd60178"},
+]
 
 [[package]]
 name = "typing-extensions"
-version = "4.4.0"
+version = "4.5.0"
 description = "Backported and Experimental Type Hints for Python 3.7+"
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "typing_extensions-4.5.0-py3-none-any.whl", hash = "sha256:fb33085c39dd998ac16d1431ebc293a8b3eedd00fd4a32de0ff79002c19511b4"},
+    {file = "typing_extensions-4.5.0.tar.gz", hash = "sha256:5cb5f4a79139d699607b3ef622a1dedafa84e115ab0024e0d9c044a9479ca7cb"},
+]
+
+[[package]]
+name = "urllib3"
+version = "1.26.15"
+description = "HTTP library with thread-safe connection pooling, file post, and more."
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*"
+files = [
+    {file = "urllib3-1.26.15-py2.py3-none-any.whl", hash = "sha256:aa751d169e23c7479ce47a0cb0da579e3ede798f994f5816a74e4f4500dcea42"},
+    {file = "urllib3-1.26.15.tar.gz", hash = "sha256:8a388717b9476f934a21484e8c8e61875ab60644d29b9b39e11e4b9dc1c6b305"},
+]
+
+[package.extras]
+brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)", "brotlipy (>=0.6.0)"]
+secure = ["certifi", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "ipaddress", "pyOpenSSL (>=0.14)", "urllib3-secure-extra"]
+socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"]
 
 [[package]]
 name = "warlock"
@@ -291,47 +958,16 @@ description = "Python object model built on JSON schema and JSON patch."
 category = "main"
 optional = false
 python-versions = ">=3.7,<4.0"
+files = [
+    {file = "warlock-2.0.1-py3-none-any.whl", hash = "sha256:448df959cec31904f686ac8c6b1dfab80f0cdabce3d303be517dd433eeebf012"},
+    {file = "warlock-2.0.1.tar.gz", hash = "sha256:99abbf9525b2a77f2cde896d3a9f18a5b4590db063db65e08207694d2e0137fc"},
+]
 
 [package.dependencies]
 jsonpatch = ">=1,<2"
 jsonschema = ">=4,<5"
 
 [metadata]
-lock-version = "1.1"
+lock-version = "2.0"
 python-versions = "^3.10"
-content-hash = "a0f040b07fc6ce4deb0be078b9a88c2a465cb6bccb9e260a67e92c2403e2319f"
-
-[metadata.files]
-attrs = []
-black = []
-click = []
-colorama = []
-isort = []
-jsonpatch = []
-jsonpointer = []
-jsonschema = []
-mccabe = []
-mypy = []
-mypy-extensions = []
-pathspec = []
-pexpect = [
-    {file = "pexpect-4.8.0-py2.py3-none-any.whl", hash = "sha256:0b48a55dcb3c05f3329815901ea4fc1537514d6ba867a152b581d69ae3710937"},
-    {file = "pexpect-4.8.0.tar.gz", hash = "sha256:fc65a43959d153d0114afe13997d439c22823a27cefceb5ff35c2178c6784c0c"},
-]
-platformdirs = [
-    {file = "platformdirs-2.5.2-py3-none-any.whl", hash = "sha256:027d8e83a2d7de06bbac4e5ef7e023c02b863d7ea5d079477e722bb41ab25788"},
-    {file = "platformdirs-2.5.2.tar.gz", hash = "sha256:58c8abb07dcb441e6ee4b11d8df0ac856038f944ab98b7be6b27b2a3c7feef19"},
-]
-ptyprocess = []
-pycodestyle = []
-pydocstyle = []
-pyflakes = []
-pylama = []
-pyrsistent = []
-pyyaml = []
-snowballstemmer = []
-toml = []
-tomli = []
-types-pyyaml = []
-typing-extensions = []
-warlock = []
+content-hash = "b3f428e987713d7875434c4b43cadadcb7d77dd3d62fd6855fb8e77ec946f082"
diff --git a/dts/pyproject.toml b/dts/pyproject.toml
index a136c91e5e..c0fe323272 100644
--- a/dts/pyproject.toml
+++ b/dts/pyproject.toml
@@ -22,6 +22,13 @@ pylama = "^8.4.1"
 pyflakes = "2.5.0"
 toml = "^0.10.2"
 
+[tool.poetry.group.docs]
+optional = true
+
+[tool.poetry.group.docs.dependencies]
+Sphinx = "^6.1.3"
+sphinx-rtd-theme = "^1.2.0"
+
 [tool.poetry.scripts]
 dts = "main:main"
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 3/4] dts: add doc generation
  2023-03-23 10:40 [RFC PATCH v1 0/4] dts: add dts api docs Juraj Linkeš
  2023-03-23 10:40 ` [RFC PATCH v1 1/4] dts: code adjustments for sphinx Juraj Linkeš
  2023-03-23 10:40 ` [RFC PATCH v1 2/4] dts: add doc generation dependencies Juraj Linkeš
@ 2023-03-23 10:40 ` Juraj Linkeš
  2023-03-23 10:40 ` [RFC PATCH v1 4/4] dts: format docstrings to google format Juraj Linkeš
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-03-23 10:40 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	wathsala.vithanage, jspewock
  Cc: dev, Juraj Linkeš

The tool used to generate developer docs is Sphinx, which is already
used in DPDK. The configuration is kept the same to preserve the style.

Sphinx generates the documentation from Python docstrings. The docstring
format most suitable for DTS seems to be the Google format [0], which
requires the sphinx.ext.napoleon extension.

There are two requirements for building DTS docs:
* The same Python version as DTS or higher, because Sphinx imports the
  code.
* Also the same Python packages as DTS, for the same reason.

[0] https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings
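
For illustration, a docstring in the Google format that napoleon
understands looks like this (a hypothetical helper, not part of this
patch):

    def cores_on_socket(cores: dict[int, int], socket: int) -> list[int]:
        """Return the logical cores located on the given socket.

        Args:
            cores: A mapping of logical core ids to their socket ids.
            socket: The socket id to filter by.

        Returns:
            The ids of the logical cores on the given socket.
        """
        return [core for core, s in cores.items() if s == socket]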

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 doc/api/meson.build      |  1 +
 doc/guides/conf.py       | 22 ++++++++++++++----
 doc/guides/meson.build   |  1 +
 doc/guides/tools/dts.rst | 29 +++++++++++++++++++++++
 doc/meson.build          |  5 ----
 dts/doc-index.rst        | 20 ++++++++++++++++
 dts/meson.build          | 50 ++++++++++++++++++++++++++++++++++++++++
 meson.build              |  6 +++++
 meson_options.txt        |  2 ++
 9 files changed, 126 insertions(+), 10 deletions(-)
 create mode 100644 dts/doc-index.rst
 create mode 100644 dts/meson.build

diff --git a/doc/api/meson.build b/doc/api/meson.build
index 2876a78a7e..ee70f09ef7 100644
--- a/doc/api/meson.build
+++ b/doc/api/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Luca Boccassi <bluca@debian.org>
 
+api_build_dir = meson.current_build_dir()
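+# (used by dts/meson.build to place the DTS API docs under the API build dir)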
 doxygen = find_program('doxygen', required: get_option('enable_docs'))
 
 if not doxygen.found()
diff --git a/doc/guides/conf.py b/doc/guides/conf.py
index a55ce38800..04c842b67a 100644
--- a/doc/guides/conf.py
+++ b/doc/guides/conf.py
@@ -7,10 +7,9 @@
 from sphinx import __version__ as sphinx_version
 from os import listdir
 from os import environ
-from os.path import basename
-from os.path import dirname
+from os.path import basename, dirname
 from os.path import join as path_join
-from sys import argv, stderr
+from sys import argv, stderr, path
 
 import configparser
 
@@ -24,6 +23,19 @@
           file=stderr)
     pass
 
+# napoleon parses the Google-style docstrings used in DTS into reST
+extensions = ['sphinx.ext.napoleon']
+
+# Python docstring options
+autodoc_member_order = 'bysource'
+autodoc_typehints = 'both'
+autodoc_typehints_format = 'short'
+napoleon_numpy_docstring = False
+napoleon_attr_annotations = True
+napoleon_use_ivar = True
+napoleon_use_rtype = False
+add_module_names = False
+toc_object_entries_show_parents = 'hide'
+
 stop_on_error = ('-W' in argv)
 
 project = 'Data Plane Development Kit'
@@ -35,8 +47,8 @@
 html_show_copyright = False
 highlight_language = 'none'
 
-release = environ.setdefault('DPDK_VERSION', "None")
-version = release
+path.append(environ.setdefault('DTS_ROOT', '.'))
+version = environ.setdefault('DPDK_VERSION', "None")
 
 master_doc = 'index'
 
diff --git a/doc/guides/meson.build b/doc/guides/meson.build
index 51f81da2e3..fed361060f 100644
--- a/doc/guides/meson.build
+++ b/doc/guides/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Intel Corporation
 
+guides_source_dir = meson.current_source_dir()
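+# (used by dts/meson.build to reuse the guides Sphinx configuration via -c)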
 sphinx = find_program('sphinx-build', required: get_option('enable_docs'))
 
 if not sphinx.found()
diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
index ebd6dceb6a..332e2187a6 100644
--- a/doc/guides/tools/dts.rst
+++ b/doc/guides/tools/dts.rst
@@ -282,3 +282,32 @@ There are three tools used in DTS to help with code checking, style and formatti
 These three tools are all used in ``devtools/dts-check-format.sh``,
 the DTS code check and format script.
 Refer to the script for usage: ``devtools/dts-check-format.sh -h``.
+
+
+Building DTS API docs
+---------------------
+
+To build DTS API docs, install the dependencies with Poetry, then enter its shell:
+
+.. code-block:: console
+
+   poetry install --with docs
+   poetry shell
+
+
+Build commands
+~~~~~~~~~~~~~~
+
+The documentation is built using the standard DPDK build system.
+
+After entering Poetry's shell, build the documentation with:
+
+.. code-block:: console
+
+   ninja -C build doc
+
+The output is generated in ``build/doc/api/dts/html``.
+
+.. Note::
+
+   Make sure to fix any Sphinx warnings when adding or updating docstrings.
diff --git a/doc/meson.build b/doc/meson.build
index 6f74706aa2..5e08bb7b80 100644
--- a/doc/meson.build
+++ b/doc/meson.build
@@ -1,11 +1,6 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Luca Boccassi <bluca@debian.org>
 
-doc_targets = []
-doc_target_names = []
-subdir('api')
-subdir('guides')
-
 if doc_targets.length() == 0
     message = 'No docs targets found'
 else
diff --git a/dts/doc-index.rst b/dts/doc-index.rst
new file mode 100644
index 0000000000..10151c6851
--- /dev/null
+++ b/dts/doc-index.rst
@@ -0,0 +1,20 @@
+.. DPDK Test Suite documentation master file, created by
+   sphinx-quickstart on Tue Mar 14 12:23:52 2023.
+   You can adapt this file completely to your liking, but it should at least
+   contain the root `toctree` directive.
+
+Welcome to DPDK Test Suite's documentation!
+===========================================
+
+.. toctree::
+   :maxdepth: 4
+   :caption: Contents:
+
+   modules
+
+Indices and tables
+==================
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`
diff --git a/dts/meson.build b/dts/meson.build
new file mode 100644
index 0000000000..6ea7887f4b
--- /dev/null
+++ b/dts/meson.build
@@ -0,0 +1,50 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+sphinx = find_program('sphinx-build', required: get_option('enable_dts_docs'))
+sphinx_apidoc = find_program('sphinx-apidoc', required: get_option('enable_dts_docs'))
+
+if not sphinx.found() or not sphinx_apidoc.found()
+    subdir_done()
+endif
+
+dts_api_framework_dir = join_paths(meson.current_source_dir(), 'framework')
+dts_api_build_dir = join_paths(api_build_dir, 'dts')
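+# generate reST sources from the framework's Python docstrings with sphinx-apidoc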
+dts_api_src = custom_target('dts_api_src',
+        output: 'modules.rst',
+        command: ['SPHINX_APIDOC_OPTIONS=members,show-inheritance',
+            sphinx_apidoc, '--append-syspath', '--force',
+            '--module-first', '--separate',
+            '--doc-project', 'DTS', '-V', meson.project_version(),
+            '-o', dts_api_build_dir,
+            dts_api_framework_dir],
+        build_by_default: get_option('enable_docs'))
+doc_targets += dts_api_src
+doc_target_names += 'DTS_API_sources'
+
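+# Sphinx expects an index.rst next to the generated sources; copy the DTS one in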
+cp = find_program('cp', required: get_option('enable_docs'))
+cp_index = custom_target('cp_index',
+        input: 'doc-index.rst',
+        output: 'index.rst',
+        depends: dts_api_src,
+        command: [cp, '@INPUT@', join_paths(dts_api_build_dir, 'index.rst')],
+        build_by_default: get_option('enable_docs'))
+doc_targets += cp_index
+doc_target_names += 'DTS_API_cp_index'
+
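+# -a rewrites all output files; -c points sphinx-build at the guides configuration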
+extra_sphinx_args = ['-a', '-c', guides_source_dir]
+if get_option('werror')
+    extra_sphinx_args += '-W'
+endif
+
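+# DTS_ROOT is read by conf.py to put the DTS sources on sys.path for autodoc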
+htmldir = join_paths(get_option('datadir'), 'doc', 'dpdk')
+dts_api_html = custom_target('dts_api_html',
+        output: 'html',
+        depends: cp_index,
+        command: ['DTS_ROOT=@0@'.format(meson.current_source_dir()),
+            sphinx_wrapper, sphinx, meson.project_version(),
+            dts_api_build_dir, dts_api_build_dir, extra_sphinx_args],
+        build_by_default: get_option('enable_docs'),
+        install: get_option('enable_docs'),
+        install_dir: htmldir)
+doc_targets += dts_api_html
+doc_target_names += 'DTS_API_html'
diff --git a/meson.build b/meson.build
index f91d652bc5..48a4e12402 100644
--- a/meson.build
+++ b/meson.build
@@ -82,6 +82,12 @@ subdir('drivers')
 subdir('usertools')
 subdir('app')
 
+# define doc targets
+doc_targets = []
+doc_target_names = []
+subdir('doc/api')
+subdir('doc/guides')
+subdir('dts')
 # build docs
 subdir('doc')
 
diff --git a/meson_options.txt b/meson_options.txt
index 82c8297065..415b49fc78 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -16,6 +16,8 @@ option('drivers_install_subdir', type: 'string', value: 'dpdk/pmds-<VERSION>', d
        'Subdirectory of libdir where to install PMDs. Defaults to using a versioned subdirectory.')
 option('enable_docs', type: 'boolean', value: false, description:
        'build documentation')
+option('enable_dts_docs', type: 'boolean', value: false, description:
+       'Build DTS developer documentation.')
 option('enable_apps', type: 'string', value: '', description:
        'Comma-separated list of apps to build. If unspecified, build all apps.')
 option('enable_drivers', type: 'string', value: '', description:
-- 
2.30.2


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [RFC PATCH v1 4/4] dts: format docstrings to google format
  2023-03-23 10:40 [RFC PATCH v1 0/4] dts: add dts api docs Juraj Linkeš
                   ` (2 preceding siblings ...)
  2023-03-23 10:40 ` [RFC PATCH v1 3/4] dts: add doc generation Juraj Linkeš
@ 2023-03-23 10:40 ` Juraj Linkeš
  2023-04-28 19:33   ` Jeremy Spewock
  2023-04-03  9:17 ` [RFC PATCH v1 0/4] dts: add dts api docs Juraj Linkeš
  2023-05-04 12:37 ` [RFC PATCH v2 " Juraj Linkeš
  5 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-03-23 10:40 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	wathsala.vithanage, jspewock
  Cc: dev, Juraj Linkeš

WIP: only one module is reformatted to serve as a demonstration.

The Google format is documented here [0].

[0]: https://google.github.io/styleguide/pyguide.html

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/node.py | 152 +++++++++++++++++++---------
 1 file changed, 103 insertions(+), 49 deletions(-)

diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index 90467981c3..ad8ef442af 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -3,8 +3,13 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
-"""
-A node is a generic host that DTS connects to and manages.
+"""Common functionality for node management.
+
+There's a base class, Node, that's supposed to be extended by other classes
+with functionality specific to that node type.
+The only part that can be used standalone is the Node.skip_setup static method,
+which is a decorator used to skip method execution
+if skip_setup is passed by the user on the cmdline or in an env variable.
 """
 
 from typing import Any, Callable
@@ -26,10 +31,25 @@
 
 
 class Node(object):
-    """
-    Basic class for node management. This class implements methods that
-    manage a node, such as information gathering (of CPU/PCI/NIC) and
-    environment setup.
+    """The base class for node management.
+
+    It shouldn't be instantiated, but rather extended.
+    It implements common methods to manage any node:
+
+       * connection to the node
+       * gathering of CPU information
+       * hugepages setup
+
+    Args:
+        node_config: The config from the input configuration file.
+
+    Attributes:
+        main_session: The primary OS-agnostic remote session used
+            to communicate with the node.
+        config: The configuration used to create the node.
+        name: The name of the node.
+        lcores: The list of logical cores that DTS can use on the node.
+            It's derived from logical cores present on the node and user configuration.
     """
 
     main_session: OSSession
@@ -56,65 +76,89 @@ def __init__(self, node_config: NodeConfiguration):
         self._logger.info(f"Created node: {self.name}")
 
     def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
-        """
-        Perform the execution setup that will be done for each execution
-        this node is part of.
+        """Execution setup steps.
+
+        Configure hugepages and call self._set_up_execution where
+        the rest of the configuration steps (if any) are implemented.
+
+        Args:
+            execution_config: The execution configuration according to which
+                the setup steps will be taken.
         """
         self._setup_hugepages()
         self._set_up_execution(execution_config)
 
     def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional execution setup steps for derived classes.
+
+        Derived classes should override this
+        if they want to add additional execution setup steps.
         """
 
     def tear_down_execution(self) -> None:
-        """
-        Perform the execution teardown that will be done after each execution
-        this node is part of concludes.
+        """Execution teardown steps.
+
+        There are currently no execution teardown steps
+        common to all DTS node types.
         """
         self._tear_down_execution()
 
     def _tear_down_execution(self) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional execution teardown steps for derived classes.
+
+        Derived classes should override this
+        if they want to add additional execution teardown steps.
         """
 
     def set_up_build_target(
         self, build_target_config: BuildTargetConfiguration
     ) -> None:
-        """
-        Perform the build target setup that will be done for each build target
-        tested on this node.
+        """Build target setup steps.
+
+        There are currently no build target setup steps
+        common to all DTS node types.
+
+        Args:
+            build_target_config: The build target configuration according to which
+                the setup steps will be taken.
         """
         self._set_up_build_target(build_target_config)
 
     def _set_up_build_target(
         self, build_target_config: BuildTargetConfiguration
     ) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional build target setup steps for derived classes.
+
+        Derived classes should override this
+        if they want to add additional build target setup steps.
         """
 
     def tear_down_build_target(self) -> None:
-        """
-        Perform the build target teardown that will be done after each build target
-        tested on this node.
+        """Build target teardown steps.
+
+        There are currently no build target teardown steps
+        common to all DTS node types.
         """
         self._tear_down_build_target()
 
     def _tear_down_build_target(self) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional build target teardown steps for derived classes.
+
+        Derived classes should override this
+        if they want to add additional build target teardown steps.
         """
 
     def create_session(self, name: str) -> OSSession:
-        """
-        Create and return a new OSSession tailored to the remote OS.
+        """Create and return a new OS-agnostic remote session.
+
+        The returned session won't be used by the object creating it.
+        It will be cleaned up automatically.
+
+        Args:
+            name: The name of the session.
+
+        Returns:
+            A new OS-agnostic remote session.
         """
         session_name = f"{self.name} {name}"
         connection = create_session(
@@ -130,14 +174,24 @@ def filter_lcores(
         filter_specifier: LogicalCoreCount | LogicalCoreList,
         ascending: bool = True,
     ) -> list[LogicalCore]:
-        """
-        Filter the LogicalCores found on the Node according to
-        a LogicalCoreCount or a LogicalCoreList.
+        """Filter the node's logical cores that DTS can use.
 
-        If ascending is True, use cores with the lowest numerical id first
-        and continue in ascending order. If False, start with the highest
-        id and continue in descending order. This ordering affects which
-        sockets to consider first as well.
+        Logical cores that DTS can use are those present on the node,
+        filtered according to the user configuration.
+        The filter_specifier then selects cores from that filtered list.
+
+        Args:
+            filter_specifier: Two different filters can be used: one specifies
+                the number of logical cores per core, cores per socket and
+                the number of sockets; the other specifies a list
+                of logical cores.
+            ascending: If True, use cores with the lowest numerical id first
+                and continue in ascending order. If False, start with the highest
+                id and continue in descending order. This ordering affects which
+                sockets to consider first as well.
+
+        Returns:
+            A list of logical cores.
         """
         self._logger.debug(f"Filtering {filter_specifier} from {self.lcores}.")
         return lcore_filter(
@@ -147,17 +201,14 @@ def filter_lcores(
         ).filter()
 
     def _get_remote_cpus(self) -> None:
-        """
-        Scan CPUs in the remote OS and store a list of LogicalCores.
-        """
+        """Scan CPUs in the remote OS and store a list of LogicalCores."""
         self._logger.info("Getting CPU information.")
         self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
 
     def _setup_hugepages(self):
-        """
-        Setup hugepages on the Node. Different architectures can supply different
-        amounts of memory for hugepages and numa-based hugepage allocation may need
-        to be considered.
+        """Setup hugepages on the Node.
+
+        Configure the hugepages only if they're specified in user configuration.
         """
         if self.config.hugepages:
             self.main_session.setup_hugepages(
@@ -165,9 +216,7 @@ def _setup_hugepages(self):
             )
 
     def close(self) -> None:
-        """
-        Close all connections and free other resources.
-        """
+        """Close all connections and free other resources."""
         if self.main_session:
             self.main_session.close()
         for session in self._other_sessions:
@@ -176,6 +225,11 @@ def close(self) -> None:
 
     @staticmethod
     def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
+        """A decorator that skips the decorated function.
+
+        When used, the decorator executes an empty lambda function
+        instead of the decorated function.
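+
+        Example:
+            Hypothetical usage on a method of a derived class
+            (illustrative only)::
+
+                @Node.skip_setup
+                def _set_up_execution(self, execution_config) -> None:
+                    ...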
+        """
         if SETTINGS.skip_setup:
             return lambda *args: None
         else:
-- 
2.30.2


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 0/4] dts: add dts api docs
  2023-03-23 10:40 [RFC PATCH v1 0/4] dts: add dts api docs Juraj Linkeš
                   ` (3 preceding siblings ...)
  2023-03-23 10:40 ` [RFC PATCH v1 4/4] dts: format docstrings to google format Juraj Linkeš
@ 2023-04-03  9:17 ` Juraj Linkeš
  2023-04-03  9:42   ` Bruce Richardson
  2023-05-04 12:37 ` [RFC PATCH v2 " Juraj Linkeš
  5 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-04-03  9:17 UTC (permalink / raw)
  To: bruce.richardson; +Cc: dev

Hi Bruce, Thomas,

The meson integration is kinda all over the place. I wanted to use the
existing conf.py Sphinx config file, but I also wanted to keep the docs
separated (because of extra DTS api docs dependencies), so the various
pieces are in different places (the config file in one place, meson code in
dts directory and generated Sphinx docs are in a new directory in the api
build dir, separate from the rest of the Sphinx html).

The big thing here is that I didn't figure out how to separate the dts api
build from the rest of the docs. I don't know how the -Denable_docs option
is supposed to work. I wanted to use -Denable_dts_docs in the same fashion
to decouple the builds, but it doesn't seem to work. Reading the code I
think the original option doesn't actually do anything - does it work? How
is it supposed to work?
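
For reference, the sequence I'd expect to work is roughly this (the
build dir name is just an example):

meson setup build -Denable_dts_docs=true
ninja -C build doc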

Thanks,
Juraj

On Thu, Mar 23, 2023 at 11:40 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
wrote:

> Augment the meson build system with dts api generation. The api docs are
> generated from Python docstrings in DTS using Sphinx. The format of
> choice is the Google format [0].
>
> The guide html sphinx configuration is used to preserve the same style.
>
> The build requires the same Python version and dependencies as DTS,
> because Sphinx imports the Python modules. Dependencies are installed
> using Poetry from the dts directory:
>
> poetry install --with docs
>
> After installing, enter the Poetry shell:
>
> poetry shell
>
> And then run the build:
> ninja -C <meson_build_dir> doc
>
> There's only one properly documented module that serves as a
> demonstration of the style - framework.testbed_model.node.
>
> I didn't figure out how to separate dts build from the rest of the docs,
> which I think is required because of the different dependencies.
> I thought the enable_docs option would do this, so I added
> enable_dts_docs, but it doesn't seem to be working. Requesting comment
> on this.
>
> [0]
> https://google.github.io/styleguide/pyguide.html#s3.8.4-comments-in-classes
>
> Juraj Linkeš (4):
>   dts: code adjustments for sphinx
>   dts: add doc generation dependencies
>   dts: add doc generation
>   dts: format docstrings to google format
>
>  doc/api/meson.build                           |   1 +
>  doc/guides/conf.py                            |  22 +-
>  doc/guides/meson.build                        |   1 +
>  doc/guides/tools/dts.rst                      |  29 +
>  doc/meson.build                               |   5 -
>  dts/doc-index.rst                             |  20 +
>  dts/framework/config/__init__.py              |  11 +
>  .../{testbed_model/hw => config}/cpu.py       |  13 +
>  dts/framework/dts.py                          |   8 +-
>  dts/framework/remote_session/__init__.py      |   3 +-
>  dts/framework/remote_session/linux_session.py |   2 +-
>  dts/framework/remote_session/os_session.py    |  12 +-
>  .../remote_session/remote/__init__.py         |  16 -
>  .../{remote => }/remote_session.py            |   0
>  .../{remote => }/ssh_session.py               |   0
>  dts/framework/settings.py                     |  55 +-
>  dts/framework/testbed_model/__init__.py       |  10 +-
>  dts/framework/testbed_model/hw/__init__.py    |  27 -
>  dts/framework/testbed_model/node.py           | 164 ++--
>  dts/framework/testbed_model/sut_node.py       |   9 +-
>  .../testbed_model/{hw => }/virtual_device.py  |   0
>  dts/main.py                                   |   3 +-
>  dts/meson.build                               |  50 ++
>  dts/poetry.lock                               | 770 ++++++++++++++++--
>  dts/pyproject.toml                            |   7 +
>  dts/tests/TestSuite_hello_world.py            |   6 +-
>  meson.build                                   |   6 +
>  meson_options.txt                             |   2 +
>  28 files changed, 1027 insertions(+), 225 deletions(-)
>  create mode 100644 dts/doc-index.rst
>  rename dts/framework/{testbed_model/hw => config}/cpu.py (95%)
>  delete mode 100644 dts/framework/remote_session/remote/__init__.py
>  rename dts/framework/remote_session/{remote => }/remote_session.py (100%)
>  rename dts/framework/remote_session/{remote => }/ssh_session.py (100%)
>  delete mode 100644 dts/framework/testbed_model/hw/__init__.py
>  rename dts/framework/testbed_model/{hw => }/virtual_device.py (100%)
>  create mode 100644 dts/meson.build
>
> --
> 2.30.2
>
>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 0/4] dts: add dts api docs
  2023-04-03  9:17 ` [RFC PATCH v1 0/4] dts: add dts api docs Juraj Linkeš
@ 2023-04-03  9:42   ` Bruce Richardson
  2023-04-25  8:20     ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Bruce Richardson @ 2023-04-03  9:42 UTC (permalink / raw)
  To: Juraj Linkeš; +Cc: dev

On Mon, Apr 03, 2023 at 11:17:06AM +0200, Juraj Linkeš wrote:
>    Hi Bruce, Thomas,
>    The meson integration is kinda all over the place. I wanted to use the
>    existing conf.py Sphinx config file, but I also wanted to keep the docs
>    separated (because of extra DTS api docs dependencies), so the various
>    pieces are in different places (the config file in one place, meson
>    code in dts directory and generated Sphinx docs are in a new directory
>    in the api build dir, separate from the rest of the Sphinx html).
>    The big thing here is that I didn't figure out how to separate the dts
>    api build from the rest of the docs. I don't know how the -Denable_docs
>    option is supposed to work. I wanted to use -Denable_dts_docs in the
>    same fashion to decouple the builds, but it doesn't seem to work.
>    Reading the code I think the original option doesn't actually do
>    anything - does it work? How is it supposed to work?
>    Thanks,
>    Juraj

The enable_docs option works by selectively enabling the doc build tasks
using the "build_by_default" parameter on them. 
See http://git.dpdk.org/dpdk/tree/doc/guides/meson.build#n23 for an
example. The custom_target for sphinx is not a dependency of any other
task, so whether it gets run or not depends entirely on whether the
"build_by_default" and/or "install" options are set.

As usual, there may be other stuff that needs cleaning up on this, but
that's how it works for now, anyway. [And it does actually work, last I
tested it :-)]

/Bruce

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 0/4] dts: add dts api docs
  2023-04-03  9:42   ` Bruce Richardson
@ 2023-04-25  8:20     ` Juraj Linkeš
  2023-04-25  8:44       ` Bruce Richardson
  0 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-04-25  8:20 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev

On Mon, Apr 3, 2023 at 11:42 AM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Mon, Apr 03, 2023 at 11:17:06AM +0200, Juraj Linkeš wrote:
> >    Hi Bruce, Thomas,
> >    The meson integration is kinda all over the place. I wanted to use the
> >    existing conf.py Sphinx config file, but I also wanted to keep the docs
> >    separated (because of extra DTS api docs dependencies), so the various
> >    pieces are in different places (the config file in one place, meson
> >    code in dts directory and generated Sphinx docs are in a new directory
> >    in the api build dir, separate from the rest of the Sphinx html).
> >    The big thing here is that I didn't figure out how to separate the dts
> >    api build from the rest of the docs. I don't know how the -Denable_docs
> >    option is supposed to work. I wanted to use -Denable_dts_docs in the
> >    same fashion to decouple the builds, but it doesn't seem to work.
> >    Reading the code I think the original option doesn't actually do
> >    anything - does it work? How is it supposed to work?
> >    Thanks,
> >    Juraj
>
> The enable_docs option works by selectively enabling the doc build tasks
> using the "build_by_default" parameter on them.
> See http://git.dpdk.org/dpdk/tree/doc/guides/meson.build#n23 for an
> example. The custom_target for sphinx is not a dependency of any other
> task, so whether it gets run or not depends entirely on whether the
> "build_by_default" and/or "install" options are set.
>
> As usual, there may be other stuff that needs cleaning up on this, but
> that's how it works for now, anyway. [And it does actually work, last I
> tested it :-)]

I looked into this and as is so frequently the case, we're both right. :-)

When running according to docs, that is with:
1. meson setup doc_build
2. ninja -C doc_build doc

it doesn't matter what enable_docs is set to, it always builds the docs.

But in the full build it does control whether docs are built, i.e.:

1. meson setup doc_build
2. ninja -C doc_build
doesn't build the docs, whereas:

1. meson setup doc_build -Denable_docs=true
2. ninja -C doc_build
builds the docs.

Now the problem in this version is that when doing just the doc build
(ninja -C doc_build doc) both DPDK and DTS docs are built, and I'd like
to separate those (because the DTS doc build has additional dependencies).
I'm thinking the following would be a good solution within the current
paradigm:
1. The -Denable_docs=true and -Denable_dts_docs=true options to
separate doc builds for the full build.
2. Separate dts doc dir for the doc build ("ninja -C doc_build doc"
for DPDK docs and "ninja -C doc_build dts" (or maybe some other dir)
for DTS docs).
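
In meson terms, I imagine something like this (just a sketch of the
idea, untested; the sphinx variable and the target names are
placeholders):

    # meson_options.txt
    option('enable_dts_docs', type: 'boolean', value: false,
            description: 'Build DTS API documentation.')

    # the DTS doc meson.build
    dts_api_html = custom_target('dts_api_html',
            output: 'html',
            command: [sphinx, '-b', 'html', meson.current_source_dir(), '@OUTPUT@'],
            build_by_default: get_option('enable_dts_docs'))
    alias_target('dts', dts_api_html)

The alias_target is what would provide the separate
"ninja -C doc_build dts" entry point.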

What do you think?
Juraj

>
> /Bruce

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 0/4] dts: add dts api docs
  2023-04-25  8:20     ` Juraj Linkeš
@ 2023-04-25  8:44       ` Bruce Richardson
  2023-04-25  8:57         ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Bruce Richardson @ 2023-04-25  8:44 UTC (permalink / raw)
  To: Juraj Linkeš; +Cc: dev

On Tue, Apr 25, 2023 at 10:20:36AM +0200, Juraj Linkeš wrote:
> On Mon, Apr 3, 2023 at 11:42 AM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > On Mon, Apr 03, 2023 at 11:17:06AM +0200, Juraj Linkeš wrote:
> > >    Hi Bruce, Thomas,
> > >    The meson integration is kinda all over the place. I wanted to use the
> > >    existing conf.py Sphinx config file, but I also wanted to keep the docs
> > >    separated (because of extra DTS api docs dependencies), so the various
> > >    pieces are in different places (the config file in one place, meson
> > >    code in dts directory and generated Sphinx docs are in a new directory
> > >    in the api build dir, separate from the rest of the Sphinx html).
> > >    The big thing here is that I didn't figure out how to separate the dts
> > >    api build from the rest of the docs. I don't know how the -Denable_docs
> > >    option is supposed to work. I wanted to use -Denable_dts_docs in the
> > >    same fashion to decouple the builds, but it doesn't seem to work.
> > >    Reading the code I think the original option doesn't actually do
> > >    anything - does it work? How is it supposed to work?
> > >    Thanks,
> > >    Juraj
> >
> > The enable_docs option works by selectively enabling the doc build tasks
> > using the "build_by_default" parameter on them.
> > See http://git.dpdk.org/dpdk/tree/doc/guides/meson.build#n23 for an
> > example. The custom_target for sphinx is not a dependency of any other
> > task, so whether it gets run or not depends entirely on whether the
> > "build_by_default" and/or "install" options are set.
> >
> > As usual, there may be other stuff that needs cleaning up on this, but
> > that's how it works for now, anyway. [And it does actually work, last I
> > tested it :-)]
> 
> I looked into this and as is so frequently the case, we're both right. :-)
> 
> When running according to docs, that is with:
> 1. meson setup doc_build
> 2. ninja -C doc_build doc
> 
> it doesn't matter what enable_docs is set to, it always builds the docs.
> 

Yes, I'd forgotten that. That was deliberately done so one could always
request a doc build directly, without having to worry about DPDK config or
building the rest of DPDK.

> But in the full build it does control whether docs are built, i.e.:
> 
> 1. meson setup doc_build
> 2. ninja -C doc_build
> doesn't build the docs, whereas:
> 
> 1. meson setup doc_build -Denable_docs=true
> 2. ninja -C doc_build
> builds the docs.
> 
> Now the problem in this version is that when doing just the doc build
> (ninja -C doc_build doc) both DPDK and DTS docs are built, and I'd like
> to separate those (because the DTS doc build has additional dependencies).
> I'm thinking the following would be a good solution within the current
> paradigm:
> 1. The -Denable_docs=true and -Denable_dts_docs=true options to
> separate doc builds for the full build.
> 2. Separate dts doc dir for the doc build ("ninja -C doc_build doc"
> for DPDK docs and "ninja -C doc_build dts" (or maybe some other dir)
> for DTS docs).

How important is it to separate out the dts docs from the regular docs?
What are the additional dependencies, and how hard are they to get? If
possible I'd rather not have an additional build config option added for
this.

If we are separating them out, I think the dts doc target should be
"dts_doc" rather than "dts" for clarity.

/Bruce


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 0/4] dts: add dts api docs
  2023-04-25  8:44       ` Bruce Richardson
@ 2023-04-25  8:57         ` Juraj Linkeš
  2023-04-25  9:43           ` Bruce Richardson
  0 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-04-25  8:57 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev

On Tue, Apr 25, 2023 at 10:44 AM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Tue, Apr 25, 2023 at 10:20:36AM +0200, Juraj Linkeš wrote:
> > On Mon, Apr 3, 2023 at 11:42 AM Bruce Richardson
> > <bruce.richardson@intel.com> wrote:
> > >
> > > On Mon, Apr 03, 2023 at 11:17:06AM +0200, Juraj Linkeš wrote:
> > > >    Hi Bruce, Thomas,
> > > >    The meson integration is kinda all over the place. I wanted to use the
> > > >    existing conf.py Sphinx config file, but I also wanted to keep the docs
> > > >    separated (because of extra DTS api docs dependencies), so the various
> > > >    pieces are in different places (the config file in one place, meson
> > > >    code in dts directory and generated Sphinx docs are in a new directory
> > > >    in the api build dir, separate from the rest of the Sphinx html).
> > > >    The big thing here is that I didn't figure out how to separate the dts
> > > >    api build from the rest of the docs. I don't know how the -Denable_docs
> > > >    option is supposed to work. I wanted to use -Denable_dts_docs in the
> > > >    same fashion to decouple the builds, but it doesn't seem to work.
> > > >    Reading the code I think the original option doesn't actually do
> > > >    anything - does it work? How is it supposed to work?
> > > >    Thanks,
> > > >    Juraj
> > >
> > > The enable_docs option works by selectively enabling the doc build tasks
> > > using the "build_by_default" parameter on them.
> > > See http://git.dpdk.org/dpdk/tree/doc/guides/meson.build#n23 for an
> > > example. The custom_target for sphinx is not a dependency of any other
> > > task, so whether it gets run or not depends entirely on whether the
> > > "build_by_default" and/or "install" options are set.
> > >
> > > As usual, there may be other stuff that needs cleaning up on this, but
> > > that's how it works for now, anyway. [And it does actually work, last I
> > > tested it :-)]
> >
> > I looked into this and as is so frequently the case, we're both right. :-)
> >
> > When running according to docs, that is with:
> > 1. meson setup doc_build
> > 2. ninja -C doc_build doc
> >
> > it doesn't matter what enable_docs is set to, it always builds the docs.
> >
>
> Yes, I'd forgotten that. That was deliberately done so one could always
> request a doc build directly, without having to worry about DPDK config or
> building the rest of DPDK.
>
> > But in the full build it does control whether docs are built, i.e.:
> >
> > 1. meson setup doc_build
> > 2. ninja -C doc_build
> > doesn't build the docs, whereas:
> >
> > 1. meson setup doc_build -Denable_docs=true
> > 2. ninja -C doc_build
> > builds the docs.
> >
> > Now the problem in this version is that when doing just the doc build
> > (ninja -C doc_build doc) both DPDK and DTS docs are built, and I'd like
> > to separate those (because the DTS doc build has additional dependencies).
> > I'm thinking the following would be a good solution within the current
> > paradigm:
> > 1. The -Denable_docs=true and -Denable_dts_docs=true options to
> > separate doc builds for the full build.
> > 2. Separate dts doc dir for the doc build ("ninja -C doc_build doc"
> > for DPDK docs and "ninja -C doc_build dts" (or maybe some other dir)
> > for DTS docs).
>
> How important is it to separate out the dts docs from the regular docs?

It is mostly a matter of dependencies.

> What are the additional dependencies, and how hard are they to get? If
> possible I'd rather not have an additional build config option added for
> this.

The same dependencies as for running DTS, which are the proper Python
version (3.10 and newer) with DTS dependencies obtained with Poetry
(which is a matter of installing Poetry and running it). As is
standard with Python projects, this is all set up in a virtual
environment, which needs to be activated before running the doc build.
It's documented in more detail in the tools docs:
https://doc.dpdk.org/guides/tools/dts.html#setting-up-dts-environment

This may be too much of a hassle for DPDK devs who only want to build
the non-DTS docs, but I don't know whether DPDK devs actually build
docs at all.

>
> If we are separating them out, I think the dts doc target should be
> "dts_doc" rather than "dts" for clarity.

Agreed, but "dts_doc" would be a new top-level dir. I think we could
do dts/doc (a dir inside dts).
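
That is, roughly this layout (the doc dir holding the Sphinx
meson.build and index):

    dts/
        doc/
            meson.build
            doc-index.rst
        framework/
        ...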

Juraj

>
> /Bruce
>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 0/4] dts: add dts api docs
  2023-04-25  8:57         ` Juraj Linkeš
@ 2023-04-25  9:43           ` Bruce Richardson
  2023-05-03 11:33             ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Bruce Richardson @ 2023-04-25  9:43 UTC (permalink / raw)
  To: Juraj Linkeš; +Cc: dev

On Tue, Apr 25, 2023 at 10:57:12AM +0200, Juraj Linkeš wrote:
> On Tue, Apr 25, 2023 at 10:44 AM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > On Tue, Apr 25, 2023 at 10:20:36AM +0200, Juraj Linkeš wrote:
> > > On Mon, Apr 3, 2023 at 11:42 AM Bruce Richardson
> > > <bruce.richardson@intel.com> wrote:
> > > >
> > > > On Mon, Apr 03, 2023 at 11:17:06AM +0200, Juraj Linkeš wrote:
> > > > >    Hi Bruce, Thomas,
> > > > >    The meson integration is kinda all over the place. I wanted to use the
> > > > >    existing conf.py Sphinx config file, but I also wanted to keep the docs
> > > > >    separated (because of extra DTS api docs dependencies), so the various
> > > > >    pieces are in different places (the config file in one place, meson
> > > > >    code in dts directory and generated Sphinx docs are in a new directory
> > > > >    in the api build dir, separate from the rest of the Sphinx html).
> > > > >    The big thing here is that I didn't figure out how to separate the dts
> > > > >    api build from the rest of the docs. I don't know how the -Denable_docs
> > > > >    option is supposed to work. I wanted to use -Denable_dts_docs in the
> > > > >    same fashion to decouple the builds, but it doesn't seem to work.
> > > > >    Reading the code I think the original option doesn't actually do
> > > > >    anything - does it work? How is it supposed to work?
> > > > >    Thanks,
> > > > >    Juraj
> > > >
> > > > The enable_docs option works by selectively enabling the doc build tasks
> > > > using the "build_by_default" parameter on them.
> > > > See http://git.dpdk.org/dpdk/tree/doc/guides/meson.build#n23 for an
> > > > example. The custom_target for sphinx is not a dependency of any other
> > > > task, so whether it gets run or not depends entirely on whether the
> > > > "build_by_default" and/or "install" options are set.
> > > >
> > > > As usual, there may be other stuff that needs cleaning up on this, but
> > > > that's how it works for now, anyway. [And it does actually work, last I
> > > > tested it :-)]
> > >
> > > I looked into this and as is so frequently the case, we're both right. :-)
> > >
> > > When running according to docs, that is with:
> > > 1. meson setup doc_build
> > > 2. ninja -C doc_build doc
> > >
> > > it doesn't matter what enable_docs is set to, it always builds the docs.
> > >
> >
> > Yes, I'd forgotten that. That was deliberately done so one could always
> > request a doc build directly, without having to worry about DPDK config or
> > building the rest of DPDK.
> >
> > > But in the full build it does control whether docs are built, i.e.:
> > >
> > > 1. meson setup doc_build
> > > 2. ninja -C doc_build
> > > doesn't build the docs, whereas:
> > >
> > > 1. meson setup doc_build -Denable_docs=true
> > > 2. ninja -C doc_build
> > > builds the docs.
> > >
> > > Now the problem in this version is that when doing just the doc build
> > > (ninja -C doc_build doc) both DPDK and DTS docs are built, and I'd like
> > > to separate those (because the DTS doc build has additional dependencies).
> > > I'm thinking the following would be a good solution within the current
> > > paradigm:
> > > 1. The -Denable_docs=true and -Denable_dts_docs=true options to
> > > separate doc builds for the full build.
> > > 2. Separate dts doc dir for the doc build ("ninja -C doc_build doc"
> > > for DPDK docs and "ninja -C doc_build dts" (or maybe some other dir)
> > > for DTS docs).
> >
> > How important is it to separate out the dts docs from the regular docs?
> 
> It is mostly a matter of dependencies.
> 
> > What are the additional dependencies, and how hard are they to get? If
> > possible I'd rather not have an additional build config option added for
> > this.
> 
> The same dependencies as for running DTS, which are the proper Python
> version (3.10 and newer) with DTS dependencies obtained with Poetry
> (which is a matter of installing Poetry and running it). As is
> standard with Python projects, this is all set up in a virtual
> environment, which needs to be activated before running the doc build.
> It's documented in more detail in the tools docs:
> https://doc.dpdk.org/guides/tools/dts.html#setting-up-dts-environment
> 
> This may be too much of a hassle for DPDK devs who only want to build
> the non-DTS docs, but I don't know whether DPDK devs actually build
> docs at all.
> 

Can't really say for sure. I suspect most don't build them on a daily
basis, but would often need to build them before submitting patches with a
doc change included.

What format are the DTS docs in? I agree that as described above the
requirements are pretty different than those for the regular DPDK docs.

> >
> > If we are separating them out, I think the dts doc target should be
> > "dts_doc" rather than "dts" for clarity.
> 
> Agreed, but "dts_doc" would be a new top-level dir. I think we could
> do dts/doc (a dir inside dts).
> 
That path seems reasonable to me.

/Bruce

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 4/4] dts: format docstrings to google format
  2023-03-23 10:40 ` [RFC PATCH v1 4/4] dts: format docstrings to google format Juraj Linkeš
@ 2023-04-28 19:33   ` Jeremy Spewock
  0 siblings, 0 replies; 255+ messages in thread
From: Jeremy Spewock @ 2023-04-28 19:33 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	wathsala.vithanage, dev

Acked-by: Jeremy Spewock <jspewock@iol.unh.edu>

On Thu, Mar 23, 2023 at 6:40 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
wrote:

> WIP: only one module is reformatted to serve as a demonstration.
>
> The google format is documented here [0].
>
> [0]: https://google.github.io/styleguide/pyguide.html
>
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>  dts/framework/testbed_model/node.py | 152 +++++++++++++++++++---------
>  1 file changed, 103 insertions(+), 49 deletions(-)
>
> diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
> index 90467981c3..ad8ef442af 100644
> --- a/dts/framework/testbed_model/node.py
> +++ b/dts/framework/testbed_model/node.py
> @@ -3,8 +3,13 @@
>  # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
>  # Copyright(c) 2022-2023 University of New Hampshire
>
> -"""
> -A node is a generic host that DTS connects to and manages.
> +"""Common functionality for node management.
> +
> +There's a base class, Node, that's supposed to be extended by other classes
> +with functionality specific to that node type.
> +The only part that can be used standalone is the Node.skip_setup static method,
> +which is a decorator used to skip method execution
> +if skip_setup is passed by the user on the cmdline or in an env variable.
>  """
>
>  from typing import Any, Callable
> @@ -26,10 +31,25 @@
>
>
>  class Node(object):
> -    """
> -    Basic class for node management. This class implements methods that
> -    manage a node, such as information gathering (of CPU/PCI/NIC) and
> -    environment setup.
> +    """The base class for node management.
> +
> +    It shouldn't be instantiated, but rather extended.
> +    It implements common methods to manage any node:
> +
> +       * connection to the node
> +       * information gathering of CPU
> +       * hugepages setup
> +
> +    Arguments:
> +        node_config: The config from the input configuration file.
> +
> +    Attributes:
> +        main_session: The primary OS-agnostic remote session used
> +            to communicate with the node.
> +        config: The configuration used to create the node.
> +        name: The name of the node.
> +        lcores: The list of logical cores that DTS can use on the node.
> +            It's derived from logical cores present on the node and user configuration.
>      """
>
>      main_session: OSSession
> @@ -56,65 +76,89 @@ def __init__(self, node_config: NodeConfiguration):
>          self._logger.info(f"Created node: {self.name}")
>
>      def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
> -        """
> -        Perform the execution setup that will be done for each execution
> -        this node is part of.
> +        """Execution setup steps.
> +
> +        Configure hugepages and call self._set_up_execution where
> +        the rest of the configuration steps (if any) are implemented.
> +
> +        Args:
> +            execution_config: The execution configuration according to which
> +                the setup steps will be taken.
>          """
>          self._setup_hugepages()
>          self._set_up_execution(execution_config)
>
>      def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
> -        """
> -        This method exists to be optionally overwritten by derived classes and
> -        is not decorated so that the derived class doesn't have to use the decorator.
> +        """Optional additional execution setup steps for derived classes.
> +
> +        Derived classes should overwrite this
> +        if they want to add additional execution setup steps.
>          """
>
>      def tear_down_execution(self) -> None:
> -        """
> -        Perform the execution teardown that will be done after each execution
> -        this node is part of concludes.
> +        """Execution teardown steps.
> +
> +        There are currently no execution teardown steps
> +        common to all DTS node types.
>          """
>          self._tear_down_execution()
>
>      def _tear_down_execution(self) -> None:
> -        """
> -        This method exists to be optionally overwritten by derived classes and
> -        is not decorated so that the derived class doesn't have to use the decorator.
> +        """Optional additional execution teardown steps for derived classes.
> +
> +        Derived classes should overwrite this
> +        if they want to add additional execution teardown steps.
>          """
>
>      def set_up_build_target(
>          self, build_target_config: BuildTargetConfiguration
>      ) -> None:
> -        """
> -        Perform the build target setup that will be done for each build target
> -        tested on this node.
> +        """Build target setup steps.
> +
> +        There are currently no build target setup steps
> +        common to all DTS node types.
> +
> +        Args:
> +            build_target_config: The build target configuration according to which
> +                the setup steps will be taken.
>          """
>          self._set_up_build_target(build_target_config)
>
>      def _set_up_build_target(
>          self, build_target_config: BuildTargetConfiguration
>      ) -> None:
> -        """
> -        This method exists to be optionally overwritten by derived classes and
> -        is not decorated so that the derived class doesn't have to use the decorator.
> +        """Optional additional build target setup steps for derived classes.
> +
> +        Derived classes should optionally overwrite this
> +        if they want to add additional build target setup steps.
>          """
>
>      def tear_down_build_target(self) -> None:
> -        """
> -        Perform the build target teardown that will be done after each build target
> -        tested on this node.
> +        """Build target teardown steps.
> +
> +        There are currently no build target teardown steps
> +        common to all DTS node types.
>          """
>          self._tear_down_build_target()
>
>      def _tear_down_build_target(self) -> None:
> -        """
> -        This method exists to be optionally overwritten by derived classes and
> -        is not decorated so that the derived class doesn't have to use the decorator.
> +        """Optional additional build target teardown steps for derived classes.
> +
> +        Derived classes should overwrite this
> +        if they want to add additional build target teardown steps.
>          """
>
>      def create_session(self, name: str) -> OSSession:
> -        """
> -        Create and return a new OSSession tailored to the remote OS.
> +        """Create and return a new OS-agnostic remote session.
> +
> +        The returned session won't be used by the object creating it.
> +        Will be cleaned up automatically.
> +
> +        Args:
> +            name: The name of the session.
> +
> +        Returns:
> +            A new OS-agnostic remote session.
>          """
>          session_name = f"{self.name} {name}"
>          connection = create_session(
> @@ -130,14 +174,24 @@ def filter_lcores(
>          filter_specifier: LogicalCoreCount | LogicalCoreList,
>          ascending: bool = True,
>      ) -> list[LogicalCore]:
> -        """
> -        Filter the LogicalCores found on the Node according to
> -        a LogicalCoreCount or a LogicalCoreList.
> +        """Filter the node's logical cores that DTS can use.
>
> -        If ascending is True, use cores with the lowest numerical id first
> -        and continue in ascending order. If False, start with the highest
> -        id and continue in descending order. This ordering affects which
> -        sockets to consider first as well.
> +        Logical cores that DTS can use are ones that are present on the node,
> +        but filtered according to user config.
> +        The filter_specifier will filter cores from those logical cores.
> +
> +        Args:
> +            filter_specifier: Two different filters can be used, one that specifies
> +                the number of logical cores per core, cores per socket and
> +                the number of sockets,
> +                the other that specifies a logical core list.
> +            ascending: If True, use cores with the lowest numerical id first
> +                and continue in ascending order. If False, start with the highest
> +                id and continue in descending order. This ordering affects which
> +                sockets to consider first as well.
> +
> +        Returns:
> +            A list of logical cores.
>          """
>          self._logger.debug(f"Filtering {filter_specifier} from {self.lcores}.")
>          return lcore_filter(
> @@ -147,17 +201,14 @@ def filter_lcores(
>          ).filter()
>
>      def _get_remote_cpus(self) -> None:
> -        """
> -        Scan CPUs in the remote OS and store a list of LogicalCores.
> -        """
> +        """Scan CPUs in the remote OS and store a list of LogicalCores."""
>          self._logger.info("Getting CPU information.")
>          self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
>
>      def _setup_hugepages(self):
> -        """
> -        Setup hugepages on the Node. Different architectures can supply different
> -        amounts of memory for hugepages and numa-based hugepage allocation may need
> -        to be considered.
> +        """Setup hugepages on the Node.
> +
> +        Configure the hugepages only if they're specified in user configuration.
>          """
>          if self.config.hugepages:
>              self.main_session.setup_hugepages(
> @@ -165,9 +216,7 @@ def _setup_hugepages(self):
>              )
>
>      def close(self) -> None:
> -        """
> -        Close all connections and free other resources.
> -        """
> +        """Close all connections and free other resources."""
>          if self.main_session:
>              self.main_session.close()
>          for session in self._other_sessions:
> @@ -176,6 +225,11 @@ def close(self) -> None:
>
>      @staticmethod
>      def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
> +        """A decorator that skips the decorated function.
> +
> +        When used, the decorator executes an empty lambda function
> +        instead of the decorated function.
> +        """
>          if SETTINGS.skip_setup:
>              return lambda *args: None
>          else:
> --
> 2.30.2
>
>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v1 0/4] dts: add dts api docs
  2023-04-25  9:43           ` Bruce Richardson
@ 2023-05-03 11:33             ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-05-03 11:33 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev

On Tue, Apr 25, 2023 at 11:43 AM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Tue, Apr 25, 2023 at 10:57:12AM +0200, Juraj Linkeš wrote:
> > On Tue, Apr 25, 2023 at 10:44 AM Bruce Richardson
> > <bruce.richardson@intel.com> wrote:
> > >
> > > On Tue, Apr 25, 2023 at 10:20:36AM +0200, Juraj Linkeš wrote:
> > > > On Mon, Apr 3, 2023 at 11:42 AM Bruce Richardson
> > > > <bruce.richardson@intel.com> wrote:
> > > > >
> > > > > On Mon, Apr 03, 2023 at 11:17:06AM +0200, Juraj Linkeš wrote:
> > > > > >    Hi Bruce, Thomas,
> > > > > >    The meson integration is kinda all over the place. I wanted to use the
> > > > > >    existing conf.py Sphinx config file, but I also wanted to keep the docs
> > > > > >    separated (because of extra DTS api docs dependencies), so the various
> > > > > >    pieces are in different places (the config file in one place, meson
> > > > > >    code in dts directory and generated Sphinx docs are in a new directory
> > > > > >    in the api build dir, separate from the rest of the Sphinx html).
> > > > > >    The big thing here is that I didn't figure out how to separate the dts
> > > > > >    api build from the rest of the docs. I don't know how the -Denable_docs
> > > > > >    option is supposed to work. I wanted to use -Denable_dts_docs in the
> > > > > >    same fashion to decouple the builds, but it doesn't seem to work.
> > > > > >    Reading the code I think the original option doesn't actually do
> > > > > >    anything - does it work? How is it supposed to work?
> > > > > >    Thanks,
> > > > > >    Juraj
> > > > >
> > > > > The enable_docs option works by selectively enabling the doc build tasks
> > > > > using the "build_by_default" parameter on them.
> > > > > See http://git.dpdk.org/dpdk/tree/doc/guides/meson.build#n23 for an
> > > > > example. The custom_target for sphinx is not a dependency of any other
> > > > > task, so whether it gets run or not depends entirely on whether the
> > > > > "build_by_default" and/or "install" options are set.
> > > > >
> > > > > As usual, there may be other stuff that needs cleaning up on this, but
> > > > > that's how it works for now, anyway. [And it does actually work, last I
> > > > > tested it :-)]
> > > >
> > > > I looked into this and as is so frequently the case, we're both right. :-)
> > > >
> > > > When running according to docs, that is with:
> > > > 1. meson setup doc_build
> > > > 2. ninja -C doc_build doc
> > > >
> > > > it doesn't matter what enable_docs is set to, it always builds the docs.
> > > >
> > >
> > > Yes, I'd forgotten that. That was deliberately done so one could always
> > > request a doc build directly, without having to worry about DPDK config or
> > > building the rest of DPDK.
> > >
> > > > But in the full build it does control whether docs are built, i.e.:
> > > >
> > > > 1. meson setup doc_build
> > > > 2. ninja -C doc_build
> > > > doesn't build the docs, whereas:
> > > >
> > > > 1. meson setup doc_build -Denable_docs=true
> > > > 2. ninja -C doc_build
> > > > builds the docs.
> > > >
> > > > Now the problem in this version is that when doing just the doc build
> > > > (ninja -C doc_build doc) both DPDK and DTS docs are built, and I'd like
> > > > to separate those (because the DTS doc build has additional dependencies).
> > > > I'm thinking the following would be a good solution within the current
> > > > paradigm:
> > > > 1. The -Denable_docs=true and -Denable_dts_docs=true options to
> > > > separate doc builds for the full build.
> > > > 2. Separate dts doc dir for the doc build ("ninja -C doc_build doc"
> > > > for DPDK docs and "ninja -C doc_build dts" (or maybe some other dir)
> > > > for DTS docs).
> > >
> > > How important is it to separate out the dts docs from the regular docs?
> >
> > It is mostly a matter of dependencies.
> >
> > > What are the additional dependencies, and how hard are they to get? If
> > > possible I'd rather not have an additional build config option added for
> > > this.
> >
> > The same dependencies as for running DTS, which are the proper Python
> > version (3.10 and newer) with DTS dependencies obtained with Poetry
> > (which is a matter of installing Poetry and running it). As is
> > standard with Python projects, this is all set up in a virtual
> > environment, which needs to be activated before running the doc build.
> > It's documented in more detail in the tools docs:
> > https://doc.dpdk.org/guides/tools/dts.html#setting-up-dts-environment
> >
> > This may be too much of a hassle for DPDK devs who only want to build
> > the non-DTS docs, but I don't know whether DPDK devs actually build
> > docs at all.
> >
>
> Can't really say for sure. I suspect most don't build them on a daily
> basis, but would often need to build them before submitting patches with a
> doc change included.
>
> What format are the DTS docs in? I agree that as described above the
> requirements are pretty different than those for the regular DPDK docs.
>

The resulting HTML docs use the same Sphinx conf file (with
extension configuration and two more config options - see patch 3/4)
that we use for the regular docs.

> > >
> > > If we are separating them out, I think the dts doc target should be
> > > "dts_doc" rather than "dts" for clarity.
> >
> > Agreed, but "dts_doc" would be a new top-level dir. I think we could
> > do dts/doc (a dir inside dts).
> >
> That path seems reasonable to me.
>
> /Bruce

^ permalink raw reply	[flat|nested] 255+ messages in thread

* [RFC PATCH v2 0/4] dts: add dts api docs
  2023-03-23 10:40 [RFC PATCH v1 0/4] dts: add dts api docs Juraj Linkeš
                   ` (4 preceding siblings ...)
  2023-04-03  9:17 ` [RFC PATCH v1 0/4] dts: add dts api docs Juraj Linkeš
@ 2023-05-04 12:37 ` Juraj Linkeš
  2023-05-04 12:37   ` [RFC PATCH v2 1/4] dts: code adjustments for sphinx Juraj Linkeš
                     ` (5 more replies)
  5 siblings, 6 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-05-04 12:37 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	wathsala.vithanage, jspewock, probb
  Cc: dev, Juraj Linkeš

Augment the meson build system with dts api generation. The api docs are
generated from Python docstrings in DTS using Sphinx. The format of
choice is the Google format [0].

The guide html sphinx configuration is used to preserve the same style.

The build requires the same Python version and dependencies as DTS,
because Sphinx imports the Python modules. Dependencies are installed
using Poetry from the dts directory:

poetry install --with docs

After installing, enter the Poetry shell:

poetry shell

And then run the build:
ninja -C <meson_build_dir> dts/doc

There's only one properly documented module that serves as a
demonstration of the style - framework.testbed_model.node.

[0] https://google.github.io/styleguide/pyguide.html#s3.8.4-comments-in-classes

Juraj Linkeš (4):
  dts: code adjustments for sphinx
  dts: add doc generation dependencies
  dts: add doc generation
  dts: format docstrings to google format

 doc/api/meson.build                           |   1 +
 doc/guides/conf.py                            |  22 +-
 doc/guides/meson.build                        |   1 +
 doc/guides/tools/dts.rst                      |  29 +
 dts/doc/doc-index.rst                         |  20 +
 dts/doc/meson.build                           |  50 ++
 dts/framework/config/__init__.py              |  11 +
 .../{testbed_model/hw => config}/cpu.py       |  13 +
 dts/framework/dts.py                          |   8 +-
 dts/framework/remote_session/__init__.py      |   3 +-
 dts/framework/remote_session/linux_session.py |   2 +-
 dts/framework/remote_session/os_session.py    |  12 +-
 .../remote_session/remote/__init__.py         |  16 -
 .../{remote => }/remote_session.py            |   0
 .../{remote => }/ssh_session.py               |   0
 dts/framework/settings.py                     |  55 +-
 dts/framework/testbed_model/__init__.py       |  10 +-
 dts/framework/testbed_model/hw/__init__.py    |  27 -
 dts/framework/testbed_model/node.py           | 164 ++--
 dts/framework/testbed_model/sut_node.py       |   9 +-
 .../testbed_model/{hw => }/virtual_device.py  |   0
 dts/main.py                                   |   3 +-
 dts/meson.build                               |  16 +
 dts/poetry.lock                               | 770 ++++++++++++++++--
 dts/pyproject.toml                            |   7 +
 dts/tests/TestSuite_hello_world.py            |   6 +-
 meson.build                                   |   1 +
 meson_options.txt                             |   2 +
 28 files changed, 1038 insertions(+), 220 deletions(-)
 create mode 100644 dts/doc/doc-index.rst
 create mode 100644 dts/doc/meson.build
 rename dts/framework/{testbed_model/hw => config}/cpu.py (95%)
 delete mode 100644 dts/framework/remote_session/remote/__init__.py
 rename dts/framework/remote_session/{remote => }/remote_session.py (100%)
 rename dts/framework/remote_session/{remote => }/ssh_session.py (100%)
 delete mode 100644 dts/framework/testbed_model/hw/__init__.py
 rename dts/framework/testbed_model/{hw => }/virtual_device.py (100%)
 create mode 100644 dts/meson.build

-- 
2.30.2


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [RFC PATCH v2 1/4] dts: code adjustments for sphinx
  2023-05-04 12:37 ` [RFC PATCH v2 " Juraj Linkeš
@ 2023-05-04 12:37   ` Juraj Linkeš
  2023-05-04 12:37   ` [RFC PATCH v2 2/4] dts: add doc generation dependencies Juraj Linkeš
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-05-04 12:37 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	wathsala.vithanage, jspewock, probb
  Cc: dev, Juraj Linkeš

sphinx-build only imports the Python modules when building the
documentation; it doesn't run DTS. This requires changes that make the
code importable without running it. This means:
* properly guarding argument parsing in the if __name__ == '__main__'
  block,
* giving the logger used by the DTS runner the same treatment so that
  it doesn't create unnecessary log files,
* moving the defaults for the global variables out of argument parsing,
  since DTS uses the parsed arguments to construct an object holding
  those global variables,
* making the imports of the remote_session module from framework more
  granular, which fixes the circular imports caused by modules
  importing each other.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/config/__init__.py              | 11 ++++
 .../{testbed_model/hw => config}/cpu.py       | 13 +++++
 dts/framework/dts.py                          |  8 ++-
 dts/framework/remote_session/__init__.py      |  3 +-
 dts/framework/remote_session/linux_session.py |  2 +-
 dts/framework/remote_session/os_session.py    | 12 +++-
 .../remote_session/remote/__init__.py         | 16 ------
 .../{remote => }/remote_session.py            |  0
 .../{remote => }/ssh_session.py               |  0
 dts/framework/settings.py                     | 55 ++++++++++---------
 dts/framework/testbed_model/__init__.py       | 10 +---
 dts/framework/testbed_model/hw/__init__.py    | 27 ---------
 dts/framework/testbed_model/node.py           | 12 ++--
 dts/framework/testbed_model/sut_node.py       |  9 ++-
 .../testbed_model/{hw => }/virtual_device.py  |  0
 dts/main.py                                   |  3 +-
 dts/tests/TestSuite_hello_world.py            |  6 +-
 17 files changed, 88 insertions(+), 99 deletions(-)
 rename dts/framework/{testbed_model/hw => config}/cpu.py (95%)
 delete mode 100644 dts/framework/remote_session/remote/__init__.py
 rename dts/framework/remote_session/{remote => }/remote_session.py (100%)
 rename dts/framework/remote_session/{remote => }/ssh_session.py (100%)
 delete mode 100644 dts/framework/testbed_model/hw/__init__.py
 rename dts/framework/testbed_model/{hw => }/virtual_device.py (100%)

diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index ebb0823ff5..293c4cb15b 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -7,6 +7,8 @@
 Yaml config parsing methods
 """
 
+# pylama:ignore=W0611
+
 import json
 import os.path
 import pathlib
@@ -19,6 +21,15 @@
 
 from framework.settings import SETTINGS
 
+from .cpu import (
+    LogicalCore,
+    LogicalCoreCount,
+    LogicalCoreCountFilter,
+    LogicalCoreList,
+    LogicalCoreListFilter,
+    lcore_filter,
+)
+
 
 class StrEnum(Enum):
     @staticmethod
diff --git a/dts/framework/testbed_model/hw/cpu.py b/dts/framework/config/cpu.py
similarity index 95%
rename from dts/framework/testbed_model/hw/cpu.py
rename to dts/framework/config/cpu.py
index d1918a12dc..8fe785dfe4 100644
--- a/dts/framework/testbed_model/hw/cpu.py
+++ b/dts/framework/config/cpu.py
@@ -272,3 +272,16 @@ def filter(self) -> list[LogicalCore]:
             )
 
         return filtered_lcores
+
+
+def lcore_filter(
+    core_list: list[LogicalCore],
+    filter_specifier: LogicalCoreCount | LogicalCoreList,
+    ascending: bool,
+) -> LogicalCoreFilter:
+    if isinstance(filter_specifier, LogicalCoreList):
+        return LogicalCoreListFilter(core_list, filter_specifier, ascending)
+    elif isinstance(filter_specifier, LogicalCoreCount):
+        return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
+    else:
+        raise ValueError(f"Unsupported filter r{filter_specifier}")
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index 0502284580..22a09b7e34 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -3,6 +3,7 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
+import logging
 import sys
 
 from .config import CONFIGURATION, BuildTargetConfiguration, ExecutionConfiguration
@@ -12,7 +13,8 @@
 from .testbed_model import SutNode
 from .utils import check_dts_python_version
 
-dts_logger: DTSLOG = getLogger("DTSRunner")
+# dummy defaults to satisfy linters
+dts_logger: DTSLOG | logging.Logger = logging.getLogger("DTSRunner")
 result: DTSResult = DTSResult(dts_logger)
 
 
@@ -24,6 +26,10 @@ def run_all() -> None:
     global dts_logger
     global result
 
+    # create a regular DTS logger and create a new result with it
+    dts_logger = getLogger("DTSRunner")
+    result = DTSResult(dts_logger)
+
     # check the python version of the server that run dts
     check_dts_python_version()
 
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index ee221503df..17ca1459f7 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -17,7 +17,8 @@
 
 from .linux_session import LinuxSession
 from .os_session import OSSession
-from .remote import CommandResult, RemoteSession, SSHSession
+from .remote_session import CommandResult, RemoteSession
+from .ssh_session import SSHSession
 
 
 def create_session(
diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/remote_session/linux_session.py
index a1e3bc3a92..c8ce5fe6da 100644
--- a/dts/framework/remote_session/linux_session.py
+++ b/dts/framework/remote_session/linux_session.py
@@ -2,8 +2,8 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+from framework.config import LogicalCore
 from framework.exception import RemoteCommandExecutionError
-from framework.testbed_model import LogicalCore
 from framework.utils import expand_range
 
 from .posix_session import PosixSession
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
index 4c48ae2567..246f0358ea 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/remote_session/os_session.py
@@ -6,13 +6,13 @@
 from collections.abc import Iterable
 from pathlib import PurePath
 
-from framework.config import Architecture, NodeConfiguration
+from framework.config import Architecture, LogicalCore, NodeConfiguration
 from framework.logger import DTSLOG
 from framework.settings import SETTINGS
-from framework.testbed_model import LogicalCore
 from framework.utils import EnvVarsDict, MesonArgs
 
-from .remote import CommandResult, RemoteSession, create_remote_session
+from .remote_session import CommandResult, RemoteSession
+from .ssh_session import SSHSession
 
 
 class OSSession(ABC):
@@ -173,3 +173,9 @@ def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
         if needed and mount the hugepages if needed.
         If force_first_numa is True, configure hugepages just on the first socket.
         """
+
+
+def create_remote_session(
+    node_config: NodeConfiguration, name: str, logger: DTSLOG
+) -> RemoteSession:
+    return SSHSession(node_config, name, logger)
diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
deleted file mode 100644
index 8a1512210a..0000000000
--- a/dts/framework/remote_session/remote/__init__.py
+++ /dev/null
@@ -1,16 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2023 PANTHEON.tech s.r.o.
-
-# pylama:ignore=W0611
-
-from framework.config import NodeConfiguration
-from framework.logger import DTSLOG
-
-from .remote_session import CommandResult, RemoteSession
-from .ssh_session import SSHSession
-
-
-def create_remote_session(
-    node_config: NodeConfiguration, name: str, logger: DTSLOG
-) -> RemoteSession:
-    return SSHSession(node_config, name, logger)
diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote_session.py
similarity index 100%
rename from dts/framework/remote_session/remote/remote_session.py
rename to dts/framework/remote_session/remote_session.py
diff --git a/dts/framework/remote_session/remote/ssh_session.py b/dts/framework/remote_session/ssh_session.py
similarity index 100%
rename from dts/framework/remote_session/remote/ssh_session.py
rename to dts/framework/remote_session/ssh_session.py
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 71955f4581..144f9dea62 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -6,7 +6,7 @@
 import argparse
 import os
 from collections.abc import Callable, Iterable, Sequence
-from dataclasses import dataclass
+from dataclasses import dataclass, field
 from pathlib import Path
 from typing import Any, TypeVar
 
@@ -59,15 +59,18 @@ def __call__(
 
 @dataclass(slots=True, frozen=True)
 class _Settings:
-    config_file_path: str
-    output_dir: str
-    timeout: float
-    verbose: bool
-    skip_setup: bool
-    dpdk_tarball_path: Path
-    compile_timeout: float
-    test_cases: list
-    re_run: int
+    config_file_path: Path = Path(Path(__file__).parent.parent, "conf.yaml")
+    output_dir: str = "output"
+    timeout: float = 15
+    verbose: bool = False
+    skip_setup: bool = False
+    dpdk_tarball_path: Path | str = "dpdk.tar.xz"
+    compile_timeout: float = 1200
+    test_cases: list[str] = field(default_factory=list)
+    re_run: int = 0
+
+
+SETTINGS: _Settings = _Settings()
 
 
 def _get_parser() -> argparse.ArgumentParser:
@@ -81,7 +84,8 @@ def _get_parser() -> argparse.ArgumentParser:
     parser.add_argument(
         "--config-file",
         action=_env_arg("DTS_CFG_FILE"),
-        default="conf.yaml",
+        default=SETTINGS.config_file_path,
+        type=Path,
         help="[DTS_CFG_FILE] configuration file that describes the test cases, SUTs "
         "and targets.",
     )
@@ -90,7 +94,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--output-dir",
         "--output",
         action=_env_arg("DTS_OUTPUT_DIR"),
-        default="output",
+        default=SETTINGS.output_dir,
         help="[DTS_OUTPUT_DIR] Output directory where dts logs and results are saved.",
     )
 
@@ -98,7 +102,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "-t",
         "--timeout",
         action=_env_arg("DTS_TIMEOUT"),
-        default=15,
+        default=SETTINGS.timeout,
         type=float,
         help="[DTS_TIMEOUT] The default timeout for all DTS operations except for "
         "compiling DPDK.",
@@ -108,7 +112,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "-v",
         "--verbose",
         action=_env_arg("DTS_VERBOSE"),
-        default="N",
+        default=SETTINGS.verbose,
         help="[DTS_VERBOSE] Set to 'Y' to enable verbose output, logging all messages "
         "to the console.",
     )
@@ -117,7 +121,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "-s",
         "--skip-setup",
         action=_env_arg("DTS_SKIP_SETUP"),
-        default="N",
+        default=SETTINGS.skip_setup,
         help="[DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT and TG nodes.",
     )
 
@@ -125,7 +129,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--tarball",
         "--snapshot",
         action=_env_arg("DTS_DPDK_TARBALL"),
-        default="dpdk.tar.xz",
+        default=SETTINGS.dpdk_tarball_path,
         type=Path,
         help="[DTS_DPDK_TARBALL] Path to DPDK source code tarball "
         "which will be used in testing.",
@@ -134,7 +138,7 @@ def _get_parser() -> argparse.ArgumentParser:
     parser.add_argument(
         "--compile-timeout",
         action=_env_arg("DTS_COMPILE_TIMEOUT"),
-        default=1200,
+        default=SETTINGS.compile_timeout,
         type=float,
         help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
     )
@@ -142,8 +146,9 @@ def _get_parser() -> argparse.ArgumentParser:
     parser.add_argument(
         "--test-cases",
         action=_env_arg("DTS_TESTCASES"),
-        default="",
-        help="[DTS_TESTCASES] Comma-separated list of test cases to execute. "
+        nargs="*",
+        default=SETTINGS.test_cases,
+        help="[DTS_TESTCASES] A list of test cases to execute. "
         "Unknown test cases will be silently ignored.",
     )
 
@@ -151,7 +156,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--re-run",
         "--re_run",
         action=_env_arg("DTS_RERUN"),
-        default=0,
+        default=SETTINGS.re_run,
         type=int,
         help="[DTS_RERUN] Re-run each test case the specified amount of times "
         "if a test failure occurs",
@@ -165,10 +170,11 @@ def _check_tarball_path(parsed_args: argparse.Namespace) -> None:
         raise ConfigurationError(f"DPDK tarball '{parsed_args.tarball}' doesn't exist.")
 
 
-def _get_settings() -> _Settings:
+def set_settings() -> None:
+    global SETTINGS
     parsed_args = _get_parser().parse_args()
     _check_tarball_path(parsed_args)
-    return _Settings(
+    SETTINGS = _Settings(
         config_file_path=parsed_args.config_file,
         output_dir=parsed_args.output_dir,
         timeout=parsed_args.timeout,
@@ -176,9 +182,6 @@ def _get_settings() -> _Settings:
         skip_setup=(parsed_args.skip_setup == "Y"),
         dpdk_tarball_path=parsed_args.tarball,
         compile_timeout=parsed_args.compile_timeout,
-        test_cases=parsed_args.test_cases.split(",") if parsed_args.test_cases else [],
+        test_cases=parsed_args.test_cases,
         re_run=parsed_args.re_run,
     )
-
-
-SETTINGS: _Settings = _get_settings()
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index f54a947051..148f81993d 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -9,14 +9,6 @@
 
 # pylama:ignore=W0611
 
-from .hw import (
-    LogicalCore,
-    LogicalCoreCount,
-    LogicalCoreCountFilter,
-    LogicalCoreList,
-    LogicalCoreListFilter,
-    VirtualDevice,
-    lcore_filter,
-)
 from .node import Node
 from .sut_node import SutNode
+from .virtual_device import VirtualDevice
diff --git a/dts/framework/testbed_model/hw/__init__.py b/dts/framework/testbed_model/hw/__init__.py
deleted file mode 100644
index 88ccac0b0e..0000000000
--- a/dts/framework/testbed_model/hw/__init__.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2023 PANTHEON.tech s.r.o.
-
-# pylama:ignore=W0611
-
-from .cpu import (
-    LogicalCore,
-    LogicalCoreCount,
-    LogicalCoreCountFilter,
-    LogicalCoreFilter,
-    LogicalCoreList,
-    LogicalCoreListFilter,
-)
-from .virtual_device import VirtualDevice
-
-
-def lcore_filter(
-    core_list: list[LogicalCore],
-    filter_specifier: LogicalCoreCount | LogicalCoreList,
-    ascending: bool,
-) -> LogicalCoreFilter:
-    if isinstance(filter_specifier, LogicalCoreList):
-        return LogicalCoreListFilter(core_list, filter_specifier, ascending)
-    elif isinstance(filter_specifier, LogicalCoreCount):
-        return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
-    else:
-        raise ValueError(f"Unsupported filter r{filter_specifier}")
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index d48fafe65d..90467981c3 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -12,19 +12,17 @@
 from framework.config import (
     BuildTargetConfiguration,
     ExecutionConfiguration,
-    NodeConfiguration,
-)
-from framework.logger import DTSLOG, getLogger
-from framework.remote_session import OSSession, create_session
-from framework.settings import SETTINGS
-
-from .hw import (
     LogicalCore,
     LogicalCoreCount,
     LogicalCoreList,
     LogicalCoreListFilter,
+    NodeConfiguration,
     lcore_filter,
 )
+from framework.logger import DTSLOG, getLogger
+from framework.remote_session import create_session
+from framework.remote_session.os_session import OSSession
+from framework.settings import SETTINGS
 
 
 class Node(object):
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 2b2b50d982..6db4a505bb 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -7,13 +7,18 @@
 import time
 from pathlib import PurePath
 
-from framework.config import BuildTargetConfiguration, NodeConfiguration
+from framework.config import (
+    BuildTargetConfiguration,
+    LogicalCoreCount,
+    LogicalCoreList,
+    NodeConfiguration,
+)
 from framework.remote_session import CommandResult, OSSession
 from framework.settings import SETTINGS
 from framework.utils import EnvVarsDict, MesonArgs
 
-from .hw import LogicalCoreCount, LogicalCoreList, VirtualDevice
 from .node import Node
+from .virtual_device import VirtualDevice
 
 
 class SutNode(Node):
diff --git a/dts/framework/testbed_model/hw/virtual_device.py b/dts/framework/testbed_model/virtual_device.py
similarity index 100%
rename from dts/framework/testbed_model/hw/virtual_device.py
rename to dts/framework/testbed_model/virtual_device.py
diff --git a/dts/main.py b/dts/main.py
index 43311fa847..060ff1b19a 100755
--- a/dts/main.py
+++ b/dts/main.py
@@ -10,10 +10,11 @@
 
 import logging
 
-from framework import dts
+from framework import dts, settings
 
 
 def main() -> None:
+    settings.set_settings()
     dts.run_all()
 
 
diff --git a/dts/tests/TestSuite_hello_world.py b/dts/tests/TestSuite_hello_world.py
index 7e3d95c0cf..96c31a6c8c 100644
--- a/dts/tests/TestSuite_hello_world.py
+++ b/dts/tests/TestSuite_hello_world.py
@@ -6,12 +6,8 @@
 No other EAL parameters apart from cores are used.
 """
 
+from framework.config import LogicalCoreCount, LogicalCoreCountFilter, LogicalCoreList
 from framework.test_suite import TestSuite
-from framework.testbed_model import (
-    LogicalCoreCount,
-    LogicalCoreCountFilter,
-    LogicalCoreList,
-)
 
 
 class TestHelloWorld(TestSuite):
-- 
2.30.2


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [RFC PATCH v2 2/4] dts: add doc generation dependencies
  2023-05-04 12:37 ` [RFC PATCH v2 " Juraj Linkeš
  2023-05-04 12:37   ` [RFC PATCH v2 1/4] dts: code adjustments for sphinx Juraj Linkeš
@ 2023-05-04 12:37   ` Juraj Linkeš
  2023-05-04 12:37   ` [RFC PATCH v2 3/4] dts: add doc generation Juraj Linkeš
                     ` (3 subsequent siblings)
  5 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-05-04 12:37 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	wathsala.vithanage, jspewock, probb
  Cc: dev, Juraj Linkeš

Sphinx imports every Python module when generating documentation from
docstrings, meaning all of DTS's dependencies, including the required
Python version, must be satisfied.
By adding Sphinx to the DTS dependencies, we make sure that the proper
Python version and dependencies are used when Sphinx is executed.
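
To illustrate why the import requirement exists: a Sphinx configuration
that documents DTS from docstrings enables autodoc (and napoleon for the
Google docstring format), and autodoc imports each documented module at
build time, so every import in the DTS framework has to resolve. The
extension names below are standard Sphinx; the exact conf.py contents
are a sketch, not the configuration added by this series:

# Sketch of the relevant Sphinx settings (illustrative):
extensions = [
    "sphinx.ext.autodoc",   # imports modules and extracts docstrings
    "sphinx.ext.napoleon",  # parses Google-format docstring sections
]

# autodoc effectively performs the equivalent of
#   importlib.import_module("framework.testbed_model.node")
# for each documented module, so the build environment must provide the
# same Python version and packages that DTS itself needs.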

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/poetry.lock    | 770 +++++++++++++++++++++++++++++++++++++++++----
 dts/pyproject.toml |   7 +
 2 files changed, 710 insertions(+), 67 deletions(-)

diff --git a/dts/poetry.lock b/dts/poetry.lock
index 0b2a007d4d..500f89dac1 100644
--- a/dts/poetry.lock
+++ b/dts/poetry.lock
@@ -1,24 +1,69 @@
+# This file is automatically @generated by Poetry and should not be changed by hand.
+
+[[package]]
+name = "alabaster"
+version = "0.7.13"
+description = "A configurable sidebar-enabled Sphinx theme"
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+    {file = "alabaster-0.7.13-py3-none-any.whl", hash = "sha256:1ee19aca801bbabb5ba3f5f258e4422dfa86f82f3e9cefb0859b283cdd7f62a3"},
+    {file = "alabaster-0.7.13.tar.gz", hash = "sha256:a27a4a084d5e690e16e01e03ad2b2e552c61a65469419b907243193de1a84ae2"},
+]
+
 [[package]]
 name = "attrs"
-version = "22.1.0"
+version = "22.2.0"
 description = "Classes Without Boilerplate"
 category = "main"
 optional = false
-python-versions = ">=3.5"
+python-versions = ">=3.6"
+files = [
+    {file = "attrs-22.2.0-py3-none-any.whl", hash = "sha256:29e95c7f6778868dbd49170f98f8818f78f3dc5e0e37c0b1f474e3561b240836"},
+    {file = "attrs-22.2.0.tar.gz", hash = "sha256:c9227bfc2f01993c03f68db37d1d15c9690188323c067c641f1a35ca58185f99"},
+]
 
 [package.extras]
-dev = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "zope.interface", "furo", "sphinx", "sphinx-notfound-page", "pre-commit", "cloudpickle"]
-docs = ["furo", "sphinx", "zope.interface", "sphinx-notfound-page"]
-tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "zope.interface", "cloudpickle"]
-tests_no_zope = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "cloudpickle"]
+cov = ["attrs[tests]", "coverage-enable-subprocess", "coverage[toml] (>=5.3)"]
+dev = ["attrs[docs,tests]"]
+docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-towncrier", "towncrier", "zope.interface"]
+tests = ["attrs[tests-no-zope]", "zope.interface"]
+tests-no-zope = ["cloudpickle", "cloudpickle", "hypothesis", "hypothesis", "mypy (>=0.971,<0.990)", "mypy (>=0.971,<0.990)", "pympler", "pympler", "pytest (>=4.3.0)", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-mypy-plugins", "pytest-xdist[psutil]", "pytest-xdist[psutil]"]
+
+[[package]]
+name = "babel"
+version = "2.12.1"
+description = "Internationalization utilities"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "Babel-2.12.1-py3-none-any.whl", hash = "sha256:b4246fb7677d3b98f501a39d43396d3cafdc8eadb045f4a31be01863f655c610"},
+    {file = "Babel-2.12.1.tar.gz", hash = "sha256:cc2d99999cd01d44420ae725a21c9e3711b3aadc7976d6147f622d8581963455"},
+]
 
 [[package]]
 name = "black"
-version = "22.10.0"
+version = "22.12.0"
 description = "The uncompromising code formatter."
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "black-22.12.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9eedd20838bd5d75b80c9f5487dbcb06836a43833a37846cf1d8c1cc01cef59d"},
+    {file = "black-22.12.0-cp310-cp310-win_amd64.whl", hash = "sha256:159a46a4947f73387b4d83e87ea006dbb2337eab6c879620a3ba52699b1f4351"},
+    {file = "black-22.12.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d30b212bffeb1e252b31dd269dfae69dd17e06d92b87ad26e23890f3efea366f"},
+    {file = "black-22.12.0-cp311-cp311-win_amd64.whl", hash = "sha256:7412e75863aa5c5411886804678b7d083c7c28421210180d67dfd8cf1221e1f4"},
+    {file = "black-22.12.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c116eed0efb9ff870ded8b62fe9f28dd61ef6e9ddd28d83d7d264a38417dcee2"},
+    {file = "black-22.12.0-cp37-cp37m-win_amd64.whl", hash = "sha256:1f58cbe16dfe8c12b7434e50ff889fa479072096d79f0a7f25e4ab8e94cd8350"},
+    {file = "black-22.12.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:77d86c9f3db9b1bf6761244bc0b3572a546f5fe37917a044e02f3166d5aafa7d"},
+    {file = "black-22.12.0-cp38-cp38-win_amd64.whl", hash = "sha256:82d9fe8fee3401e02e79767016b4907820a7dc28d70d137eb397b92ef3cc5bfc"},
+    {file = "black-22.12.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:101c69b23df9b44247bd88e1d7e90154336ac4992502d4197bdac35dd7ee3320"},
+    {file = "black-22.12.0-cp39-cp39-win_amd64.whl", hash = "sha256:559c7a1ba9a006226f09e4916060982fd27334ae1998e7a38b3f33a37f7a2148"},
+    {file = "black-22.12.0-py3-none-any.whl", hash = "sha256:436cc9167dd28040ad90d3b404aec22cedf24a6e4d7de221bec2730ec0c97bcf"},
+    {file = "black-22.12.0.tar.gz", hash = "sha256:229351e5a18ca30f447bf724d007f890f97e13af070bb6ad4c0a441cd7596a2f"},
+]
 
 [package.dependencies]
 click = ">=8.0.0"
@@ -33,6 +78,103 @@ d = ["aiohttp (>=3.7.4)"]
 jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
 uvloop = ["uvloop (>=0.15.2)"]
 
+[[package]]
+name = "certifi"
+version = "2022.12.7"
+description = "Python package for providing Mozilla's CA Bundle."
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+    {file = "certifi-2022.12.7-py3-none-any.whl", hash = "sha256:4ad3232f5e926d6718ec31cfc1fcadfde020920e278684144551c91769c7bc18"},
+    {file = "certifi-2022.12.7.tar.gz", hash = "sha256:35824b4c3a97115964b408844d64aa14db1cc518f6562e8d7261699d1350a9e3"},
+]
+
+[[package]]
+name = "charset-normalizer"
+version = "3.1.0"
+description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
+category = "dev"
+optional = false
+python-versions = ">=3.7.0"
+files = [
+    {file = "charset-normalizer-3.1.0.tar.gz", hash = "sha256:34e0a2f9c370eb95597aae63bf85eb5e96826d81e3dcf88b8886012906f509b5"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:e0ac8959c929593fee38da1c2b64ee9778733cdf03c482c9ff1d508b6b593b2b"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d7fc3fca01da18fbabe4625d64bb612b533533ed10045a2ac3dd194bfa656b60"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:04eefcee095f58eaabe6dc3cc2262f3bcd776d2c67005880894f447b3f2cb9c1"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:20064ead0717cf9a73a6d1e779b23d149b53daf971169289ed2ed43a71e8d3b0"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1435ae15108b1cb6fffbcea2af3d468683b7afed0169ad718451f8db5d1aff6f"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c84132a54c750fda57729d1e2599bb598f5fa0344085dbde5003ba429a4798c0"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:75f2568b4189dda1c567339b48cba4ac7384accb9c2a7ed655cd86b04055c795"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:11d3bcb7be35e7b1bba2c23beedac81ee893ac9871d0ba79effc7fc01167db6c"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:891cf9b48776b5c61c700b55a598621fdb7b1e301a550365571e9624f270c203"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:5f008525e02908b20e04707a4f704cd286d94718f48bb33edddc7d7b584dddc1"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:b06f0d3bf045158d2fb8837c5785fe9ff9b8c93358be64461a1089f5da983137"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:49919f8400b5e49e961f320c735388ee686a62327e773fa5b3ce6721f7e785ce"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:22908891a380d50738e1f978667536f6c6b526a2064156203d418f4856d6e86a"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-win32.whl", hash = "sha256:12d1a39aa6b8c6f6248bb54550efcc1c38ce0d8096a146638fd4738e42284448"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-win_amd64.whl", hash = "sha256:65ed923f84a6844de5fd29726b888e58c62820e0769b76565480e1fdc3d062f8"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:9a3267620866c9d17b959a84dd0bd2d45719b817245e49371ead79ed4f710d19"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6734e606355834f13445b6adc38b53c0fd45f1a56a9ba06c2058f86893ae8017"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:f8303414c7b03f794347ad062c0516cee0e15f7a612abd0ce1e25caf6ceb47df"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:aaf53a6cebad0eae578f062c7d462155eada9c172bd8c4d250b8c1d8eb7f916a"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3dc5b6a8ecfdc5748a7e429782598e4f17ef378e3e272eeb1340ea57c9109f41"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e1b25e3ad6c909f398df8921780d6a3d120d8c09466720226fc621605b6f92b1"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0ca564606d2caafb0abe6d1b5311c2649e8071eb241b2d64e75a0d0065107e62"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b82fab78e0b1329e183a65260581de4375f619167478dddab510c6c6fb04d9b6"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:bd7163182133c0c7701b25e604cf1611c0d87712e56e88e7ee5d72deab3e76b5"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:11d117e6c63e8f495412d37e7dc2e2fff09c34b2d09dbe2bee3c6229577818be"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:cf6511efa4801b9b38dc5546d7547d5b5c6ef4b081c60b23e4d941d0eba9cbeb"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:abc1185d79f47c0a7aaf7e2412a0eb2c03b724581139193d2d82b3ad8cbb00ac"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cb7b2ab0188829593b9de646545175547a70d9a6e2b63bf2cd87a0a391599324"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-win32.whl", hash = "sha256:c36bcbc0d5174a80d6cccf43a0ecaca44e81d25be4b7f90f0ed7bcfbb5a00909"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-win_amd64.whl", hash = "sha256:cca4def576f47a09a943666b8f829606bcb17e2bc2d5911a46c8f8da45f56755"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:0c95f12b74681e9ae127728f7e5409cbbef9cd914d5896ef238cc779b8152373"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fca62a8301b605b954ad2e9c3666f9d97f63872aa4efcae5492baca2056b74ab"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ac0aa6cd53ab9a31d397f8303f92c42f534693528fafbdb997c82bae6e477ad9"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c3af8e0f07399d3176b179f2e2634c3ce9c1301379a6b8c9c9aeecd481da494f"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3a5fc78f9e3f501a1614a98f7c54d3969f3ad9bba8ba3d9b438c3bc5d047dd28"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:628c985afb2c7d27a4800bfb609e03985aaecb42f955049957814e0491d4006d"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:74db0052d985cf37fa111828d0dd230776ac99c740e1a758ad99094be4f1803d"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:1e8fcdd8f672a1c4fc8d0bd3a2b576b152d2a349782d1eb0f6b8e52e9954731d"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:04afa6387e2b282cf78ff3dbce20f0cc071c12dc8f685bd40960cc68644cfea6"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:dd5653e67b149503c68c4018bf07e42eeed6b4e956b24c00ccdf93ac79cdff84"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:d2686f91611f9e17f4548dbf050e75b079bbc2a82be565832bc8ea9047b61c8c"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-win32.whl", hash = "sha256:4155b51ae05ed47199dc5b2a4e62abccb274cee6b01da5b895099b61b1982974"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-win_amd64.whl", hash = "sha256:322102cdf1ab682ecc7d9b1c5eed4ec59657a65e1c146a0da342b78f4112db23"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:e633940f28c1e913615fd624fcdd72fdba807bf53ea6925d6a588e84e1151531"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:3a06f32c9634a8705f4ca9946d667609f52cf130d5548881401f1eb2c39b1e2c"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:7381c66e0561c5757ffe616af869b916c8b4e42b367ab29fedc98481d1e74e14"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3573d376454d956553c356df45bb824262c397c6e26ce43e8203c4c540ee0acb"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e89df2958e5159b811af9ff0f92614dabf4ff617c03a4c1c6ff53bf1c399e0e1"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:78cacd03e79d009d95635e7d6ff12c21eb89b894c354bd2b2ed0b4763373693b"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:de5695a6f1d8340b12a5d6d4484290ee74d61e467c39ff03b39e30df62cf83a0"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1c60b9c202d00052183c9be85e5eaf18a4ada0a47d188a83c8f5c5b23252f649"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:f645caaf0008bacf349875a974220f1f1da349c5dbe7c4ec93048cdc785a3326"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:ea9f9c6034ea2d93d9147818f17c2a0860d41b71c38b9ce4d55f21b6f9165a11"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:80d1543d58bd3d6c271b66abf454d437a438dff01c3e62fdbcd68f2a11310d4b"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:73dc03a6a7e30b7edc5b01b601e53e7fc924b04e1835e8e407c12c037e81adbd"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:6f5c2e7bc8a4bf7c426599765b1bd33217ec84023033672c1e9a8b35eaeaaaf8"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-win32.whl", hash = "sha256:12a2b561af122e3d94cdb97fe6fb2bb2b82cef0cdca131646fdb940a1eda04f0"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-win_amd64.whl", hash = "sha256:3160a0fd9754aab7d47f95a6b63ab355388d890163eb03b2d2b87ab0a30cfa59"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:38e812a197bf8e71a59fe55b757a84c1f946d0ac114acafaafaf21667a7e169e"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6baf0baf0d5d265fa7944feb9f7451cc316bfe30e8df1a61b1bb08577c554f31"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:8f25e17ab3039b05f762b0a55ae0b3632b2e073d9c8fc88e89aca31a6198e88f"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3747443b6a904001473370d7810aa19c3a180ccd52a7157aacc264a5ac79265e"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b116502087ce8a6b7a5f1814568ccbd0e9f6cfd99948aa59b0e241dc57cf739f"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d16fd5252f883eb074ca55cb622bc0bee49b979ae4e8639fff6ca3ff44f9f854"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21fa558996782fc226b529fdd2ed7866c2c6ec91cee82735c98a197fae39f706"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6f6c7a8a57e9405cad7485f4c9d3172ae486cfef1344b5ddd8e5239582d7355e"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:ac3775e3311661d4adace3697a52ac0bab17edd166087d493b52d4f4f553f9f0"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:10c93628d7497c81686e8e5e557aafa78f230cd9e77dd0c40032ef90c18f2230"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:6f4f4668e1831850ebcc2fd0b1cd11721947b6dc7c00bf1c6bd3c929ae14f2c7"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:0be65ccf618c1e7ac9b849c315cc2e8a8751d9cfdaa43027d4f6624bd587ab7e"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:53d0a3fa5f8af98a1e261de6a3943ca631c526635eb5817a87a59d9a57ebf48f"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-win32.whl", hash = "sha256:a04f86f41a8916fe45ac5024ec477f41f886b3c435da2d4e3d2709b22ab02af1"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-win_amd64.whl", hash = "sha256:830d2948a5ec37c386d3170c483063798d7879037492540f10a475e3fd6f244b"},
+    {file = "charset_normalizer-3.1.0-py3-none-any.whl", hash = "sha256:3d9098b479e78c85080c98e1e35ff40b4a31d8953102bb0fd7d1b6f8a2111a3d"},
+]
+
 [[package]]
 name = "click"
 version = "8.1.3"
@@ -40,6 +182,10 @@ description = "Composable command line interface toolkit"
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "click-8.1.3-py3-none-any.whl", hash = "sha256:bb4d8133cb15a609f44e8213d9b391b0809795062913b383c62be0ee95b1db48"},
+    {file = "click-8.1.3.tar.gz", hash = "sha256:7682dc8afb30297001674575ea00d1814d808d6a36af415a82bd481d37ba7b8e"},
+]
 
 [package.dependencies]
 colorama = {version = "*", markers = "platform_system == \"Windows\""}
@@ -51,20 +197,82 @@ description = "Cross-platform colored terminal text."
 category = "dev"
 optional = false
 python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7"
+files = [
+    {file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"},
+    {file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"},
+]
+
+[[package]]
+name = "docutils"
+version = "0.18.1"
+description = "Docutils -- Python Documentation Utilities"
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
+files = [
+    {file = "docutils-0.18.1-py2.py3-none-any.whl", hash = "sha256:23010f129180089fbcd3bc08cfefccb3b890b0050e1ca00c867036e9d161b98c"},
+    {file = "docutils-0.18.1.tar.gz", hash = "sha256:679987caf361a7539d76e584cbeddc311e3aee937877c87346f31debc63e9d06"},
+]
+
+[[package]]
+name = "idna"
+version = "3.4"
+description = "Internationalized Domain Names in Applications (IDNA)"
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "idna-3.4-py3-none-any.whl", hash = "sha256:90b77e79eaa3eba6de819a0c442c0b4ceefc341a7a2ab77d7562bf49f425c5c2"},
+    {file = "idna-3.4.tar.gz", hash = "sha256:814f528e8dead7d329833b91c5faa87d60bf71824cd12a7530b5526063d02cb4"},
+]
+
+[[package]]
+name = "imagesize"
+version = "1.4.1"
+description = "Getting image size from png/jpeg/jpeg2000/gif file"
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+files = [
+    {file = "imagesize-1.4.1-py2.py3-none-any.whl", hash = "sha256:0d8d18d08f840c19d0ee7ca1fd82490fdc3729b7ac93f49870406ddde8ef8d8b"},
+    {file = "imagesize-1.4.1.tar.gz", hash = "sha256:69150444affb9cb0d5cc5a92b3676f0b2fb7cd9ae39e947a5e11a36b4497cd4a"},
+]
 
 [[package]]
 name = "isort"
-version = "5.10.1"
+version = "5.12.0"
 description = "A Python utility / library to sort Python imports."
 category = "dev"
 optional = false
-python-versions = ">=3.6.1,<4.0"
+python-versions = ">=3.8.0"
+files = [
+    {file = "isort-5.12.0-py3-none-any.whl", hash = "sha256:f84c2818376e66cf843d497486ea8fed8700b340f308f076c6fb1229dff318b6"},
+    {file = "isort-5.12.0.tar.gz", hash = "sha256:8bef7dde241278824a6d83f44a544709b065191b95b6e50894bdc722fcba0504"},
+]
 
 [package.extras]
-pipfile_deprecated_finder = ["pipreqs", "requirementslib"]
-requirements_deprecated_finder = ["pipreqs", "pip-api"]
-colors = ["colorama (>=0.4.3,<0.5.0)"]
+colors = ["colorama (>=0.4.3)"]
+pipfile-deprecated-finder = ["pip-shims (>=0.5.2)", "pipreqs", "requirementslib"]
 plugins = ["setuptools"]
+requirements-deprecated-finder = ["pip-api", "pipreqs"]
+
+[[package]]
+name = "jinja2"
+version = "3.1.2"
+description = "A very fast and expressive template engine."
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"},
+    {file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"},
+]
+
+[package.dependencies]
+MarkupSafe = ">=2.0"
+
+[package.extras]
+i18n = ["Babel (>=2.7)"]
 
 [[package]]
 name = "jsonpatch"
@@ -73,6 +281,10 @@ description = "Apply JSON-Patches (RFC 6902)"
 category = "main"
 optional = false
 python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
+files = [
+    {file = "jsonpatch-1.32-py2.py3-none-any.whl", hash = "sha256:26ac385719ac9f54df8a2f0827bb8253aa3ea8ab7b3368457bcdb8c14595a397"},
+    {file = "jsonpatch-1.32.tar.gz", hash = "sha256:b6ddfe6c3db30d81a96aaeceb6baf916094ffa23d7dd5fa2c13e13f8b6e600c2"},
+]
 
 [package.dependencies]
 jsonpointer = ">=1.9"
@@ -84,14 +296,22 @@ description = "Identify specific nodes in a JSON document (RFC 6901)"
 category = "main"
 optional = false
 python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+files = [
+    {file = "jsonpointer-2.3-py2.py3-none-any.whl", hash = "sha256:51801e558539b4e9cd268638c078c6c5746c9ac96bc38152d443400e4f3793e9"},
+    {file = "jsonpointer-2.3.tar.gz", hash = "sha256:97cba51526c829282218feb99dab1b1e6bdf8efd1c43dc9d57be093c0d69c99a"},
+]
 
 [[package]]
 name = "jsonschema"
-version = "4.17.0"
+version = "4.17.3"
 description = "An implementation of JSON Schema validation for Python"
 category = "main"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "jsonschema-4.17.3-py3-none-any.whl", hash = "sha256:a870ad254da1a8ca84b6a2905cac29d265f805acc57af304784962a2aa6508f6"},
+    {file = "jsonschema-4.17.3.tar.gz", hash = "sha256:0f864437ab8b6076ba6707453ef8f98a6a0d512a80e93f8abdb676f737ecb60d"},
+]
 
 [package.dependencies]
 attrs = ">=17.4.0"
@@ -101,6 +321,66 @@ pyrsistent = ">=0.14.0,<0.17.0 || >0.17.0,<0.17.1 || >0.17.1,<0.17.2 || >0.17.2"
 format = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3987", "uri-template", "webcolors (>=1.11)"]
 format-nongpl = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3986-validator (>0.1.0)", "uri-template", "webcolors (>=1.11)"]
 
+[[package]]
+name = "markupsafe"
+version = "2.1.2"
+description = "Safely add untrusted strings to HTML/XML markup."
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "MarkupSafe-2.1.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:665a36ae6f8f20a4676b53224e33d456a6f5a72657d9c83c2aa00765072f31f7"},
+    {file = "MarkupSafe-2.1.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:340bea174e9761308703ae988e982005aedf427de816d1afe98147668cc03036"},
+    {file = "MarkupSafe-2.1.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:22152d00bf4a9c7c83960521fc558f55a1adbc0631fbb00a9471e097b19d72e1"},
+    {file = "MarkupSafe-2.1.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:28057e985dace2f478e042eaa15606c7efccb700797660629da387eb289b9323"},
+    {file = "MarkupSafe-2.1.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ca244fa73f50a800cf8c3ebf7fd93149ec37f5cb9596aa8873ae2c1d23498601"},
+    {file = "MarkupSafe-2.1.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:d9d971ec1e79906046aa3ca266de79eac42f1dbf3612a05dc9368125952bd1a1"},
+    {file = "MarkupSafe-2.1.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:7e007132af78ea9df29495dbf7b5824cb71648d7133cf7848a2a5dd00d36f9ff"},
+    {file = "MarkupSafe-2.1.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:7313ce6a199651c4ed9d7e4cfb4aa56fe923b1adf9af3b420ee14e6d9a73df65"},
+    {file = "MarkupSafe-2.1.2-cp310-cp310-win32.whl", hash = "sha256:c4a549890a45f57f1ebf99c067a4ad0cb423a05544accaf2b065246827ed9603"},
+    {file = "MarkupSafe-2.1.2-cp310-cp310-win_amd64.whl", hash = "sha256:835fb5e38fd89328e9c81067fd642b3593c33e1e17e2fdbf77f5676abb14a156"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:2ec4f2d48ae59bbb9d1f9d7efb9236ab81429a764dedca114f5fdabbc3788013"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:608e7073dfa9e38a85d38474c082d4281f4ce276ac0010224eaba11e929dd53a"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:65608c35bfb8a76763f37036547f7adfd09270fbdbf96608be2bead319728fcd"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2bfb563d0211ce16b63c7cb9395d2c682a23187f54c3d79bfec33e6705473c6"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:da25303d91526aac3672ee6d49a2f3db2d9502a4a60b55519feb1a4c7714e07d"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:9cad97ab29dfc3f0249b483412c85c8ef4766d96cdf9dcf5a1e3caa3f3661cf1"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:085fd3201e7b12809f9e6e9bc1e5c96a368c8523fad5afb02afe3c051ae4afcc"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:1bea30e9bf331f3fef67e0a3877b2288593c98a21ccb2cf29b74c581a4eb3af0"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-win32.whl", hash = "sha256:7df70907e00c970c60b9ef2938d894a9381f38e6b9db73c5be35e59d92e06625"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-win_amd64.whl", hash = "sha256:e55e40ff0cc8cc5c07996915ad367fa47da6b3fc091fdadca7f5403239c5fec3"},
+    {file = "MarkupSafe-2.1.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:a6e40afa7f45939ca356f348c8e23048e02cb109ced1eb8420961b2f40fb373a"},
+    {file = "MarkupSafe-2.1.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cf877ab4ed6e302ec1d04952ca358b381a882fbd9d1b07cccbfd61783561f98a"},
+    {file = "MarkupSafe-2.1.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:63ba06c9941e46fa389d389644e2d8225e0e3e5ebcc4ff1ea8506dce646f8c8a"},
+    {file = "MarkupSafe-2.1.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f1cd098434e83e656abf198f103a8207a8187c0fc110306691a2e94a78d0abb2"},
+    {file = "MarkupSafe-2.1.2-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:55f44b440d491028addb3b88f72207d71eeebfb7b5dbf0643f7c023ae1fba619"},
+    {file = "MarkupSafe-2.1.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:a6f2fcca746e8d5910e18782f976489939d54a91f9411c32051b4aab2bd7c513"},
+    {file = "MarkupSafe-2.1.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:0b462104ba25f1ac006fdab8b6a01ebbfbce9ed37fd37fd4acd70c67c973e460"},
+    {file = "MarkupSafe-2.1.2-cp37-cp37m-win32.whl", hash = "sha256:7668b52e102d0ed87cb082380a7e2e1e78737ddecdde129acadb0eccc5423859"},
+    {file = "MarkupSafe-2.1.2-cp37-cp37m-win_amd64.whl", hash = "sha256:6d6607f98fcf17e534162f0709aaad3ab7a96032723d8ac8750ffe17ae5a0666"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:a806db027852538d2ad7555b203300173dd1b77ba116de92da9afbc3a3be3eed"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:a4abaec6ca3ad8660690236d11bfe28dfd707778e2442b45addd2f086d6ef094"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f03a532d7dee1bed20bc4884194a16160a2de9ffc6354b3878ec9682bb623c54"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4cf06cdc1dda95223e9d2d3c58d3b178aa5dacb35ee7e3bbac10e4e1faacb419"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:22731d79ed2eb25059ae3df1dfc9cb1546691cc41f4e3130fe6bfbc3ecbbecfa"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:f8ffb705ffcf5ddd0e80b65ddf7bed7ee4f5a441ea7d3419e861a12eaf41af58"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:8db032bf0ce9022a8e41a22598eefc802314e81b879ae093f36ce9ddf39ab1ba"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2298c859cfc5463f1b64bd55cb3e602528db6fa0f3cfd568d3605c50678f8f03"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-win32.whl", hash = "sha256:50c42830a633fa0cf9e7d27664637532791bfc31c731a87b202d2d8ac40c3ea2"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-win_amd64.whl", hash = "sha256:bb06feb762bade6bf3c8b844462274db0c76acc95c52abe8dbed28ae3d44a147"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:99625a92da8229df6d44335e6fcc558a5037dd0a760e11d84be2260e6f37002f"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:8bca7e26c1dd751236cfb0c6c72d4ad61d986e9a41bbf76cb445f69488b2a2bd"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:40627dcf047dadb22cd25ea7ecfe9cbf3bbbad0482ee5920b582f3809c97654f"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:40dfd3fefbef579ee058f139733ac336312663c6706d1163b82b3003fb1925c4"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:090376d812fb6ac5f171e5938e82e7f2d7adc2b629101cec0db8b267815c85e2"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:2e7821bffe00aa6bd07a23913b7f4e01328c3d5cc0b40b36c0bd81d362faeb65"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:c0a33bc9f02c2b17c3ea382f91b4db0e6cde90b63b296422a939886a7a80de1c"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:b8526c6d437855442cdd3d87eede9c425c4445ea011ca38d937db299382e6fa3"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-win32.whl", hash = "sha256:137678c63c977754abe9086a3ec011e8fd985ab90631145dfb9294ad09c102a7"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-win_amd64.whl", hash = "sha256:0576fe974b40a400449768941d5d0858cc624e3249dfd1e0c33674e5c7ca7aed"},
+    {file = "MarkupSafe-2.1.2.tar.gz", hash = "sha256:abcabc8c2b26036d62d4c746381a6f7cf60aafcc653198ad678306986b09450d"},
+]
+
 [[package]]
 name = "mccabe"
 version = "0.7.0"
@@ -108,6 +388,10 @@ description = "McCabe checker, plugin for flake8"
 category = "dev"
 optional = false
 python-versions = ">=3.6"
+files = [
+    {file = "mccabe-0.7.0-py2.py3-none-any.whl", hash = "sha256:6c2d30ab6be0e4a46919781807b4f0d834ebdd6c6e3dca0bda5a15f863427b6e"},
+    {file = "mccabe-0.7.0.tar.gz", hash = "sha256:348e0240c33b60bbdf4e523192ef919f28cb2c3d7d5c7794f74009290f236325"},
+]
 
 [[package]]
 name = "mypy"
@@ -116,6 +400,31 @@ description = "Optional static typing for Python"
 category = "dev"
 optional = false
 python-versions = ">=3.6"
+files = [
+    {file = "mypy-0.961-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:697540876638ce349b01b6786bc6094ccdaba88af446a9abb967293ce6eaa2b0"},
+    {file = "mypy-0.961-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b117650592e1782819829605a193360a08aa99f1fc23d1d71e1a75a142dc7e15"},
+    {file = "mypy-0.961-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:bdd5ca340beffb8c44cb9dc26697628d1b88c6bddf5c2f6eb308c46f269bb6f3"},
+    {file = "mypy-0.961-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:3e09f1f983a71d0672bbc97ae33ee3709d10c779beb613febc36805a6e28bb4e"},
+    {file = "mypy-0.961-cp310-cp310-win_amd64.whl", hash = "sha256:e999229b9f3198c0c880d5e269f9f8129c8862451ce53a011326cad38b9ccd24"},
+    {file = "mypy-0.961-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:b24be97351084b11582fef18d79004b3e4db572219deee0212078f7cf6352723"},
+    {file = "mypy-0.961-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f4a21d01fc0ba4e31d82f0fff195682e29f9401a8bdb7173891070eb260aeb3b"},
+    {file = "mypy-0.961-cp36-cp36m-win_amd64.whl", hash = "sha256:439c726a3b3da7ca84a0199a8ab444cd8896d95012c4a6c4a0d808e3147abf5d"},
+    {file = "mypy-0.961-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5a0b53747f713f490affdceef835d8f0cb7285187a6a44c33821b6d1f46ed813"},
+    {file = "mypy-0.961-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:0e9f70df36405c25cc530a86eeda1e0867863d9471fe76d1273c783df3d35c2e"},
+    {file = "mypy-0.961-cp37-cp37m-win_amd64.whl", hash = "sha256:b88f784e9e35dcaa075519096dc947a388319cb86811b6af621e3523980f1c8a"},
+    {file = "mypy-0.961-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:d5aaf1edaa7692490f72bdb9fbd941fbf2e201713523bdb3f4038be0af8846c6"},
+    {file = "mypy-0.961-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:9f5f5a74085d9a81a1f9c78081d60a0040c3efb3f28e5c9912b900adf59a16e6"},
+    {file = "mypy-0.961-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:f4b794db44168a4fc886e3450201365c9526a522c46ba089b55e1f11c163750d"},
+    {file = "mypy-0.961-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:64759a273d590040a592e0f4186539858c948302c653c2eac840c7a3cd29e51b"},
+    {file = "mypy-0.961-cp38-cp38-win_amd64.whl", hash = "sha256:63e85a03770ebf403291ec50097954cc5caf2a9205c888ce3a61bd3f82e17569"},
+    {file = "mypy-0.961-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:5f1332964963d4832a94bebc10f13d3279be3ce8f6c64da563d6ee6e2eeda932"},
+    {file = "mypy-0.961-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:006be38474216b833eca29ff6b73e143386f352e10e9c2fbe76aa8549e5554f5"},
+    {file = "mypy-0.961-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:9940e6916ed9371809b35b2154baf1f684acba935cd09928952310fbddaba648"},
+    {file = "mypy-0.961-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:a5ea0875a049de1b63b972456542f04643daf320d27dc592d7c3d9cd5d9bf950"},
+    {file = "mypy-0.961-cp39-cp39-win_amd64.whl", hash = "sha256:1ece702f29270ec6af25db8cf6185c04c02311c6bb21a69f423d40e527b75c56"},
+    {file = "mypy-0.961-py3-none-any.whl", hash = "sha256:03c6cc893e7563e7b2949b969e63f02c000b32502a1b4d1314cabe391aa87d66"},
+    {file = "mypy-0.961.tar.gz", hash = "sha256:f730d56cb924d371c26b8eaddeea3cc07d78ff51c521c6d04899ac6904b75492"},
+]
 
 [package.dependencies]
 mypy-extensions = ">=0.4.3"
@@ -129,19 +438,39 @@ reports = ["lxml"]
 
 [[package]]
 name = "mypy-extensions"
-version = "0.4.3"
-description = "Experimental type system extensions for programs checked with the mypy typechecker."
+version = "1.0.0"
+description = "Type system extensions for programs checked with the mypy type checker."
 category = "dev"
 optional = false
-python-versions = "*"
+python-versions = ">=3.5"
+files = [
+    {file = "mypy_extensions-1.0.0-py3-none-any.whl", hash = "sha256:4392f6c0eb8a5668a69e23d168ffa70f0be9ccfd32b5cc2d26a34ae5b844552d"},
+    {file = "mypy_extensions-1.0.0.tar.gz", hash = "sha256:75dbf8955dc00442a438fc4d0666508a9a97b6bd41aa2f0ffe9d2f2725af0782"},
+]
+
+[[package]]
+name = "packaging"
+version = "23.0"
+description = "Core utilities for Python packages"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "packaging-23.0-py3-none-any.whl", hash = "sha256:714ac14496c3e68c99c29b00845f7a2b85f3bb6f1078fd9f72fd20f0570002b2"},
+    {file = "packaging-23.0.tar.gz", hash = "sha256:b6ad297f8907de0fa2fe1ccbd26fdaf387f5f47c7275fedf8cce89f99446cf97"},
+]
 
 [[package]]
 name = "pathspec"
-version = "0.10.1"
+version = "0.11.1"
 description = "Utility library for gitignore style pattern matching of file paths."
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "pathspec-0.11.1-py3-none-any.whl", hash = "sha256:d8af70af76652554bd134c22b3e8a1cc46ed7d91edcdd721ef1a0c51a84a5293"},
+    {file = "pathspec-0.11.1.tar.gz", hash = "sha256:2798de800fa92780e33acca925945e9a19a133b715067cf165b8866c15a31687"},
+]
 
 [[package]]
 name = "pexpect"
@@ -150,21 +479,29 @@ description = "Pexpect allows easy control of interactive console applications."
 category = "main"
 optional = false
 python-versions = "*"
+files = [
+    {file = "pexpect-4.8.0-py2.py3-none-any.whl", hash = "sha256:0b48a55dcb3c05f3329815901ea4fc1537514d6ba867a152b581d69ae3710937"},
+    {file = "pexpect-4.8.0.tar.gz", hash = "sha256:fc65a43959d153d0114afe13997d439c22823a27cefceb5ff35c2178c6784c0c"},
+]
 
 [package.dependencies]
 ptyprocess = ">=0.5"
 
 [[package]]
 name = "platformdirs"
-version = "2.5.2"
-description = "A small Python module for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
+version = "3.1.1"
+description = "A small Python package for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "platformdirs-3.1.1-py3-none-any.whl", hash = "sha256:e5986afb596e4bb5bde29a79ac9061aa955b94fca2399b7aaac4090860920dd8"},
+    {file = "platformdirs-3.1.1.tar.gz", hash = "sha256:024996549ee88ec1a9aa99ff7f8fc819bb59e2c3477b410d90a16d32d6e707aa"},
+]
 
 [package.extras]
-docs = ["furo (>=2021.7.5b38)", "proselint (>=0.10.2)", "sphinx-autodoc-typehints (>=1.12)", "sphinx (>=4)"]
-test = ["appdirs (==1.4.4)", "pytest-cov (>=2.7)", "pytest-mock (>=3.6)", "pytest (>=6)"]
+docs = ["furo (>=2022.12.7)", "proselint (>=0.13)", "sphinx (>=6.1.3)", "sphinx-autodoc-typehints (>=1.22,!=1.23.4)"]
+test = ["appdirs (==1.4.4)", "covdefaults (>=2.2.2)", "pytest (>=7.2.1)", "pytest-cov (>=4)", "pytest-mock (>=3.10)"]
 
 [[package]]
 name = "ptyprocess"
@@ -173,28 +510,40 @@ description = "Run a subprocess in a pseudo terminal"
 category = "main"
 optional = false
 python-versions = "*"
+files = [
+    {file = "ptyprocess-0.7.0-py2.py3-none-any.whl", hash = "sha256:4b41f3967fce3af57cc7e94b888626c18bf37a083e3651ca8feeb66d492fef35"},
+    {file = "ptyprocess-0.7.0.tar.gz", hash = "sha256:5c5d0a3b48ceee0b48485e0c26037c0acd7d29765ca3fbb5cb3831d347423220"},
+]
 
 [[package]]
 name = "pycodestyle"
-version = "2.9.1"
+version = "2.10.0"
 description = "Python style guide checker"
 category = "dev"
 optional = false
 python-versions = ">=3.6"
+files = [
+    {file = "pycodestyle-2.10.0-py2.py3-none-any.whl", hash = "sha256:8a4eaf0d0495c7395bdab3589ac2db602797d76207242c17d470186815706610"},
+    {file = "pycodestyle-2.10.0.tar.gz", hash = "sha256:347187bdb476329d98f695c213d7295a846d1152ff4fe9bacb8a9590b8ee7053"},
+]
 
 [[package]]
 name = "pydocstyle"
-version = "6.1.1"
+version = "6.3.0"
 description = "Python docstring style checker"
 category = "dev"
 optional = false
 python-versions = ">=3.6"
+files = [
+    {file = "pydocstyle-6.3.0-py3-none-any.whl", hash = "sha256:118762d452a49d6b05e194ef344a55822987a462831ade91ec5c06fd2169d019"},
+    {file = "pydocstyle-6.3.0.tar.gz", hash = "sha256:7ce43f0c0ac87b07494eb9c0b462c0b73e6ff276807f204d6b53edc72b7e44e1"},
+]
 
 [package.dependencies]
-snowballstemmer = "*"
+snowballstemmer = ">=2.2.0"
 
 [package.extras]
-toml = ["toml"]
+toml = ["tomli (>=1.2.3)"]
 
 [[package]]
 name = "pyflakes"
@@ -203,6 +552,25 @@ description = "passive checker of Python programs"
 category = "dev"
 optional = false
 python-versions = ">=3.6"
+files = [
+    {file = "pyflakes-2.5.0-py2.py3-none-any.whl", hash = "sha256:4579f67d887f804e67edb544428f264b7b24f435b263c4614f384135cea553d2"},
+    {file = "pyflakes-2.5.0.tar.gz", hash = "sha256:491feb020dca48ccc562a8c0cbe8df07ee13078df59813b83959cbdada312ea3"},
+]
+
+[[package]]
+name = "pygments"
+version = "2.14.0"
+description = "Pygments is a syntax highlighting package written in Python."
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+    {file = "Pygments-2.14.0-py3-none-any.whl", hash = "sha256:fa7bd7bd2771287c0de303af8bfdfc731f51bd2c6a47ab69d117138893b82717"},
+    {file = "Pygments-2.14.0.tar.gz", hash = "sha256:b3ed06a9e8ac9a9aae5a6f5dbe78a8a58655d17b43b93c078f094ddc476ae297"},
+]
+
+[package.extras]
+plugins = ["importlib-metadata"]
 
 [[package]]
 name = "pylama"
@@ -211,6 +579,10 @@ description = "Code audit tool for python"
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "pylama-8.4.1-py3-none-any.whl", hash = "sha256:5bbdbf5b620aba7206d688ed9fc917ecd3d73e15ec1a89647037a09fa3a86e60"},
+    {file = "pylama-8.4.1.tar.gz", hash = "sha256:2d4f7aecfb5b7466216d48610c7d6bad1c3990c29cdd392ad08259b161e486f6"},
+]
 
 [package.dependencies]
 mccabe = ">=0.7.0"
@@ -219,22 +591,51 @@ pydocstyle = ">=6.1.1"
 pyflakes = ">=2.5.0"
 
 [package.extras]
-all = ["pylint", "eradicate", "radon", "mypy", "vulture"]
+all = ["eradicate", "mypy", "pylint", "radon", "vulture"]
 eradicate = ["eradicate"]
 mypy = ["mypy"]
 pylint = ["pylint"]
 radon = ["radon"]
-tests = ["pytest (>=7.1.2)", "pytest-mypy", "eradicate (>=2.0.0)", "radon (>=5.1.0)", "mypy", "pylint (>=2.11.1)", "pylama-quotes", "toml", "vulture", "types-setuptools", "types-toml"]
+tests = ["eradicate (>=2.0.0)", "mypy", "pylama-quotes", "pylint (>=2.11.1)", "pytest (>=7.1.2)", "pytest-mypy", "radon (>=5.1.0)", "toml", "types-setuptools", "types-toml", "vulture"]
 toml = ["toml (>=0.10.2)"]
 vulture = ["vulture"]
 
 [[package]]
 name = "pyrsistent"
-version = "0.19.1"
+version = "0.19.3"
 description = "Persistent/Functional/Immutable data structures"
 category = "main"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "pyrsistent-0.19.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:20460ac0ea439a3e79caa1dbd560344b64ed75e85d8703943e0b66c2a6150e4a"},
+    {file = "pyrsistent-0.19.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4c18264cb84b5e68e7085a43723f9e4c1fd1d935ab240ce02c0324a8e01ccb64"},
+    {file = "pyrsistent-0.19.3-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4b774f9288dda8d425adb6544e5903f1fb6c273ab3128a355c6b972b7df39dcf"},
+    {file = "pyrsistent-0.19.3-cp310-cp310-win32.whl", hash = "sha256:5a474fb80f5e0d6c9394d8db0fc19e90fa540b82ee52dba7d246a7791712f74a"},
+    {file = "pyrsistent-0.19.3-cp310-cp310-win_amd64.whl", hash = "sha256:49c32f216c17148695ca0e02a5c521e28a4ee6c5089f97e34fe24163113722da"},
+    {file = "pyrsistent-0.19.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:f0774bf48631f3a20471dd7c5989657b639fd2d285b861237ea9e82c36a415a9"},
+    {file = "pyrsistent-0.19.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3ab2204234c0ecd8b9368dbd6a53e83c3d4f3cab10ecaf6d0e772f456c442393"},
+    {file = "pyrsistent-0.19.3-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e42296a09e83028b3476f7073fcb69ffebac0e66dbbfd1bd847d61f74db30f19"},
+    {file = "pyrsistent-0.19.3-cp311-cp311-win32.whl", hash = "sha256:64220c429e42a7150f4bfd280f6f4bb2850f95956bde93c6fda1b70507af6ef3"},
+    {file = "pyrsistent-0.19.3-cp311-cp311-win_amd64.whl", hash = "sha256:016ad1afadf318eb7911baa24b049909f7f3bb2c5b1ed7b6a8f21db21ea3faa8"},
+    {file = "pyrsistent-0.19.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:c4db1bd596fefd66b296a3d5d943c94f4fac5bcd13e99bffe2ba6a759d959a28"},
+    {file = "pyrsistent-0.19.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:aeda827381f5e5d65cced3024126529ddc4289d944f75e090572c77ceb19adbf"},
+    {file = "pyrsistent-0.19.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:42ac0b2f44607eb92ae88609eda931a4f0dfa03038c44c772e07f43e738bcac9"},
+    {file = "pyrsistent-0.19.3-cp37-cp37m-win32.whl", hash = "sha256:e8f2b814a3dc6225964fa03d8582c6e0b6650d68a232df41e3cc1b66a5d2f8d1"},
+    {file = "pyrsistent-0.19.3-cp37-cp37m-win_amd64.whl", hash = "sha256:c9bb60a40a0ab9aba40a59f68214eed5a29c6274c83b2cc206a359c4a89fa41b"},
+    {file = "pyrsistent-0.19.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:a2471f3f8693101975b1ff85ffd19bb7ca7dd7c38f8a81701f67d6b4f97b87d8"},
+    {file = "pyrsistent-0.19.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cc5d149f31706762c1f8bda2e8c4f8fead6e80312e3692619a75301d3dbb819a"},
+    {file = "pyrsistent-0.19.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3311cb4237a341aa52ab8448c27e3a9931e2ee09561ad150ba94e4cfd3fc888c"},
+    {file = "pyrsistent-0.19.3-cp38-cp38-win32.whl", hash = "sha256:f0e7c4b2f77593871e918be000b96c8107da48444d57005b6a6bc61fb4331b2c"},
+    {file = "pyrsistent-0.19.3-cp38-cp38-win_amd64.whl", hash = "sha256:c147257a92374fde8498491f53ffa8f4822cd70c0d85037e09028e478cababb7"},
+    {file = "pyrsistent-0.19.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:b735e538f74ec31378f5a1e3886a26d2ca6351106b4dfde376a26fc32a044edc"},
+    {file = "pyrsistent-0.19.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:99abb85579e2165bd8522f0c0138864da97847875ecbd45f3e7e2af569bfc6f2"},
+    {file = "pyrsistent-0.19.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3a8cb235fa6d3fd7aae6a4f1429bbb1fec1577d978098da1252f0489937786f3"},
+    {file = "pyrsistent-0.19.3-cp39-cp39-win32.whl", hash = "sha256:c74bed51f9b41c48366a286395c67f4e894374306b197e62810e0fdaf2364da2"},
+    {file = "pyrsistent-0.19.3-cp39-cp39-win_amd64.whl", hash = "sha256:878433581fc23e906d947a6814336eee031a00e6defba224234169ae3d3d6a98"},
+    {file = "pyrsistent-0.19.3-py3-none-any.whl", hash = "sha256:ccf0d6bd208f8111179f0c26fdf84ed7c3891982f2edaeae7422575f47e66b64"},
+    {file = "pyrsistent-0.19.3.tar.gz", hash = "sha256:1a2994773706bbb4995c31a97bc94f1418314923bd1048c6d964837040376440"},
+]
 
 [[package]]
 name = "pyyaml"
@@ -243,6 +644,70 @@ description = "YAML parser and emitter for Python"
 category = "main"
 optional = false
 python-versions = ">=3.6"
+files = [
+    {file = "PyYAML-6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d4db7c7aef085872ef65a8fd7d6d09a14ae91f691dec3e87ee5ee0539d516f53"},
+    {file = "PyYAML-6.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9df7ed3b3d2e0ecfe09e14741b857df43adb5a3ddadc919a2d94fbdf78fea53c"},
+    {file = "PyYAML-6.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:77f396e6ef4c73fdc33a9157446466f1cff553d979bd00ecb64385760c6babdc"},
+    {file = "PyYAML-6.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a80a78046a72361de73f8f395f1f1e49f956c6be882eed58505a15f3e430962b"},
+    {file = "PyYAML-6.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f84fbc98b019fef2ee9a1cb3ce93e3187a6df0b2538a651bfb890254ba9f90b5"},
+    {file = "PyYAML-6.0-cp310-cp310-win32.whl", hash = "sha256:2cd5df3de48857ed0544b34e2d40e9fac445930039f3cfe4bcc592a1f836d513"},
+    {file = "PyYAML-6.0-cp310-cp310-win_amd64.whl", hash = "sha256:daf496c58a8c52083df09b80c860005194014c3698698d1a57cbcfa182142a3a"},
+    {file = "PyYAML-6.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:d4b0ba9512519522b118090257be113b9468d804b19d63c71dbcf4a48fa32358"},
+    {file = "PyYAML-6.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:81957921f441d50af23654aa6c5e5eaf9b06aba7f0a19c18a538dc7ef291c5a1"},
+    {file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:afa17f5bc4d1b10afd4466fd3a44dc0e245382deca5b3c353d8b757f9e3ecb8d"},
+    {file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:dbad0e9d368bb989f4515da330b88a057617d16b6a8245084f1b05400f24609f"},
+    {file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:432557aa2c09802be39460360ddffd48156e30721f5e8d917f01d31694216782"},
+    {file = "PyYAML-6.0-cp311-cp311-win32.whl", hash = "sha256:bfaef573a63ba8923503d27530362590ff4f576c626d86a9fed95822a8255fd7"},
+    {file = "PyYAML-6.0-cp311-cp311-win_amd64.whl", hash = "sha256:01b45c0191e6d66c470b6cf1b9531a771a83c1c4208272ead47a3ae4f2f603bf"},
+    {file = "PyYAML-6.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:897b80890765f037df3403d22bab41627ca8811ae55e9a722fd0392850ec4d86"},
+    {file = "PyYAML-6.0-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50602afada6d6cbfad699b0c7bb50d5ccffa7e46a3d738092afddc1f9758427f"},
+    {file = "PyYAML-6.0-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:48c346915c114f5fdb3ead70312bd042a953a8ce5c7106d5bfb1a5254e47da92"},
+    {file = "PyYAML-6.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:98c4d36e99714e55cfbaaee6dd5badbc9a1ec339ebfc3b1f52e293aee6bb71a4"},
+    {file = "PyYAML-6.0-cp36-cp36m-win32.whl", hash = "sha256:0283c35a6a9fbf047493e3a0ce8d79ef5030852c51e9d911a27badfde0605293"},
+    {file = "PyYAML-6.0-cp36-cp36m-win_amd64.whl", hash = "sha256:07751360502caac1c067a8132d150cf3d61339af5691fe9e87803040dbc5db57"},
+    {file = "PyYAML-6.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:819b3830a1543db06c4d4b865e70ded25be52a2e0631ccd2f6a47a2822f2fd7c"},
+    {file = "PyYAML-6.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:473f9edb243cb1935ab5a084eb238d842fb8f404ed2193a915d1784b5a6b5fc0"},
+    {file = "PyYAML-6.0-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0ce82d761c532fe4ec3f87fc45688bdd3a4c1dc5e0b4a19814b9009a29baefd4"},
+    {file = "PyYAML-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:231710d57adfd809ef5d34183b8ed1eeae3f76459c18fb4a0b373ad56bedcdd9"},
+    {file = "PyYAML-6.0-cp37-cp37m-win32.whl", hash = "sha256:c5687b8d43cf58545ade1fe3e055f70eac7a5a1a0bf42824308d868289a95737"},
+    {file = "PyYAML-6.0-cp37-cp37m-win_amd64.whl", hash = "sha256:d15a181d1ecd0d4270dc32edb46f7cb7733c7c508857278d3d378d14d606db2d"},
+    {file = "PyYAML-6.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0b4624f379dab24d3725ffde76559cff63d9ec94e1736b556dacdfebe5ab6d4b"},
+    {file = "PyYAML-6.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:213c60cd50106436cc818accf5baa1aba61c0189ff610f64f4a3e8c6726218ba"},
+    {file = "PyYAML-6.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9fa600030013c4de8165339db93d182b9431076eb98eb40ee068700c9c813e34"},
+    {file = "PyYAML-6.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:277a0ef2981ca40581a47093e9e2d13b3f1fbbeffae064c1d21bfceba2030287"},
+    {file = "PyYAML-6.0-cp38-cp38-win32.whl", hash = "sha256:d4eccecf9adf6fbcc6861a38015c2a64f38b9d94838ac1810a9023a0609e1b78"},
+    {file = "PyYAML-6.0-cp38-cp38-win_amd64.whl", hash = "sha256:1e4747bc279b4f613a09eb64bba2ba602d8a6664c6ce6396a4d0cd413a50ce07"},
+    {file = "PyYAML-6.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:055d937d65826939cb044fc8c9b08889e8c743fdc6a32b33e2390f66013e449b"},
+    {file = "PyYAML-6.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e61ceaab6f49fb8bdfaa0f92c4b57bcfbea54c09277b1b4f7ac376bfb7a7c174"},
+    {file = "PyYAML-6.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d67d839ede4ed1b28a4e8909735fc992a923cdb84e618544973d7dfc71540803"},
+    {file = "PyYAML-6.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cba8c411ef271aa037d7357a2bc8f9ee8b58b9965831d9e51baf703280dc73d3"},
+    {file = "PyYAML-6.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:40527857252b61eacd1d9af500c3337ba8deb8fc298940291486c465c8b46ec0"},
+    {file = "PyYAML-6.0-cp39-cp39-win32.whl", hash = "sha256:b5b9eccad747aabaaffbc6064800670f0c297e52c12754eb1d976c57e4f74dcb"},
+    {file = "PyYAML-6.0-cp39-cp39-win_amd64.whl", hash = "sha256:b3d267842bf12586ba6c734f89d1f5b871df0273157918b0ccefa29deb05c21c"},
+    {file = "PyYAML-6.0.tar.gz", hash = "sha256:68fb519c14306fec9720a2a5b45bc9f0c8d1b9c72adf45c37baedfcd949c35a2"},
+]
+
+[[package]]
+name = "requests"
+version = "2.28.2"
+description = "Python HTTP for Humans."
+category = "dev"
+optional = false
+python-versions = ">=3.7, <4"
+files = [
+    {file = "requests-2.28.2-py3-none-any.whl", hash = "sha256:64299f4909223da747622c030b781c0d7811e359c37124b4bd368fb8c6518baa"},
+    {file = "requests-2.28.2.tar.gz", hash = "sha256:98b1b2782e3c6c4904938b84c0eb932721069dfdb9134313beff7c83c2df24bf"},
+]
+
+[package.dependencies]
+certifi = ">=2017.4.17"
+charset-normalizer = ">=2,<4"
+idna = ">=2.5,<4"
+urllib3 = ">=1.21.1,<1.27"
+
+[package.extras]
+socks = ["PySocks (>=1.5.6,!=1.5.7)"]
+use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"]
 
 [[package]]
 name = "snowballstemmer"
@@ -251,6 +716,175 @@ description = "This package provides 29 stemmers for 28 languages generated from
 category = "dev"
 optional = false
 python-versions = "*"
+files = [
+    {file = "snowballstemmer-2.2.0-py2.py3-none-any.whl", hash = "sha256:c8e1716e83cc398ae16824e5572ae04e0d9fc2c6b985fb0f900f5f0c96ecba1a"},
+    {file = "snowballstemmer-2.2.0.tar.gz", hash = "sha256:09b16deb8547d3412ad7b590689584cd0fe25ec8db3be37788be3810cbf19cb1"},
+]
+
+[[package]]
+name = "sphinx"
+version = "6.1.3"
+description = "Python documentation generator"
+category = "dev"
+optional = false
+python-versions = ">=3.8"
+files = [
+    {file = "Sphinx-6.1.3.tar.gz", hash = "sha256:0dac3b698538ffef41716cf97ba26c1c7788dba73ce6f150c1ff5b4720786dd2"},
+    {file = "sphinx-6.1.3-py3-none-any.whl", hash = "sha256:807d1cb3d6be87eb78a381c3e70ebd8d346b9a25f3753e9947e866b2786865fc"},
+]
+
+[package.dependencies]
+alabaster = ">=0.7,<0.8"
+babel = ">=2.9"
+colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""}
+docutils = ">=0.18,<0.20"
+imagesize = ">=1.3"
+Jinja2 = ">=3.0"
+packaging = ">=21.0"
+Pygments = ">=2.13"
+requests = ">=2.25.0"
+snowballstemmer = ">=2.0"
+sphinxcontrib-applehelp = "*"
+sphinxcontrib-devhelp = "*"
+sphinxcontrib-htmlhelp = ">=2.0.0"
+sphinxcontrib-jsmath = "*"
+sphinxcontrib-qthelp = "*"
+sphinxcontrib-serializinghtml = ">=1.1.5"
+
+[package.extras]
+docs = ["sphinxcontrib-websupport"]
+lint = ["docutils-stubs", "flake8 (>=3.5.0)", "flake8-simplify", "isort", "mypy (>=0.990)", "ruff", "sphinx-lint", "types-requests"]
+test = ["cython", "html5lib", "pytest (>=4.6)"]
+
+[[package]]
+name = "sphinx-rtd-theme"
+version = "1.2.0"
+description = "Read the Docs theme for Sphinx"
+category = "dev"
+optional = false
+python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7"
+files = [
+    {file = "sphinx_rtd_theme-1.2.0-py2.py3-none-any.whl", hash = "sha256:f823f7e71890abe0ac6aaa6013361ea2696fc8d3e1fa798f463e82bdb77eeff2"},
+    {file = "sphinx_rtd_theme-1.2.0.tar.gz", hash = "sha256:a0d8bd1a2ed52e0b338cbe19c4b2eef3c5e7a048769753dac6a9f059c7b641b8"},
+]
+
+[package.dependencies]
+docutils = "<0.19"
+sphinx = ">=1.6,<7"
+sphinxcontrib-jquery = {version = ">=2.0.0,<3.0.0 || >3.0.0", markers = "python_version > \"3\""}
+
+[package.extras]
+dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client", "wheel"]
+
+[[package]]
+name = "sphinxcontrib-applehelp"
+version = "1.0.4"
+description = "sphinxcontrib-applehelp is a Sphinx extension which outputs Apple help books"
+category = "dev"
+optional = false
+python-versions = ">=3.8"
+files = [
+    {file = "sphinxcontrib-applehelp-1.0.4.tar.gz", hash = "sha256:828f867945bbe39817c210a1abfd1bc4895c8b73fcaade56d45357a348a07d7e"},
+    {file = "sphinxcontrib_applehelp-1.0.4-py3-none-any.whl", hash = "sha256:29d341f67fb0f6f586b23ad80e072c8e6ad0b48417db2bde114a4c9746feb228"},
+]
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-devhelp"
+version = "1.0.2"
+description = "sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp document."
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "sphinxcontrib-devhelp-1.0.2.tar.gz", hash = "sha256:ff7f1afa7b9642e7060379360a67e9c41e8f3121f2ce9164266f61b9f4b338e4"},
+    {file = "sphinxcontrib_devhelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:8165223f9a335cc1af7ffe1ed31d2871f325254c0423bc0c4c7cd1c1e4734a2e"},
+]
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-htmlhelp"
+version = "2.0.1"
+description = "sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files"
+category = "dev"
+optional = false
+python-versions = ">=3.8"
+files = [
+    {file = "sphinxcontrib-htmlhelp-2.0.1.tar.gz", hash = "sha256:0cbdd302815330058422b98a113195c9249825d681e18f11e8b1f78a2f11efff"},
+    {file = "sphinxcontrib_htmlhelp-2.0.1-py3-none-any.whl", hash = "sha256:c38cb46dccf316c79de6e5515e1770414b797162b23cd3d06e67020e1d2a6903"},
+]
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["html5lib", "pytest"]
+
+[[package]]
+name = "sphinxcontrib-jquery"
+version = "4.1"
+description = "Extension to include jQuery on newer Sphinx releases"
+category = "dev"
+optional = false
+python-versions = ">=2.7"
+files = [
+    {file = "sphinxcontrib-jquery-4.1.tar.gz", hash = "sha256:1620739f04e36a2c779f1a131a2dfd49b2fd07351bf1968ced074365933abc7a"},
+    {file = "sphinxcontrib_jquery-4.1-py2.py3-none-any.whl", hash = "sha256:f936030d7d0147dd026a4f2b5a57343d233f1fc7b363f68b3d4f1cb0993878ae"},
+]
+
+[package.dependencies]
+Sphinx = ">=1.8"
+
+[[package]]
+name = "sphinxcontrib-jsmath"
+version = "1.0.1"
+description = "A sphinx extension which renders display math in HTML via JavaScript"
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "sphinxcontrib-jsmath-1.0.1.tar.gz", hash = "sha256:a9925e4a4587247ed2191a22df5f6970656cb8ca2bd6284309578f2153e0c4b8"},
+    {file = "sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl", hash = "sha256:2ec2eaebfb78f3f2078e73666b1415417a116cc848b72e5172e596c871103178"},
+]
+
+[package.extras]
+test = ["flake8", "mypy", "pytest"]
+
+[[package]]
+name = "sphinxcontrib-qthelp"
+version = "1.0.3"
+description = "sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp document."
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "sphinxcontrib-qthelp-1.0.3.tar.gz", hash = "sha256:4c33767ee058b70dba89a6fc5c1892c0d57a54be67ddd3e7875a18d14cba5a72"},
+    {file = "sphinxcontrib_qthelp-1.0.3-py2.py3-none-any.whl", hash = "sha256:bd9fc24bcb748a8d51fd4ecaade681350aa63009a347a8c14e637895444dfab6"},
+]
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-serializinghtml"
+version = "1.1.5"
+description = "sphinxcontrib-serializinghtml is a sphinx extension which outputs \"serialized\" HTML files (json and pickle)."
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "sphinxcontrib-serializinghtml-1.1.5.tar.gz", hash = "sha256:aa5f6de5dfdf809ef505c4895e51ef5c9eac17d0f287933eb49ec495280b6952"},
+    {file = "sphinxcontrib_serializinghtml-1.1.5-py2.py3-none-any.whl", hash = "sha256:352a9a00ae864471d3a7ead8d7d79f5fc0b57e8b3f95e9867eb9eb28999b92fd"},
+]
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
 
 [[package]]
 name = "toml"
@@ -259,6 +893,10 @@ description = "Python Library for Tom's Obvious, Minimal Language"
 category = "dev"
 optional = false
 python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
+files = [
+    {file = "toml-0.10.2-py2.py3-none-any.whl", hash = "sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b"},
+    {file = "toml-0.10.2.tar.gz", hash = "sha256:b3bda1d108d5dd99f4a20d24d9c348e91c4db7ab1b749200bded2f839ccbe68f"},
+]
 
 [[package]]
 name = "tomli"
@@ -267,22 +905,51 @@ description = "A lil' TOML parser"
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "tomli-2.0.1-py3-none-any.whl", hash = "sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc"},
+    {file = "tomli-2.0.1.tar.gz", hash = "sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f"},
+]
 
 [[package]]
 name = "types-pyyaml"
-version = "6.0.12.1"
+version = "6.0.12.8"
 description = "Typing stubs for PyYAML"
 category = "main"
 optional = false
 python-versions = "*"
+files = [
+    {file = "types-PyYAML-6.0.12.8.tar.gz", hash = "sha256:19304869a89d49af00be681e7b267414df213f4eb89634c4495fa62e8f942b9f"},
+    {file = "types_PyYAML-6.0.12.8-py3-none-any.whl", hash = "sha256:5314a4b2580999b2ea06b2e5f9a7763d860d6e09cdf21c0e9561daa9cbd60178"},
+]
 
 [[package]]
 name = "typing-extensions"
-version = "4.4.0"
+version = "4.5.0"
 description = "Backported and Experimental Type Hints for Python 3.7+"
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "typing_extensions-4.5.0-py3-none-any.whl", hash = "sha256:fb33085c39dd998ac16d1431ebc293a8b3eedd00fd4a32de0ff79002c19511b4"},
+    {file = "typing_extensions-4.5.0.tar.gz", hash = "sha256:5cb5f4a79139d699607b3ef622a1dedafa84e115ab0024e0d9c044a9479ca7cb"},
+]
+
+[[package]]
+name = "urllib3"
+version = "1.26.15"
+description = "HTTP library with thread-safe connection pooling, file post, and more."
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*"
+files = [
+    {file = "urllib3-1.26.15-py2.py3-none-any.whl", hash = "sha256:aa751d169e23c7479ce47a0cb0da579e3ede798f994f5816a74e4f4500dcea42"},
+    {file = "urllib3-1.26.15.tar.gz", hash = "sha256:8a388717b9476f934a21484e8c8e61875ab60644d29b9b39e11e4b9dc1c6b305"},
+]
+
+[package.extras]
+brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)", "brotlipy (>=0.6.0)"]
+secure = ["certifi", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "ipaddress", "pyOpenSSL (>=0.14)", "urllib3-secure-extra"]
+socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"]
 
 [[package]]
 name = "warlock"
@@ -291,47 +958,16 @@ description = "Python object model built on JSON schema and JSON patch."
 category = "main"
 optional = false
 python-versions = ">=3.7,<4.0"
+files = [
+    {file = "warlock-2.0.1-py3-none-any.whl", hash = "sha256:448df959cec31904f686ac8c6b1dfab80f0cdabce3d303be517dd433eeebf012"},
+    {file = "warlock-2.0.1.tar.gz", hash = "sha256:99abbf9525b2a77f2cde896d3a9f18a5b4590db063db65e08207694d2e0137fc"},
+]
 
 [package.dependencies]
 jsonpatch = ">=1,<2"
 jsonschema = ">=4,<5"
 
 [metadata]
-lock-version = "1.1"
+lock-version = "2.0"
 python-versions = "^3.10"
-content-hash = "a0f040b07fc6ce4deb0be078b9a88c2a465cb6bccb9e260a67e92c2403e2319f"
-
-[metadata.files]
-attrs = []
-black = []
-click = []
-colorama = []
-isort = []
-jsonpatch = []
-jsonpointer = []
-jsonschema = []
-mccabe = []
-mypy = []
-mypy-extensions = []
-pathspec = []
-pexpect = [
-    {file = "pexpect-4.8.0-py2.py3-none-any.whl", hash = "sha256:0b48a55dcb3c05f3329815901ea4fc1537514d6ba867a152b581d69ae3710937"},
-    {file = "pexpect-4.8.0.tar.gz", hash = "sha256:fc65a43959d153d0114afe13997d439c22823a27cefceb5ff35c2178c6784c0c"},
-]
-platformdirs = [
-    {file = "platformdirs-2.5.2-py3-none-any.whl", hash = "sha256:027d8e83a2d7de06bbac4e5ef7e023c02b863d7ea5d079477e722bb41ab25788"},
-    {file = "platformdirs-2.5.2.tar.gz", hash = "sha256:58c8abb07dcb441e6ee4b11d8df0ac856038f944ab98b7be6b27b2a3c7feef19"},
-]
-ptyprocess = []
-pycodestyle = []
-pydocstyle = []
-pyflakes = []
-pylama = []
-pyrsistent = []
-pyyaml = []
-snowballstemmer = []
-toml = []
-tomli = []
-types-pyyaml = []
-typing-extensions = []
-warlock = []
+content-hash = "b3f428e987713d7875434c4b43cadadcb7d77dd3d62fd6855fb8e77ec946f082"
diff --git a/dts/pyproject.toml b/dts/pyproject.toml
index a136c91e5e..c0fe323272 100644
--- a/dts/pyproject.toml
+++ b/dts/pyproject.toml
@@ -22,6 +22,13 @@ pylama = "^8.4.1"
 pyflakes = "2.5.0"
 toml = "^0.10.2"
 
+[tool.poetry.group.docs]
+optional = true
+
+[tool.poetry.group.docs.dependencies]
+Sphinx = "^6.1.3"
+sphinx-rtd-theme = "^1.2.0"
+
 [tool.poetry.scripts]
 dts = "main:main"
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [RFC PATCH v2 3/4] dts: add doc generation
  2023-05-04 12:37 ` [RFC PATCH v2 " Juraj Linkeš
  2023-05-04 12:37   ` [RFC PATCH v2 1/4] dts: code adjustments for sphinx Juraj Linkeš
  2023-05-04 12:37   ` [RFC PATCH v2 2/4] dts: add doc generation dependencies Juraj Linkeš
@ 2023-05-04 12:37   ` Juraj Linkeš
  2023-05-04 12:45     ` Bruce Richardson
  2023-05-05 10:56     ` Bruce Richardson
  2023-05-04 12:37   ` [RFC PATCH v2 4/4] dts: format docstrings to google format Juraj Linkeš
                     ` (2 subsequent siblings)
  5 siblings, 2 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-05-04 12:37 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	wathsala.vithanage, jspewock, probb
  Cc: dev, Juraj Linkeš

The tool used to generate developer docs is sphinx, which is already
used in DPDK. The configuration is kept the same to preserve the style.

Sphinx generates the documentation from Python docstrings. The docstring
format most suitable for DTS seems to be the Google format [0] which
requires the sphinx.ext.napoleon extension.

There are two requirements for building DTS docs:
* The same Python version as DTS or higher, because Sphinx imports the
  code.
* Also the same Python packages as DTS, for the same reason.

[0] https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 doc/api/meson.build      |  1 +
 doc/guides/conf.py       | 22 ++++++++++++++----
 doc/guides/meson.build   |  1 +
 doc/guides/tools/dts.rst | 29 +++++++++++++++++++++++
 dts/doc/doc-index.rst    | 20 ++++++++++++++++
 dts/doc/meson.build      | 50 ++++++++++++++++++++++++++++++++++++++++
 dts/meson.build          | 16 +++++++++++++
 meson.build              |  1 +
 meson_options.txt        |  2 ++
 9 files changed, 137 insertions(+), 5 deletions(-)
 create mode 100644 dts/doc/doc-index.rst
 create mode 100644 dts/doc/meson.build
 create mode 100644 dts/meson.build

diff --git a/doc/api/meson.build b/doc/api/meson.build
index 2876a78a7e..1f0c725a94 100644
--- a/doc/api/meson.build
+++ b/doc/api/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Luca Boccassi <bluca@debian.org>
 
+doc_api_build_dir = meson.current_build_dir()
 doxygen = find_program('doxygen', required: get_option('enable_docs'))
 
 if not doxygen.found()
diff --git a/doc/guides/conf.py b/doc/guides/conf.py
index a55ce38800..04c842b67a 100644
--- a/doc/guides/conf.py
+++ b/doc/guides/conf.py
@@ -7,10 +7,9 @@
 from sphinx import __version__ as sphinx_version
 from os import listdir
 from os import environ
-from os.path import basename
-from os.path import dirname
+from os.path import basename, dirname
 from os.path import join as path_join
-from sys import argv, stderr
+from sys import argv, stderr, path
 
 import configparser
 
@@ -24,6 +23,19 @@
           file=stderr)
     pass
 
+extensions = ['sphinx.ext.napoleon']
+
+# Python docstring options
+autodoc_member_order = 'bysource'
+autodoc_typehints = 'both'
+autodoc_typehints_format = 'short'
+napoleon_numpy_docstring = False
+napoleon_attr_annotations = True
+napoleon_use_ivar = True
+napoleon_use_rtype = False
+add_module_names = False
+toc_object_entries_show_parents = 'hide'
+
 stop_on_error = ('-W' in argv)
 
 project = 'Data Plane Development Kit'
@@ -35,8 +47,8 @@
 html_show_copyright = False
 highlight_language = 'none'
 
-release = environ.setdefault('DPDK_VERSION', "None")
-version = release
+path.append(environ.setdefault('DTS_ROOT', '.'))
+version = environ.setdefault('DPDK_VERSION', "None")
 
 master_doc = 'index'
 
diff --git a/doc/guides/meson.build b/doc/guides/meson.build
index 51f81da2e3..8933d75f6b 100644
--- a/doc/guides/meson.build
+++ b/doc/guides/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Intel Corporation
 
+doc_guides_source_dir = meson.current_source_dir()
 sphinx = find_program('sphinx-build', required: get_option('enable_docs'))
 
 if not sphinx.found()
diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
index ebd6dceb6a..a547da2017 100644
--- a/doc/guides/tools/dts.rst
+++ b/doc/guides/tools/dts.rst
@@ -282,3 +282,32 @@ There are three tools used in DTS to help with code checking, style and formatti
 These three tools are all used in ``devtools/dts-check-format.sh``,
 the DTS code check and format script.
 Refer to the script for usage: ``devtools/dts-check-format.sh -h``.
+
+
+Building DTS API docs
+---------------------
+
+To build DTS API docs, install the dependencies with Poetry, then enter its shell:
+
+   .. code-block:: console
+
+      poetry install --with docs
+      poetry shell
+
+
+Build commands
+~~~~~~~~~~~~~~
+
+The documentation is built using the standard DPDK build system.
+
+After entering Poetry's shell, build the documentation with:
+
+   .. code-block:: console
+
+      ninja -C build dts/doc
+
+The output is generated in ``build/doc/api/dts/html``.
+
+.. Note::
+
+   Make sure to fix any Sphinx warnings when adding or updating docstrings.
diff --git a/dts/doc/doc-index.rst b/dts/doc/doc-index.rst
new file mode 100644
index 0000000000..10151c6851
--- /dev/null
+++ b/dts/doc/doc-index.rst
@@ -0,0 +1,20 @@
+.. DPDK Test Suite documentation master file, created by
+   sphinx-quickstart on Tue Mar 14 12:23:52 2023.
+   You can adapt this file completely to your liking, but it should at least
+   contain the root `toctree` directive.
+
+Welcome to DPDK Test Suite's documentation!
+===========================================
+
+.. toctree::
+   :maxdepth: 4
+   :caption: Contents:
+
+   modules
+
+Indices and tables
+==================
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`
diff --git a/dts/doc/meson.build b/dts/doc/meson.build
new file mode 100644
index 0000000000..db2bb0bed9
--- /dev/null
+++ b/dts/doc/meson.build
@@ -0,0 +1,50 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+sphinx = find_program('sphinx-build', required: get_option('enable_dts_docs'))
+sphinx_apidoc = find_program('sphinx-apidoc', required: get_option('enable_dts_docs'))
+
+if sphinx.found() and sphinx_apidoc.found()
+endif
+
+dts_api_framework_dir = join_paths(dts_dir, 'framework')
+dts_api_build_dir = join_paths(doc_api_build_dir, 'dts')
+dts_api_src = custom_target('dts_api_src',
+        output: 'modules.rst',
+        command: ['SPHINX_APIDOC_OPTIONS=members,show-inheritance',
+            sphinx_apidoc, '--append-syspath', '--force',
+            '--module-first', '--separate',
+            '--doc-project', 'DTS', '-V', meson.project_version(),
+            '-o', dts_api_build_dir,
+            dts_api_framework_dir],
+        build_by_default: get_option('enable_dts_docs'))
+doc_targets += dts_api_src
+doc_target_names += 'DTS_API_sphinx_sources'
+
+cp = find_program('cp', required: get_option('enable_docs'))
+cp_index = custom_target('cp_index',
+        input: 'doc-index.rst',
+        output: 'index.rst',
+        depends: dts_api_src,
+        command: [cp, '@INPUT@', join_paths(dts_api_build_dir, 'index.rst')],
+        build_by_default: get_option('enable_dts_docs'))
+doc_targets += cp_index
+doc_target_names += 'DTS_API_sphinx_index'
+
+extra_sphinx_args = ['-a', '-c', doc_guides_source_dir]
+if get_option('werror')
+    extra_sphinx_args += '-W'
+endif
+
+htmldir = join_paths(get_option('datadir'), 'doc', 'dpdk')
+dts_api_html = custom_target('dts_api_html',
+        output: 'html',
+        depends: cp_index,
+        command: ['DTS_ROOT=@0@'.format(dts_dir),
+            sphinx_wrapper, sphinx, meson.project_version(),
+            dts_api_build_dir, dts_api_build_dir, extra_sphinx_args],
+        build_by_default: get_option('enable_dts_docs'),
+        install: get_option('enable_dts_docs'),
+        install_dir: htmldir)
+doc_targets += dts_api_html
+doc_target_names += 'DTS_API_HTML'
diff --git a/dts/meson.build b/dts/meson.build
new file mode 100644
index 0000000000..17bda07636
--- /dev/null
+++ b/dts/meson.build
@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+doc_targets = []
+doc_target_names = []
+dts_dir = meson.current_source_dir()
+
+subdir('doc')
+
+if doc_targets.length() == 0
+    message = 'No docs targets found'
+else
+    message = 'Built docs:'
+endif
+run_target('dts/doc', command: [echo, message, doc_target_names],
+    depends: doc_targets)
diff --git a/meson.build b/meson.build
index f91d652bc5..7820f334bb 100644
--- a/meson.build
+++ b/meson.build
@@ -84,6 +84,7 @@ subdir('app')
 
 # build docs
 subdir('doc')
+subdir('dts')
 
 # build any examples explicitly requested - useful for developers - and
 # install any example code into the appropriate install path
diff --git a/meson_options.txt b/meson_options.txt
index 82c8297065..267f1b3ef7 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -16,6 +16,8 @@ option('drivers_install_subdir', type: 'string', value: 'dpdk/pmds-<VERSION>', d
        'Subdirectory of libdir where to install PMDs. Defaults to using a versioned subdirectory.')
 option('enable_docs', type: 'boolean', value: false, description:
        'build documentation')
+option('enable_dts_docs', type: 'boolean', value: false, description:
+       'Build DTS API documentation.')
 option('enable_apps', type: 'string', value: '', description:
        'Comma-separated list of apps to build. If unspecified, build all apps.')
 option('enable_drivers', type: 'string', value: '', description:
-- 
2.30.2


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [RFC PATCH v2 4/4] dts: format docstrings to google format
  2023-05-04 12:37 ` [RFC PATCH v2 " Juraj Linkeš
                     ` (2 preceding siblings ...)
  2023-05-04 12:37   ` [RFC PATCH v2 3/4] dts: add doc generation Juraj Linkeš
@ 2023-05-04 12:37   ` Juraj Linkeš
  2023-05-05 14:06   ` [RFC PATCH v2 0/4] dts: add dts api docs Bruce Richardson
  2023-05-11  9:14   ` [RFC PATCH v3 " Juraj Linkeš
  5 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-05-04 12:37 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	wathsala.vithanage, jspewock, probb
  Cc: dev, Juraj Linkeš

WIP: only one module is reformatted to serve as a demonstration.

The Google format is documented here [0].

[0]: https://google.github.io/styleguide/pyguide.html
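
For illustration, a minimal docstring in the target format (the function,
its parameters and its body are hypothetical, just a sketch of the format):

    def allocate_hugepages(amount: int, force_first_numa: bool = False) -> int:
        """Allocate hugepages on the node.

        Args:
            amount: The number of hugepages to allocate.
            force_first_numa: If True, allocate only on the first NUMA node.

        Returns:
            The number of hugepages actually allocated.
        """
        # Illustrative stub only; a real implementation would talk to the node.
        return amount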

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/node.py | 152 +++++++++++++++++++---------
 1 file changed, 103 insertions(+), 49 deletions(-)

diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index 90467981c3..ad8ef442af 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -3,8 +3,13 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
-"""
-A node is a generic host that DTS connects to and manages.
+"""Common functionality for node management.
+
+There's a base class, Node, that's supposed to be extended by other classes
+with functionality specific to that node type.
+The only part that can be used standalone is the Node.skip_setup static method,
+which is a decorator used to skip method execution
+if skip_setup is passed by the user on the cmdline or in an env variable.
 """
 
 from typing import Any, Callable
@@ -26,10 +31,25 @@
 
 
 class Node(object):
-    """
-    Basic class for node management. This class implements methods that
-    manage a node, such as information gathering (of CPU/PCI/NIC) and
-    environment setup.
+    """The base class for node management.
+
+    It shouldn't be instantiated, but rather extended.
+    It implements common methods to manage any node:
+
+       * connection to the node
+       * information gathering of CPU
+       * hugepages setup
+
+    Arguments:
+        node_config: The config from the input configuration file.
+
+    Attributes:
+        main_session: The primary OS-agnostic remote session used
+            to communicate with the node.
+        config: The configuration used to create the node.
+        name: The name of the node.
+        lcores: The list of logical cores that DTS can use on the node.
+            It's derived from logical cores present on the node and user configuration.
     """
 
     main_session: OSSession
@@ -56,65 +76,89 @@ def __init__(self, node_config: NodeConfiguration):
         self._logger.info(f"Created node: {self.name}")
 
     def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
-        """
-        Perform the execution setup that will be done for each execution
-        this node is part of.
+        """Execution setup steps.
+
+        Configure hugepages and call self._set_up_execution where
+        the rest of the configuration steps (if any) are implemented.
+
+        Args:
+            execution_config: The execution configuration according to which
+                the setup steps will be taken.
         """
         self._setup_hugepages()
         self._set_up_execution(execution_config)
 
     def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional execution setup steps for derived classes.
+
+        Derived classes should override this
+        if they want to add additional execution setup steps.
         """
 
     def tear_down_execution(self) -> None:
-        """
-        Perform the execution teardown that will be done after each execution
-        this node is part of concludes.
+        """Execution teardown steps.
+
+        There are currently no execution teardown steps
+        common to all DTS node types.
         """
         self._tear_down_execution()
 
     def _tear_down_execution(self) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional execution teardown steps for derived classes.
+
+        Derived classes should override this
+        if they want to add additional execution teardown steps.
         """
 
     def set_up_build_target(
         self, build_target_config: BuildTargetConfiguration
     ) -> None:
-        """
-        Perform the build target setup that will be done for each build target
-        tested on this node.
+        """Build target setup steps.
+
+        There are currently no build target setup steps
+        common to all DTS node types.
+
+        Args:
+            build_target_config: The build target configuration according to which
+                the setup steps will be taken.
         """
         self._set_up_build_target(build_target_config)
 
     def _set_up_build_target(
         self, build_target_config: BuildTargetConfiguration
     ) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional build target setup steps for derived classes.
+
+        Derived classes should override this
+        if they want to add additional build target setup steps.
         """
 
     def tear_down_build_target(self) -> None:
-        """
-        Perform the build target teardown that will be done after each build target
-        tested on this node.
+        """Build target teardown steps.
+
+        There are currently no build target teardown steps
+        common to all DTS node types.
         """
         self._tear_down_build_target()
 
     def _tear_down_build_target(self) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional build target teardown steps for derived classes.
+
+        Derived classes should override this
+        if they want to add additional build target teardown steps.
         """
 
     def create_session(self, name: str) -> OSSession:
-        """
-        Create and return a new OSSession tailored to the remote OS.
+        """Create and return a new OS-agnostic remote session.
+
+        The returned session won't be used by the object creating it.
+        It will be cleaned up automatically.
+
+        Args:
+            name: The name of the session.
+
+        Returns:
+            A new OS-agnostic remote session.
         """
         session_name = f"{self.name} {name}"
         connection = create_session(
@@ -130,14 +174,24 @@ def filter_lcores(
         filter_specifier: LogicalCoreCount | LogicalCoreList,
         ascending: bool = True,
     ) -> list[LogicalCore]:
-        """
-        Filter the LogicalCores found on the Node according to
-        a LogicalCoreCount or a LogicalCoreList.
+        """Filter the node's logical cores that DTS can use.
 
-        If ascending is True, use cores with the lowest numerical id first
-        and continue in ascending order. If False, start with the highest
-        id and continue in descending order. This ordering affects which
-        sockets to consider first as well.
+        Logical cores that DTS can use are ones that are present on the node,
+        but filtered according to user config.
+        The filter_specifier will filter cores from those logical cores.
+
+        Args:
+            filter_specifier: Two different filters can be used: one specifies
+                the number of logical cores per core, cores per socket
+                and the number of sockets;
+                the other specifies a logical core list.
+            ascending: If True, use cores with the lowest numerical id first
+                and continue in ascending order. If False, start with the highest
+                id and continue in descending order. This ordering affects which
+                sockets to consider first as well.
+
+        Returns:
+            A list of logical cores.
         """
         self._logger.debug(f"Filtering {filter_specifier} from {self.lcores}.")
         return lcore_filter(
@@ -147,17 +201,14 @@ def filter_lcores(
         ).filter()
 
     def _get_remote_cpus(self) -> None:
-        """
-        Scan CPUs in the remote OS and store a list of LogicalCores.
-        """
+        """Scan CPUs in the remote OS and store a list of LogicalCores."""
         self._logger.info("Getting CPU information.")
         self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
 
     def _setup_hugepages(self):
-        """
-        Setup hugepages on the Node. Different architectures can supply different
-        amounts of memory for hugepages and numa-based hugepage allocation may need
-        to be considered.
+        """Set up hugepages on the Node.
+
+        Configure the hugepages only if they're specified in user configuration.
         """
         if self.config.hugepages:
             self.main_session.setup_hugepages(
@@ -165,9 +216,7 @@ def _setup_hugepages(self):
             )
 
     def close(self) -> None:
-        """
-        Close all connections and free other resources.
-        """
+        """Close all connections and free other resources."""
         if self.main_session:
             self.main_session.close()
         for session in self._other_sessions:
@@ -176,6 +225,11 @@ def close(self) -> None:
 
     @staticmethod
     def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
+        """A decorator that skips the decorated function.
+
+        When used, the decorator executes an empty lambda function
+        instead of the decorated function.
+        """
         if SETTINGS.skip_setup:
             return lambda *args: None
         else:
-- 
2.30.2


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v2 3/4] dts: add doc generation
  2023-05-04 12:37   ` [RFC PATCH v2 3/4] dts: add doc generation Juraj Linkeš
@ 2023-05-04 12:45     ` Bruce Richardson
  2023-05-05  7:53       ` Juraj Linkeš
  2023-05-05 10:56     ` Bruce Richardson
  1 sibling, 1 reply; 255+ messages in thread
From: Bruce Richardson @ 2023-05-04 12:45 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, wathsala.vithanage,
	jspewock, probb, dev

On Thu, May 04, 2023 at 02:37:48PM +0200, Juraj Linkeš wrote:
> The tool used to generate developer docs is sphinx, which is already
> used in DPDK. The configuration is kept the same to preserve the style.
> 
> Sphinx generates the documentation from Python docstrings. The docstring
> format most suitable for DTS seems to be the Google format [0] which
> requires the sphinx.ext.napoleon extension.
> 
> There are two requirements for building DTS docs:
> * The same Python version as DTS or higher, because Sphinx imports the
>   code.
> * Also the same Python packages as DTS, for the same reason.
> 
> [0] https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>  doc/api/meson.build      |  1 +
>  doc/guides/conf.py       | 22 ++++++++++++++----
>  doc/guides/meson.build   |  1 +
>  doc/guides/tools/dts.rst | 29 +++++++++++++++++++++++
>  dts/doc/doc-index.rst    | 20 ++++++++++++++++
>  dts/doc/meson.build      | 50 ++++++++++++++++++++++++++++++++++++++++
>  dts/meson.build          | 16 +++++++++++++
>  meson.build              |  1 +
>  meson_options.txt        |  2 ++
>  9 files changed, 137 insertions(+), 5 deletions(-)
>  create mode 100644 dts/doc/doc-index.rst
>  create mode 100644 dts/doc/meson.build
>  create mode 100644 dts/meson.build
> 

<snip>

> diff --git a/dts/doc/meson.build b/dts/doc/meson.build
> new file mode 100644
> index 0000000000..db2bb0bed9
> --- /dev/null
> +++ b/dts/doc/meson.build
> @@ -0,0 +1,50 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2023 PANTHEON.tech s.r.o.
> +
> +sphinx = find_program('sphinx-build', required: get_option('enable_dts_docs'))
> +sphinx_apidoc = find_program('sphinx-apidoc', required: get_option('enable_dts_docs'))
> +
> +if sphinx.found() and sphinx_apidoc.found()
> +endif
> +
> +dts_api_framework_dir = join_paths(dts_dir, 'framework')
> +dts_api_build_dir = join_paths(doc_api_build_dir, 'dts')
> +dts_api_src = custom_target('dts_api_src',
> +        output: 'modules.rst',
> +        command: ['SPHINX_APIDOC_OPTIONS=members,show-inheritance',
> +            sphinx_apidoc, '--append-syspath', '--force',
> +            '--module-first', '--separate',
> +            '--doc-project', 'DTS', '-V', meson.project_version(),
> +            '-o', dts_api_build_dir,
> +            dts_api_framework_dir],
> +        build_by_default: get_option('enable_dts_docs'))
> +doc_targets += dts_api_src
> +doc_target_names += 'DTS_API_sphinx_sources'
> +
> +cp = find_program('cp', required: get_option('enable_docs'))

This should probably be "enable_dts_docs"
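i.e. something like:

    cp = find_program('cp', required: get_option('enable_dts_docs'))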


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v2 3/4] dts: add doc generation
  2023-05-04 12:45     ` Bruce Richardson
@ 2023-05-05  7:53       ` Juraj Linkeš
  2023-05-05 10:24         ` Bruce Richardson
  0 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-05-05  7:53 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, wathsala.vithanage,
	jspewock, probb, dev

On Thu, May 4, 2023 at 2:45 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Thu, May 04, 2023 at 02:37:48PM +0200, Juraj Linkeš wrote:
> > The tool used to generate developer docs is sphinx, which is already
> > used in DPDK. The configuration is kept the same to preserve the style.
> >
> > Sphinx generates the documentation from Python docstrings. The docstring
> > format most suitable for DTS seems to be the Google format [0] which
> > requires the sphinx.ext.napoleon extension.
> >
> > There are two requirements for building DTS docs:
> > * The same Python version as DTS or higher, because Sphinx imports the
> >   code.
> > * Also the same Python packages as DTS, for the same reason.
> >
> > [0] https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings
> >
> > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > ---
> >  doc/api/meson.build      |  1 +
> >  doc/guides/conf.py       | 22 ++++++++++++++----
> >  doc/guides/meson.build   |  1 +
> >  doc/guides/tools/dts.rst | 29 +++++++++++++++++++++++
> >  dts/doc/doc-index.rst    | 20 ++++++++++++++++
> >  dts/doc/meson.build      | 50 ++++++++++++++++++++++++++++++++++++++++
> >  dts/meson.build          | 16 +++++++++++++
> >  meson.build              |  1 +
> >  meson_options.txt        |  2 ++
> >  9 files changed, 137 insertions(+), 5 deletions(-)
> >  create mode 100644 dts/doc/doc-index.rst
> >  create mode 100644 dts/doc/meson.build
> >  create mode 100644 dts/meson.build
> >
>
> <snip>
>
> > diff --git a/dts/doc/meson.build b/dts/doc/meson.build
> > new file mode 100644
> > index 0000000000..db2bb0bed9
> > --- /dev/null
> > +++ b/dts/doc/meson.build
> > @@ -0,0 +1,50 @@
> > +# SPDX-License-Identifier: BSD-3-Clause
> > +# Copyright(c) 2023 PANTHEON.tech s.r.o.
> > +
> > +sphinx = find_program('sphinx-build', required: get_option('enable_dts_docs'))
> > +sphinx_apidoc = find_program('sphinx-apidoc', required: get_option('enable_dts_docs'))
> > +
> > +if sphinx.found() and sphinx_apidoc.found()
> > +endif
> > +
> > +dts_api_framework_dir = join_paths(dts_dir, 'framework')
> > +dts_api_build_dir = join_paths(doc_api_build_dir, 'dts')
> > +dts_api_src = custom_target('dts_api_src',
> > +        output: 'modules.rst',
> > +        command: ['SPHINX_APIDOC_OPTIONS=members,show-inheritance',
> > +            sphinx_apidoc, '--append-syspath', '--force',
> > +            '--module-first', '--separate',
> > +            '--doc-project', 'DTS', '-V', meson.project_version(),
> > +            '-o', dts_api_build_dir,
> > +            dts_api_framework_dir],
> > +        build_by_default: get_option('enable_dts_docs'))
> > +doc_targets += dts_api_src
> > +doc_target_names += 'DTS_API_sphinx_sources'
> > +
> > +cp = find_program('cp', required: get_option('enable_docs'))
>
> This should probably be "enable_dts_docs"
>

Right, I overlooked that.
What do you think of the implementation in general?

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v2 3/4] dts: add doc generation
  2023-05-05  7:53       ` Juraj Linkeš
@ 2023-05-05 10:24         ` Bruce Richardson
  2023-05-05 10:41           ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Bruce Richardson @ 2023-05-05 10:24 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, wathsala.vithanage,
	jspewock, probb, dev

On Fri, May 05, 2023 at 09:53:50AM +0200, Juraj Linkeš wrote:
> On Thu, May 4, 2023 at 2:45 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > On Thu, May 04, 2023 at 02:37:48PM +0200, Juraj Linkeš wrote:
> > > The tool used to generate developer docs is sphinx, which is already
> > > used in DPDK. The configuration is kept the same to preserve the style.
> > >
> > > Sphinx generates the documentation from Python docstrings. The docstring
> > > format most suitable for DTS seems to be the Google format [0] which
> > > requires the sphinx.ext.napoleon extension.
> > >
> > > There are two requirements for building DTS docs:
> > > * The same Python version as DTS or higher, because Sphinx imports the
> > >   code.
> > > * Also the same Python packages as DTS, for the same reason.
> > >
> > > [0] https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings
> > >
> > > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > > ---
> > >  doc/api/meson.build      |  1 +
> > >  doc/guides/conf.py       | 22 ++++++++++++++----
> > >  doc/guides/meson.build   |  1 +
> > >  doc/guides/tools/dts.rst | 29 +++++++++++++++++++++++
> > >  dts/doc/doc-index.rst    | 20 ++++++++++++++++
> > >  dts/doc/meson.build      | 50 ++++++++++++++++++++++++++++++++++++++++
> > >  dts/meson.build          | 16 +++++++++++++
> > >  meson.build              |  1 +
> > >  meson_options.txt        |  2 ++
> > >  9 files changed, 137 insertions(+), 5 deletions(-)
> > >  create mode 100644 dts/doc/doc-index.rst
> > >  create mode 100644 dts/doc/meson.build
> > >  create mode 100644 dts/meson.build
> > >
> >
> > <snip>
> >
> > > diff --git a/dts/doc/meson.build b/dts/doc/meson.build
> > > new file mode 100644
> > > index 0000000000..db2bb0bed9
> > > --- /dev/null
> > > +++ b/dts/doc/meson.build
> > > @@ -0,0 +1,50 @@
> > > +# SPDX-License-Identifier: BSD-3-Clause
> > > +# Copyright(c) 2023 PANTHEON.tech s.r.o.
> > > +
> > > +sphinx = find_program('sphinx-build', required: get_option('enable_dts_docs'))
> > > +sphinx_apidoc = find_program('sphinx-apidoc', required: get_option('enable_dts_docs'))
> > > +
> > > +if sphinx.found() and sphinx_apidoc.found()
> > > +endif
> > > +
> > > +dts_api_framework_dir = join_paths(dts_dir, 'framework')
> > > +dts_api_build_dir = join_paths(doc_api_build_dir, 'dts')
> > > +dts_api_src = custom_target('dts_api_src',
> > > +        output: 'modules.rst',
> > > +        command: ['SPHINX_APIDOC_OPTIONS=members,show-inheritance',
> > > +            sphinx_apidoc, '--append-syspath', '--force',
> > > +            '--module-first', '--separate',
> > > +            '--doc-project', 'DTS', '-V', meson.project_version(),
> > > +            '-o', dts_api_build_dir,
> > > +            dts_api_framework_dir],
> > > +        build_by_default: get_option('enable_dts_docs'))
> > > +doc_targets += dts_api_src
> > > +doc_target_names += 'DTS_API_sphinx_sources'
> > > +
> > > +cp = find_program('cp', required: get_option('enable_docs'))
> >
> > This should probably be "enable_dts_docs"
> >
> 
> Right, I overlooked that.
> What do you think of the implementation in general?

I need to download and apply the patches to test out before I comment on
that. I only gave them a quick scan thus far. I'll try and test them today
if I can.

/Bruce

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v2 3/4] dts: add doc generation
  2023-05-05 10:24         ` Bruce Richardson
@ 2023-05-05 10:41           ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-05-05 10:41 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, wathsala.vithanage,
	jspewock, probb, dev

On Fri, May 5, 2023 at 12:24 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Fri, May 05, 2023 at 09:53:50AM +0200, Juraj Linkeš wrote:
> > On Thu, May 4, 2023 at 2:45 PM Bruce Richardson
> > <bruce.richardson@intel.com> wrote:
> > >
> > > On Thu, May 04, 2023 at 02:37:48PM +0200, Juraj Linkeš wrote:
> > > > The tool used to generate developer docs is sphinx, which is already
> > > > used in DPDK. The configuration is kept the same to preserve the style.
> > > >
> > > > Sphinx generates the documentation from Python docstrings. The docstring
> > > > format most suitable for DTS seems to be the Google format [0] which
> > > > requires the sphinx.ext.napoleon extension.
> > > >
> > > > There are two requirements for building DTS docs:
> > > > * The same Python version as DTS or higher, because Sphinx imports the
> > > >   code.
> > > > * Also the same Python packages as DTS, for the same reason.
> > > >
> > > > [0] https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings
> > > >
> > > > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > > > ---
> > > >  doc/api/meson.build      |  1 +
> > > >  doc/guides/conf.py       | 22 ++++++++++++++----
> > > >  doc/guides/meson.build   |  1 +
> > > >  doc/guides/tools/dts.rst | 29 +++++++++++++++++++++++
> > > >  dts/doc/doc-index.rst    | 20 ++++++++++++++++
> > > >  dts/doc/meson.build      | 50 ++++++++++++++++++++++++++++++++++++++++
> > > >  dts/meson.build          | 16 +++++++++++++
> > > >  meson.build              |  1 +
> > > >  meson_options.txt        |  2 ++
> > > >  9 files changed, 137 insertions(+), 5 deletions(-)
> > > >  create mode 100644 dts/doc/doc-index.rst
> > > >  create mode 100644 dts/doc/meson.build
> > > >  create mode 100644 dts/meson.build
> > > >
> > >
> > > <snip>
> > >
> > > > diff --git a/dts/doc/meson.build b/dts/doc/meson.build
> > > > new file mode 100644
> > > > index 0000000000..db2bb0bed9
> > > > --- /dev/null
> > > > +++ b/dts/doc/meson.build
> > > > @@ -0,0 +1,50 @@
> > > > +# SPDX-License-Identifier: BSD-3-Clause
> > > > +# Copyright(c) 2023 PANTHEON.tech s.r.o.
> > > > +
> > > > +sphinx = find_program('sphinx-build', required: get_option('enable_dts_docs'))
> > > > +sphinx_apidoc = find_program('sphinx-apidoc', required: get_option('enable_dts_docs'))
> > > > +
> > > > +if sphinx.found() and sphinx_apidoc.found()
> > > > +endif
> > > > +
> > > > +dts_api_framework_dir = join_paths(dts_dir, 'framework')
> > > > +dts_api_build_dir = join_paths(doc_api_build_dir, 'dts')
> > > > +dts_api_src = custom_target('dts_api_src',
> > > > +        output: 'modules.rst',
> > > > +        command: ['SPHINX_APIDOC_OPTIONS=members,show-inheritance',
> > > > +            sphinx_apidoc, '--append-syspath', '--force',
> > > > +            '--module-first', '--separate',
> > > > +            '--doc-project', 'DTS', '-V', meson.project_version(),
> > > > +            '-o', dts_api_build_dir,
> > > > +            dts_api_framework_dir],
> > > > +        build_by_default: get_option('enable_dts_docs'))
> > > > +doc_targets += dts_api_src
> > > > +doc_target_names += 'DTS_API_sphinx_sources'
> > > > +
> > > > +cp = find_program('cp', required: get_option('enable_docs'))
> > >
> > > This should probably be "enable_dts_docs"
> > >
> >
> > Right, I overlooked that.
> > What do you think of the implementation in general?
>
> I need to download and apply the patches to test out before I comment on
> that. I only gave them a quick scan thus far. I'll try and test them today
> if I can.
>

Great, thanks.

Juraj

> /Bruce

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v2 3/4] dts: add doc generation
  2023-05-04 12:37   ` [RFC PATCH v2 3/4] dts: add doc generation Juraj Linkeš
  2023-05-04 12:45     ` Bruce Richardson
@ 2023-05-05 10:56     ` Bruce Richardson
  2023-05-05 11:13       ` Juraj Linkeš
  1 sibling, 1 reply; 255+ messages in thread
From: Bruce Richardson @ 2023-05-05 10:56 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, wathsala.vithanage,
	jspewock, probb, dev

On Thu, May 04, 2023 at 02:37:48PM +0200, Juraj Linkeš wrote:
> The tool used to generate developer docs is sphinx, which is already
> used in DPDK. The configuration is kept the same to preserve the style.
> 
> Sphinx generates the documentation from Python docstrings. The docstring
> format most suitable for DTS seems to be the Google format [0] which
> requires the sphinx.ext.napoleon extension.
> 
> There are two requirements for building DTS docs:
> * The same Python version as DTS or higher, because Sphinx imports the
>   code.
> * Also the same Python packages as DTS, for the same reason.
> 
> [0] https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>  doc/api/meson.build      |  1 +
>  doc/guides/conf.py       | 22 ++++++++++++++----
>  doc/guides/meson.build   |  1 +
>  doc/guides/tools/dts.rst | 29 +++++++++++++++++++++++
>  dts/doc/doc-index.rst    | 20 ++++++++++++++++
>  dts/doc/meson.build      | 50 ++++++++++++++++++++++++++++++++++++++++
>  dts/meson.build          | 16 +++++++++++++
>  meson.build              |  1 +
>  meson_options.txt        |  2 ++
>  9 files changed, 137 insertions(+), 5 deletions(-)
>  create mode 100644 dts/doc/doc-index.rst
>  create mode 100644 dts/doc/meson.build
>  create mode 100644 dts/meson.build
> 
<snip>

> diff --git a/dts/doc/meson.build b/dts/doc/meson.build
> new file mode 100644
> index 0000000000..db2bb0bed9
> --- /dev/null
> +++ b/dts/doc/meson.build
> @@ -0,0 +1,50 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2023 PANTHEON.tech s.r.o.
> +
> +sphinx = find_program('sphinx-build', required: get_option('enable_dts_docs'))
> +sphinx_apidoc = find_program('sphinx-apidoc', required: get_option('enable_dts_docs'))
> +
> +if sphinx.found() and sphinx_apidoc.found()
> +endif
> +
> +dts_api_framework_dir = join_paths(dts_dir, 'framework')
> +dts_api_build_dir = join_paths(doc_api_build_dir, 'dts')
> +dts_api_src = custom_target('dts_api_src',
> +        output: 'modules.rst',
> +        command: ['SPHINX_APIDOC_OPTIONS=members,show-inheritance',

This gives errors when I try to configure a build, even without docs
enabled.

	~/dpdk.org$ meson setup build-test
	The Meson build system
	Version: 1.0.1
	Source dir: /home/bruce/dpdk.org
	...
	Program sphinx-build found: YES (/usr/bin/sphinx-build)
	Program sphinx-build found: YES (/usr/bin/sphinx-build)
	Program sphinx-apidoc found: YES (/usr/bin/sphinx-apidoc)

	dts/doc/meson.build:12:0: ERROR: Program 'SPHINX_APIDOC_OPTIONS=members,show-inheritance' not found or not executable

	A full log can be found at /home/bruce/dpdk.org/build-test/meson-logs/meson-log.txt

Assuming these need to be set in the environment, I think you can use the
"env" parameter to custom target instead.

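Something like this, perhaps (an untested sketch reusing the names from the
patch; note the env keyword may need a newer meson than our current minimum):

    dts_api_src = custom_target('dts_api_src',
            output: 'modules.rst',
            command: [sphinx_apidoc, '--append-syspath', '--force',
                '--module-first', '--separate',
                '--doc-project', 'DTS', '-V', meson.project_version(),
                '-o', dts_api_build_dir,
                dts_api_framework_dir],
            env: {'SPHINX_APIDOC_OPTIONS': 'members,show-inheritance'},
            build_by_default: get_option('enable_dts_docs'))
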
> +            sphinx_apidoc, '--append-syspath', '--force',
> +            '--module-first', '--separate',
> +            '--doc-project', 'DTS', '-V', meson.project_version(),
> +            '-o', dts_api_build_dir,
> +            dts_api_framework_dir],
> +        build_by_default: get_option('enable_dts_docs'))
> +doc_targets += dts_api_src
> +doc_target_names += 'DTS_API_sphinx_sources'
> +

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v2 3/4] dts: add doc generation
  2023-05-05 10:56     ` Bruce Richardson
@ 2023-05-05 11:13       ` Juraj Linkeš
  2023-05-05 13:28         ` Bruce Richardson
  0 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-05-05 11:13 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, wathsala.vithanage,
	jspewock, probb, dev

On Fri, May 5, 2023 at 12:57 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Thu, May 04, 2023 at 02:37:48PM +0200, Juraj Linkeš wrote:
> > The tool used to generate developer docs is sphinx, which is already
> > used in DPDK. The configuration is kept the same to preserve the style.
> >
> > Sphinx generates the documentation from Python docstrings. The docstring
> > format most suitable for DTS seems to be the Google format [0] which
> > requires the sphinx.ext.napoleon extension.
> >
> > There are two requirements for building DTS docs:
> > * The same Python version as DTS or higher, because Sphinx imports the
> >   code.
> > * Also the same Python packages as DTS, for the same reason.
> >
> > [0] https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings
> >
> > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > ---
> >  doc/api/meson.build      |  1 +
> >  doc/guides/conf.py       | 22 ++++++++++++++----
> >  doc/guides/meson.build   |  1 +
> >  doc/guides/tools/dts.rst | 29 +++++++++++++++++++++++
> >  dts/doc/doc-index.rst    | 20 ++++++++++++++++
> >  dts/doc/meson.build      | 50 ++++++++++++++++++++++++++++++++++++++++
> >  dts/meson.build          | 16 +++++++++++++
> >  meson.build              |  1 +
> >  meson_options.txt        |  2 ++
> >  9 files changed, 137 insertions(+), 5 deletions(-)
> >  create mode 100644 dts/doc/doc-index.rst
> >  create mode 100644 dts/doc/meson.build
> >  create mode 100644 dts/meson.build
> >
> <snip>
>
> > diff --git a/dts/doc/meson.build b/dts/doc/meson.build
> > new file mode 100644
> > index 0000000000..db2bb0bed9
> > --- /dev/null
> > +++ b/dts/doc/meson.build
> > @@ -0,0 +1,50 @@
> > +# SPDX-License-Identifier: BSD-3-Clause
> > +# Copyright(c) 2023 PANTHEON.tech s.r.o.
> > +
> > +sphinx = find_program('sphinx-build', required: get_option('enable_dts_docs'))
> > +sphinx_apidoc = find_program('sphinx-apidoc', required: get_option('enable_dts_docs'))
> > +
> > +if sphinx.found() and sphinx_apidoc.found()
> > +endif
> > +
> > +dts_api_framework_dir = join_paths(dts_dir, 'framework')
> > +dts_api_build_dir = join_paths(doc_api_build_dir, 'dts')
> > +dts_api_src = custom_target('dts_api_src',
> > +        output: 'modules.rst',
> > +        command: ['SPHINX_APIDOC_OPTIONS=members,show-inheritance',
>
> This gives errors when I try to configure a build, even without docs
> enabled.
>
>         ~/dpdk.org$ meson setup build-test
>         The Meson build system
>         Version: 1.0.1
>         Source dir: /home/bruce/dpdk.org
>         ...
>         Program sphinx-build found: YES (/usr/bin/sphinx-build)
>         Program sphinx-build found: YES (/usr/bin/sphinx-build)
>         Program sphinx-apidoc found: YES (/usr/bin/sphinx-apidoc)
>
>         dts/doc/meson.build:12:0: ERROR: Program 'SPHINX_APIDOC_OPTIONS=members,show-inheritance' not found or not executable
>
>         A full log can be found at /home/bruce/dpdk.org/build-test/meson-logs/meson-log.txt
>
> Assuming these need to be set in the environment, I think you can use the
> "env" parameter to custom target instead.
>

I used meson 0.53.2 as that seemed to be the version I should target
(it's the one used in .ci/linux-setup.sh), and it doesn't support the env
argument (I originally wanted to use it, but it was only added in 0.57.0).
I didn't see the error with 0.53.2.

Which version should I target? 1.0.1?

> > +            sphinx_apidoc, '--append-syspath', '--force',
> > +            '--module-first', '--separate',
> > +            '--doc-project', 'DTS', '-V', meson.project_version(),
> > +            '-o', dts_api_build_dir,
> > +            dts_api_framework_dir],
> > +        build_by_default: get_option('enable_dts_docs'))
> > +doc_targets += dts_api_src
> > +doc_target_names += 'DTS_API_sphinx_sources'
> > +

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v2 3/4] dts: add doc generation
  2023-05-05 11:13       ` Juraj Linkeš
@ 2023-05-05 13:28         ` Bruce Richardson
  2023-05-09  9:23           ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Bruce Richardson @ 2023-05-05 13:28 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, wathsala.vithanage,
	jspewock, probb, dev

On Fri, May 05, 2023 at 01:13:34PM +0200, Juraj Linkeš wrote:
> On Fri, May 5, 2023 at 12:57 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > On Thu, May 04, 2023 at 02:37:48PM +0200, Juraj Linkeš wrote:
> > > The tool used to generate developer docs is sphinx, which is already
> > > used in DPDK. The configuration is kept the same to preserve the style.
> > >
> > > Sphinx generates the documentation from Python docstrings. The docstring
> > > format most suitable for DTS seems to be the Google format [0] which
> > > requires the sphinx.ext.napoleon extension.
> > >
> > > There are two requirements for building DTS docs:
> > > * The same Python version as DTS or higher, because Sphinx imports the
> > >   code.
> > > * Also the same Python packages as DTS, for the same reason.
> > >
> > > [0] https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings
> > >
> > > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > > ---
> > >  doc/api/meson.build      |  1 +
> > >  doc/guides/conf.py       | 22 ++++++++++++++----
> > >  doc/guides/meson.build   |  1 +
> > >  doc/guides/tools/dts.rst | 29 +++++++++++++++++++++++
> > >  dts/doc/doc-index.rst    | 20 ++++++++++++++++
> > >  dts/doc/meson.build      | 50 ++++++++++++++++++++++++++++++++++++++++
> > >  dts/meson.build          | 16 +++++++++++++
> > >  meson.build              |  1 +
> > >  meson_options.txt        |  2 ++
> > >  9 files changed, 137 insertions(+), 5 deletions(-)
> > >  create mode 100644 dts/doc/doc-index.rst
> > >  create mode 100644 dts/doc/meson.build
> > >  create mode 100644 dts/meson.build
> > >
> > <snip>
> >
> > > diff --git a/dts/doc/meson.build b/dts/doc/meson.build
> > > new file mode 100644
> > > index 0000000000..db2bb0bed9
> > > --- /dev/null
> > > +++ b/dts/doc/meson.build
> > > @@ -0,0 +1,50 @@
> > > +# SPDX-License-Identifier: BSD-3-Clause
> > > +# Copyright(c) 2023 PANTHEON.tech s.r.o.
> > > +
> > > +sphinx = find_program('sphinx-build', required: get_option('enable_dts_docs'))
> > > +sphinx_apidoc = find_program('sphinx-apidoc', required: get_option('enable_dts_docs'))
> > > +
> > > +if sphinx.found() and sphinx_apidoc.found()
> > > +endif
> > > +
> > > +dts_api_framework_dir = join_paths(dts_dir, 'framework')
> > > +dts_api_build_dir = join_paths(doc_api_build_dir, 'dts')
> > > +dts_api_src = custom_target('dts_api_src',
> > > +        output: 'modules.rst',
> > > +        command: ['SPHINX_APIDOC_OPTIONS=members,show-inheritance',
> >
> > This gives errors when I try to configure a build, even without docs
> > enabled.
> >
> >         ~/dpdk.org$ meson setup build-test
> >         The Meson build system
> >         Version: 1.0.1
> >         Source dir: /home/bruce/dpdk.org
> >         ...
> >         Program sphinx-build found: YES (/usr/bin/sphinx-build)
> >         Program sphinx-build found: YES (/usr/bin/sphinx-build)
> >         Program sphinx-apidoc found: YES (/usr/bin/sphinx-apidoc)
> >
> >         dts/doc/meson.build:12:0: ERROR: Program 'SPHINX_APIDOC_OPTIONS=members,show-inheritance' not found or not executable
> >
> >         A full log can be found at /home/bruce/dpdk.org/build-test/meson-logs/meson-log.txt
> >
> > Assuming these need to be set in the environment, I think you can use the
> > "env" parameter to custom target instead.
> >
> 
> I used meson 0.53.2 as that seemed to be the version I should target
> (it's used in .ci/linux-setup.sh) which doesn't support the argument
> (I originally wanted to use it, but they added it in 0.57.0). I didn't
> see the error with 0.53.2.
> 
> Which version should I target? 1.0.1?
> 
> > > +            sphinx_apidoc, '--append-syspath', '--force',
> > > +            '--module-first', '--separate',
> > > +            '--doc-project', 'DTS', '-V', meson.project_version(),
> > > +            '-o', dts_api_build_dir,
> > > +            dts_api_framework_dir],
> > > +        build_by_default: get_option('enable_dts_docs'))
> > > +doc_targets += dts_api_src
> > > +doc_target_names += 'DTS_API_sphinx_sources'
> > > +

I didn't try with 0.53.2 - let me test that, see if the error goes away. We
may need different calls based on the meson version.
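
Something along these lines, perhaps - just sketching the shape,
untested:

    if meson.version().version_compare('>=0.57.0')
        # pass the variable via the 'env' kwarg of custom_target()
        apidoc_env = {'SPHINX_APIDOC_OPTIONS': 'members,show-inheritance'}
    else
        # fall back to prefixing the command with /usr/bin/env
        apidoc_cmd_prefix = ['env', 'SPHINX_APIDOC_OPTIONS=members,show-inheritance']
    endif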

Is there no other way to pass this data than via the environment?

/Bruce

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v2 0/4] dts: add dts api docs
  2023-05-04 12:37 ` [RFC PATCH v2 " Juraj Linkeš
                     ` (3 preceding siblings ...)
  2023-05-04 12:37   ` [RFC PATCH v2 4/4] dts: format docstrings to google format Juraj Linkeš
@ 2023-05-05 14:06   ` Bruce Richardson
  2023-05-09 15:28     ` Juraj Linkeš
  2023-05-11  8:55     ` Juraj Linkeš
  2023-05-11  9:14   ` [RFC PATCH v3 " Juraj Linkeš
  5 siblings, 2 replies; 255+ messages in thread
From: Bruce Richardson @ 2023-05-05 14:06 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, wathsala.vithanage,
	jspewock, probb, dev

On Thu, May 04, 2023 at 02:37:45PM +0200, Juraj Linkeš wrote:
> Augment the meson build system with dts api generation. The api docs are
> generated from Python docstrings in DTS using Sphinx. The format of
> choice is the Google format [0].
> 
> The guide html sphinx configuration is used to preserve the same style.
> 
> The build requires the same Python version and dependencies as DTS,
> because Sphinx imports the Python modules. Dependencies are installed
> using Poetry from the dts directory:
> 
> poetry install --with docs
> 
> After installing, enter the Poetry shell:
> 
> poetry shell
> 
> And then run the build:
> ninja -C <meson_build_dir> dts/doc
> 
> There's only one properly documented module that serves as a
> demonstration of the style - framework.testbed_model.node.
> 
> [0] https://google.github.io/styleguide/pyguide.html#s3.8.4-comments-in-classes
> 
> Juraj Linkeš (4):
>   dts: code adjustments for sphinx
>   dts: add doc generation dependencies
>   dts: add doc generation
>   dts: format docstrings to google format
> 

I find the requirement to use poetry to build the docs, and the need to run
specific commands in specific directories, quite awkward. With this patchset
there is no way to just turn on the build option for the DTS doc and
have the docs built on the next rebuild. [Also, with every build I've tried,
I can't get it to build without warnings about a missing "warlock" module.]

From what I understand of the patchset, the doc building here using
sphinx is primarily concerned with building the API docs. The rest of DPDK
uses doxygen for this, and since doxygen supports python, can the same
tooling be used for the DTS docs?

/Bruce

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v2 3/4] dts: add doc generation
  2023-05-05 13:28         ` Bruce Richardson
@ 2023-05-09  9:23           ` Juraj Linkeš
  2023-05-09  9:40             ` Bruce Richardson
  0 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-05-09  9:23 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, wathsala.vithanage,
	jspewock, probb, dev

On Fri, May 5, 2023 at 3:29 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Fri, May 05, 2023 at 01:13:34PM +0200, Juraj Linkeš wrote:
> > On Fri, May 5, 2023 at 12:57 PM Bruce Richardson
> > <bruce.richardson@intel.com> wrote:
> > >
> > > On Thu, May 04, 2023 at 02:37:48PM +0200, Juraj Linkeš wrote:
> > > > The tool used to generate developer docs is sphinx, which is already
> > > > used in DPDK. The configuration is kept the same to preserve the style.
> > > >
> > > > Sphinx generates the documentation from Python docstrings. The docstring
> > > > > format most suitable for DTS seems to be the Google format [0], which
> > > > requires the sphinx.ext.napoleon extension.
> > > >
> > > > There are two requirements for building DTS docs:
> > > > > * The same Python version as DTS or higher, because Sphinx imports the
> > > >   code.
> > > > * Also the same Python packages as DTS, for the same reason.
> > > >
> > > > [0] https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings
> > > >
> > > > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > > > ---
> > > >  doc/api/meson.build      |  1 +
> > > >  doc/guides/conf.py       | 22 ++++++++++++++----
> > > >  doc/guides/meson.build   |  1 +
> > > >  doc/guides/tools/dts.rst | 29 +++++++++++++++++++++++
> > > >  dts/doc/doc-index.rst    | 20 ++++++++++++++++
> > > >  dts/doc/meson.build      | 50 ++++++++++++++++++++++++++++++++++++++++
> > > >  dts/meson.build          | 16 +++++++++++++
> > > >  meson.build              |  1 +
> > > >  meson_options.txt        |  2 ++
> > > >  9 files changed, 137 insertions(+), 5 deletions(-)
> > > >  create mode 100644 dts/doc/doc-index.rst
> > > >  create mode 100644 dts/doc/meson.build
> > > >  create mode 100644 dts/meson.build
> > > >
> > > <snip>
> > >
> > > > diff --git a/dts/doc/meson.build b/dts/doc/meson.build
> > > > new file mode 100644
> > > > index 0000000000..db2bb0bed9
> > > > --- /dev/null
> > > > +++ b/dts/doc/meson.build
> > > > @@ -0,0 +1,50 @@
> > > > +# SPDX-License-Identifier: BSD-3-Clause
> > > > +# Copyright(c) 2023 PANTHEON.tech s.r.o.
> > > > +
> > > > +sphinx = find_program('sphinx-build', required: get_option('enable_dts_docs'))
> > > > +sphinx_apidoc = find_program('sphinx-apidoc', required: get_option('enable_dts_docs'))
> > > > +
> > > > +if sphinx.found() and sphinx_apidoc.found()
> > > > +endif
> > > > +
> > > > +dts_api_framework_dir = join_paths(dts_dir, 'framework')
> > > > +dts_api_build_dir = join_paths(doc_api_build_dir, 'dts')
> > > > +dts_api_src = custom_target('dts_api_src',
> > > > +        output: 'modules.rst',
> > > > +        command: ['SPHINX_APIDOC_OPTIONS=members,show-inheritance',
> > >
> > > This gives errors when I try to configure a build, even without docs
> > > enabled.
> > >
> > >         ~/dpdk.org$ meson setup build-test
> > >         The Meson build system
> > >         Version: 1.0.1
> > >         Source dir: /home/bruce/dpdk.org
> > >         ...
> > >         Program sphinx-build found: YES (/usr/bin/sphinx-build)
> > >         Program sphinx-build found: YES (/usr/bin/sphinx-build)
> > >         Program sphinx-apidoc found: YES (/usr/bin/sphinx-apidoc)
> > >
> > >         dts/doc/meson.build:12:0: ERROR: Program 'SPHINX_APIDOC_OPTIONS=members,show-inheritance' not found or not executable
> > >
> > >         A full log can be found at /home/bruce/dpdk.org/build-test/meson-logs/meson-log.txt
> > >
> > > Assuming these need to be set in the environment, I think you can use the
> > > "env" parameter to custom target instead.
> > >
> >
> > I used meson 0.53.2 as that seemed to be the version I should target
> > (it's used in .ci/linux-setup.sh) which doesn't support the argument
> > (I originally wanted to use it, but they added it in 0.57.0). I didn't
> > see the error with 0.53.2.
> >
> > Which version should I target? 1.0.1?
> >
> > > > +            sphinx_apidoc, '--append-syspath', '--force',
> > > > +            '--module-first', '--separate',
> > > > +            '--doc-project', 'DTS', '-V', meson.project_version(),
> > > > +            '-o', dts_api_build_dir,
> > > > +            dts_api_framework_dir],
> > > > +        build_by_default: get_option('enable_dts_docs'))
> > > > +doc_targets += dts_api_src
> > > > +doc_target_names += 'DTS_API_sphinx_sources'
> > > > +
>
> I didn't try with 0.53.2 - let me test that, see if the error goes away. We
> may need different calls based on the meson version.
>
> Is there no other way to pass this data than via the environment?
>

Certainly. For background, I wanted to do the same thing we do for
DPDK_VERSION, but that would require adding an additional parameter to
call-sphinx-build.py, which wouldn't be used by the other call of
call-sphinx-build.py (the one that builds doc guides), so I skipped
the parameter and set the env var before the call.

There are a few options that come to mind:
1. Use the parameter. There are two sub-options here: either make the
parameter positional and mandatory, which leaves us with an awkward
call for the guides build, or make it optional, which would make the
script a bit more complex (some argparse logic would be needed).
2. Hard-code the path into conf.py.
3. Have separate conf.py files. Maybe we could make this work with symlinks.

There could be something else. I like adding the optional parameter. I
don't know the policy on buildtools script complexity so let me know
what you think.
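
For illustration, a rough sketch of option 1 with an optional argument
(the flag name and wiring are made up, not the final implementation):

    # sketch of buildtools/call-sphinx-build.py with an optional flag
    import argparse
    import os
    import subprocess
    import sys

    parser = argparse.ArgumentParser()
    parser.add_argument('sphinx')   # program to run (sphinx-build/apidoc)
    parser.add_argument('version')  # exported as DPDK_VERSION, as today
    parser.add_argument('src')
    parser.add_argument('dst')
    parser.add_argument('--dts-api', action='store_true',
                        help='set only by the DTS API doc target')
    args, extra = parser.parse_known_args()

    env = dict(os.environ, DPDK_VERSION=args.version)
    if args.dts_api:
        # the guides build never sets this
        env['SPHINX_APIDOC_OPTIONS'] = 'members,show-inheritance'

    sys.exit(subprocess.run([args.sphinx, *extra, args.src, args.dst],
                            env=env).returncode)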

Juraj

> /Bruce

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v2 3/4] dts: add doc generation
  2023-05-09  9:23           ` Juraj Linkeš
@ 2023-05-09  9:40             ` Bruce Richardson
  2023-05-10 12:19               ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Bruce Richardson @ 2023-05-09  9:40 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, wathsala.vithanage,
	jspewock, probb, dev

On Tue, May 09, 2023 at 11:23:50AM +0200, Juraj Linkeš wrote:
> On Fri, May 5, 2023 at 3:29 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > On Fri, May 05, 2023 at 01:13:34PM +0200, Juraj Linkeš wrote:
> > > On Fri, May 5, 2023 at 12:57 PM Bruce Richardson
> > > <bruce.richardson@intel.com> wrote:
> > > >
> > > > On Thu, May 04, 2023 at 02:37:48PM +0200, Juraj Linkeš wrote:
> > > > > The tool used to generate developer docs is sphinx, which is already
> > > > > used in DPDK. The configuration is kept the same to preserve the style.
> > > > >
> > > > > Sphinx generates the documentation from Python docstrings. The docstring
> > > > > > format most suitable for DTS seems to be the Google format [0], which
> > > > > requires the sphinx.ext.napoleon extension.
> > > > >
> > > > > There are two requirements for building DTS docs:
> > > > > > * The same Python version as DTS or higher, because Sphinx imports the
> > > > >   code.
> > > > > * Also the same Python packages as DTS, for the same reason.
> > > > >
> > > > > [0] https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings
> > > > >
> > > > > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > > > > ---
> > > > >  doc/api/meson.build      |  1 +
> > > > >  doc/guides/conf.py       | 22 ++++++++++++++----
> > > > >  doc/guides/meson.build   |  1 +
> > > > >  doc/guides/tools/dts.rst | 29 +++++++++++++++++++++++
> > > > >  dts/doc/doc-index.rst    | 20 ++++++++++++++++
> > > > >  dts/doc/meson.build      | 50 ++++++++++++++++++++++++++++++++++++++++
> > > > >  dts/meson.build          | 16 +++++++++++++
> > > > >  meson.build              |  1 +
> > > > >  meson_options.txt        |  2 ++
> > > > >  9 files changed, 137 insertions(+), 5 deletions(-)
> > > > >  create mode 100644 dts/doc/doc-index.rst
> > > > >  create mode 100644 dts/doc/meson.build
> > > > >  create mode 100644 dts/meson.build
> > > > >
> > > > <snip>
> > > >
> > > > > diff --git a/dts/doc/meson.build b/dts/doc/meson.build
> > > > > new file mode 100644
> > > > > index 0000000000..db2bb0bed9
> > > > > --- /dev/null
> > > > > +++ b/dts/doc/meson.build
> > > > > @@ -0,0 +1,50 @@
> > > > > +# SPDX-License-Identifier: BSD-3-Clause
> > > > > +# Copyright(c) 2023 PANTHEON.tech s.r.o.
> > > > > +
> > > > > +sphinx = find_program('sphinx-build', required: get_option('enable_dts_docs'))
> > > > > +sphinx_apidoc = find_program('sphinx-apidoc', required: get_option('enable_dts_docs'))
> > > > > +
> > > > > +if sphinx.found() and sphinx_apidoc.found()
> > > > > +endif
> > > > > +
> > > > > +dts_api_framework_dir = join_paths(dts_dir, 'framework')
> > > > > +dts_api_build_dir = join_paths(doc_api_build_dir, 'dts')
> > > > > +dts_api_src = custom_target('dts_api_src',
> > > > > +        output: 'modules.rst',
> > > > > +        command: ['SPHINX_APIDOC_OPTIONS=members,show-inheritance',
> > > >
> > > > This gives errors when I try to configure a build, even without docs
> > > > enabled.
> > > >
> > > >         ~/dpdk.org$ meson setup build-test
> > > >         The Meson build system
> > > >         Version: 1.0.1
> > > >         Source dir: /home/bruce/dpdk.org
> > > >         ...
> > > >         Program sphinx-build found: YES (/usr/bin/sphinx-build)
> > > >         Program sphinx-build found: YES (/usr/bin/sphinx-build)
> > > >         Program sphinx-apidoc found: YES (/usr/bin/sphinx-apidoc)
> > > >
> > > >         dts/doc/meson.build:12:0: ERROR: Program 'SPHINX_APIDOC_OPTIONS=members,show-inheritance' not found or not executable
> > > >
> > > >         A full log can be found at /home/bruce/dpdk.org/build-test/meson-logs/meson-log.txt
> > > >
> > > > Assuming these need to be set in the environment, I think you can use the
> > > > "env" parameter to custom target instead.
> > > >
> > >
> > > I used meson 0.53.2 as that seemed to be the version I should target
> > > (it's used in .ci/linux-setup.sh) which doesn't support the argument
> > > (I originally wanted to use it, but they added it in 0.57.0). I didn't
> > > see the error with 0.53.2.
> > >
> > > Which version should I target? 1.0.1?
> > >
> > > > > +            sphinx_apidoc, '--append-syspath', '--force',
> > > > > +            '--module-first', '--separate',
> > > > > +            '--doc-project', 'DTS', '-V', meson.project_version(),
> > > > > +            '-o', dts_api_build_dir,
> > > > > +            dts_api_framework_dir],
> > > > > +        build_by_default: get_option('enable_dts_docs'))
> > > > > +doc_targets += dts_api_src
> > > > > +doc_target_names += 'DTS_API_sphinx_sources'
> > > > > +
> >
> > I didn't try with 0.53.2 - let me test that, see if the error goes away. We
> > may need different calls based on the meson version.
> >
> > Is there no other way to pass this data than via the environment?
> >
> 
> Certainly. For background, I wanted to do the same thing we do for
> DPDK_VERSION, but that would require adding an additional parameter to
> call-sphinx-build.py, which wouldn't be used by the other call of
> call-sphinx-build.py (the one that builds doc guides), so I skipped
> the parameter and set the env var before the call.
> 
> There are a few options that come to mind:
> 1. Use the parameter. There are two sub-options here: either make the
> parameter positional and mandatory, which leaves us with an awkward
> call for the guides build, or make it optional, which would make the
> script a bit more complex (some argparse logic would be needed).
> 2. Hard-code the path into conf.py.
> 3. Have separate conf.py files. Maybe we could make this work with symlinks.
> 
> There could be something else. I like adding the optional parameter. I
> don't know the policy on buildtools script complexity so let me know
> what you think.
> 
If the parameter would just go unused for the main doc build and cause no
issues, I don't see why we can't just put it into the main conf.py file. We
can add a comment explaining that it only applies to the DTS doc. However,
option 1, with an extra optional parameter, doesn't seem so bad to me
either. Using argparse in the build script doesn't seem like a problem,
if it's necessary. Maybe others have other opinions, though.
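
For illustration, the conf.py route might look something like this (the
relative path is a guess at the layout, not tested):

    import os.path
    import sys

    # make the DTS framework importable for autodoc; only the DTS API
    # build uses it, the guides build simply ignores the extra path
    _dts_dir = os.path.join(os.path.dirname(__file__), '..', '..', 'dts')
    if os.path.isdir(_dts_dir):
        sys.path.insert(0, _dts_dir)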

/Bruce

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v2 0/4] dts: add dts api docs
  2023-05-05 14:06   ` [RFC PATCH v2 0/4] dts: add dts api docs Bruce Richardson
@ 2023-05-09 15:28     ` Juraj Linkeš
  2023-05-11  8:55     ` Juraj Linkeš
  1 sibling, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-05-09 15:28 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, wathsala.vithanage,
	jspewock, probb, dev

On Fri, May 5, 2023 at 4:07 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Thu, May 04, 2023 at 02:37:45PM +0200, Juraj Linkeš wrote:
> > Augment the meson build system with dts api generation. The api docs are
> > generated from Python docstrings in DTS using Sphinx. The format of
> > choice is the Google format [0].
> >
> > The guide html sphinx configuration is used to preserve the same style.
> >
> > The build requires the same Python version and dependencies as DTS,
> > because Sphinx imports the Python modules. Dependencies are installed
> > using Poetry from the dts directory:
> >
> > poetry install --with docs
> >
> > After installing, enter the Poetry shell:
> >
> > poetry shell
> >
> > And then run the build:
> > ninja -C <meson_build_dir> dts/doc
> >
> > There's only one properly documented module that serves as a
> > demonstration of the style - framework.testbed_model.node.
> >
> > [0] https://google.github.io/styleguide/pyguide.html#s3.8.4-comments-in-classes
> >
> > Juraj Linkeš (4):
> >   dts: code adjustments for sphinx
> >   dts: add doc generation dependencies
> >   dts: add doc generation
> >   dts: format docstrings to google format
> >
>
> I find the requirement to use poetry to build the docs, and the need to run
> specific commands in specific directories, quite awkward. With this patchset
> there is no way to just turn on the build option for the DTS doc and
> have the docs built on the next rebuild. [Also, with every build I've tried,
> I can't get it to build without warnings about a missing "warlock" module.]
>
> From what I understand of the patchset, the doc building here using
> sphinx is primarily concerned with building the API docs. The rest of DPDK
> uses doxygen for this, and since doxygen supports python, can the same
> tooling be used for the DTS docs?
>

I don't think any tool for Python API docs would be able to do it
without the dependencies. The standard way to document Python code is
in docstrings, which are accessible at runtime (which is why the
dependencies are needed). The Doxygen docs say as much:
"For Python there is a standard way of documenting the code using so
called documentation strings ("""). Such strings are stored in __doc__
and can be retrieved at runtime. Doxygen will extract such comments
and assume they have to be represented in a preformatted way."

There may be a tool that doesn't rely on the __doc__ attribute being
accessible at runtime (I don't think anyone would implement something
like that, though), but it would likely be much worse than Sphinx.
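
A quick illustration of the mechanism:

    def example():
        """The docstring ends up in example.__doc__ at runtime."""

    print(example.__doc__)

Sphinx reads exactly that attribute, so it has to import - and thus
resolve every dependency of - each documented module.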

Juraj

> /Bruce

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v2 3/4] dts: add doc generation
  2023-05-09  9:40             ` Bruce Richardson
@ 2023-05-10 12:19               ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-05-10 12:19 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, wathsala.vithanage,
	jspewock, probb, dev

On Tue, May 9, 2023 at 11:40 AM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Tue, May 09, 2023 at 11:23:50AM +0200, Juraj Linkeš wrote:
> > On Fri, May 5, 2023 at 3:29 PM Bruce Richardson
> > <bruce.richardson@intel.com> wrote:
> > >
> > > On Fri, May 05, 2023 at 01:13:34PM +0200, Juraj Linkeš wrote:
> > > > On Fri, May 5, 2023 at 12:57 PM Bruce Richardson
> > > > <bruce.richardson@intel.com> wrote:
> > > > >
> > > > > On Thu, May 04, 2023 at 02:37:48PM +0200, Juraj Linkeš wrote:
> > > > > > The tool used to generate developer docs is sphinx, which is already
> > > > > > used in DPDK. The configuration is kept the same to preserve the style.
> > > > > >
> > > > > > Sphinx generates the documentation from Python docstrings. The docstring
> > > > > > > format most suitable for DTS seems to be the Google format [0], which
> > > > > > requires the sphinx.ext.napoleon extension.
> > > > > >
> > > > > > There are two requirements for building DTS docs:
> > > > > > > * The same Python version as DTS or higher, because Sphinx imports the
> > > > > >   code.
> > > > > > * Also the same Python packages as DTS, for the same reason.
> > > > > >
> > > > > > [0] https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings
> > > > > >
> > > > > > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > > > > > ---
> > > > > >  doc/api/meson.build      |  1 +
> > > > > >  doc/guides/conf.py       | 22 ++++++++++++++----
> > > > > >  doc/guides/meson.build   |  1 +
> > > > > >  doc/guides/tools/dts.rst | 29 +++++++++++++++++++++++
> > > > > >  dts/doc/doc-index.rst    | 20 ++++++++++++++++
> > > > > >  dts/doc/meson.build      | 50 ++++++++++++++++++++++++++++++++++++++++
> > > > > >  dts/meson.build          | 16 +++++++++++++
> > > > > >  meson.build              |  1 +
> > > > > >  meson_options.txt        |  2 ++
> > > > > >  9 files changed, 137 insertions(+), 5 deletions(-)
> > > > > >  create mode 100644 dts/doc/doc-index.rst
> > > > > >  create mode 100644 dts/doc/meson.build
> > > > > >  create mode 100644 dts/meson.build
> > > > > >
> > > > > <snip>
> > > > >
> > > > > > diff --git a/dts/doc/meson.build b/dts/doc/meson.build
> > > > > > new file mode 100644
> > > > > > index 0000000000..db2bb0bed9
> > > > > > --- /dev/null
> > > > > > +++ b/dts/doc/meson.build
> > > > > > @@ -0,0 +1,50 @@
> > > > > > +# SPDX-License-Identifier: BSD-3-Clause
> > > > > > +# Copyright(c) 2023 PANTHEON.tech s.r.o.
> > > > > > +
> > > > > > +sphinx = find_program('sphinx-build', required: get_option('enable_dts_docs'))
> > > > > > +sphinx_apidoc = find_program('sphinx-apidoc', required: get_option('enable_dts_docs'))
> > > > > > +
> > > > > > +if sphinx.found() and sphinx_apidoc.found()
> > > > > > +endif
> > > > > > +
> > > > > > +dts_api_framework_dir = join_paths(dts_dir, 'framework')
> > > > > > +dts_api_build_dir = join_paths(doc_api_build_dir, 'dts')
> > > > > > +dts_api_src = custom_target('dts_api_src',
> > > > > > +        output: 'modules.rst',
> > > > > > +        command: ['SPHINX_APIDOC_OPTIONS=members,show-inheritance',
> > > > >
> > > > > This gives errors when I try to configure a build, even without docs
> > > > > enabled.
> > > > >
> > > > >         ~/dpdk.org$ meson setup build-test
> > > > >         The Meson build system
> > > > >         Version: 1.0.1
> > > > >         Source dir: /home/bruce/dpdk.org
> > > > >         ...
> > > > >         Program sphinx-build found: YES (/usr/bin/sphinx-build)
> > > > >         Program sphinx-build found: YES (/usr/bin/sphinx-build)
> > > > >         Program sphinx-apidoc found: YES (/usr/bin/sphinx-apidoc)
> > > > >
> > > > >         dts/doc/meson.build:12:0: ERROR: Program 'SPHINX_APIDOC_OPTIONS=members,show-inheritance' not found or not executable
> > > > >
> > > > >         A full log can be found at /home/bruce/dpdk.org/build-test/meson-logs/meson-log.txt
> > > > >
> > > > > Assuming these need to be set in the environment, I think you can use the
> > > > > "env" parameter to custom target instead.
> > > > >
> > > >
> > > > I used meson 0.53.2 as that seemed to be the version I should target
> > > > (it's used in .ci/linux-setup.sh) which doesn't support the argument
> > > > (I originally wanted to use it, but they added it in 0.57.0). I didn't
> > > > see the error with 0.53.2.
> > > >
> > > > Which version should I target? 1.0.1?
> > > >
> > > > > > +            sphinx_apidoc, '--append-syspath', '--force',
> > > > > > +            '--module-first', '--separate',
> > > > > > +            '--doc-project', 'DTS', '-V', meson.project_version(),
> > > > > > +            '-o', dts_api_build_dir,
> > > > > > +            dts_api_framework_dir],
> > > > > > +        build_by_default: get_option('enable_dts_docs'))
> > > > > > +doc_targets += dts_api_src
> > > > > > +doc_target_names += 'DTS_API_sphinx_sources'
> > > > > > +
> > >
> > > I didn't try with 0.53.2 - let me test that, see if the error goes away. We
> > > may need different calls based on the meson version.
> > >
> > > Is there no other way to pass this data than via the environment?
> > >
> >
> > Certainly. For background, I wanted to do the same thing we do for
> > DPDK_VERSION, but that would require adding an additional parameter to
> > call-sphinx-build.py, which wouldn't be used by the other call of
> > call-sphinx-build.py (the one that builds doc guides), so I skipped
> > the parameter and set the env var before the call.
> >
> > There are a few options that come to mind:
> > 1. Use the parameter. There are two sub-options here: either make the
> > parameter positional and mandatory, which leaves us with an awkward
> > call for the guides build, or make it optional, which would make the
> > script a bit more complex (some argparse logic would be needed).
> > 2. Hard-code the path into conf.py.
> > 3. Have separate conf.py files. Maybe we could make this work with symlinks.
> >
> > There could be something else. I like adding the optional parameter. I
> > don't know the policy on buildtools script complexity so let me know
> > what you think.
> >
> If the parameter would just go unused for the main doc build and cause no
> issues, I don't see why we can't just put it into the main conf.py file. We
> can add a comment explaining that it only applies to the DTS doc. However,
> option 1, with an extra optional parameter, doesn't seem so bad to me
> either. Using argparse in the build script doesn't seem like a problem,
> if it's necessary. Maybe others have other opinions, though.
>

I'll just submit the version with argparse and we'll see how it looks.

> /Bruce

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v2 0/4] dts: add dts api docs
  2023-05-05 14:06   ` [RFC PATCH v2 0/4] dts: add dts api docs Bruce Richardson
  2023-05-09 15:28     ` Juraj Linkeš
@ 2023-05-11  8:55     ` Juraj Linkeš
  1 sibling, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-05-11  8:55 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, wathsala.vithanage,
	jspewock, probb, dev

On Fri, May 5, 2023 at 4:07 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Thu, May 04, 2023 at 02:37:45PM +0200, Juraj Linkeš wrote:
> > Augment the meson build system with dts api generation. The api docs are
> > generated from Python docstrings in DTS using Sphinx. The format of
> > choice is the Google format [0].
> >
> > The guide html sphinx configuration is used to preserve the same style.
> >
> > The build requires the same Python version and dependencies as DTS,
> > because Sphinx imports the Python modules. Dependencies are installed
> > using Poetry from the dts directory:
> >
> > poetry install --with docs
> >
> > After installing, enter the Poetry shell:
> >
> > poetry shell
> >
> > And then run the build:
> > ninja -C <meson_build_dir> dts/doc
> >
> > There's only one properly documented module that serves as a
> > demonstration of the style - framework.testbed_model.node.
> >
> > [0] https://google.github.io/styleguide/pyguide.html#s3.8.4-comments-in-classes
> >
> > Juraj Linkeš (4):
> >   dts: code adjustments for sphinx
> >   dts: add doc generation dependencies
> >   dts: add doc generation
> >   dts: format docstrings to google format
> >
>
> I find the requirement to use poetry to build the docs, and the need to run
> specific commands in specific directories, quite awkward. With this patchset
> there is no way to just turn on the build option for the DTS doc and
> have the docs built on the next rebuild. [Also, with every build I've tried,
> I can't get it to build without warnings about a missing "warlock" module.]
>

I want to ask about the warnings. They suggest a problem with
dependencies. Have you entered the Poetry shell? We may need to add
some steps to the docs, which are currently:

poetry install --with docs
poetry shell

And then:
ninja -C build dts/doc

Maybe the problem is with the Poetry version (1.4.2 and higher should
work), which is not specified in the docs yet. I need to update
http://patches.dpdk.org/project/dpdk/patch/20230331091355.1224059-1-juraj.linkes@pantheon.tech/
accordingly.
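
(A quick sanity check, for reference: running "poetry --version"
should report 1.4.2 or newer.)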

Which are your exact steps for building the docs?

Juraj

> From what I understand of the patchset, the doc building here using
> sphinx is primarily concerned with building the API docs. The rest of DPDK
> uses doxygen for this, and since doxygen supports python, can the same
> tooling be used for the DTS docs?
>
> /Bruce

^ permalink raw reply	[flat|nested] 255+ messages in thread

* [RFC PATCH v3 0/4] dts: add dts api docs
  2023-05-04 12:37 ` [RFC PATCH v2 " Juraj Linkeš
                     ` (4 preceding siblings ...)
  2023-05-05 14:06   ` [RFC PATCH v2 0/4] dts: add dts api docs Bruce Richardson
@ 2023-05-11  9:14   ` Juraj Linkeš
  2023-05-11  9:14     ` [RFC PATCH v3 1/4] dts: code adjustments for sphinx Juraj Linkeš
                       ` (5 more replies)
  5 siblings, 6 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-05-11  9:14 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	wathsala.vithanage, jspewock, probb
  Cc: dev, Juraj Linkeš

Augment the meson build system with dts api generation. The api docs are
generated from Python docstrings in DTS using Sphinx. The format of
choice is the Google format [0].

The guides html sphinx configuration is used to preserve the same style.

The build requires the same Python version and dependencies as DTS,
because Sphinx imports the Python modules. Dependencies are installed
using Poetry from the dts directory:

poetry install --with docs

After installing, enter the Poetry shell:

poetry shell

And then run the build:
ninja -C <meson_build_dir> dts/doc

There's only one properly documented module that serves as a
demonstration of the style - framework.testbed_model.node. When we agree
on the docstring format, all docstrings will be reformatted.

[0] https://google.github.io/styleguide/pyguide.html#s3.8.4-comments-in-classes

Juraj Linkeš (4):
  dts: code adjustments for sphinx
  dts: add doc generation dependencies
  dts: add doc generation
  dts: format docstrings to google format

 buildtools/call-sphinx-build.py               |  29 +-
 doc/api/meson.build                           |   1 +
 doc/guides/conf.py                            |  22 +-
 doc/guides/meson.build                        |   1 +
 doc/guides/tools/dts.rst                      |  29 +
 dts/doc/doc-index.rst                         |  20 +
 dts/doc/meson.build                           |  51 ++
 dts/framework/config/__init__.py              |  11 +
 .../{testbed_model/hw => config}/cpu.py       |  13 +
 dts/framework/dts.py                          |   8 +-
 dts/framework/remote_session/__init__.py      |   3 +-
 dts/framework/remote_session/linux_session.py |   2 +-
 dts/framework/remote_session/os_session.py    |  12 +-
 .../remote_session/remote/__init__.py         |  16 -
 .../{remote => }/remote_session.py            |   0
 .../{remote => }/ssh_session.py               |   0
 dts/framework/settings.py                     |  55 +-
 dts/framework/testbed_model/__init__.py       |  10 +-
 dts/framework/testbed_model/hw/__init__.py    |  27 -
 dts/framework/testbed_model/node.py           | 164 ++--
 dts/framework/testbed_model/sut_node.py       |   9 +-
 .../testbed_model/{hw => }/virtual_device.py  |   0
 dts/main.py                                   |   3 +-
 dts/meson.build                               |  16 +
 dts/poetry.lock                               | 770 ++++++++++++++++--
 dts/pyproject.toml                            |   7 +
 dts/tests/TestSuite_hello_world.py            |   6 +-
 meson.build                                   |   1 +
 meson_options.txt                             |   2 +
 29 files changed, 1058 insertions(+), 230 deletions(-)
 create mode 100644 dts/doc/doc-index.rst
 create mode 100644 dts/doc/meson.build
 rename dts/framework/{testbed_model/hw => config}/cpu.py (95%)
 delete mode 100644 dts/framework/remote_session/remote/__init__.py
 rename dts/framework/remote_session/{remote => }/remote_session.py (100%)
 rename dts/framework/remote_session/{remote => }/ssh_session.py (100%)
 delete mode 100644 dts/framework/testbed_model/hw/__init__.py
 rename dts/framework/testbed_model/{hw => }/virtual_device.py (100%)
 create mode 100644 dts/meson.build

-- 
2.30.2


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [RFC PATCH v3 1/4] dts: code adjustments for sphinx
  2023-05-11  9:14   ` [RFC PATCH v3 " Juraj Linkeš
@ 2023-05-11  9:14     ` Juraj Linkeš
  2023-05-11  9:14     ` [RFC PATCH v3 2/4] dts: add doc generation dependencies Juraj Linkeš
                       ` (4 subsequent siblings)
  5 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-05-11  9:14 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	wathsala.vithanage, jspewock, probb
  Cc: dev, Juraj Linkeš

sphinx-build only imports the Python modules when building the
documentation; it doesn't run DTS. This requires changes that make the
code importable without running it. This means:
* properly guarding argument parsing in the if __name__ == '__main__'
  block.
* the logger used by the DTS runner underwent the same treatment so
  that it doesn't create unnecessary log files.
* however, DTS uses the arguments to construct an object holding global
  variables. The defaults for the global variables needed to be moved
  out of argument parsing.
* importing the remote_session module from framework resulted in
  circular imports because of one module trying to import another
  module. This is fixed by more granular imports.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/config/__init__.py              | 11 ++++
 .../{testbed_model/hw => config}/cpu.py       | 13 +++++
 dts/framework/dts.py                          |  8 ++-
 dts/framework/remote_session/__init__.py      |  3 +-
 dts/framework/remote_session/linux_session.py |  2 +-
 dts/framework/remote_session/os_session.py    | 12 +++-
 .../remote_session/remote/__init__.py         | 16 ------
 .../{remote => }/remote_session.py            |  0
 .../{remote => }/ssh_session.py               |  0
 dts/framework/settings.py                     | 55 ++++++++++---------
 dts/framework/testbed_model/__init__.py       | 10 +---
 dts/framework/testbed_model/hw/__init__.py    | 27 ---------
 dts/framework/testbed_model/node.py           | 12 ++--
 dts/framework/testbed_model/sut_node.py       |  9 ++-
 .../testbed_model/{hw => }/virtual_device.py  |  0
 dts/main.py                                   |  3 +-
 dts/tests/TestSuite_hello_world.py            |  6 +-
 17 files changed, 88 insertions(+), 99 deletions(-)
 rename dts/framework/{testbed_model/hw => config}/cpu.py (95%)
 delete mode 100644 dts/framework/remote_session/remote/__init__.py
 rename dts/framework/remote_session/{remote => }/remote_session.py (100%)
 rename dts/framework/remote_session/{remote => }/ssh_session.py (100%)
 delete mode 100644 dts/framework/testbed_model/hw/__init__.py
 rename dts/framework/testbed_model/{hw => }/virtual_device.py (100%)

diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index ebb0823ff5..293c4cb15b 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -7,6 +7,8 @@
 Yaml config parsing methods
 """
 
+# pylama:ignore=W0611
+
 import json
 import os.path
 import pathlib
@@ -19,6 +21,15 @@
 
 from framework.settings import SETTINGS
 
+from .cpu import (
+    LogicalCore,
+    LogicalCoreCount,
+    LogicalCoreCountFilter,
+    LogicalCoreList,
+    LogicalCoreListFilter,
+    lcore_filter,
+)
+
 
 class StrEnum(Enum):
     @staticmethod
diff --git a/dts/framework/testbed_model/hw/cpu.py b/dts/framework/config/cpu.py
similarity index 95%
rename from dts/framework/testbed_model/hw/cpu.py
rename to dts/framework/config/cpu.py
index d1918a12dc..8fe785dfe4 100644
--- a/dts/framework/testbed_model/hw/cpu.py
+++ b/dts/framework/config/cpu.py
@@ -272,3 +272,16 @@ def filter(self) -> list[LogicalCore]:
             )
 
         return filtered_lcores
+
+
+def lcore_filter(
+    core_list: list[LogicalCore],
+    filter_specifier: LogicalCoreCount | LogicalCoreList,
+    ascending: bool,
+) -> LogicalCoreFilter:
+    if isinstance(filter_specifier, LogicalCoreList):
+        return LogicalCoreListFilter(core_list, filter_specifier, ascending)
+    elif isinstance(filter_specifier, LogicalCoreCount):
+        return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
+    else:
+        raise ValueError(f"Unsupported filter r{filter_specifier}")
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index 0502284580..22a09b7e34 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -3,6 +3,7 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
+import logging
 import sys
 
 from .config import CONFIGURATION, BuildTargetConfiguration, ExecutionConfiguration
@@ -12,7 +13,8 @@
 from .testbed_model import SutNode
 from .utils import check_dts_python_version
 
-dts_logger: DTSLOG = getLogger("DTSRunner")
+# dummy defaults to satisfy linters
+dts_logger: DTSLOG | logging.Logger = logging.getLogger("DTSRunner")
 result: DTSResult = DTSResult(dts_logger)
 
 
@@ -24,6 +26,10 @@ def run_all() -> None:
     global dts_logger
     global result
 
+    # create a regular DTS logger and create a new result with it
+    dts_logger = getLogger("DTSRunner")
+    result = DTSResult(dts_logger)
+
     # check the python version of the server that run dts
     check_dts_python_version()
 
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index ee221503df..17ca1459f7 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -17,7 +17,8 @@
 
 from .linux_session import LinuxSession
 from .os_session import OSSession
-from .remote import CommandResult, RemoteSession, SSHSession
+from .remote_session import CommandResult, RemoteSession
+from .ssh_session import SSHSession
 
 
 def create_session(
diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/remote_session/linux_session.py
index a1e3bc3a92..c8ce5fe6da 100644
--- a/dts/framework/remote_session/linux_session.py
+++ b/dts/framework/remote_session/linux_session.py
@@ -2,8 +2,8 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+from framework.config import LogicalCore
 from framework.exception import RemoteCommandExecutionError
-from framework.testbed_model import LogicalCore
 from framework.utils import expand_range
 
 from .posix_session import PosixSession
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
index 4c48ae2567..246f0358ea 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/remote_session/os_session.py
@@ -6,13 +6,13 @@
 from collections.abc import Iterable
 from pathlib import PurePath
 
-from framework.config import Architecture, NodeConfiguration
+from framework.config import Architecture, LogicalCore, NodeConfiguration
 from framework.logger import DTSLOG
 from framework.settings import SETTINGS
-from framework.testbed_model import LogicalCore
 from framework.utils import EnvVarsDict, MesonArgs
 
-from .remote import CommandResult, RemoteSession, create_remote_session
+from .remote_session import CommandResult, RemoteSession
+from .ssh_session import SSHSession
 
 
 class OSSession(ABC):
@@ -173,3 +173,9 @@ def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
         if needed and mount the hugepages if needed.
         If force_first_numa is True, configure hugepages just on the first socket.
         """
+
+
+def create_remote_session(
+    node_config: NodeConfiguration, name: str, logger: DTSLOG
+) -> RemoteSession:
+    return SSHSession(node_config, name, logger)
diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
deleted file mode 100644
index 8a1512210a..0000000000
--- a/dts/framework/remote_session/remote/__init__.py
+++ /dev/null
@@ -1,16 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2023 PANTHEON.tech s.r.o.
-
-# pylama:ignore=W0611
-
-from framework.config import NodeConfiguration
-from framework.logger import DTSLOG
-
-from .remote_session import CommandResult, RemoteSession
-from .ssh_session import SSHSession
-
-
-def create_remote_session(
-    node_config: NodeConfiguration, name: str, logger: DTSLOG
-) -> RemoteSession:
-    return SSHSession(node_config, name, logger)
diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote_session.py
similarity index 100%
rename from dts/framework/remote_session/remote/remote_session.py
rename to dts/framework/remote_session/remote_session.py
diff --git a/dts/framework/remote_session/remote/ssh_session.py b/dts/framework/remote_session/ssh_session.py
similarity index 100%
rename from dts/framework/remote_session/remote/ssh_session.py
rename to dts/framework/remote_session/ssh_session.py
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 71955f4581..144f9dea62 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -6,7 +6,7 @@
 import argparse
 import os
 from collections.abc import Callable, Iterable, Sequence
-from dataclasses import dataclass
+from dataclasses import dataclass, field
 from pathlib import Path
 from typing import Any, TypeVar
 
@@ -59,15 +59,18 @@ def __call__(
 
 @dataclass(slots=True, frozen=True)
 class _Settings:
-    config_file_path: str
-    output_dir: str
-    timeout: float
-    verbose: bool
-    skip_setup: bool
-    dpdk_tarball_path: Path
-    compile_timeout: float
-    test_cases: list
-    re_run: int
+    config_file_path: Path = Path(Path(__file__).parent.parent, "conf.yaml")
+    output_dir: str = "output"
+    timeout: float = 15
+    verbose: bool = False
+    skip_setup: bool = False
+    dpdk_tarball_path: Path | str = "dpdk.tar.xz"
+    compile_timeout: float = 1200
+    test_cases: list[str] = field(default_factory=list)
+    re_run: int = 0
+
+
+SETTINGS: _Settings = _Settings()
 
 
 def _get_parser() -> argparse.ArgumentParser:
@@ -81,7 +84,8 @@ def _get_parser() -> argparse.ArgumentParser:
     parser.add_argument(
         "--config-file",
         action=_env_arg("DTS_CFG_FILE"),
-        default="conf.yaml",
+        default=SETTINGS.config_file_path,
+        type=Path,
         help="[DTS_CFG_FILE] configuration file that describes the test cases, SUTs "
         "and targets.",
     )
@@ -90,7 +94,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--output-dir",
         "--output",
         action=_env_arg("DTS_OUTPUT_DIR"),
-        default="output",
+        default=SETTINGS.output_dir,
         help="[DTS_OUTPUT_DIR] Output directory where dts logs and results are saved.",
     )
 
@@ -98,7 +102,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "-t",
         "--timeout",
         action=_env_arg("DTS_TIMEOUT"),
-        default=15,
+        default=SETTINGS.timeout,
         type=float,
         help="[DTS_TIMEOUT] The default timeout for all DTS operations except for "
         "compiling DPDK.",
@@ -108,7 +112,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "-v",
         "--verbose",
         action=_env_arg("DTS_VERBOSE"),
-        default="N",
+        default=SETTINGS.verbose,
         help="[DTS_VERBOSE] Set to 'Y' to enable verbose output, logging all messages "
         "to the console.",
     )
@@ -117,7 +121,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "-s",
         "--skip-setup",
         action=_env_arg("DTS_SKIP_SETUP"),
-        default="N",
+        default=SETTINGS.skip_setup,
         help="[DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT and TG nodes.",
     )
 
@@ -125,7 +129,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--tarball",
         "--snapshot",
         action=_env_arg("DTS_DPDK_TARBALL"),
-        default="dpdk.tar.xz",
+        default=SETTINGS.dpdk_tarball_path,
         type=Path,
         help="[DTS_DPDK_TARBALL] Path to DPDK source code tarball "
         "which will be used in testing.",
@@ -134,7 +138,7 @@ def _get_parser() -> argparse.ArgumentParser:
     parser.add_argument(
         "--compile-timeout",
         action=_env_arg("DTS_COMPILE_TIMEOUT"),
-        default=1200,
+        default=SETTINGS.compile_timeout,
         type=float,
         help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
     )
@@ -142,8 +146,9 @@ def _get_parser() -> argparse.ArgumentParser:
     parser.add_argument(
         "--test-cases",
         action=_env_arg("DTS_TESTCASES"),
-        default="",
-        help="[DTS_TESTCASES] Comma-separated list of test cases to execute. "
+        nargs="*",
+        default=SETTINGS.test_cases,
+        help="[DTS_TESTCASES] A list of test cases to execute. "
         "Unknown test cases will be silently ignored.",
     )
 
@@ -151,7 +156,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--re-run",
         "--re_run",
         action=_env_arg("DTS_RERUN"),
-        default=0,
+        default=SETTINGS.re_run,
         type=int,
         help="[DTS_RERUN] Re-run each test case the specified amount of times "
         "if a test failure occurs",
@@ -165,10 +170,11 @@ def _check_tarball_path(parsed_args: argparse.Namespace) -> None:
         raise ConfigurationError(f"DPDK tarball '{parsed_args.tarball}' doesn't exist.")
 
 
-def _get_settings() -> _Settings:
+def set_settings() -> None:
+    global SETTINGS
     parsed_args = _get_parser().parse_args()
     _check_tarball_path(parsed_args)
-    return _Settings(
+    SETTINGS = _Settings(
         config_file_path=parsed_args.config_file,
         output_dir=parsed_args.output_dir,
         timeout=parsed_args.timeout,
@@ -176,9 +182,6 @@ def _get_settings() -> _Settings:
         skip_setup=(parsed_args.skip_setup == "Y"),
         dpdk_tarball_path=parsed_args.tarball,
         compile_timeout=parsed_args.compile_timeout,
-        test_cases=parsed_args.test_cases.split(",") if parsed_args.test_cases else [],
+        test_cases=parsed_args.test_cases,
         re_run=parsed_args.re_run,
     )
-
-
-SETTINGS: _Settings = _get_settings()
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index f54a947051..148f81993d 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -9,14 +9,6 @@
 
 # pylama:ignore=W0611
 
-from .hw import (
-    LogicalCore,
-    LogicalCoreCount,
-    LogicalCoreCountFilter,
-    LogicalCoreList,
-    LogicalCoreListFilter,
-    VirtualDevice,
-    lcore_filter,
-)
 from .node import Node
 from .sut_node import SutNode
+from .virtual_device import VirtualDevice
diff --git a/dts/framework/testbed_model/hw/__init__.py b/dts/framework/testbed_model/hw/__init__.py
deleted file mode 100644
index 88ccac0b0e..0000000000
--- a/dts/framework/testbed_model/hw/__init__.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2023 PANTHEON.tech s.r.o.
-
-# pylama:ignore=W0611
-
-from .cpu import (
-    LogicalCore,
-    LogicalCoreCount,
-    LogicalCoreCountFilter,
-    LogicalCoreFilter,
-    LogicalCoreList,
-    LogicalCoreListFilter,
-)
-from .virtual_device import VirtualDevice
-
-
-def lcore_filter(
-    core_list: list[LogicalCore],
-    filter_specifier: LogicalCoreCount | LogicalCoreList,
-    ascending: bool,
-) -> LogicalCoreFilter:
-    if isinstance(filter_specifier, LogicalCoreList):
-        return LogicalCoreListFilter(core_list, filter_specifier, ascending)
-    elif isinstance(filter_specifier, LogicalCoreCount):
-        return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
-    else:
-        raise ValueError(f"Unsupported filter r{filter_specifier}")
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index d48fafe65d..90467981c3 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -12,19 +12,17 @@
 from framework.config import (
     BuildTargetConfiguration,
     ExecutionConfiguration,
-    NodeConfiguration,
-)
-from framework.logger import DTSLOG, getLogger
-from framework.remote_session import OSSession, create_session
-from framework.settings import SETTINGS
-
-from .hw import (
     LogicalCore,
     LogicalCoreCount,
     LogicalCoreList,
     LogicalCoreListFilter,
+    NodeConfiguration,
     lcore_filter,
 )
+from framework.logger import DTSLOG, getLogger
+from framework.remote_session import create_session
+from framework.remote_session.os_session import OSSession
+from framework.settings import SETTINGS
 
 
 class Node(object):
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 2b2b50d982..6db4a505bb 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -7,13 +7,18 @@
 import time
 from pathlib import PurePath
 
-from framework.config import BuildTargetConfiguration, NodeConfiguration
+from framework.config import (
+    BuildTargetConfiguration,
+    LogicalCoreCount,
+    LogicalCoreList,
+    NodeConfiguration,
+)
 from framework.remote_session import CommandResult, OSSession
 from framework.settings import SETTINGS
 from framework.utils import EnvVarsDict, MesonArgs
 
-from .hw import LogicalCoreCount, LogicalCoreList, VirtualDevice
 from .node import Node
+from .virtual_device import VirtualDevice
 
 
 class SutNode(Node):
diff --git a/dts/framework/testbed_model/hw/virtual_device.py b/dts/framework/testbed_model/virtual_device.py
similarity index 100%
rename from dts/framework/testbed_model/hw/virtual_device.py
rename to dts/framework/testbed_model/virtual_device.py
diff --git a/dts/main.py b/dts/main.py
index 43311fa847..060ff1b19a 100755
--- a/dts/main.py
+++ b/dts/main.py
@@ -10,10 +10,11 @@
 
 import logging
 
-from framework import dts
+from framework import dts, settings
 
 
 def main() -> None:
+    settings.set_settings()
     dts.run_all()
 
 
diff --git a/dts/tests/TestSuite_hello_world.py b/dts/tests/TestSuite_hello_world.py
index 7e3d95c0cf..96c31a6c8c 100644
--- a/dts/tests/TestSuite_hello_world.py
+++ b/dts/tests/TestSuite_hello_world.py
@@ -6,12 +6,8 @@
 No other EAL parameters apart from cores are used.
 """
 
+from framework.config import LogicalCoreCount, LogicalCoreCountFilter, LogicalCoreList
 from framework.test_suite import TestSuite
-from framework.testbed_model import (
-    LogicalCoreCount,
-    LogicalCoreCountFilter,
-    LogicalCoreList,
-)
 
 
 class TestHelloWorld(TestSuite):
-- 
2.30.2


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [RFC PATCH v3 2/4] dts: add doc generation dependencies
  2023-05-11  9:14   ` [RFC PATCH v3 " Juraj Linkeš
  2023-05-11  9:14     ` [RFC PATCH v3 1/4] dts: code adjustments for sphinx Juraj Linkeš
@ 2023-05-11  9:14     ` Juraj Linkeš
  2023-05-11  9:14     ` [RFC PATCH v3 3/4] dts: add doc generation Juraj Linkeš
                       ` (3 subsequent siblings)
  5 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-05-11  9:14 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	wathsala.vithanage, jspewock, probb
  Cc: dev, Juraj Linkeš

Sphinx imports every Python module when generating documentation from
docstrings, meaning all dts dependencies, including the Python version,
must be satisfied.
By adding Sphinx to dts dependencies we make sure that the proper
Python version and dependencies are used when Sphinx is executed.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/poetry.lock    | 770 +++++++++++++++++++++++++++++++++++++++++----
 dts/pyproject.toml |   7 +
 2 files changed, 710 insertions(+), 67 deletions(-)

diff --git a/dts/poetry.lock b/dts/poetry.lock
index 0b2a007d4d..500f89dac1 100644
--- a/dts/poetry.lock
+++ b/dts/poetry.lock
@@ -1,24 +1,69 @@
+# This file is automatically @generated by Poetry and should not be changed by hand.
+
+[[package]]
+name = "alabaster"
+version = "0.7.13"
+description = "A configurable sidebar-enabled Sphinx theme"
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+    {file = "alabaster-0.7.13-py3-none-any.whl", hash = "sha256:1ee19aca801bbabb5ba3f5f258e4422dfa86f82f3e9cefb0859b283cdd7f62a3"},
+    {file = "alabaster-0.7.13.tar.gz", hash = "sha256:a27a4a084d5e690e16e01e03ad2b2e552c61a65469419b907243193de1a84ae2"},
+]
+
 [[package]]
 name = "attrs"
-version = "22.1.0"
+version = "22.2.0"
 description = "Classes Without Boilerplate"
 category = "main"
 optional = false
-python-versions = ">=3.5"
+python-versions = ">=3.6"
+files = [
+    {file = "attrs-22.2.0-py3-none-any.whl", hash = "sha256:29e95c7f6778868dbd49170f98f8818f78f3dc5e0e37c0b1f474e3561b240836"},
+    {file = "attrs-22.2.0.tar.gz", hash = "sha256:c9227bfc2f01993c03f68db37d1d15c9690188323c067c641f1a35ca58185f99"},
+]
 
 [package.extras]
-dev = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "zope.interface", "furo", "sphinx", "sphinx-notfound-page", "pre-commit", "cloudpickle"]
-docs = ["furo", "sphinx", "zope.interface", "sphinx-notfound-page"]
-tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "zope.interface", "cloudpickle"]
-tests_no_zope = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "mypy (>=0.900,!=0.940)", "pytest-mypy-plugins", "cloudpickle"]
+cov = ["attrs[tests]", "coverage-enable-subprocess", "coverage[toml] (>=5.3)"]
+dev = ["attrs[docs,tests]"]
+docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-towncrier", "towncrier", "zope.interface"]
+tests = ["attrs[tests-no-zope]", "zope.interface"]
+tests-no-zope = ["cloudpickle", "cloudpickle", "hypothesis", "hypothesis", "mypy (>=0.971,<0.990)", "mypy (>=0.971,<0.990)", "pympler", "pympler", "pytest (>=4.3.0)", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-mypy-plugins", "pytest-xdist[psutil]", "pytest-xdist[psutil]"]
+
+[[package]]
+name = "babel"
+version = "2.12.1"
+description = "Internationalization utilities"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "Babel-2.12.1-py3-none-any.whl", hash = "sha256:b4246fb7677d3b98f501a39d43396d3cafdc8eadb045f4a31be01863f655c610"},
+    {file = "Babel-2.12.1.tar.gz", hash = "sha256:cc2d99999cd01d44420ae725a21c9e3711b3aadc7976d6147f622d8581963455"},
+]
 
 [[package]]
 name = "black"
-version = "22.10.0"
+version = "22.12.0"
 description = "The uncompromising code formatter."
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "black-22.12.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9eedd20838bd5d75b80c9f5487dbcb06836a43833a37846cf1d8c1cc01cef59d"},
+    {file = "black-22.12.0-cp310-cp310-win_amd64.whl", hash = "sha256:159a46a4947f73387b4d83e87ea006dbb2337eab6c879620a3ba52699b1f4351"},
+    {file = "black-22.12.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d30b212bffeb1e252b31dd269dfae69dd17e06d92b87ad26e23890f3efea366f"},
+    {file = "black-22.12.0-cp311-cp311-win_amd64.whl", hash = "sha256:7412e75863aa5c5411886804678b7d083c7c28421210180d67dfd8cf1221e1f4"},
+    {file = "black-22.12.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c116eed0efb9ff870ded8b62fe9f28dd61ef6e9ddd28d83d7d264a38417dcee2"},
+    {file = "black-22.12.0-cp37-cp37m-win_amd64.whl", hash = "sha256:1f58cbe16dfe8c12b7434e50ff889fa479072096d79f0a7f25e4ab8e94cd8350"},
+    {file = "black-22.12.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:77d86c9f3db9b1bf6761244bc0b3572a546f5fe37917a044e02f3166d5aafa7d"},
+    {file = "black-22.12.0-cp38-cp38-win_amd64.whl", hash = "sha256:82d9fe8fee3401e02e79767016b4907820a7dc28d70d137eb397b92ef3cc5bfc"},
+    {file = "black-22.12.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:101c69b23df9b44247bd88e1d7e90154336ac4992502d4197bdac35dd7ee3320"},
+    {file = "black-22.12.0-cp39-cp39-win_amd64.whl", hash = "sha256:559c7a1ba9a006226f09e4916060982fd27334ae1998e7a38b3f33a37f7a2148"},
+    {file = "black-22.12.0-py3-none-any.whl", hash = "sha256:436cc9167dd28040ad90d3b404aec22cedf24a6e4d7de221bec2730ec0c97bcf"},
+    {file = "black-22.12.0.tar.gz", hash = "sha256:229351e5a18ca30f447bf724d007f890f97e13af070bb6ad4c0a441cd7596a2f"},
+]
 
 [package.dependencies]
 click = ">=8.0.0"
@@ -33,6 +78,103 @@ d = ["aiohttp (>=3.7.4)"]
 jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
 uvloop = ["uvloop (>=0.15.2)"]
 
+[[package]]
+name = "certifi"
+version = "2022.12.7"
+description = "Python package for providing Mozilla's CA Bundle."
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+    {file = "certifi-2022.12.7-py3-none-any.whl", hash = "sha256:4ad3232f5e926d6718ec31cfc1fcadfde020920e278684144551c91769c7bc18"},
+    {file = "certifi-2022.12.7.tar.gz", hash = "sha256:35824b4c3a97115964b408844d64aa14db1cc518f6562e8d7261699d1350a9e3"},
+]
+
+[[package]]
+name = "charset-normalizer"
+version = "3.1.0"
+description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
+category = "dev"
+optional = false
+python-versions = ">=3.7.0"
+files = [
+    {file = "charset-normalizer-3.1.0.tar.gz", hash = "sha256:34e0a2f9c370eb95597aae63bf85eb5e96826d81e3dcf88b8886012906f509b5"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:e0ac8959c929593fee38da1c2b64ee9778733cdf03c482c9ff1d508b6b593b2b"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d7fc3fca01da18fbabe4625d64bb612b533533ed10045a2ac3dd194bfa656b60"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:04eefcee095f58eaabe6dc3cc2262f3bcd776d2c67005880894f447b3f2cb9c1"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:20064ead0717cf9a73a6d1e779b23d149b53daf971169289ed2ed43a71e8d3b0"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1435ae15108b1cb6fffbcea2af3d468683b7afed0169ad718451f8db5d1aff6f"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c84132a54c750fda57729d1e2599bb598f5fa0344085dbde5003ba429a4798c0"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:75f2568b4189dda1c567339b48cba4ac7384accb9c2a7ed655cd86b04055c795"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:11d3bcb7be35e7b1bba2c23beedac81ee893ac9871d0ba79effc7fc01167db6c"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:891cf9b48776b5c61c700b55a598621fdb7b1e301a550365571e9624f270c203"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:5f008525e02908b20e04707a4f704cd286d94718f48bb33edddc7d7b584dddc1"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:b06f0d3bf045158d2fb8837c5785fe9ff9b8c93358be64461a1089f5da983137"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:49919f8400b5e49e961f320c735388ee686a62327e773fa5b3ce6721f7e785ce"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:22908891a380d50738e1f978667536f6c6b526a2064156203d418f4856d6e86a"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-win32.whl", hash = "sha256:12d1a39aa6b8c6f6248bb54550efcc1c38ce0d8096a146638fd4738e42284448"},
+    {file = "charset_normalizer-3.1.0-cp310-cp310-win_amd64.whl", hash = "sha256:65ed923f84a6844de5fd29726b888e58c62820e0769b76565480e1fdc3d062f8"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:9a3267620866c9d17b959a84dd0bd2d45719b817245e49371ead79ed4f710d19"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6734e606355834f13445b6adc38b53c0fd45f1a56a9ba06c2058f86893ae8017"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:f8303414c7b03f794347ad062c0516cee0e15f7a612abd0ce1e25caf6ceb47df"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:aaf53a6cebad0eae578f062c7d462155eada9c172bd8c4d250b8c1d8eb7f916a"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3dc5b6a8ecfdc5748a7e429782598e4f17ef378e3e272eeb1340ea57c9109f41"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e1b25e3ad6c909f398df8921780d6a3d120d8c09466720226fc621605b6f92b1"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0ca564606d2caafb0abe6d1b5311c2649e8071eb241b2d64e75a0d0065107e62"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b82fab78e0b1329e183a65260581de4375f619167478dddab510c6c6fb04d9b6"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:bd7163182133c0c7701b25e604cf1611c0d87712e56e88e7ee5d72deab3e76b5"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:11d117e6c63e8f495412d37e7dc2e2fff09c34b2d09dbe2bee3c6229577818be"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:cf6511efa4801b9b38dc5546d7547d5b5c6ef4b081c60b23e4d941d0eba9cbeb"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:abc1185d79f47c0a7aaf7e2412a0eb2c03b724581139193d2d82b3ad8cbb00ac"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cb7b2ab0188829593b9de646545175547a70d9a6e2b63bf2cd87a0a391599324"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-win32.whl", hash = "sha256:c36bcbc0d5174a80d6cccf43a0ecaca44e81d25be4b7f90f0ed7bcfbb5a00909"},
+    {file = "charset_normalizer-3.1.0-cp311-cp311-win_amd64.whl", hash = "sha256:cca4def576f47a09a943666b8f829606bcb17e2bc2d5911a46c8f8da45f56755"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:0c95f12b74681e9ae127728f7e5409cbbef9cd914d5896ef238cc779b8152373"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fca62a8301b605b954ad2e9c3666f9d97f63872aa4efcae5492baca2056b74ab"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ac0aa6cd53ab9a31d397f8303f92c42f534693528fafbdb997c82bae6e477ad9"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c3af8e0f07399d3176b179f2e2634c3ce9c1301379a6b8c9c9aeecd481da494f"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3a5fc78f9e3f501a1614a98f7c54d3969f3ad9bba8ba3d9b438c3bc5d047dd28"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:628c985afb2c7d27a4800bfb609e03985aaecb42f955049957814e0491d4006d"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:74db0052d985cf37fa111828d0dd230776ac99c740e1a758ad99094be4f1803d"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:1e8fcdd8f672a1c4fc8d0bd3a2b576b152d2a349782d1eb0f6b8e52e9954731d"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:04afa6387e2b282cf78ff3dbce20f0cc071c12dc8f685bd40960cc68644cfea6"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:dd5653e67b149503c68c4018bf07e42eeed6b4e956b24c00ccdf93ac79cdff84"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:d2686f91611f9e17f4548dbf050e75b079bbc2a82be565832bc8ea9047b61c8c"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-win32.whl", hash = "sha256:4155b51ae05ed47199dc5b2a4e62abccb274cee6b01da5b895099b61b1982974"},
+    {file = "charset_normalizer-3.1.0-cp37-cp37m-win_amd64.whl", hash = "sha256:322102cdf1ab682ecc7d9b1c5eed4ec59657a65e1c146a0da342b78f4112db23"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:e633940f28c1e913615fd624fcdd72fdba807bf53ea6925d6a588e84e1151531"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:3a06f32c9634a8705f4ca9946d667609f52cf130d5548881401f1eb2c39b1e2c"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:7381c66e0561c5757ffe616af869b916c8b4e42b367ab29fedc98481d1e74e14"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3573d376454d956553c356df45bb824262c397c6e26ce43e8203c4c540ee0acb"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e89df2958e5159b811af9ff0f92614dabf4ff617c03a4c1c6ff53bf1c399e0e1"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:78cacd03e79d009d95635e7d6ff12c21eb89b894c354bd2b2ed0b4763373693b"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:de5695a6f1d8340b12a5d6d4484290ee74d61e467c39ff03b39e30df62cf83a0"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1c60b9c202d00052183c9be85e5eaf18a4ada0a47d188a83c8f5c5b23252f649"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:f645caaf0008bacf349875a974220f1f1da349c5dbe7c4ec93048cdc785a3326"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:ea9f9c6034ea2d93d9147818f17c2a0860d41b71c38b9ce4d55f21b6f9165a11"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:80d1543d58bd3d6c271b66abf454d437a438dff01c3e62fdbcd68f2a11310d4b"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:73dc03a6a7e30b7edc5b01b601e53e7fc924b04e1835e8e407c12c037e81adbd"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:6f5c2e7bc8a4bf7c426599765b1bd33217ec84023033672c1e9a8b35eaeaaaf8"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-win32.whl", hash = "sha256:12a2b561af122e3d94cdb97fe6fb2bb2b82cef0cdca131646fdb940a1eda04f0"},
+    {file = "charset_normalizer-3.1.0-cp38-cp38-win_amd64.whl", hash = "sha256:3160a0fd9754aab7d47f95a6b63ab355388d890163eb03b2d2b87ab0a30cfa59"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:38e812a197bf8e71a59fe55b757a84c1f946d0ac114acafaafaf21667a7e169e"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6baf0baf0d5d265fa7944feb9f7451cc316bfe30e8df1a61b1bb08577c554f31"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:8f25e17ab3039b05f762b0a55ae0b3632b2e073d9c8fc88e89aca31a6198e88f"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3747443b6a904001473370d7810aa19c3a180ccd52a7157aacc264a5ac79265e"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b116502087ce8a6b7a5f1814568ccbd0e9f6cfd99948aa59b0e241dc57cf739f"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d16fd5252f883eb074ca55cb622bc0bee49b979ae4e8639fff6ca3ff44f9f854"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21fa558996782fc226b529fdd2ed7866c2c6ec91cee82735c98a197fae39f706"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6f6c7a8a57e9405cad7485f4c9d3172ae486cfef1344b5ddd8e5239582d7355e"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:ac3775e3311661d4adace3697a52ac0bab17edd166087d493b52d4f4f553f9f0"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:10c93628d7497c81686e8e5e557aafa78f230cd9e77dd0c40032ef90c18f2230"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:6f4f4668e1831850ebcc2fd0b1cd11721947b6dc7c00bf1c6bd3c929ae14f2c7"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:0be65ccf618c1e7ac9b849c315cc2e8a8751d9cfdaa43027d4f6624bd587ab7e"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:53d0a3fa5f8af98a1e261de6a3943ca631c526635eb5817a87a59d9a57ebf48f"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-win32.whl", hash = "sha256:a04f86f41a8916fe45ac5024ec477f41f886b3c435da2d4e3d2709b22ab02af1"},
+    {file = "charset_normalizer-3.1.0-cp39-cp39-win_amd64.whl", hash = "sha256:830d2948a5ec37c386d3170c483063798d7879037492540f10a475e3fd6f244b"},
+    {file = "charset_normalizer-3.1.0-py3-none-any.whl", hash = "sha256:3d9098b479e78c85080c98e1e35ff40b4a31d8953102bb0fd7d1b6f8a2111a3d"},
+]
+
 [[package]]
 name = "click"
 version = "8.1.3"
@@ -40,6 +182,10 @@ description = "Composable command line interface toolkit"
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "click-8.1.3-py3-none-any.whl", hash = "sha256:bb4d8133cb15a609f44e8213d9b391b0809795062913b383c62be0ee95b1db48"},
+    {file = "click-8.1.3.tar.gz", hash = "sha256:7682dc8afb30297001674575ea00d1814d808d6a36af415a82bd481d37ba7b8e"},
+]
 
 [package.dependencies]
 colorama = {version = "*", markers = "platform_system == \"Windows\""}
@@ -51,20 +197,82 @@ description = "Cross-platform colored terminal text."
 category = "dev"
 optional = false
 python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7"
+files = [
+    {file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"},
+    {file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"},
+]
+
+[[package]]
+name = "docutils"
+version = "0.18.1"
+description = "Docutils -- Python Documentation Utilities"
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
+files = [
+    {file = "docutils-0.18.1-py2.py3-none-any.whl", hash = "sha256:23010f129180089fbcd3bc08cfefccb3b890b0050e1ca00c867036e9d161b98c"},
+    {file = "docutils-0.18.1.tar.gz", hash = "sha256:679987caf361a7539d76e584cbeddc311e3aee937877c87346f31debc63e9d06"},
+]
+
+[[package]]
+name = "idna"
+version = "3.4"
+description = "Internationalized Domain Names in Applications (IDNA)"
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "idna-3.4-py3-none-any.whl", hash = "sha256:90b77e79eaa3eba6de819a0c442c0b4ceefc341a7a2ab77d7562bf49f425c5c2"},
+    {file = "idna-3.4.tar.gz", hash = "sha256:814f528e8dead7d329833b91c5faa87d60bf71824cd12a7530b5526063d02cb4"},
+]
+
+[[package]]
+name = "imagesize"
+version = "1.4.1"
+description = "Getting image size from png/jpeg/jpeg2000/gif file"
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+files = [
+    {file = "imagesize-1.4.1-py2.py3-none-any.whl", hash = "sha256:0d8d18d08f840c19d0ee7ca1fd82490fdc3729b7ac93f49870406ddde8ef8d8b"},
+    {file = "imagesize-1.4.1.tar.gz", hash = "sha256:69150444affb9cb0d5cc5a92b3676f0b2fb7cd9ae39e947a5e11a36b4497cd4a"},
+]
 
 [[package]]
 name = "isort"
-version = "5.10.1"
+version = "5.12.0"
 description = "A Python utility / library to sort Python imports."
 category = "dev"
 optional = false
-python-versions = ">=3.6.1,<4.0"
+python-versions = ">=3.8.0"
+files = [
+    {file = "isort-5.12.0-py3-none-any.whl", hash = "sha256:f84c2818376e66cf843d497486ea8fed8700b340f308f076c6fb1229dff318b6"},
+    {file = "isort-5.12.0.tar.gz", hash = "sha256:8bef7dde241278824a6d83f44a544709b065191b95b6e50894bdc722fcba0504"},
+]
 
 [package.extras]
-pipfile_deprecated_finder = ["pipreqs", "requirementslib"]
-requirements_deprecated_finder = ["pipreqs", "pip-api"]
-colors = ["colorama (>=0.4.3,<0.5.0)"]
+colors = ["colorama (>=0.4.3)"]
+pipfile-deprecated-finder = ["pip-shims (>=0.5.2)", "pipreqs", "requirementslib"]
 plugins = ["setuptools"]
+requirements-deprecated-finder = ["pip-api", "pipreqs"]
+
+[[package]]
+name = "jinja2"
+version = "3.1.2"
+description = "A very fast and expressive template engine."
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"},
+    {file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"},
+]
+
+[package.dependencies]
+MarkupSafe = ">=2.0"
+
+[package.extras]
+i18n = ["Babel (>=2.7)"]
 
 [[package]]
 name = "jsonpatch"
@@ -73,6 +281,10 @@ description = "Apply JSON-Patches (RFC 6902)"
 category = "main"
 optional = false
 python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
+files = [
+    {file = "jsonpatch-1.32-py2.py3-none-any.whl", hash = "sha256:26ac385719ac9f54df8a2f0827bb8253aa3ea8ab7b3368457bcdb8c14595a397"},
+    {file = "jsonpatch-1.32.tar.gz", hash = "sha256:b6ddfe6c3db30d81a96aaeceb6baf916094ffa23d7dd5fa2c13e13f8b6e600c2"},
+]
 
 [package.dependencies]
 jsonpointer = ">=1.9"
@@ -84,14 +296,22 @@ description = "Identify specific nodes in a JSON document (RFC 6901)"
 category = "main"
 optional = false
 python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+files = [
+    {file = "jsonpointer-2.3-py2.py3-none-any.whl", hash = "sha256:51801e558539b4e9cd268638c078c6c5746c9ac96bc38152d443400e4f3793e9"},
+    {file = "jsonpointer-2.3.tar.gz", hash = "sha256:97cba51526c829282218feb99dab1b1e6bdf8efd1c43dc9d57be093c0d69c99a"},
+]
 
 [[package]]
 name = "jsonschema"
-version = "4.17.0"
+version = "4.17.3"
 description = "An implementation of JSON Schema validation for Python"
 category = "main"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "jsonschema-4.17.3-py3-none-any.whl", hash = "sha256:a870ad254da1a8ca84b6a2905cac29d265f805acc57af304784962a2aa6508f6"},
+    {file = "jsonschema-4.17.3.tar.gz", hash = "sha256:0f864437ab8b6076ba6707453ef8f98a6a0d512a80e93f8abdb676f737ecb60d"},
+]
 
 [package.dependencies]
 attrs = ">=17.4.0"
@@ -101,6 +321,66 @@ pyrsistent = ">=0.14.0,<0.17.0 || >0.17.0,<0.17.1 || >0.17.1,<0.17.2 || >0.17.2"
 format = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3987", "uri-template", "webcolors (>=1.11)"]
 format-nongpl = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3986-validator (>0.1.0)", "uri-template", "webcolors (>=1.11)"]
 
+[[package]]
+name = "markupsafe"
+version = "2.1.2"
+description = "Safely add untrusted strings to HTML/XML markup."
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "MarkupSafe-2.1.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:665a36ae6f8f20a4676b53224e33d456a6f5a72657d9c83c2aa00765072f31f7"},
+    {file = "MarkupSafe-2.1.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:340bea174e9761308703ae988e982005aedf427de816d1afe98147668cc03036"},
+    {file = "MarkupSafe-2.1.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:22152d00bf4a9c7c83960521fc558f55a1adbc0631fbb00a9471e097b19d72e1"},
+    {file = "MarkupSafe-2.1.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:28057e985dace2f478e042eaa15606c7efccb700797660629da387eb289b9323"},
+    {file = "MarkupSafe-2.1.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ca244fa73f50a800cf8c3ebf7fd93149ec37f5cb9596aa8873ae2c1d23498601"},
+    {file = "MarkupSafe-2.1.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:d9d971ec1e79906046aa3ca266de79eac42f1dbf3612a05dc9368125952bd1a1"},
+    {file = "MarkupSafe-2.1.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:7e007132af78ea9df29495dbf7b5824cb71648d7133cf7848a2a5dd00d36f9ff"},
+    {file = "MarkupSafe-2.1.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:7313ce6a199651c4ed9d7e4cfb4aa56fe923b1adf9af3b420ee14e6d9a73df65"},
+    {file = "MarkupSafe-2.1.2-cp310-cp310-win32.whl", hash = "sha256:c4a549890a45f57f1ebf99c067a4ad0cb423a05544accaf2b065246827ed9603"},
+    {file = "MarkupSafe-2.1.2-cp310-cp310-win_amd64.whl", hash = "sha256:835fb5e38fd89328e9c81067fd642b3593c33e1e17e2fdbf77f5676abb14a156"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:2ec4f2d48ae59bbb9d1f9d7efb9236ab81429a764dedca114f5fdabbc3788013"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:608e7073dfa9e38a85d38474c082d4281f4ce276ac0010224eaba11e929dd53a"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:65608c35bfb8a76763f37036547f7adfd09270fbdbf96608be2bead319728fcd"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2bfb563d0211ce16b63c7cb9395d2c682a23187f54c3d79bfec33e6705473c6"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:da25303d91526aac3672ee6d49a2f3db2d9502a4a60b55519feb1a4c7714e07d"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:9cad97ab29dfc3f0249b483412c85c8ef4766d96cdf9dcf5a1e3caa3f3661cf1"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:085fd3201e7b12809f9e6e9bc1e5c96a368c8523fad5afb02afe3c051ae4afcc"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:1bea30e9bf331f3fef67e0a3877b2288593c98a21ccb2cf29b74c581a4eb3af0"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-win32.whl", hash = "sha256:7df70907e00c970c60b9ef2938d894a9381f38e6b9db73c5be35e59d92e06625"},
+    {file = "MarkupSafe-2.1.2-cp311-cp311-win_amd64.whl", hash = "sha256:e55e40ff0cc8cc5c07996915ad367fa47da6b3fc091fdadca7f5403239c5fec3"},
+    {file = "MarkupSafe-2.1.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:a6e40afa7f45939ca356f348c8e23048e02cb109ced1eb8420961b2f40fb373a"},
+    {file = "MarkupSafe-2.1.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cf877ab4ed6e302ec1d04952ca358b381a882fbd9d1b07cccbfd61783561f98a"},
+    {file = "MarkupSafe-2.1.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:63ba06c9941e46fa389d389644e2d8225e0e3e5ebcc4ff1ea8506dce646f8c8a"},
+    {file = "MarkupSafe-2.1.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f1cd098434e83e656abf198f103a8207a8187c0fc110306691a2e94a78d0abb2"},
+    {file = "MarkupSafe-2.1.2-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:55f44b440d491028addb3b88f72207d71eeebfb7b5dbf0643f7c023ae1fba619"},
+    {file = "MarkupSafe-2.1.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:a6f2fcca746e8d5910e18782f976489939d54a91f9411c32051b4aab2bd7c513"},
+    {file = "MarkupSafe-2.1.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:0b462104ba25f1ac006fdab8b6a01ebbfbce9ed37fd37fd4acd70c67c973e460"},
+    {file = "MarkupSafe-2.1.2-cp37-cp37m-win32.whl", hash = "sha256:7668b52e102d0ed87cb082380a7e2e1e78737ddecdde129acadb0eccc5423859"},
+    {file = "MarkupSafe-2.1.2-cp37-cp37m-win_amd64.whl", hash = "sha256:6d6607f98fcf17e534162f0709aaad3ab7a96032723d8ac8750ffe17ae5a0666"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:a806db027852538d2ad7555b203300173dd1b77ba116de92da9afbc3a3be3eed"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:a4abaec6ca3ad8660690236d11bfe28dfd707778e2442b45addd2f086d6ef094"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f03a532d7dee1bed20bc4884194a16160a2de9ffc6354b3878ec9682bb623c54"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4cf06cdc1dda95223e9d2d3c58d3b178aa5dacb35ee7e3bbac10e4e1faacb419"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:22731d79ed2eb25059ae3df1dfc9cb1546691cc41f4e3130fe6bfbc3ecbbecfa"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:f8ffb705ffcf5ddd0e80b65ddf7bed7ee4f5a441ea7d3419e861a12eaf41af58"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:8db032bf0ce9022a8e41a22598eefc802314e81b879ae093f36ce9ddf39ab1ba"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2298c859cfc5463f1b64bd55cb3e602528db6fa0f3cfd568d3605c50678f8f03"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-win32.whl", hash = "sha256:50c42830a633fa0cf9e7d27664637532791bfc31c731a87b202d2d8ac40c3ea2"},
+    {file = "MarkupSafe-2.1.2-cp38-cp38-win_amd64.whl", hash = "sha256:bb06feb762bade6bf3c8b844462274db0c76acc95c52abe8dbed28ae3d44a147"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:99625a92da8229df6d44335e6fcc558a5037dd0a760e11d84be2260e6f37002f"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:8bca7e26c1dd751236cfb0c6c72d4ad61d986e9a41bbf76cb445f69488b2a2bd"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:40627dcf047dadb22cd25ea7ecfe9cbf3bbbad0482ee5920b582f3809c97654f"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:40dfd3fefbef579ee058f139733ac336312663c6706d1163b82b3003fb1925c4"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:090376d812fb6ac5f171e5938e82e7f2d7adc2b629101cec0db8b267815c85e2"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:2e7821bffe00aa6bd07a23913b7f4e01328c3d5cc0b40b36c0bd81d362faeb65"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:c0a33bc9f02c2b17c3ea382f91b4db0e6cde90b63b296422a939886a7a80de1c"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:b8526c6d437855442cdd3d87eede9c425c4445ea011ca38d937db299382e6fa3"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-win32.whl", hash = "sha256:137678c63c977754abe9086a3ec011e8fd985ab90631145dfb9294ad09c102a7"},
+    {file = "MarkupSafe-2.1.2-cp39-cp39-win_amd64.whl", hash = "sha256:0576fe974b40a400449768941d5d0858cc624e3249dfd1e0c33674e5c7ca7aed"},
+    {file = "MarkupSafe-2.1.2.tar.gz", hash = "sha256:abcabc8c2b26036d62d4c746381a6f7cf60aafcc653198ad678306986b09450d"},
+]
+
 [[package]]
 name = "mccabe"
 version = "0.7.0"
@@ -108,6 +388,10 @@ description = "McCabe checker, plugin for flake8"
 category = "dev"
 optional = false
 python-versions = ">=3.6"
+files = [
+    {file = "mccabe-0.7.0-py2.py3-none-any.whl", hash = "sha256:6c2d30ab6be0e4a46919781807b4f0d834ebdd6c6e3dca0bda5a15f863427b6e"},
+    {file = "mccabe-0.7.0.tar.gz", hash = "sha256:348e0240c33b60bbdf4e523192ef919f28cb2c3d7d5c7794f74009290f236325"},
+]
 
 [[package]]
 name = "mypy"
@@ -116,6 +400,31 @@ description = "Optional static typing for Python"
 category = "dev"
 optional = false
 python-versions = ">=3.6"
+files = [
+    {file = "mypy-0.961-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:697540876638ce349b01b6786bc6094ccdaba88af446a9abb967293ce6eaa2b0"},
+    {file = "mypy-0.961-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b117650592e1782819829605a193360a08aa99f1fc23d1d71e1a75a142dc7e15"},
+    {file = "mypy-0.961-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:bdd5ca340beffb8c44cb9dc26697628d1b88c6bddf5c2f6eb308c46f269bb6f3"},
+    {file = "mypy-0.961-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:3e09f1f983a71d0672bbc97ae33ee3709d10c779beb613febc36805a6e28bb4e"},
+    {file = "mypy-0.961-cp310-cp310-win_amd64.whl", hash = "sha256:e999229b9f3198c0c880d5e269f9f8129c8862451ce53a011326cad38b9ccd24"},
+    {file = "mypy-0.961-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:b24be97351084b11582fef18d79004b3e4db572219deee0212078f7cf6352723"},
+    {file = "mypy-0.961-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f4a21d01fc0ba4e31d82f0fff195682e29f9401a8bdb7173891070eb260aeb3b"},
+    {file = "mypy-0.961-cp36-cp36m-win_amd64.whl", hash = "sha256:439c726a3b3da7ca84a0199a8ab444cd8896d95012c4a6c4a0d808e3147abf5d"},
+    {file = "mypy-0.961-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5a0b53747f713f490affdceef835d8f0cb7285187a6a44c33821b6d1f46ed813"},
+    {file = "mypy-0.961-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:0e9f70df36405c25cc530a86eeda1e0867863d9471fe76d1273c783df3d35c2e"},
+    {file = "mypy-0.961-cp37-cp37m-win_amd64.whl", hash = "sha256:b88f784e9e35dcaa075519096dc947a388319cb86811b6af621e3523980f1c8a"},
+    {file = "mypy-0.961-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:d5aaf1edaa7692490f72bdb9fbd941fbf2e201713523bdb3f4038be0af8846c6"},
+    {file = "mypy-0.961-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:9f5f5a74085d9a81a1f9c78081d60a0040c3efb3f28e5c9912b900adf59a16e6"},
+    {file = "mypy-0.961-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:f4b794db44168a4fc886e3450201365c9526a522c46ba089b55e1f11c163750d"},
+    {file = "mypy-0.961-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:64759a273d590040a592e0f4186539858c948302c653c2eac840c7a3cd29e51b"},
+    {file = "mypy-0.961-cp38-cp38-win_amd64.whl", hash = "sha256:63e85a03770ebf403291ec50097954cc5caf2a9205c888ce3a61bd3f82e17569"},
+    {file = "mypy-0.961-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:5f1332964963d4832a94bebc10f13d3279be3ce8f6c64da563d6ee6e2eeda932"},
+    {file = "mypy-0.961-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:006be38474216b833eca29ff6b73e143386f352e10e9c2fbe76aa8549e5554f5"},
+    {file = "mypy-0.961-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:9940e6916ed9371809b35b2154baf1f684acba935cd09928952310fbddaba648"},
+    {file = "mypy-0.961-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:a5ea0875a049de1b63b972456542f04643daf320d27dc592d7c3d9cd5d9bf950"},
+    {file = "mypy-0.961-cp39-cp39-win_amd64.whl", hash = "sha256:1ece702f29270ec6af25db8cf6185c04c02311c6bb21a69f423d40e527b75c56"},
+    {file = "mypy-0.961-py3-none-any.whl", hash = "sha256:03c6cc893e7563e7b2949b969e63f02c000b32502a1b4d1314cabe391aa87d66"},
+    {file = "mypy-0.961.tar.gz", hash = "sha256:f730d56cb924d371c26b8eaddeea3cc07d78ff51c521c6d04899ac6904b75492"},
+]
 
 [package.dependencies]
 mypy-extensions = ">=0.4.3"
@@ -129,19 +438,39 @@ reports = ["lxml"]
 
 [[package]]
 name = "mypy-extensions"
-version = "0.4.3"
-description = "Experimental type system extensions for programs checked with the mypy typechecker."
+version = "1.0.0"
+description = "Type system extensions for programs checked with the mypy type checker."
 category = "dev"
 optional = false
-python-versions = "*"
+python-versions = ">=3.5"
+files = [
+    {file = "mypy_extensions-1.0.0-py3-none-any.whl", hash = "sha256:4392f6c0eb8a5668a69e23d168ffa70f0be9ccfd32b5cc2d26a34ae5b844552d"},
+    {file = "mypy_extensions-1.0.0.tar.gz", hash = "sha256:75dbf8955dc00442a438fc4d0666508a9a97b6bd41aa2f0ffe9d2f2725af0782"},
+]
+
+[[package]]
+name = "packaging"
+version = "23.0"
+description = "Core utilities for Python packages"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "packaging-23.0-py3-none-any.whl", hash = "sha256:714ac14496c3e68c99c29b00845f7a2b85f3bb6f1078fd9f72fd20f0570002b2"},
+    {file = "packaging-23.0.tar.gz", hash = "sha256:b6ad297f8907de0fa2fe1ccbd26fdaf387f5f47c7275fedf8cce89f99446cf97"},
+]
 
 [[package]]
 name = "pathspec"
-version = "0.10.1"
+version = "0.11.1"
 description = "Utility library for gitignore style pattern matching of file paths."
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "pathspec-0.11.1-py3-none-any.whl", hash = "sha256:d8af70af76652554bd134c22b3e8a1cc46ed7d91edcdd721ef1a0c51a84a5293"},
+    {file = "pathspec-0.11.1.tar.gz", hash = "sha256:2798de800fa92780e33acca925945e9a19a133b715067cf165b8866c15a31687"},
+]
 
 [[package]]
 name = "pexpect"
@@ -150,21 +479,29 @@ description = "Pexpect allows easy control of interactive console applications."
 category = "main"
 optional = false
 python-versions = "*"
+files = [
+    {file = "pexpect-4.8.0-py2.py3-none-any.whl", hash = "sha256:0b48a55dcb3c05f3329815901ea4fc1537514d6ba867a152b581d69ae3710937"},
+    {file = "pexpect-4.8.0.tar.gz", hash = "sha256:fc65a43959d153d0114afe13997d439c22823a27cefceb5ff35c2178c6784c0c"},
+]
 
 [package.dependencies]
 ptyprocess = ">=0.5"
 
 [[package]]
 name = "platformdirs"
-version = "2.5.2"
-description = "A small Python module for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
+version = "3.1.1"
+description = "A small Python package for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "platformdirs-3.1.1-py3-none-any.whl", hash = "sha256:e5986afb596e4bb5bde29a79ac9061aa955b94fca2399b7aaac4090860920dd8"},
+    {file = "platformdirs-3.1.1.tar.gz", hash = "sha256:024996549ee88ec1a9aa99ff7f8fc819bb59e2c3477b410d90a16d32d6e707aa"},
+]
 
 [package.extras]
-docs = ["furo (>=2021.7.5b38)", "proselint (>=0.10.2)", "sphinx-autodoc-typehints (>=1.12)", "sphinx (>=4)"]
-test = ["appdirs (==1.4.4)", "pytest-cov (>=2.7)", "pytest-mock (>=3.6)", "pytest (>=6)"]
+docs = ["furo (>=2022.12.7)", "proselint (>=0.13)", "sphinx (>=6.1.3)", "sphinx-autodoc-typehints (>=1.22,!=1.23.4)"]
+test = ["appdirs (==1.4.4)", "covdefaults (>=2.2.2)", "pytest (>=7.2.1)", "pytest-cov (>=4)", "pytest-mock (>=3.10)"]
 
 [[package]]
 name = "ptyprocess"
@@ -173,28 +510,40 @@ description = "Run a subprocess in a pseudo terminal"
 category = "main"
 optional = false
 python-versions = "*"
+files = [
+    {file = "ptyprocess-0.7.0-py2.py3-none-any.whl", hash = "sha256:4b41f3967fce3af57cc7e94b888626c18bf37a083e3651ca8feeb66d492fef35"},
+    {file = "ptyprocess-0.7.0.tar.gz", hash = "sha256:5c5d0a3b48ceee0b48485e0c26037c0acd7d29765ca3fbb5cb3831d347423220"},
+]
 
 [[package]]
 name = "pycodestyle"
-version = "2.9.1"
+version = "2.10.0"
 description = "Python style guide checker"
 category = "dev"
 optional = false
 python-versions = ">=3.6"
+files = [
+    {file = "pycodestyle-2.10.0-py2.py3-none-any.whl", hash = "sha256:8a4eaf0d0495c7395bdab3589ac2db602797d76207242c17d470186815706610"},
+    {file = "pycodestyle-2.10.0.tar.gz", hash = "sha256:347187bdb476329d98f695c213d7295a846d1152ff4fe9bacb8a9590b8ee7053"},
+]
 
 [[package]]
 name = "pydocstyle"
-version = "6.1.1"
+version = "6.3.0"
 description = "Python docstring style checker"
 category = "dev"
 optional = false
 python-versions = ">=3.6"
+files = [
+    {file = "pydocstyle-6.3.0-py3-none-any.whl", hash = "sha256:118762d452a49d6b05e194ef344a55822987a462831ade91ec5c06fd2169d019"},
+    {file = "pydocstyle-6.3.0.tar.gz", hash = "sha256:7ce43f0c0ac87b07494eb9c0b462c0b73e6ff276807f204d6b53edc72b7e44e1"},
+]
 
 [package.dependencies]
-snowballstemmer = "*"
+snowballstemmer = ">=2.2.0"
 
 [package.extras]
-toml = ["toml"]
+toml = ["tomli (>=1.2.3)"]
 
 [[package]]
 name = "pyflakes"
@@ -203,6 +552,25 @@ description = "passive checker of Python programs"
 category = "dev"
 optional = false
 python-versions = ">=3.6"
+files = [
+    {file = "pyflakes-2.5.0-py2.py3-none-any.whl", hash = "sha256:4579f67d887f804e67edb544428f264b7b24f435b263c4614f384135cea553d2"},
+    {file = "pyflakes-2.5.0.tar.gz", hash = "sha256:491feb020dca48ccc562a8c0cbe8df07ee13078df59813b83959cbdada312ea3"},
+]
+
+[[package]]
+name = "pygments"
+version = "2.14.0"
+description = "Pygments is a syntax highlighting package written in Python."
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+    {file = "Pygments-2.14.0-py3-none-any.whl", hash = "sha256:fa7bd7bd2771287c0de303af8bfdfc731f51bd2c6a47ab69d117138893b82717"},
+    {file = "Pygments-2.14.0.tar.gz", hash = "sha256:b3ed06a9e8ac9a9aae5a6f5dbe78a8a58655d17b43b93c078f094ddc476ae297"},
+]
+
+[package.extras]
+plugins = ["importlib-metadata"]
 
 [[package]]
 name = "pylama"
@@ -211,6 +579,10 @@ description = "Code audit tool for python"
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "pylama-8.4.1-py3-none-any.whl", hash = "sha256:5bbdbf5b620aba7206d688ed9fc917ecd3d73e15ec1a89647037a09fa3a86e60"},
+    {file = "pylama-8.4.1.tar.gz", hash = "sha256:2d4f7aecfb5b7466216d48610c7d6bad1c3990c29cdd392ad08259b161e486f6"},
+]
 
 [package.dependencies]
 mccabe = ">=0.7.0"
@@ -219,22 +591,51 @@ pydocstyle = ">=6.1.1"
 pyflakes = ">=2.5.0"
 
 [package.extras]
-all = ["pylint", "eradicate", "radon", "mypy", "vulture"]
+all = ["eradicate", "mypy", "pylint", "radon", "vulture"]
 eradicate = ["eradicate"]
 mypy = ["mypy"]
 pylint = ["pylint"]
 radon = ["radon"]
-tests = ["pytest (>=7.1.2)", "pytest-mypy", "eradicate (>=2.0.0)", "radon (>=5.1.0)", "mypy", "pylint (>=2.11.1)", "pylama-quotes", "toml", "vulture", "types-setuptools", "types-toml"]
+tests = ["eradicate (>=2.0.0)", "mypy", "pylama-quotes", "pylint (>=2.11.1)", "pytest (>=7.1.2)", "pytest-mypy", "radon (>=5.1.0)", "toml", "types-setuptools", "types-toml", "vulture"]
 toml = ["toml (>=0.10.2)"]
 vulture = ["vulture"]
 
 [[package]]
 name = "pyrsistent"
-version = "0.19.1"
+version = "0.19.3"
 description = "Persistent/Functional/Immutable data structures"
 category = "main"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "pyrsistent-0.19.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:20460ac0ea439a3e79caa1dbd560344b64ed75e85d8703943e0b66c2a6150e4a"},
+    {file = "pyrsistent-0.19.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4c18264cb84b5e68e7085a43723f9e4c1fd1d935ab240ce02c0324a8e01ccb64"},
+    {file = "pyrsistent-0.19.3-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4b774f9288dda8d425adb6544e5903f1fb6c273ab3128a355c6b972b7df39dcf"},
+    {file = "pyrsistent-0.19.3-cp310-cp310-win32.whl", hash = "sha256:5a474fb80f5e0d6c9394d8db0fc19e90fa540b82ee52dba7d246a7791712f74a"},
+    {file = "pyrsistent-0.19.3-cp310-cp310-win_amd64.whl", hash = "sha256:49c32f216c17148695ca0e02a5c521e28a4ee6c5089f97e34fe24163113722da"},
+    {file = "pyrsistent-0.19.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:f0774bf48631f3a20471dd7c5989657b639fd2d285b861237ea9e82c36a415a9"},
+    {file = "pyrsistent-0.19.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3ab2204234c0ecd8b9368dbd6a53e83c3d4f3cab10ecaf6d0e772f456c442393"},
+    {file = "pyrsistent-0.19.3-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e42296a09e83028b3476f7073fcb69ffebac0e66dbbfd1bd847d61f74db30f19"},
+    {file = "pyrsistent-0.19.3-cp311-cp311-win32.whl", hash = "sha256:64220c429e42a7150f4bfd280f6f4bb2850f95956bde93c6fda1b70507af6ef3"},
+    {file = "pyrsistent-0.19.3-cp311-cp311-win_amd64.whl", hash = "sha256:016ad1afadf318eb7911baa24b049909f7f3bb2c5b1ed7b6a8f21db21ea3faa8"},
+    {file = "pyrsistent-0.19.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:c4db1bd596fefd66b296a3d5d943c94f4fac5bcd13e99bffe2ba6a759d959a28"},
+    {file = "pyrsistent-0.19.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:aeda827381f5e5d65cced3024126529ddc4289d944f75e090572c77ceb19adbf"},
+    {file = "pyrsistent-0.19.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:42ac0b2f44607eb92ae88609eda931a4f0dfa03038c44c772e07f43e738bcac9"},
+    {file = "pyrsistent-0.19.3-cp37-cp37m-win32.whl", hash = "sha256:e8f2b814a3dc6225964fa03d8582c6e0b6650d68a232df41e3cc1b66a5d2f8d1"},
+    {file = "pyrsistent-0.19.3-cp37-cp37m-win_amd64.whl", hash = "sha256:c9bb60a40a0ab9aba40a59f68214eed5a29c6274c83b2cc206a359c4a89fa41b"},
+    {file = "pyrsistent-0.19.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:a2471f3f8693101975b1ff85ffd19bb7ca7dd7c38f8a81701f67d6b4f97b87d8"},
+    {file = "pyrsistent-0.19.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cc5d149f31706762c1f8bda2e8c4f8fead6e80312e3692619a75301d3dbb819a"},
+    {file = "pyrsistent-0.19.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3311cb4237a341aa52ab8448c27e3a9931e2ee09561ad150ba94e4cfd3fc888c"},
+    {file = "pyrsistent-0.19.3-cp38-cp38-win32.whl", hash = "sha256:f0e7c4b2f77593871e918be000b96c8107da48444d57005b6a6bc61fb4331b2c"},
+    {file = "pyrsistent-0.19.3-cp38-cp38-win_amd64.whl", hash = "sha256:c147257a92374fde8498491f53ffa8f4822cd70c0d85037e09028e478cababb7"},
+    {file = "pyrsistent-0.19.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:b735e538f74ec31378f5a1e3886a26d2ca6351106b4dfde376a26fc32a044edc"},
+    {file = "pyrsistent-0.19.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:99abb85579e2165bd8522f0c0138864da97847875ecbd45f3e7e2af569bfc6f2"},
+    {file = "pyrsistent-0.19.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3a8cb235fa6d3fd7aae6a4f1429bbb1fec1577d978098da1252f0489937786f3"},
+    {file = "pyrsistent-0.19.3-cp39-cp39-win32.whl", hash = "sha256:c74bed51f9b41c48366a286395c67f4e894374306b197e62810e0fdaf2364da2"},
+    {file = "pyrsistent-0.19.3-cp39-cp39-win_amd64.whl", hash = "sha256:878433581fc23e906d947a6814336eee031a00e6defba224234169ae3d3d6a98"},
+    {file = "pyrsistent-0.19.3-py3-none-any.whl", hash = "sha256:ccf0d6bd208f8111179f0c26fdf84ed7c3891982f2edaeae7422575f47e66b64"},
+    {file = "pyrsistent-0.19.3.tar.gz", hash = "sha256:1a2994773706bbb4995c31a97bc94f1418314923bd1048c6d964837040376440"},
+]
 
 [[package]]
 name = "pyyaml"
@@ -243,6 +644,70 @@ description = "YAML parser and emitter for Python"
 category = "main"
 optional = false
 python-versions = ">=3.6"
+files = [
+    {file = "PyYAML-6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d4db7c7aef085872ef65a8fd7d6d09a14ae91f691dec3e87ee5ee0539d516f53"},
+    {file = "PyYAML-6.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9df7ed3b3d2e0ecfe09e14741b857df43adb5a3ddadc919a2d94fbdf78fea53c"},
+    {file = "PyYAML-6.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:77f396e6ef4c73fdc33a9157446466f1cff553d979bd00ecb64385760c6babdc"},
+    {file = "PyYAML-6.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a80a78046a72361de73f8f395f1f1e49f956c6be882eed58505a15f3e430962b"},
+    {file = "PyYAML-6.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f84fbc98b019fef2ee9a1cb3ce93e3187a6df0b2538a651bfb890254ba9f90b5"},
+    {file = "PyYAML-6.0-cp310-cp310-win32.whl", hash = "sha256:2cd5df3de48857ed0544b34e2d40e9fac445930039f3cfe4bcc592a1f836d513"},
+    {file = "PyYAML-6.0-cp310-cp310-win_amd64.whl", hash = "sha256:daf496c58a8c52083df09b80c860005194014c3698698d1a57cbcfa182142a3a"},
+    {file = "PyYAML-6.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:d4b0ba9512519522b118090257be113b9468d804b19d63c71dbcf4a48fa32358"},
+    {file = "PyYAML-6.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:81957921f441d50af23654aa6c5e5eaf9b06aba7f0a19c18a538dc7ef291c5a1"},
+    {file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:afa17f5bc4d1b10afd4466fd3a44dc0e245382deca5b3c353d8b757f9e3ecb8d"},
+    {file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:dbad0e9d368bb989f4515da330b88a057617d16b6a8245084f1b05400f24609f"},
+    {file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:432557aa2c09802be39460360ddffd48156e30721f5e8d917f01d31694216782"},
+    {file = "PyYAML-6.0-cp311-cp311-win32.whl", hash = "sha256:bfaef573a63ba8923503d27530362590ff4f576c626d86a9fed95822a8255fd7"},
+    {file = "PyYAML-6.0-cp311-cp311-win_amd64.whl", hash = "sha256:01b45c0191e6d66c470b6cf1b9531a771a83c1c4208272ead47a3ae4f2f603bf"},
+    {file = "PyYAML-6.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:897b80890765f037df3403d22bab41627ca8811ae55e9a722fd0392850ec4d86"},
+    {file = "PyYAML-6.0-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50602afada6d6cbfad699b0c7bb50d5ccffa7e46a3d738092afddc1f9758427f"},
+    {file = "PyYAML-6.0-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:48c346915c114f5fdb3ead70312bd042a953a8ce5c7106d5bfb1a5254e47da92"},
+    {file = "PyYAML-6.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:98c4d36e99714e55cfbaaee6dd5badbc9a1ec339ebfc3b1f52e293aee6bb71a4"},
+    {file = "PyYAML-6.0-cp36-cp36m-win32.whl", hash = "sha256:0283c35a6a9fbf047493e3a0ce8d79ef5030852c51e9d911a27badfde0605293"},
+    {file = "PyYAML-6.0-cp36-cp36m-win_amd64.whl", hash = "sha256:07751360502caac1c067a8132d150cf3d61339af5691fe9e87803040dbc5db57"},
+    {file = "PyYAML-6.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:819b3830a1543db06c4d4b865e70ded25be52a2e0631ccd2f6a47a2822f2fd7c"},
+    {file = "PyYAML-6.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:473f9edb243cb1935ab5a084eb238d842fb8f404ed2193a915d1784b5a6b5fc0"},
+    {file = "PyYAML-6.0-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0ce82d761c532fe4ec3f87fc45688bdd3a4c1dc5e0b4a19814b9009a29baefd4"},
+    {file = "PyYAML-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:231710d57adfd809ef5d34183b8ed1eeae3f76459c18fb4a0b373ad56bedcdd9"},
+    {file = "PyYAML-6.0-cp37-cp37m-win32.whl", hash = "sha256:c5687b8d43cf58545ade1fe3e055f70eac7a5a1a0bf42824308d868289a95737"},
+    {file = "PyYAML-6.0-cp37-cp37m-win_amd64.whl", hash = "sha256:d15a181d1ecd0d4270dc32edb46f7cb7733c7c508857278d3d378d14d606db2d"},
+    {file = "PyYAML-6.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0b4624f379dab24d3725ffde76559cff63d9ec94e1736b556dacdfebe5ab6d4b"},
+    {file = "PyYAML-6.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:213c60cd50106436cc818accf5baa1aba61c0189ff610f64f4a3e8c6726218ba"},
+    {file = "PyYAML-6.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9fa600030013c4de8165339db93d182b9431076eb98eb40ee068700c9c813e34"},
+    {file = "PyYAML-6.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:277a0ef2981ca40581a47093e9e2d13b3f1fbbeffae064c1d21bfceba2030287"},
+    {file = "PyYAML-6.0-cp38-cp38-win32.whl", hash = "sha256:d4eccecf9adf6fbcc6861a38015c2a64f38b9d94838ac1810a9023a0609e1b78"},
+    {file = "PyYAML-6.0-cp38-cp38-win_amd64.whl", hash = "sha256:1e4747bc279b4f613a09eb64bba2ba602d8a6664c6ce6396a4d0cd413a50ce07"},
+    {file = "PyYAML-6.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:055d937d65826939cb044fc8c9b08889e8c743fdc6a32b33e2390f66013e449b"},
+    {file = "PyYAML-6.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e61ceaab6f49fb8bdfaa0f92c4b57bcfbea54c09277b1b4f7ac376bfb7a7c174"},
+    {file = "PyYAML-6.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d67d839ede4ed1b28a4e8909735fc992a923cdb84e618544973d7dfc71540803"},
+    {file = "PyYAML-6.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cba8c411ef271aa037d7357a2bc8f9ee8b58b9965831d9e51baf703280dc73d3"},
+    {file = "PyYAML-6.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:40527857252b61eacd1d9af500c3337ba8deb8fc298940291486c465c8b46ec0"},
+    {file = "PyYAML-6.0-cp39-cp39-win32.whl", hash = "sha256:b5b9eccad747aabaaffbc6064800670f0c297e52c12754eb1d976c57e4f74dcb"},
+    {file = "PyYAML-6.0-cp39-cp39-win_amd64.whl", hash = "sha256:b3d267842bf12586ba6c734f89d1f5b871df0273157918b0ccefa29deb05c21c"},
+    {file = "PyYAML-6.0.tar.gz", hash = "sha256:68fb519c14306fec9720a2a5b45bc9f0c8d1b9c72adf45c37baedfcd949c35a2"},
+]
+
+[[package]]
+name = "requests"
+version = "2.28.2"
+description = "Python HTTP for Humans."
+category = "dev"
+optional = false
+python-versions = ">=3.7, <4"
+files = [
+    {file = "requests-2.28.2-py3-none-any.whl", hash = "sha256:64299f4909223da747622c030b781c0d7811e359c37124b4bd368fb8c6518baa"},
+    {file = "requests-2.28.2.tar.gz", hash = "sha256:98b1b2782e3c6c4904938b84c0eb932721069dfdb9134313beff7c83c2df24bf"},
+]
+
+[package.dependencies]
+certifi = ">=2017.4.17"
+charset-normalizer = ">=2,<4"
+idna = ">=2.5,<4"
+urllib3 = ">=1.21.1,<1.27"
+
+[package.extras]
+socks = ["PySocks (>=1.5.6,!=1.5.7)"]
+use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"]
 
 [[package]]
 name = "snowballstemmer"
@@ -251,6 +716,175 @@ description = "This package provides 29 stemmers for 28 languages generated from
 category = "dev"
 optional = false
 python-versions = "*"
+files = [
+    {file = "snowballstemmer-2.2.0-py2.py3-none-any.whl", hash = "sha256:c8e1716e83cc398ae16824e5572ae04e0d9fc2c6b985fb0f900f5f0c96ecba1a"},
+    {file = "snowballstemmer-2.2.0.tar.gz", hash = "sha256:09b16deb8547d3412ad7b590689584cd0fe25ec8db3be37788be3810cbf19cb1"},
+]
+
+[[package]]
+name = "sphinx"
+version = "6.1.3"
+description = "Python documentation generator"
+category = "dev"
+optional = false
+python-versions = ">=3.8"
+files = [
+    {file = "Sphinx-6.1.3.tar.gz", hash = "sha256:0dac3b698538ffef41716cf97ba26c1c7788dba73ce6f150c1ff5b4720786dd2"},
+    {file = "sphinx-6.1.3-py3-none-any.whl", hash = "sha256:807d1cb3d6be87eb78a381c3e70ebd8d346b9a25f3753e9947e866b2786865fc"},
+]
+
+[package.dependencies]
+alabaster = ">=0.7,<0.8"
+babel = ">=2.9"
+colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""}
+docutils = ">=0.18,<0.20"
+imagesize = ">=1.3"
+Jinja2 = ">=3.0"
+packaging = ">=21.0"
+Pygments = ">=2.13"
+requests = ">=2.25.0"
+snowballstemmer = ">=2.0"
+sphinxcontrib-applehelp = "*"
+sphinxcontrib-devhelp = "*"
+sphinxcontrib-htmlhelp = ">=2.0.0"
+sphinxcontrib-jsmath = "*"
+sphinxcontrib-qthelp = "*"
+sphinxcontrib-serializinghtml = ">=1.1.5"
+
+[package.extras]
+docs = ["sphinxcontrib-websupport"]
+lint = ["docutils-stubs", "flake8 (>=3.5.0)", "flake8-simplify", "isort", "mypy (>=0.990)", "ruff", "sphinx-lint", "types-requests"]
+test = ["cython", "html5lib", "pytest (>=4.6)"]
+
+[[package]]
+name = "sphinx-rtd-theme"
+version = "1.2.0"
+description = "Read the Docs theme for Sphinx"
+category = "dev"
+optional = false
+python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7"
+files = [
+    {file = "sphinx_rtd_theme-1.2.0-py2.py3-none-any.whl", hash = "sha256:f823f7e71890abe0ac6aaa6013361ea2696fc8d3e1fa798f463e82bdb77eeff2"},
+    {file = "sphinx_rtd_theme-1.2.0.tar.gz", hash = "sha256:a0d8bd1a2ed52e0b338cbe19c4b2eef3c5e7a048769753dac6a9f059c7b641b8"},
+]
+
+[package.dependencies]
+docutils = "<0.19"
+sphinx = ">=1.6,<7"
+sphinxcontrib-jquery = {version = ">=2.0.0,<3.0.0 || >3.0.0", markers = "python_version > \"3\""}
+
+[package.extras]
+dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client", "wheel"]
+
+[[package]]
+name = "sphinxcontrib-applehelp"
+version = "1.0.4"
+description = "sphinxcontrib-applehelp is a Sphinx extension which outputs Apple help books"
+category = "dev"
+optional = false
+python-versions = ">=3.8"
+files = [
+    {file = "sphinxcontrib-applehelp-1.0.4.tar.gz", hash = "sha256:828f867945bbe39817c210a1abfd1bc4895c8b73fcaade56d45357a348a07d7e"},
+    {file = "sphinxcontrib_applehelp-1.0.4-py3-none-any.whl", hash = "sha256:29d341f67fb0f6f586b23ad80e072c8e6ad0b48417db2bde114a4c9746feb228"},
+]
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-devhelp"
+version = "1.0.2"
+description = "sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp document."
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "sphinxcontrib-devhelp-1.0.2.tar.gz", hash = "sha256:ff7f1afa7b9642e7060379360a67e9c41e8f3121f2ce9164266f61b9f4b338e4"},
+    {file = "sphinxcontrib_devhelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:8165223f9a335cc1af7ffe1ed31d2871f325254c0423bc0c4c7cd1c1e4734a2e"},
+]
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-htmlhelp"
+version = "2.0.1"
+description = "sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files"
+category = "dev"
+optional = false
+python-versions = ">=3.8"
+files = [
+    {file = "sphinxcontrib-htmlhelp-2.0.1.tar.gz", hash = "sha256:0cbdd302815330058422b98a113195c9249825d681e18f11e8b1f78a2f11efff"},
+    {file = "sphinxcontrib_htmlhelp-2.0.1-py3-none-any.whl", hash = "sha256:c38cb46dccf316c79de6e5515e1770414b797162b23cd3d06e67020e1d2a6903"},
+]
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["html5lib", "pytest"]
+
+[[package]]
+name = "sphinxcontrib-jquery"
+version = "4.1"
+description = "Extension to include jQuery on newer Sphinx releases"
+category = "dev"
+optional = false
+python-versions = ">=2.7"
+files = [
+    {file = "sphinxcontrib-jquery-4.1.tar.gz", hash = "sha256:1620739f04e36a2c779f1a131a2dfd49b2fd07351bf1968ced074365933abc7a"},
+    {file = "sphinxcontrib_jquery-4.1-py2.py3-none-any.whl", hash = "sha256:f936030d7d0147dd026a4f2b5a57343d233f1fc7b363f68b3d4f1cb0993878ae"},
+]
+
+[package.dependencies]
+Sphinx = ">=1.8"
+
+[[package]]
+name = "sphinxcontrib-jsmath"
+version = "1.0.1"
+description = "A sphinx extension which renders display math in HTML via JavaScript"
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "sphinxcontrib-jsmath-1.0.1.tar.gz", hash = "sha256:a9925e4a4587247ed2191a22df5f6970656cb8ca2bd6284309578f2153e0c4b8"},
+    {file = "sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl", hash = "sha256:2ec2eaebfb78f3f2078e73666b1415417a116cc848b72e5172e596c871103178"},
+]
+
+[package.extras]
+test = ["flake8", "mypy", "pytest"]
+
+[[package]]
+name = "sphinxcontrib-qthelp"
+version = "1.0.3"
+description = "sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp document."
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "sphinxcontrib-qthelp-1.0.3.tar.gz", hash = "sha256:4c33767ee058b70dba89a6fc5c1892c0d57a54be67ddd3e7875a18d14cba5a72"},
+    {file = "sphinxcontrib_qthelp-1.0.3-py2.py3-none-any.whl", hash = "sha256:bd9fc24bcb748a8d51fd4ecaade681350aa63009a347a8c14e637895444dfab6"},
+]
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-serializinghtml"
+version = "1.1.5"
+description = "sphinxcontrib-serializinghtml is a sphinx extension which outputs \"serialized\" HTML files (json and pickle)."
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "sphinxcontrib-serializinghtml-1.1.5.tar.gz", hash = "sha256:aa5f6de5dfdf809ef505c4895e51ef5c9eac17d0f287933eb49ec495280b6952"},
+    {file = "sphinxcontrib_serializinghtml-1.1.5-py2.py3-none-any.whl", hash = "sha256:352a9a00ae864471d3a7ead8d7d79f5fc0b57e8b3f95e9867eb9eb28999b92fd"},
+]
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
 
 [[package]]
 name = "toml"
@@ -259,6 +893,10 @@ description = "Python Library for Tom's Obvious, Minimal Language"
 category = "dev"
 optional = false
 python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
+files = [
+    {file = "toml-0.10.2-py2.py3-none-any.whl", hash = "sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b"},
+    {file = "toml-0.10.2.tar.gz", hash = "sha256:b3bda1d108d5dd99f4a20d24d9c348e91c4db7ab1b749200bded2f839ccbe68f"},
+]
 
 [[package]]
 name = "tomli"
@@ -267,22 +905,51 @@ description = "A lil' TOML parser"
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "tomli-2.0.1-py3-none-any.whl", hash = "sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc"},
+    {file = "tomli-2.0.1.tar.gz", hash = "sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f"},
+]
 
 [[package]]
 name = "types-pyyaml"
-version = "6.0.12.1"
+version = "6.0.12.8"
 description = "Typing stubs for PyYAML"
 category = "main"
 optional = false
 python-versions = "*"
+files = [
+    {file = "types-PyYAML-6.0.12.8.tar.gz", hash = "sha256:19304869a89d49af00be681e7b267414df213f4eb89634c4495fa62e8f942b9f"},
+    {file = "types_PyYAML-6.0.12.8-py3-none-any.whl", hash = "sha256:5314a4b2580999b2ea06b2e5f9a7763d860d6e09cdf21c0e9561daa9cbd60178"},
+]
 
 [[package]]
 name = "typing-extensions"
-version = "4.4.0"
+version = "4.5.0"
 description = "Backported and Experimental Type Hints for Python 3.7+"
 category = "dev"
 optional = false
 python-versions = ">=3.7"
+files = [
+    {file = "typing_extensions-4.5.0-py3-none-any.whl", hash = "sha256:fb33085c39dd998ac16d1431ebc293a8b3eedd00fd4a32de0ff79002c19511b4"},
+    {file = "typing_extensions-4.5.0.tar.gz", hash = "sha256:5cb5f4a79139d699607b3ef622a1dedafa84e115ab0024e0d9c044a9479ca7cb"},
+]
+
+[[package]]
+name = "urllib3"
+version = "1.26.15"
+description = "HTTP library with thread-safe connection pooling, file post, and more."
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*"
+files = [
+    {file = "urllib3-1.26.15-py2.py3-none-any.whl", hash = "sha256:aa751d169e23c7479ce47a0cb0da579e3ede798f994f5816a74e4f4500dcea42"},
+    {file = "urllib3-1.26.15.tar.gz", hash = "sha256:8a388717b9476f934a21484e8c8e61875ab60644d29b9b39e11e4b9dc1c6b305"},
+]
+
+[package.extras]
+brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)", "brotlipy (>=0.6.0)"]
+secure = ["certifi", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "ipaddress", "pyOpenSSL (>=0.14)", "urllib3-secure-extra"]
+socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"]
 
 [[package]]
 name = "warlock"
@@ -291,47 +958,16 @@ description = "Python object model built on JSON schema and JSON patch."
 category = "main"
 optional = false
 python-versions = ">=3.7,<4.0"
+files = [
+    {file = "warlock-2.0.1-py3-none-any.whl", hash = "sha256:448df959cec31904f686ac8c6b1dfab80f0cdabce3d303be517dd433eeebf012"},
+    {file = "warlock-2.0.1.tar.gz", hash = "sha256:99abbf9525b2a77f2cde896d3a9f18a5b4590db063db65e08207694d2e0137fc"},
+]
 
 [package.dependencies]
 jsonpatch = ">=1,<2"
 jsonschema = ">=4,<5"
 
 [metadata]
-lock-version = "1.1"
+lock-version = "2.0"
 python-versions = "^3.10"
-content-hash = "a0f040b07fc6ce4deb0be078b9a88c2a465cb6bccb9e260a67e92c2403e2319f"
-
-[metadata.files]
-attrs = []
-black = []
-click = []
-colorama = []
-isort = []
-jsonpatch = []
-jsonpointer = []
-jsonschema = []
-mccabe = []
-mypy = []
-mypy-extensions = []
-pathspec = []
-pexpect = [
-    {file = "pexpect-4.8.0-py2.py3-none-any.whl", hash = "sha256:0b48a55dcb3c05f3329815901ea4fc1537514d6ba867a152b581d69ae3710937"},
-    {file = "pexpect-4.8.0.tar.gz", hash = "sha256:fc65a43959d153d0114afe13997d439c22823a27cefceb5ff35c2178c6784c0c"},
-]
-platformdirs = [
-    {file = "platformdirs-2.5.2-py3-none-any.whl", hash = "sha256:027d8e83a2d7de06bbac4e5ef7e023c02b863d7ea5d079477e722bb41ab25788"},
-    {file = "platformdirs-2.5.2.tar.gz", hash = "sha256:58c8abb07dcb441e6ee4b11d8df0ac856038f944ab98b7be6b27b2a3c7feef19"},
-]
-ptyprocess = []
-pycodestyle = []
-pydocstyle = []
-pyflakes = []
-pylama = []
-pyrsistent = []
-pyyaml = []
-snowballstemmer = []
-toml = []
-tomli = []
-types-pyyaml = []
-typing-extensions = []
-warlock = []
+content-hash = "b3f428e987713d7875434c4b43cadadcb7d77dd3d62fd6855fb8e77ec946f082"
diff --git a/dts/pyproject.toml b/dts/pyproject.toml
index a136c91e5e..c0fe323272 100644
--- a/dts/pyproject.toml
+++ b/dts/pyproject.toml
@@ -22,6 +22,13 @@ pylama = "^8.4.1"
 pyflakes = "2.5.0"
 toml = "^0.10.2"
 
+[tool.poetry.group.docs]
+optional = true
+
+[tool.poetry.group.docs.dependencies]
+Sphinx = "^6.1.3"
+sphinx-rtd-theme = "^1.2.0"
+
 [tool.poetry.scripts]
 dts = "main:main"
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [RFC PATCH v3 3/4] dts: add doc generation
  2023-05-11  9:14   ` [RFC PATCH v3 " Juraj Linkeš
  2023-05-11  9:14     ` [RFC PATCH v3 1/4] dts: code adjustments for sphinx Juraj Linkeš
  2023-05-11  9:14     ` [RFC PATCH v3 2/4] dts: add doc generation dependencies Juraj Linkeš
@ 2023-05-11  9:14     ` Juraj Linkeš
  2023-05-11  9:14     ` [RFC PATCH v3 4/4] dts: format docstrings to google format Juraj Linkeš
                       ` (2 subsequent siblings)
  5 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-05-11  9:14 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	wathsala.vithanage, jspewock, probb
  Cc: dev, Juraj Linkeš

The tool used to generate developer docs is sphinx, which is already
used in DPDK. The configuration is kept the same to preserve the style.

Sphinx generates the documentation from Python docstrings. The docstring
format most suitable for DTS seems to be the Google format [0] which
requires the sphinx.ext.napoleon extension.
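
As an illustration, a Google-format docstring is split into sections that
the napoleon extension translates into Sphinx markup. A minimal sketch
(a hypothetical function, not taken from the DTS code):

# Hypothetical example only; the section names follow the Google style guide.
def connect(host: str, port: int = 22) -> None:
    """Open a session to a remote host.

    Args:
        host: The hostname or IP address to connect to.
        port: The SSH port on the remote host.

    Raises:
        ConnectionError: If the host is unreachable.
    """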

There are two requirements for building DTS docs:
* The same Python version as DTS or higher, because Sphinx imports the
  code.
* The same Python packages as DTS, for the same reason.

[0] https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 buildtools/call-sphinx-build.py | 29 ++++++++++++-------
 doc/api/meson.build             |  1 +
 doc/guides/conf.py              | 22 ++++++++++----
 doc/guides/meson.build          |  1 +
 doc/guides/tools/dts.rst        | 29 +++++++++++++++++++
 dts/doc/doc-index.rst           | 20 +++++++++++++
 dts/doc/meson.build             | 52 +++++++++++++++++++++++++++++++++
 dts/meson.build                 | 16 +++++++++++
 meson.build                     |  1 +
 meson_options.txt               |  2 ++
 10 files changed, 158 insertions(+), 15 deletions(-)
 create mode 100644 dts/doc/doc-index.rst
 create mode 100644 dts/doc/meson.build
 create mode 100644 dts/meson.build

diff --git a/buildtools/call-sphinx-build.py b/buildtools/call-sphinx-build.py
index 39a60d09fa..c2f3acfb1d 100755
--- a/buildtools/call-sphinx-build.py
+++ b/buildtools/call-sphinx-build.py
@@ -3,37 +3,46 @@
 # Copyright(c) 2019 Intel Corporation
 #
 
+import argparse
 import sys
 import os
 from os.path import join
 from subprocess import run, PIPE, STDOUT
 from packaging.version import Version
 
-# assign parameters to variables
-(sphinx, version, src, dst, *extra_args) = sys.argv[1:]
+parser = argparse.ArgumentParser()
+parser.add_argument('sphinx')
+parser.add_argument('version')
+parser.add_argument('src')
+parser.add_argument('dst')
+parser.add_argument('--dts-root', default='.')
+args, extra_args = parser.parse_known_args()
 
 # set the version in environment for sphinx to pick up
-os.environ['DPDK_VERSION'] = version
+os.environ['DPDK_VERSION'] = args.version
+os.environ['DTS_ROOT'] = args.dts_root
 
 # for sphinx version >= 1.7 add parallelism using "-j auto"
-ver = run([sphinx, '--version'], stdout=PIPE,
+ver = run([args.sphinx, '--version'], stdout=PIPE,
           stderr=STDOUT).stdout.decode().split()[-1]
-sphinx_cmd = [sphinx] + extra_args
+sphinx_cmd = [args.sphinx] + extra_args
 if Version(ver) >= Version('1.7'):
     sphinx_cmd += ['-j', 'auto']
 
 # find all the files sphinx will process so we can write them as dependencies
 srcfiles = []
-for root, dirs, files in os.walk(src):
+for root, dirs, files in os.walk(args.src):
     srcfiles.extend([join(root, f) for f in files])
 
 # run sphinx, putting the html output in a "html" directory
-with open(join(dst, 'sphinx_html.out'), 'w') as out:
-    process = run(sphinx_cmd + ['-b', 'html', src, join(dst, 'html')],
-                  stdout=out)
+with open(join(args.dst, 'sphinx_html.out'), 'w') as out:
+    process = run(
+        sphinx_cmd + ['-b', 'html', args.src, join(args.dst, 'html')],
+        stdout=out
+    )
 
 # create a gcc format .d file giving all the dependencies of this doc build
-with open(join(dst, '.html.d'), 'w') as d:
+with open(join(args.dst, '.html.d'), 'w') as d:
     d.write('html: ' + ' '.join(srcfiles) + '\n')
 
 sys.exit(process.returncode)
diff --git a/doc/api/meson.build b/doc/api/meson.build
index 2876a78a7e..1f0c725a94 100644
--- a/doc/api/meson.build
+++ b/doc/api/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Luca Boccassi <bluca@debian.org>
 
+doc_api_build_dir = meson.current_build_dir()
 doxygen = find_program('doxygen', required: get_option('enable_docs'))
 
 if not doxygen.found()
diff --git a/doc/guides/conf.py b/doc/guides/conf.py
index a55ce38800..3f11cbc8fc 100644
--- a/doc/guides/conf.py
+++ b/doc/guides/conf.py
@@ -7,10 +7,9 @@
 from sphinx import __version__ as sphinx_version
 from os import listdir
 from os import environ
-from os.path import basename
-from os.path import dirname
+from os.path import basename, dirname
 from os.path import join as path_join
-from sys import argv, stderr
+from sys import argv, stderr, path
 
 import configparser
 
@@ -24,6 +23,19 @@
           file=stderr)
     pass
 
+extensions = ['sphinx.ext.napoleon']
+
+# Python docstring options
+autodoc_member_order = 'bysource'
+autodoc_typehints = 'both'
+autodoc_typehints_format = 'short'
+napoleon_numpy_docstring = False
+napoleon_attr_annotations = True
+napoleon_use_ivar = True
+napoleon_use_rtype = False
+add_module_names = False
+toc_object_entries_show_parents = 'hide'
+
 stop_on_error = ('-W' in argv)
 
 project = 'Data Plane Development Kit'
@@ -35,8 +47,8 @@
 html_show_copyright = False
 highlight_language = 'none'
 
-release = environ.setdefault('DPDK_VERSION', "None")
-version = release
+path.append(environ.get('DTS_ROOT'))
+version = environ.setdefault('DPDK_VERSION', "None")
 
 master_doc = 'index'
 
diff --git a/doc/guides/meson.build b/doc/guides/meson.build
index 51f81da2e3..8933d75f6b 100644
--- a/doc/guides/meson.build
+++ b/doc/guides/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Intel Corporation
 
+doc_guides_source_dir = meson.current_source_dir()
 sphinx = find_program('sphinx-build', required: get_option('enable_docs'))
 
 if not sphinx.found()
diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
index ebd6dceb6a..a547da2017 100644
--- a/doc/guides/tools/dts.rst
+++ b/doc/guides/tools/dts.rst
@@ -282,3 +282,32 @@ There are three tools used in DTS to help with code checking, style and formatti
 These three tools are all used in ``devtools/dts-check-format.sh``,
 the DTS code check and format script.
 Refer to the script for usage: ``devtools/dts-check-format.sh -h``.
+
+
+Building DTS API docs
+---------------------
+
+To build DTS API docs, install the dependencies with Poetry, then enter its shell:
+
+.. code-block:: console
+
+   poetry install --with docs
+   poetry shell
+
+
+Build commands
+~~~~~~~~~~~~~~
+
+The documentation is built using the standard DPDK build system.
+
+After entering Poetry's shell, build the documentation with:
+
+.. code-block:: console
+
+   ninja -C build dts/doc
+
+The output is generated in ``build/doc/api/dts/html``.
+
+.. Note::
+
+   Make sure to fix any Sphinx warnings when adding or updating docstrings.
diff --git a/dts/doc/doc-index.rst b/dts/doc/doc-index.rst
new file mode 100644
index 0000000000..10151c6851
--- /dev/null
+++ b/dts/doc/doc-index.rst
@@ -0,0 +1,20 @@
+.. DPDK Test Suite documentation master file, created by
+   sphinx-quickstart on Tue Mar 14 12:23:52 2023.
+   You can adapt this file completely to your liking, but it should at least
+   contain the root `toctree` directive.
+
+Welcome to DPDK Test Suite's documentation!
+===========================================
+
+.. toctree::
+   :maxdepth: 4
+   :caption: Contents:
+
+   modules
+
+Indices and tables
+==================
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`
diff --git a/dts/doc/meson.build b/dts/doc/meson.build
new file mode 100644
index 0000000000..8c1416296b
--- /dev/null
+++ b/dts/doc/meson.build
@@ -0,0 +1,52 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+sphinx = find_program('sphinx-build', required: get_option('enable_dts_docs'))
+sphinx_apidoc = find_program('sphinx-apidoc', required: get_option('enable_dts_docs'))
+
+if not sphinx.found() or not sphinx_apidoc.found()
+    subdir_done()
+endif
+
+dts_api_framework_dir = join_paths(dts_dir, 'framework')
+dts_api_build_dir = join_paths(doc_api_build_dir, 'dts')
+dts_api_src = custom_target('dts_api_src',
+        output: 'modules.rst',
+        command: ['SPHINX_APIDOC_OPTIONS=members,show-inheritance',
+            sphinx_apidoc, '--append-syspath', '--force',
+            '--module-first', '--separate',
+            '--doc-project', 'DTS', '-V', meson.project_version(),
+            '-o', dts_api_build_dir,
+            dts_api_framework_dir],
+        build_by_default: get_option('enable_dts_docs'))
+doc_targets += dts_api_src
+doc_target_names += 'DTS_API_sphinx_sources'
+
+cp = find_program('cp', required: get_option('enable_dts_docs'))
+cp_index = custom_target('cp_index',
+        input: 'doc-index.rst',
+        output: 'index.rst',
+        depends: dts_api_src,
+        command: [cp, '@INPUT@', join_paths(dts_api_build_dir, 'index.rst')],
+        build_by_default: get_option('enable_dts_docs'))
+doc_targets += cp_index
+doc_target_names += 'DTS_API_sphinx_index'
+
+extra_sphinx_args = ['-a', '-c', doc_guides_source_dir]
+if get_option('werror')
+    extra_sphinx_args += '-W'
+endif
+
+htmldir = join_paths(get_option('datadir'), 'doc', 'dpdk')
+dts_api_html = custom_target('dts_api_html',
+        output: 'html',
+        depends: cp_index,
+        command: ['DTS_ROOT=@0@'.format(dts_dir),
+            sphinx_wrapper, sphinx, meson.project_version(),
+            dts_api_build_dir, dts_api_build_dir,
+            '--dts-root', dts_dir, extra_sphinx_args],
+        build_by_default: get_option('enable_dts_docs'),
+        install: get_option('enable_dts_docs'),
+        install_dir: htmldir)
+doc_targets += dts_api_html
+doc_target_names += 'DTS_API_HTML'
diff --git a/dts/meson.build b/dts/meson.build
new file mode 100644
index 0000000000..17bda07636
--- /dev/null
+++ b/dts/meson.build
@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+doc_targets = []
+doc_target_names = []
+dts_dir = meson.current_source_dir()
+
+subdir('doc')
+
+if doc_targets.length() == 0
+    message = 'No docs targets found'
+else
+    message = 'Built docs:'
+endif
+run_target('dts/doc', command: [echo, message, doc_target_names],
+    depends: doc_targets)
diff --git a/meson.build b/meson.build
index f91d652bc5..7820f334bb 100644
--- a/meson.build
+++ b/meson.build
@@ -84,6 +84,7 @@ subdir('app')
 
 # build docs
 subdir('doc')
+subdir('dts')
 
 # build any examples explicitly requested - useful for developers - and
 # install any example code into the appropriate install path
diff --git a/meson_options.txt b/meson_options.txt
index 82c8297065..267f1b3ef7 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -16,6 +16,8 @@ option('drivers_install_subdir', type: 'string', value: 'dpdk/pmds-<VERSION>', d
        'Subdirectory of libdir where to install PMDs. Defaults to using a versioned subdirectory.')
 option('enable_docs', type: 'boolean', value: false, description:
        'build documentation')
+option('enable_dts_docs', type: 'boolean', value: false, description:
+       'Build DTS API documentation.')
 option('enable_apps', type: 'string', value: '', description:
        'Comma-separated list of apps to build. If unspecified, build all apps.')
 option('enable_drivers', type: 'string', value: '', description:
-- 
2.30.2


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [RFC PATCH v3 4/4] dts: format docstrings to google format
  2023-05-11  9:14   ` [RFC PATCH v3 " Juraj Linkeš
                       ` (2 preceding siblings ...)
  2023-05-11  9:14     ` [RFC PATCH v3 3/4] dts: add doc generation Juraj Linkeš
@ 2023-05-11  9:14     ` Juraj Linkeš
  2023-06-21 18:27       ` Jeremy Spewock
  2023-05-17 16:56     ` [RFC PATCH v3 0/4] dts: add dts api docs Bruce Richardson
  2023-08-31 10:04     ` [RFC PATCH v4 " Juraj Linkeš
  5 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-05-11  9:14 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	wathsala.vithanage, jspewock, probb
  Cc: dev, Juraj Linkeš

WIP: only one module is reformatted to serve as a demonstration.

The google format is documented here [0].

[0]: https://google.github.io/styleguide/pyguide.html
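
A before/after sketch of the kind of rewrite this patch performs
(a made-up function, not one from the patch below):

# Hypothetical example; before: a free-form docstring.
def add(a: int, b: int) -> int:
    """
    Add two numbers and return the result.
    """
    return a + b

# Hypothetical example; after: a Google-format docstring with explicit sections.
def add(a: int, b: int) -> int:
    """Add two numbers.

    Args:
        a: The first addend.
        b: The second addend.

    Returns:
        The sum of both arguments.
    """
    return a + b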

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/node.py | 152 +++++++++++++++++++---------
 1 file changed, 103 insertions(+), 49 deletions(-)

diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index 90467981c3..ad8ef442af 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -3,8 +3,13 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
-"""
-A node is a generic host that DTS connects to and manages.
+"""Common functionality for node management.
+
+There's a base class, Node, that's supposed to be extended by other classes
+with functionality specific to that node type.
+The only part that can be used standalone is the Node.skip_setup static method,
+which is a decorator used to skip method execution
+if skip_setup is passed by the user on the cmdline or in an env variable.
 """
 
 from typing import Any, Callable
@@ -26,10 +31,25 @@
 
 
 class Node(object):
-    """
-    Basic class for node management. This class implements methods that
-    manage a node, such as information gathering (of CPU/PCI/NIC) and
-    environment setup.
+    """The base class for node management.
+
+    It shouldn't be instantiated, but rather extended.
+    It implements common methods to manage any node:
+
+       * connection to the node
+       * information gathering of CPU
+       * hugepages setup
+
+    Arguments:
+        node_config: The config from the input configuration file.
+
+    Attributes:
+        main_session: The primary OS-agnostic remote session used
+            to communicate with the node.
+        config: The configuration used to create the node.
+        name: The name of the node.
+        lcores: The list of logical cores that DTS can use on the node.
+            It's derived from logical cores present on the node and user configuration.
     """
 
     main_session: OSSession
@@ -56,65 +76,89 @@ def __init__(self, node_config: NodeConfiguration):
         self._logger.info(f"Created node: {self.name}")
 
     def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
-        """
-        Perform the execution setup that will be done for each execution
-        this node is part of.
+        """Execution setup steps.
+
+        Configure hugepages and call self._set_up_execution where
+        the rest of the configuration steps (if any) are implemented.
+
+        Args:
+            execution_config: The execution configuration according to which
+                the setup steps will be taken.
         """
         self._setup_hugepages()
         self._set_up_execution(execution_config)
 
     def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional execution setup steps for derived classes.
+
+        Derived classes should override this
+        if they want to add additional execution setup steps.
         """
 
     def tear_down_execution(self) -> None:
-        """
-        Perform the execution teardown that will be done after each execution
-        this node is part of concludes.
+        """Execution teardown steps.
+
+        There are currently no execution teardown steps
+        common to all DTS node types.
         """
         self._tear_down_execution()
 
     def _tear_down_execution(self) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional execution teardown steps for derived classes.
+
+        Derived classes should override this
+        if they want to add additional execution teardown steps.
         """
 
     def set_up_build_target(
         self, build_target_config: BuildTargetConfiguration
     ) -> None:
-        """
-        Perform the build target setup that will be done for each build target
-        tested on this node.
+        """Build target setup steps.
+
+        There are currently no build target setup steps
+        common to all DTS node types.
+
+        Args:
+            build_target_config: The build target configuration according to which
+                the setup steps will be taken.
         """
         self._set_up_build_target(build_target_config)
 
     def _set_up_build_target(
         self, build_target_config: BuildTargetConfiguration
     ) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional build target setup steps for derived classes.
+
+        Derived classes should override this
+        if they want to add additional build target setup steps.
         """
 
     def tear_down_build_target(self) -> None:
-        """
-        Perform the build target teardown that will be done after each build target
-        tested on this node.
+        """Build target teardown steps.
+
+        There are currently no build target teardown steps
+        common to all DTS node types.
         """
         self._tear_down_build_target()
 
     def _tear_down_build_target(self) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional build target teardown steps for derived classes.
+
+        Derived classes should override this
+        if they want to add additional build target teardown steps.
         """
 
     def create_session(self, name: str) -> OSSession:
-        """
-        Create and return a new OSSession tailored to the remote OS.
+        """Create and return a new OS-agnostic remote session.
+
+        The returned session won't be used by the object creating it,
+        but it will be cleaned up automatically.
+
+        Args:
+            name: The name of the session.
+
+        Returns:
+            A new OS-agnostic remote session.
         """
         session_name = f"{self.name} {name}"
         connection = create_session(
@@ -130,14 +174,24 @@ def filter_lcores(
         filter_specifier: LogicalCoreCount | LogicalCoreList,
         ascending: bool = True,
     ) -> list[LogicalCore]:
-        """
-        Filter the LogicalCores found on the Node according to
-        a LogicalCoreCount or a LogicalCoreList.
+        """Filter the node's logical cores that DTS can use.
 
-        If ascending is True, use cores with the lowest numerical id first
-        and continue in ascending order. If False, start with the highest
-        id and continue in descending order. This ordering affects which
-        sockets to consider first as well.
+        Logical cores that DTS can use are ones that are present on the node,
+        but filtered according to user config.
+        The filter_specifier will filter cores from those logical cores.
+
+        Args:
+            filter_specifier: Two different filters can be used: one specifies
+                the number of logical cores per core, cores per socket and
+                the number of sockets;
+                the other specifies a logical core list.
+            ascending: If True, use cores with the lowest numerical id first
+                and continue in ascending order. If False, start with the highest
+                id and continue in descending order. This ordering affects which
+                sockets to consider first as well.
+
+        Returns:
+            A list of logical cores.
         """
         self._logger.debug(f"Filtering {filter_specifier} from {self.lcores}.")
         return lcore_filter(
@@ -147,17 +201,14 @@ def filter_lcores(
         ).filter()
 
     def _get_remote_cpus(self) -> None:
-        """
-        Scan CPUs in the remote OS and store a list of LogicalCores.
-        """
+        """Scan CPUs in the remote OS and store a list of LogicalCores."""
         self._logger.info("Getting CPU information.")
         self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
 
     def _setup_hugepages(self):
-        """
-        Setup hugepages on the Node. Different architectures can supply different
-        amounts of memory for hugepages and numa-based hugepage allocation may need
-        to be considered.
+        """Set up hugepages on the Node.
+
+        Configure the hugepages only if they're specified in user configuration.
         """
         if self.config.hugepages:
             self.main_session.setup_hugepages(
@@ -165,9 +216,7 @@ def _setup_hugepages(self):
             )
 
     def close(self) -> None:
-        """
-        Close all connections and free other resources.
-        """
+        """Close all connections and free other resources."""
         if self.main_session:
             self.main_session.close()
         for session in self._other_sessions:
@@ -176,6 +225,11 @@ def close(self) -> None:
 
     @staticmethod
     def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
+        """A decorator that skips the decorated function.
+
+        When used, the decorator executes an empty lambda function
+        instead of the decorated function.
+        """
         if SETTINGS.skip_setup:
             return lambda *args: None
         else:
-- 
2.30.2


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v3 0/4] dts: add dts api docs
  2023-05-11  9:14   ` [RFC PATCH v3 " Juraj Linkeš
                       ` (3 preceding siblings ...)
  2023-05-11  9:14     ` [RFC PATCH v3 4/4] dts: format docstrings to google format Juraj Linkeš
@ 2023-05-17 16:56     ` Bruce Richardson
  2023-05-22  9:17       ` Juraj Linkeš
  2023-08-31 10:04     ` [RFC PATCH v4 " Juraj Linkeš
  5 siblings, 1 reply; 255+ messages in thread
From: Bruce Richardson @ 2023-05-17 16:56 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, wathsala.vithanage,
	jspewock, probb, dev

On Thu, May 11, 2023 at 11:14:04AM +0200, Juraj Linkeš wrote:
> Augment the meson build system with dts api generation. The api docs are
> generated from Python docstrings in DTS using Sphinx. The format of
> choice is the Google format [0].
> 
> The guides html sphinx configuration is used to preserve the same style.
> 
> The build requires the same Python version and dependencies as DTS,
> because Sphinx imports the Python modules. Dependencies are installed
> using Poetry from the dts directory:
> 
> poetry install --with docs
> 
> After installing, enter the Poetry shell:
> 
> poetry shell
> 
> And then run the build:
> ninja -C <meson_build_dir> dts/doc
> 
> There's only one properly documented module that serves as a
> demonstration of the style - framework.testbed_model.node. When we agree
> on the docstring format, all docstrings will be reformatted.
> 
> [0] https://google.github.io/styleguide/pyguide.html#s3.8.4-comments-in-classes
> 
> Juraj Linkeš (4):
>   dts: code adjustments for sphinx
>   dts: add doc generation dependencies
>   dts: add doc generation
>   dts: format docstrings to google format
> 
Given that building the DTS docs requires a special set of commands to set
things up and then to run the build through Poetry, I think you should just
drop the option in meson_options.txt. I think it's better if building the
DTS docs follows the steps that you outline here, and we don't try to
integrate it into the main DPDK build.

With that change:

Series-acked-by: Bruce Richardson <bruce.richardson@intel.com>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v3 0/4] dts: add dts api docs
  2023-05-17 16:56     ` [RFC PATCH v3 0/4] dts: add dts api docs Bruce Richardson
@ 2023-05-22  9:17       ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-05-22  9:17 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, wathsala.vithanage,
	jspewock, probb, dev

On Wed, May 17, 2023 at 6:57 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Thu, May 11, 2023 at 11:14:04AM +0200, Juraj Linkeš wrote:
> > Augment the meson build system with dts api generation. The api docs are
> > generated from Python docstrings in DTS using Sphinx. The format of
> > choice is the Google format [0].
> >
> > The guides html sphinx configuration is used to preserve the same style.
> >
> > The build requires the same Python version and dependencies as DTS,
> > because Sphinx imports the Python modules. Dependencies are installed
> > using Poetry from the dts directory:
> >
> > poetry install --with docs
> >
> > After installing, enter the Poetry shell:
> >
> > poetry shell
> >
> > And then run the build:
> > ninja -C <meson_build_dir> dts/doc
> >
> > There's only one properly documented module that serves as a
> > demonstration of the style - framework.testbed_model.node. When we agree
> > on the docstring format, all docstrings will be reformatted.
> >
> > [0] https://google.github.io/styleguide/pyguide.html#s3.8.4-comments-in-classes
> >
> > Juraj Linkeš (4):
> >   dts: code adjustments for sphinx
> >   dts: add doc generation dependencies
> >   dts: add doc generation
> >   dts: format docstrings to google format
> >
> Given that building the DTS docs requires a special set of commands to set
> things up and then to run the build through poetry, I think you should just
> drop the option in meson_options.txt. I think it's better if building the
> DTS docs is the steps that out outline here, and we don't try and integrate
> it into the main DPDK build.
>

That makes a lot of sense. I'll make the change.

> With that change:
>
> Series-acked-by: Bruce Richardson <bruce.richardson@intel.com>

Thanks.

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v3 4/4] dts: format docstrings to google format
  2023-05-11  9:14     ` [RFC PATCH v3 4/4] dts: format docstrings to google format Juraj Linkeš
@ 2023-06-21 18:27       ` Jeremy Spewock
  0 siblings, 0 replies; 255+ messages in thread
From: Jeremy Spewock @ 2023-06-21 18:27 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	wathsala.vithanage, probb, dev

Acked-by: Jeremy Spewock <jspewock@iol.unh.edu>

On Thu, May 11, 2023 at 5:14 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
wrote:

> WIP: only one module is reformatted to serve as a demonstration.
>
> The google format is documented here [0].
>
> [0]: https://google.github.io/styleguide/pyguide.html
>
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>  dts/framework/testbed_model/node.py | 152 +++++++++++++++++++---------
>  1 file changed, 103 insertions(+), 49 deletions(-)
>
> diff --git a/dts/framework/testbed_model/node.py
> b/dts/framework/testbed_model/node.py
> index 90467981c3..ad8ef442af 100644
> --- a/dts/framework/testbed_model/node.py
> +++ b/dts/framework/testbed_model/node.py
> @@ -3,8 +3,13 @@
>  # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
>  # Copyright(c) 2022-2023 University of New Hampshire
>
> -"""
> -A node is a generic host that DTS connects to and manages.
> +"""Common functionality for node management.
> +
> +There's a base class, Node, that's supposed to be extended by other
> classes
> +with functionality specific to that node type.
> +The only part that can be used standalone is the Node.skip_setup static
> method,
> +which is a decorator used to skip method execution
> +if skip_setup is passed by the user on the cmdline or in an env variable.
>  """
>
>  from typing import Any, Callable
> @@ -26,10 +31,25 @@
>
>
>  class Node(object):
> -    """
> -    Basic class for node management. This class implements methods that
> -    manage a node, such as information gathering (of CPU/PCI/NIC) and
> -    environment setup.
> +    """The base class for node management.
> +
> +    It shouldn't be instantiated, but rather extended.
> +    It implements common methods to manage any node:
> +
> +       * connection to the node
> +       * information gathering of CPU
> +       * hugepages setup
> +
> +    Arguments:
> +        node_config: The config from the input configuration file.
> +
> +    Attributes:
> +        main_session: The primary OS-agnostic remote session used
> +            to communicate with the node.
> +        config: The configuration used to create the node.
> +        name: The name of the node.
> +        lcores: The list of logical cores that DTS can use on the node.
> +            It's derived from logical cores present on the node and user
> configuration.
>      """
>
>      main_session: OSSession
> @@ -56,65 +76,89 @@ def __init__(self, node_config: NodeConfiguration):
>          self._logger.info(f"Created node: {self.name}")
>
>      def set_up_execution(self, execution_config: ExecutionConfiguration)
> -> None:
> -        """
> -        Perform the execution setup that will be done for each execution
> -        this node is part of.
> +        """Execution setup steps.
> +
> +        Configure hugepages and call self._set_up_execution where
> +        the rest of the configuration steps (if any) are implemented.
> +
> +        Args:
> +            execution_config: The execution configuration according to
> which
> +                the setup steps will be taken.
>          """
>          self._setup_hugepages()
>          self._set_up_execution(execution_config)
>
>      def _set_up_execution(self, execution_config: ExecutionConfiguration)
> -> None:
> -        """
> -        This method exists to be optionally overwritten by derived
> classes and
> -        is not decorated so that the derived class doesn't have to use
> the decorator.
> +        """Optional additional execution setup steps for derived classes.
> +
> +        Derived classes should overwrite this
> +        if they want to add additional execution setup steps.
>          """
>
>      def tear_down_execution(self) -> None:
> -        """
> -        Perform the execution teardown that will be done after each
> execution
> -        this node is part of concludes.
> +        """Execution teardown steps.
> +
> +        There are currently no common execution teardown steps
> +        common to all DTS node types.
>          """
>          self._tear_down_execution()
>
>      def _tear_down_execution(self) -> None:
> -        """
> -        This method exists to be optionally overwritten by derived
> classes and
> -        is not decorated so that the derived class doesn't have to use
> the decorator.
> +        """Optional additional execution teardown steps for derived
> classes.
> +
> +        Derived classes should overwrite this
> +        if they want to add additional execution teardown steps.
>          """
>
>      def set_up_build_target(
>          self, build_target_config: BuildTargetConfiguration
>      ) -> None:
> -        """
> -        Perform the build target setup that will be done for each build
> target
> -        tested on this node.
> +        """Build target setup steps.
> +
> +        There are currently no common build target setup steps
> +        common to all DTS node types.
> +
> +        Args:
> +            build_target_config: The build target configuration according
> to which
> +                the setup steps will be taken.
>          """
>          self._set_up_build_target(build_target_config)
>
>      def _set_up_build_target(
>          self, build_target_config: BuildTargetConfiguration
>      ) -> None:
> -        """
> -        This method exists to be optionally overwritten by derived
> classes and
> -        is not decorated so that the derived class doesn't have to use
> the decorator.
> +        """Optional additional build target setup steps for derived
> classes.
> +
> +        Derived classes should optionally overwrite this
> +        if they want to add additional build target setup steps.
>          """
>
>      def tear_down_build_target(self) -> None:
> -        """
> -        Perform the build target teardown that will be done after each
> build target
> -        tested on this node.
> +        """Build target teardown steps.
> +
> +        There are currently no common build target teardown steps
> +        common to all DTS node types.
>          """
>          self._tear_down_build_target()
>
>      def _tear_down_build_target(self) -> None:
> -        """
> -        This method exists to be optionally overwritten by derived
> classes and
> -        is not decorated so that the derived class doesn't have to use
> the decorator.
> +        """Optional additional build target teardown steps for derived
> classes.
> +
> +        Derived classes should overwrite this
> +        if they want to add additional build target teardown steps.
>          """
>
>      def create_session(self, name: str) -> OSSession:
> -        """
> -        Create and return a new OSSession tailored to the remote OS.
> +        """Create and return a new OS-agnostic remote session.
> +
> +        The returned session won't be used by the object creating it.
> +        Will be cleaned up automatically.
> +
> +        Args:
> +            name: The name of the session.
> +
> +        Returns:
> +            A new OS-agnostic remote session.
>          """
>          session_name = f"{self.name} {name}"
>          connection = create_session(
> @@ -130,14 +174,24 @@ def filter_lcores(
>          filter_specifier: LogicalCoreCount | LogicalCoreList,
>          ascending: bool = True,
>      ) -> list[LogicalCore]:
> -        """
> -        Filter the LogicalCores found on the Node according to
> -        a LogicalCoreCount or a LogicalCoreList.
> +        """Filter the node's logical cores that DTS can use.
>
> -        If ascending is True, use cores with the lowest numerical id first
> -        and continue in ascending order. If False, start with the highest
> -        id and continue in descending order. This ordering affects which
> -        sockets to consider first as well.
> +        Logical cores that DTS can use are ones that are present on the
> node,
> +        but filtered according to user config.
> +        The filter_specifier will filter cores from those logical cores.
> +
> +        Args:
> +            filter_specifier: Two different filters can be used, one that
> specifies
> +                the number of logical cores per core, cores per socket and
> +                the number of sockets,
> +                the other that specifies a logical core list.
> +            ascending: If True, use cores with the lowest numerical id
> first
> +                and continue in ascending order. If False, start with the
> highest
> +                id and continue in descending order. This ordering
> affects which
> +                sockets to consider first as well.
> +
> +        Returns:
> +            A list of logical cores.
>          """
>          self._logger.debug(f"Filtering {filter_specifier} from
> {self.lcores}.")
>          return lcore_filter(
> @@ -147,17 +201,14 @@ def filter_lcores(
>          ).filter()
>
>      def _get_remote_cpus(self) -> None:
> -        """
> -        Scan CPUs in the remote OS and store a list of LogicalCores.
> -        """
> +        """Scan CPUs in the remote OS and store a list of LogicalCores."""
>          self._logger.info("Getting CPU information.")
>          self.lcores =
> self.main_session.get_remote_cpus(self.config.use_first_core)
>
>      def _setup_hugepages(self):
> -        """
> -        Setup hugepages on the Node. Different architectures can supply
> different
> -        amounts of memory for hugepages and numa-based hugepage
> allocation may need
> -        to be considered.
> +        """Setup hugepages on the Node.
> +
> +        Configure the hugepages only if they're specified in user
> configuration.
>          """
>          if self.config.hugepages:
>              self.main_session.setup_hugepages(
> @@ -165,9 +216,7 @@ def _setup_hugepages(self):
>              )
>
>      def close(self) -> None:
> -        """
> -        Close all connections and free other resources.
> -        """
> +        """Close all connections and free other resources."""
>          if self.main_session:
>              self.main_session.close()
>          for session in self._other_sessions:
> @@ -176,6 +225,11 @@ def close(self) -> None:
>
>      @staticmethod
>      def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
> +        """A decorator that skips the decorated function.
> +
> +        When used, the decorator executes an empty lambda function
> +        instead of the decorated function.
> +        """
>          if SETTINGS.skip_setup:
>              return lambda *args: None
>          else:
> --
> 2.30.2
>
>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* [RFC PATCH v4 0/4] dts: add dts api docs
  2023-05-11  9:14   ` [RFC PATCH v3 " Juraj Linkeš
                       ` (4 preceding siblings ...)
  2023-05-17 16:56     ` [RFC PATCH v3 0/4] dts: add dts api docs Bruce Richardson
@ 2023-08-31 10:04     ` Juraj Linkeš
  2023-08-31 10:04       ` [RFC PATCH v4 1/4] dts: code adjustments for sphinx Juraj Linkeš
                         ` (4 more replies)
  5 siblings, 5 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-08-31 10:04 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	jspewock, probb
  Cc: dev, Juraj Linkeš

Augment the meson build system with dts api generation. The api docs are
generated from Python docstrings in DTS using Sphinx. The format of
choice is the Google format [0].

The html Sphinx configuration of the guides is used to preserve the same
style, except the sidebar is configured to allow unlimited depth and better
collapsing.

The build requires the same Python version and dependencies as DTS,
because Sphinx imports the Python modules. The modules are imported
individually, requiring code refactoring. Dependencies are installed
using Poetry from the dts directory:

poetry install --with docs

After installing, enter the Poetry shell:

poetry shell

And then run the build:
ninja -C <meson_build_dir> dts/doc

There's only one properly documented module that serves as a
demonstration of the style - framework.testbed_model.node. When we agree
on the docstring format, all docstrings will be reformatted.

[0] https://google.github.io/styleguide/pyguide.html#s3.8.4-comments-in-classes

Juraj Linkeš (4):
  dts: code adjustments for sphinx
  dts: add doc generation dependencies
  dts: add doc generation
  dts: format docstrings to google format

 buildtools/call-sphinx-build.py               |  29 +-
 doc/api/meson.build                           |   1 +
 doc/guides/conf.py                            |  32 +-
 doc/guides/meson.build                        |   1 +
 doc/guides/tools/dts.rst                      |  29 ++
 dts/doc/doc-index.rst                         |  17 +
 dts/doc/meson.build                           |  50 ++
 dts/framework/config/__init__.py              |   3 -
 dts/framework/dts.py                          |  34 +-
 dts/framework/remote_session/__init__.py      |  41 +-
 .../interactive_remote_session.py             |   0
 .../{remote => }/interactive_shell.py         |   0
 .../{remote => }/python_shell.py              |   0
 .../remote_session/remote/__init__.py         |  27 --
 .../{remote => }/remote_session.py            |   0
 .../{remote => }/ssh_session.py               |   0
 .../{remote => }/testpmd_shell.py             |   0
 dts/framework/settings.py                     |  92 ++--
 dts/framework/test_suite.py                   |   3 +-
 dts/framework/testbed_model/__init__.py       |  12 +-
 dts/framework/testbed_model/common.py         |  29 ++
 dts/framework/testbed_model/{hw => }/cpu.py   |  13 +
 dts/framework/testbed_model/hw/__init__.py    |  27 --
 .../linux_session.py                          |   4 +-
 dts/framework/testbed_model/node.py           | 193 +++++---
 .../os_session.py                             |  14 +-
 dts/framework/testbed_model/{hw => }/port.py  |   0
 .../posix_session.py                          |   2 +-
 dts/framework/testbed_model/sut_node.py       |   8 +-
 dts/framework/testbed_model/tg_node.py        |  30 +-
 .../traffic_generator/__init__.py             |  24 +
 .../capturing_traffic_generator.py            |   2 +-
 .../{ => traffic_generator}/scapy.py          |  17 +-
 .../traffic_generator.py                      |  16 +-
 .../testbed_model/{hw => }/virtual_device.py  |   0
 dts/framework/utils.py                        |  53 +--
 dts/main.py                                   |   3 +-
 dts/meson.build                               |  16 +
 dts/poetry.lock                               | 447 +++++++++++++++++-
 dts/pyproject.toml                            |   7 +
 meson.build                                   |   1 +
 41 files changed, 961 insertions(+), 316 deletions(-)
 create mode 100644 dts/doc/doc-index.rst
 create mode 100644 dts/doc/meson.build
 rename dts/framework/remote_session/{remote => }/interactive_remote_session.py (100%)
 rename dts/framework/remote_session/{remote => }/interactive_shell.py (100%)
 rename dts/framework/remote_session/{remote => }/python_shell.py (100%)
 delete mode 100644 dts/framework/remote_session/remote/__init__.py
 rename dts/framework/remote_session/{remote => }/remote_session.py (100%)
 rename dts/framework/remote_session/{remote => }/ssh_session.py (100%)
 rename dts/framework/remote_session/{remote => }/testpmd_shell.py (100%)
 create mode 100644 dts/framework/testbed_model/common.py
 rename dts/framework/testbed_model/{hw => }/cpu.py (95%)
 delete mode 100644 dts/framework/testbed_model/hw/__init__.py
 rename dts/framework/{remote_session => testbed_model}/linux_session.py (98%)
 rename dts/framework/{remote_session => testbed_model}/os_session.py (97%)
 rename dts/framework/testbed_model/{hw => }/port.py (100%)
 rename dts/framework/{remote_session => testbed_model}/posix_session.py (99%)
 create mode 100644 dts/framework/testbed_model/traffic_generator/__init__.py
 rename dts/framework/testbed_model/{ => traffic_generator}/capturing_traffic_generator.py (99%)
 rename dts/framework/testbed_model/{ => traffic_generator}/scapy.py (96%)
 rename dts/framework/testbed_model/{ => traffic_generator}/traffic_generator.py (80%)
 rename dts/framework/testbed_model/{hw => }/virtual_device.py (100%)
 create mode 100644 dts/meson.build

-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [RFC PATCH v4 1/4] dts: code adjustments for sphinx
  2023-08-31 10:04     ` [RFC PATCH v4 " Juraj Linkeš
@ 2023-08-31 10:04       ` Juraj Linkeš
  2023-10-22 14:30         ` Yoan Picchi
  2023-08-31 10:04       ` [RFC PATCH v4 2/4] dts: add doc generation dependencies Juraj Linkeš
                         ` (3 subsequent siblings)
  4 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-08-31 10:04 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	jspewock, probb
  Cc: dev, Juraj Linkeš

sphinx-build only imports the Python modules when building the
documentation; it doesn't run DTS. This requires changes that make the
code importable without running it. This means:
* properly guarding argument parsing in the if __name__ == '__main__'
  block (sketched below).
* the logger used by the DTS runner underwent the same treatment so that it
  doesn't create unnecessary log files.
* DTS uses the arguments to construct an object holding global
  variables, so the defaults for those variables had to be moved
  out of argument parsing.
* importing the remote_session module from framework resulted in
  circular imports because one module tried to import another.
  This is fixed by more granular imports.
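
A minimal sketch of the argument parsing guard mentioned above
(a hypothetical module, not the actual DTS code):

import argparse

def _parse_args() -> argparse.Namespace:
    # Parsing happens only when the module is executed directly, never at
    # import time, so sphinx-build can import the module without any
    # command line arguments being present.
    parser = argparse.ArgumentParser()
    parser.add_argument('--skip-setup', action='store_true')
    return parser.parse_args()

def main() -> None:
    args = _parse_args()
    print(args.skip_setup)

if __name__ == '__main__':
    main()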

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/config/__init__.py              |  3 -
 dts/framework/dts.py                          | 34 ++++++-
 dts/framework/remote_session/__init__.py      | 41 ++++-----
 .../interactive_remote_session.py             |  0
 .../{remote => }/interactive_shell.py         |  0
 .../{remote => }/python_shell.py              |  0
 .../remote_session/remote/__init__.py         | 27 ------
 .../{remote => }/remote_session.py            |  0
 .../{remote => }/ssh_session.py               |  0
 .../{remote => }/testpmd_shell.py             |  0
 dts/framework/settings.py                     | 92 +++++++++++--------
 dts/framework/test_suite.py                   |  3 +-
 dts/framework/testbed_model/__init__.py       | 12 +--
 dts/framework/testbed_model/common.py         | 29 ++++++
 dts/framework/testbed_model/{hw => }/cpu.py   | 13 +++
 dts/framework/testbed_model/hw/__init__.py    | 27 ------
 .../linux_session.py                          |  4 +-
 dts/framework/testbed_model/node.py           | 22 ++++-
 .../os_session.py                             | 14 +--
 dts/framework/testbed_model/{hw => }/port.py  |  0
 .../posix_session.py                          |  2 +-
 dts/framework/testbed_model/sut_node.py       |  8 +-
 dts/framework/testbed_model/tg_node.py        | 30 +-----
 .../traffic_generator/__init__.py             | 24 +++++
 .../capturing_traffic_generator.py            |  2 +-
 .../{ => traffic_generator}/scapy.py          | 17 +---
 .../traffic_generator.py                      | 16 +++-
 .../testbed_model/{hw => }/virtual_device.py  |  0
 dts/framework/utils.py                        | 53 +----------
 dts/main.py                                   |  3 +-
 30 files changed, 229 insertions(+), 247 deletions(-)
 rename dts/framework/remote_session/{remote => }/interactive_remote_session.py (100%)
 rename dts/framework/remote_session/{remote => }/interactive_shell.py (100%)
 rename dts/framework/remote_session/{remote => }/python_shell.py (100%)
 delete mode 100644 dts/framework/remote_session/remote/__init__.py
 rename dts/framework/remote_session/{remote => }/remote_session.py (100%)
 rename dts/framework/remote_session/{remote => }/ssh_session.py (100%)
 rename dts/framework/remote_session/{remote => }/testpmd_shell.py (100%)
 create mode 100644 dts/framework/testbed_model/common.py
 rename dts/framework/testbed_model/{hw => }/cpu.py (95%)
 delete mode 100644 dts/framework/testbed_model/hw/__init__.py
 rename dts/framework/{remote_session => testbed_model}/linux_session.py (98%)
 rename dts/framework/{remote_session => testbed_model}/os_session.py (97%)
 rename dts/framework/testbed_model/{hw => }/port.py (100%)
 rename dts/framework/{remote_session => testbed_model}/posix_session.py (99%)
 create mode 100644 dts/framework/testbed_model/traffic_generator/__init__.py
 rename dts/framework/testbed_model/{ => traffic_generator}/capturing_traffic_generator.py (99%)
 rename dts/framework/testbed_model/{ => traffic_generator}/scapy.py (96%)
 rename dts/framework/testbed_model/{ => traffic_generator}/traffic_generator.py (80%)
 rename dts/framework/testbed_model/{hw => }/virtual_device.py (100%)

diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index cb7e00ba34..5de8b54bcf 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -324,6 +324,3 @@ def load_config() -> Configuration:
     config: dict[str, Any] = warlock.model_factory(schema, name="_Config")(config_data)
     config_obj: Configuration = Configuration.from_dict(dict(config))
     return config_obj
-
-
-CONFIGURATION = load_config()
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index f773f0c38d..925a212210 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -3,22 +3,23 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
+import logging
 import sys
 
 from .config import (
-    CONFIGURATION,
     BuildTargetConfiguration,
     ExecutionConfiguration,
     TestSuiteConfig,
+    load_config,
 )
 from .exception import BlockingTestSuiteError
 from .logger import DTSLOG, getLogger
 from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result
 from .test_suite import get_test_suites
 from .testbed_model import SutNode, TGNode
-from .utils import check_dts_python_version
 
-dts_logger: DTSLOG = getLogger("DTSRunner")
+# dummy defaults to satisfy linters
+dts_logger: DTSLOG | logging.Logger = logging.getLogger("DTSRunner")
 result: DTSResult = DTSResult(dts_logger)
 
 
@@ -30,14 +31,18 @@ def run_all() -> None:
     global dts_logger
     global result
 
+    # create a regular DTS logger and create a new result with it
+    dts_logger = getLogger("DTSRunner")
+    result = DTSResult(dts_logger)
+
     # check the python version of the server that runs dts
-    check_dts_python_version()
+    _check_dts_python_version()
 
     sut_nodes: dict[str, SutNode] = {}
     tg_nodes: dict[str, TGNode] = {}
     try:
         # for all Execution sections
-        for execution in CONFIGURATION.executions:
+        for execution in load_config().executions:
             sut_node = sut_nodes.get(execution.system_under_test_node.name)
             tg_node = tg_nodes.get(execution.traffic_generator_node.name)
 
@@ -82,6 +87,25 @@ def run_all() -> None:
     _exit_dts()
 
 
+def _check_dts_python_version() -> None:
+    def RED(text: str) -> str:
+        return f"\u001B[31;1m{str(text)}\u001B[0m"
+
+    if sys.version_info.major < 3 or (
+        sys.version_info.major == 3 and sys.version_info.minor < 10
+    ):
+        print(
+            RED(
+                (
+                    "WARNING: DTS execution node's Python version is lower than "
+                    "Python 3.10. This is deprecated and will not work in future releases."
+                )
+            ),
+            file=sys.stderr,
+        )
+        print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
+
+
 def _run_execution(
     sut_node: SutNode,
     tg_node: TGNode,
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index 00b6d1f03a..5e7ddb2b05 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -12,29 +12,24 @@
 
 # pylama:ignore=W0611
 
-from framework.config import OS, NodeConfiguration
-from framework.exception import ConfigurationError
+from framework.config import NodeConfiguration
 from framework.logger import DTSLOG
 
-from .linux_session import LinuxSession
-from .os_session import InteractiveShellType, OSSession
-from .remote import (
-    CommandResult,
-    InteractiveRemoteSession,
-    InteractiveShell,
-    PythonShell,
-    RemoteSession,
-    SSHSession,
-    TestPmdDevice,
-    TestPmdShell,
-)
-
-
-def create_session(
+from .interactive_remote_session import InteractiveRemoteSession
+from .interactive_shell import InteractiveShell
+from .python_shell import PythonShell
+from .remote_session import CommandResult, RemoteSession
+from .ssh_session import SSHSession
+from .testpmd_shell import TestPmdShell
+
+
+def create_remote_session(
     node_config: NodeConfiguration, name: str, logger: DTSLOG
-) -> OSSession:
-    match node_config.os:
-        case OS.linux:
-            return LinuxSession(node_config, name, logger)
-        case _:
-            raise ConfigurationError(f"Unsupported OS {node_config.os}")
+) -> RemoteSession:
+    return SSHSession(node_config, name, logger)
+
+
+def create_interactive_session(
+    node_config: NodeConfiguration, logger: DTSLOG
+) -> InteractiveRemoteSession:
+    return InteractiveRemoteSession(node_config, logger)
diff --git a/dts/framework/remote_session/remote/interactive_remote_session.py b/dts/framework/remote_session/interactive_remote_session.py
similarity index 100%
rename from dts/framework/remote_session/remote/interactive_remote_session.py
rename to dts/framework/remote_session/interactive_remote_session.py
diff --git a/dts/framework/remote_session/remote/interactive_shell.py b/dts/framework/remote_session/interactive_shell.py
similarity index 100%
rename from dts/framework/remote_session/remote/interactive_shell.py
rename to dts/framework/remote_session/interactive_shell.py
diff --git a/dts/framework/remote_session/remote/python_shell.py b/dts/framework/remote_session/python_shell.py
similarity index 100%
rename from dts/framework/remote_session/remote/python_shell.py
rename to dts/framework/remote_session/python_shell.py
diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
deleted file mode 100644
index 06403691a5..0000000000
--- a/dts/framework/remote_session/remote/__init__.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2023 PANTHEON.tech s.r.o.
-# Copyright(c) 2023 University of New Hampshire
-
-# pylama:ignore=W0611
-
-from framework.config import NodeConfiguration
-from framework.logger import DTSLOG
-
-from .interactive_remote_session import InteractiveRemoteSession
-from .interactive_shell import InteractiveShell
-from .python_shell import PythonShell
-from .remote_session import CommandResult, RemoteSession
-from .ssh_session import SSHSession
-from .testpmd_shell import TestPmdDevice, TestPmdShell
-
-
-def create_remote_session(
-    node_config: NodeConfiguration, name: str, logger: DTSLOG
-) -> RemoteSession:
-    return SSHSession(node_config, name, logger)
-
-
-def create_interactive_session(
-    node_config: NodeConfiguration, logger: DTSLOG
-) -> InteractiveRemoteSession:
-    return InteractiveRemoteSession(node_config, logger)
diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote_session.py
similarity index 100%
rename from dts/framework/remote_session/remote/remote_session.py
rename to dts/framework/remote_session/remote_session.py
diff --git a/dts/framework/remote_session/remote/ssh_session.py b/dts/framework/remote_session/ssh_session.py
similarity index 100%
rename from dts/framework/remote_session/remote/ssh_session.py
rename to dts/framework/remote_session/ssh_session.py
diff --git a/dts/framework/remote_session/remote/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py
similarity index 100%
rename from dts/framework/remote_session/remote/testpmd_shell.py
rename to dts/framework/remote_session/testpmd_shell.py
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index cfa39d011b..bf86861efb 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -6,7 +6,7 @@
 import argparse
 import os
 from collections.abc import Callable, Iterable, Sequence
-from dataclasses import dataclass
+from dataclasses import dataclass, field
 from pathlib import Path
 from typing import Any, TypeVar
 
@@ -22,7 +22,7 @@ def __init__(
             option_strings: Sequence[str],
             dest: str,
             nargs: str | int | None = None,
-            const: str | None = None,
+            const: bool | None = None,
             default: str = None,
             type: Callable[[str], _T | argparse.FileType | None] = None,
             choices: Iterable[_T] | None = None,
@@ -32,6 +32,12 @@ def __init__(
         ) -> None:
             env_var_value = os.environ.get(env_var)
             default = env_var_value or default
+            if const is not None:
+                nargs = 0
+                default = const if env_var_value else default
+                type = None
+                choices = None
+                metavar = None
             super(_EnvironmentArgument, self).__init__(
                 option_strings,
                 dest,
@@ -52,22 +58,28 @@ def __call__(
             values: Any,
             option_string: str = None,
         ) -> None:
-            setattr(namespace, self.dest, values)
+            if self.const is not None:
+                setattr(namespace, self.dest, self.const)
+            else:
+                setattr(namespace, self.dest, values)
 
     return _EnvironmentArgument
 
 
-@dataclass(slots=True, frozen=True)
+@dataclass(slots=True)
 class _Settings:
-    config_file_path: str
-    output_dir: str
-    timeout: float
-    verbose: bool
-    skip_setup: bool
-    dpdk_tarball_path: Path
-    compile_timeout: float
-    test_cases: list
-    re_run: int
+    config_file_path: Path = Path(__file__).parent.parent.joinpath("conf.yaml")
+    output_dir: str = "output"
+    timeout: float = 15
+    verbose: bool = False
+    skip_setup: bool = False
+    dpdk_tarball_path: Path | str = "dpdk.tar.xz"
+    compile_timeout: float = 1200
+    test_cases: list[str] = field(default_factory=list)
+    re_run: int = 0
+
+
+SETTINGS: _Settings = _Settings()
 
 
 def _get_parser() -> argparse.ArgumentParser:
@@ -81,7 +93,8 @@ def _get_parser() -> argparse.ArgumentParser:
     parser.add_argument(
         "--config-file",
         action=_env_arg("DTS_CFG_FILE"),
-        default="conf.yaml",
+        default=SETTINGS.config_file_path,
+        type=Path,
         help="[DTS_CFG_FILE] configuration file that describes the test cases, SUTs "
         "and targets.",
     )
@@ -90,7 +103,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--output-dir",
         "--output",
         action=_env_arg("DTS_OUTPUT_DIR"),
-        default="output",
+        default=SETTINGS.output_dir,
         help="[DTS_OUTPUT_DIR] Output directory where dts logs and results are saved.",
     )
 
@@ -98,7 +111,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "-t",
         "--timeout",
         action=_env_arg("DTS_TIMEOUT"),
-        default=15,
+        default=SETTINGS.timeout,
         type=float,
         help="[DTS_TIMEOUT] The default timeout for all DTS operations except for "
         "compiling DPDK.",
@@ -108,8 +121,9 @@ def _get_parser() -> argparse.ArgumentParser:
         "-v",
         "--verbose",
         action=_env_arg("DTS_VERBOSE"),
-        default="N",
-        help="[DTS_VERBOSE] Set to 'Y' to enable verbose output, logging all messages "
+        default=SETTINGS.verbose,
+        const=True,
+        help="[DTS_VERBOSE] Specify to enable verbose output, logging all messages "
         "to the console.",
     )
 
@@ -117,8 +131,8 @@ def _get_parser() -> argparse.ArgumentParser:
         "-s",
         "--skip-setup",
         action=_env_arg("DTS_SKIP_SETUP"),
-        default="N",
-        help="[DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT and TG nodes.",
+        const=True,
+        help="[DTS_SKIP_SETUP] Specify to skip all setup steps on SUT and TG nodes.",
     )
 
     parser.add_argument(
@@ -126,7 +140,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--snapshot",
         "--git-ref",
         action=_env_arg("DTS_DPDK_TARBALL"),
-        default="dpdk.tar.xz",
+        default=SETTINGS.dpdk_tarball_path,
         type=Path,
         help="[DTS_DPDK_TARBALL] Path to DPDK source code tarball or a git commit ID, "
         "tag ID or tree ID to test. To test local changes, first commit them, "
@@ -136,7 +150,7 @@ def _get_parser() -> argparse.ArgumentParser:
     parser.add_argument(
         "--compile-timeout",
         action=_env_arg("DTS_COMPILE_TIMEOUT"),
-        default=1200,
+        default=SETTINGS.compile_timeout,
         type=float,
         help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
     )
@@ -153,7 +167,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--re-run",
         "--re_run",
         action=_env_arg("DTS_RERUN"),
-        default=0,
+        default=SETTINGS.re_run,
         type=int,
         help="[DTS_RERUN] Re-run each test case the specified amount of times "
         "if a test failure occurs",
@@ -162,23 +176,21 @@ def _get_parser() -> argparse.ArgumentParser:
     return parser
 
 
-def _get_settings() -> _Settings:
+def set_settings() -> None:
     parsed_args = _get_parser().parse_args()
-    return _Settings(
-        config_file_path=parsed_args.config_file,
-        output_dir=parsed_args.output_dir,
-        timeout=parsed_args.timeout,
-        verbose=(parsed_args.verbose == "Y"),
-        skip_setup=(parsed_args.skip_setup == "Y"),
-        dpdk_tarball_path=Path(
-            DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir)
-        )
+    global SETTINGS
+    SETTINGS.config_file_path = parsed_args.config_file
+    SETTINGS.output_dir = parsed_args.output_dir
+    SETTINGS.timeout = parsed_args.timeout
+    SETTINGS.verbose = parsed_args.verbose
+    SETTINGS.skip_setup = parsed_args.skip_setup
+    SETTINGS.dpdk_tarball_path = (
+        Path(DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir))
         if not os.path.exists(parsed_args.tarball)
-        else Path(parsed_args.tarball),
-        compile_timeout=parsed_args.compile_timeout,
-        test_cases=parsed_args.test_cases.split(",") if parsed_args.test_cases else [],
-        re_run=parsed_args.re_run,
+        else Path(parsed_args.tarball)
     )
-
-
-SETTINGS: _Settings = _get_settings()
+    SETTINGS.compile_timeout = parsed_args.compile_timeout
+    SETTINGS.test_cases = (
+        parsed_args.test_cases.split(",") if parsed_args.test_cases else []
+    )
+    SETTINGS.re_run = parsed_args.re_run
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index 3b890c0451..b381990d98 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -26,8 +26,7 @@
 from .logger import DTSLOG, getLogger
 from .settings import SETTINGS
 from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult
-from .testbed_model import SutNode, TGNode
-from .testbed_model.hw.port import Port, PortLink
+from .testbed_model import Port, PortLink, SutNode, TGNode
 from .utils import get_packet_summaries
 
 
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index 5cbb859e47..8ced05653b 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -9,15 +9,9 @@
 
 # pylama:ignore=W0611
 
-from .hw import (
-    LogicalCore,
-    LogicalCoreCount,
-    LogicalCoreCountFilter,
-    LogicalCoreList,
-    LogicalCoreListFilter,
-    VirtualDevice,
-    lcore_filter,
-)
+from .cpu import LogicalCoreCount, LogicalCoreCountFilter, LogicalCoreList
 from .node import Node
+from .port import Port, PortLink
 from .sut_node import SutNode
 from .tg_node import TGNode
+from .virtual_device import VirtualDevice
diff --git a/dts/framework/testbed_model/common.py b/dts/framework/testbed_model/common.py
new file mode 100644
index 0000000000..9222f57847
--- /dev/null
+++ b/dts/framework/testbed_model/common.py
@@ -0,0 +1,29 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+
+class MesonArgs(object):
+    """
+    Aggregate the arguments needed to build DPDK:
+    default_library: Default library type; Meson allows "shared", "static" and "both".
+               Defaults to None, in which case the argument won't be used.
+    Keyword arguments: The arguments found in meson_options.txt in the root DPDK directory.
+               Do not use -D with them, for example:
+               meson_args = MesonArgs(enable_kmods=True).
+    """
+
+    _default_library: str
+
+    def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
+        self._default_library = (
+            f"--default-library={default_library}" if default_library else ""
+        )
+        self._dpdk_args = " ".join(
+            (
+                f"-D{dpdk_arg_name}={dpdk_arg_value}"
+                for dpdk_arg_name, dpdk_arg_value in dpdk_args.items()
+            )
+        )
+
+    def __str__(self) -> str:
+        return " ".join(f"{self._default_library} {self._dpdk_args}".split())
diff --git a/dts/framework/testbed_model/hw/cpu.py b/dts/framework/testbed_model/cpu.py
similarity index 95%
rename from dts/framework/testbed_model/hw/cpu.py
rename to dts/framework/testbed_model/cpu.py
index d1918a12dc..8fe785dfe4 100644
--- a/dts/framework/testbed_model/hw/cpu.py
+++ b/dts/framework/testbed_model/cpu.py
@@ -272,3 +272,16 @@ def filter(self) -> list[LogicalCore]:
             )
 
         return filtered_lcores
+
+
+def lcore_filter(
+    core_list: list[LogicalCore],
+    filter_specifier: LogicalCoreCount | LogicalCoreList,
+    ascending: bool,
+) -> LogicalCoreFilter:
+    if isinstance(filter_specifier, LogicalCoreList):
+        return LogicalCoreListFilter(core_list, filter_specifier, ascending)
+    elif isinstance(filter_specifier, LogicalCoreCount):
+        return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
+    else:
+        raise ValueError(f"Unsupported filter {filter_specifier}")
diff --git a/dts/framework/testbed_model/hw/__init__.py b/dts/framework/testbed_model/hw/__init__.py
deleted file mode 100644
index 88ccac0b0e..0000000000
--- a/dts/framework/testbed_model/hw/__init__.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2023 PANTHEON.tech s.r.o.
-
-# pylama:ignore=W0611
-
-from .cpu import (
-    LogicalCore,
-    LogicalCoreCount,
-    LogicalCoreCountFilter,
-    LogicalCoreFilter,
-    LogicalCoreList,
-    LogicalCoreListFilter,
-)
-from .virtual_device import VirtualDevice
-
-
-def lcore_filter(
-    core_list: list[LogicalCore],
-    filter_specifier: LogicalCoreCount | LogicalCoreList,
-    ascending: bool,
-) -> LogicalCoreFilter:
-    if isinstance(filter_specifier, LogicalCoreList):
-        return LogicalCoreListFilter(core_list, filter_specifier, ascending)
-    elif isinstance(filter_specifier, LogicalCoreCount):
-        return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
-    else:
-        raise ValueError(f"Unsupported filter r{filter_specifier}")
diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/testbed_model/linux_session.py
similarity index 98%
rename from dts/framework/remote_session/linux_session.py
rename to dts/framework/testbed_model/linux_session.py
index a3f1a6bf3b..7b60b5353f 100644
--- a/dts/framework/remote_session/linux_session.py
+++ b/dts/framework/testbed_model/linux_session.py
@@ -9,10 +9,10 @@
 from typing_extensions import NotRequired
 
 from framework.exception import RemoteCommandExecutionError
-from framework.testbed_model import LogicalCore
-from framework.testbed_model.hw.port import Port
 from framework.utils import expand_range
 
+from .cpu import LogicalCore
+from .port import Port
 from .posix_session import PosixSession
 
 
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index fc01e0bf8e..23efa79c50 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -12,23 +12,26 @@
 from typing import Any, Callable, Type, Union
 
 from framework.config import (
+    OS,
     BuildTargetConfiguration,
     ExecutionConfiguration,
     NodeConfiguration,
 )
+from framework.exception import ConfigurationError
 from framework.logger import DTSLOG, getLogger
-from framework.remote_session import InteractiveShellType, OSSession, create_session
 from framework.settings import SETTINGS
 
-from .hw import (
+from .cpu import (
     LogicalCore,
     LogicalCoreCount,
     LogicalCoreList,
     LogicalCoreListFilter,
-    VirtualDevice,
     lcore_filter,
 )
-from .hw.port import Port
+from .linux_session import LinuxSession
+from .os_session import InteractiveShellType, OSSession
+from .port import Port
+from .virtual_device import VirtualDevice
 
 
 class Node(ABC):
@@ -69,6 +72,7 @@ def __init__(self, node_config: NodeConfiguration):
     def _init_ports(self) -> None:
         self.ports = [Port(self.name, port_config) for port_config in self.config.ports]
         self.main_session.update_ports(self.ports)
+
         for port in self.ports:
             self.configure_port_state(port)
 
@@ -249,3 +253,13 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
             return lambda *args: None
         else:
             return func
+
+
+def create_session(
+    node_config: NodeConfiguration, name: str, logger: DTSLOG
+) -> OSSession:
+    match node_config.os:
+        case OS.linux:
+            return LinuxSession(node_config, name, logger)
+        case _:
+            raise ConfigurationError(f"Unsupported OS {node_config.os}")
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/testbed_model/os_session.py
similarity index 97%
rename from dts/framework/remote_session/os_session.py
rename to dts/framework/testbed_model/os_session.py
index 8a709eac1c..19ba9a69d5 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/testbed_model/os_session.py
@@ -10,19 +10,19 @@
 
 from framework.config import Architecture, NodeConfiguration, NodeInfo
 from framework.logger import DTSLOG
-from framework.remote_session.remote import InteractiveShell
-from framework.settings import SETTINGS
-from framework.testbed_model import LogicalCore
-from framework.testbed_model.hw.port import Port
-from framework.utils import MesonArgs
-
-from .remote import (
+from framework.remote_session import (
     CommandResult,
     InteractiveRemoteSession,
+    InteractiveShell,
     RemoteSession,
     create_interactive_session,
     create_remote_session,
 )
+from framework.settings import SETTINGS
+
+from .common import MesonArgs
+from .cpu import LogicalCore
+from .port import Port
 
 InteractiveShellType = TypeVar("InteractiveShellType", bound=InteractiveShell)
 
diff --git a/dts/framework/testbed_model/hw/port.py b/dts/framework/testbed_model/port.py
similarity index 100%
rename from dts/framework/testbed_model/hw/port.py
rename to dts/framework/testbed_model/port.py
diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/testbed_model/posix_session.py
similarity index 99%
rename from dts/framework/remote_session/posix_session.py
rename to dts/framework/testbed_model/posix_session.py
index 5da0516e05..9a95baa353 100644
--- a/dts/framework/remote_session/posix_session.py
+++ b/dts/framework/testbed_model/posix_session.py
@@ -9,8 +9,8 @@
 from framework.config import Architecture, NodeInfo
 from framework.exception import DPDKBuildError, RemoteCommandExecutionError
 from framework.settings import SETTINGS
-from framework.utils import MesonArgs
 
+from .common import MesonArgs
 from .os_session import OSSession
 
 
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 202aebfd06..2b7e104dcd 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -15,12 +15,14 @@
     NodeInfo,
     SutNodeConfiguration,
 )
-from framework.remote_session import CommandResult, InteractiveShellType, OSSession
+from framework.remote_session import CommandResult
 from framework.settings import SETTINGS
-from framework.utils import MesonArgs
 
-from .hw import LogicalCoreCount, LogicalCoreList, VirtualDevice
+from .common import MesonArgs
+from .cpu import LogicalCoreCount, LogicalCoreList
 from .node import Node
+from .os_session import InteractiveShellType, OSSession
+from .virtual_device import VirtualDevice
 
 
 class EalParameters(object):
diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
index 27025cfa31..166eb8430e 100644
--- a/dts/framework/testbed_model/tg_node.py
+++ b/dts/framework/testbed_model/tg_node.py
@@ -16,16 +16,11 @@
 
 from scapy.packet import Packet  # type: ignore[import]
 
-from framework.config import (
-    ScapyTrafficGeneratorConfig,
-    TGNodeConfiguration,
-    TrafficGeneratorType,
-)
-from framework.exception import ConfigurationError
-
-from .capturing_traffic_generator import CapturingTrafficGenerator
-from .hw.port import Port
+from framework.config import TGNodeConfiguration
+
 from .node import Node
+from .port import Port
+from .traffic_generator import CapturingTrafficGenerator, create_traffic_generator
 
 
 class TGNode(Node):
@@ -80,20 +75,3 @@ def close(self) -> None:
         """Free all resources used by the node"""
         self.traffic_generator.close()
         super(TGNode, self).close()
-
-
-def create_traffic_generator(
-    tg_node: TGNode, traffic_generator_config: ScapyTrafficGeneratorConfig
-) -> CapturingTrafficGenerator:
-    """A factory function for creating traffic generator object from user config."""
-
-    from .scapy import ScapyTrafficGenerator
-
-    match traffic_generator_config.traffic_generator_type:
-        case TrafficGeneratorType.SCAPY:
-            return ScapyTrafficGenerator(tg_node, traffic_generator_config)
-        case _:
-            raise ConfigurationError(
-                "Unknown traffic generator: "
-                f"{traffic_generator_config.traffic_generator_type}"
-            )
diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
new file mode 100644
index 0000000000..11bfa1ee0f
--- /dev/null
+++ b/dts/framework/testbed_model/traffic_generator/__init__.py
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType
+from framework.exception import ConfigurationError
+from framework.testbed_model.node import Node
+
+from .capturing_traffic_generator import CapturingTrafficGenerator
+from .scapy import ScapyTrafficGenerator
+
+
+def create_traffic_generator(
+    tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig
+) -> CapturingTrafficGenerator:
+    """A factory function for creating traffic generator object from user config."""
+
+    match traffic_generator_config.traffic_generator_type:
+        case TrafficGeneratorType.SCAPY:
+            return ScapyTrafficGenerator(tg_node, traffic_generator_config)
+        case _:
+            raise ConfigurationError(
+                "Unknown traffic generator: "
+                f"{traffic_generator_config.traffic_generator_type}"
+            )
diff --git a/dts/framework/testbed_model/capturing_traffic_generator.py b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
similarity index 99%
rename from dts/framework/testbed_model/capturing_traffic_generator.py
rename to dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
index ab98987f8e..765378fb4a 100644
--- a/dts/framework/testbed_model/capturing_traffic_generator.py
+++ b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
@@ -16,9 +16,9 @@
 from scapy.packet import Packet  # type: ignore[import]
 
 from framework.settings import SETTINGS
+from framework.testbed_model.port import Port
 from framework.utils import get_packet_summaries
 
-from .hw.port import Port
 from .traffic_generator import TrafficGenerator
 
 
diff --git a/dts/framework/testbed_model/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py
similarity index 96%
rename from dts/framework/testbed_model/scapy.py
rename to dts/framework/testbed_model/traffic_generator/scapy.py
index af0d4dbb25..395d7e9bc0 100644
--- a/dts/framework/testbed_model/scapy.py
+++ b/dts/framework/testbed_model/traffic_generator/scapy.py
@@ -24,16 +24,15 @@
 from scapy.packet import Packet  # type: ignore[import]
 
 from framework.config import OS, ScapyTrafficGeneratorConfig
-from framework.logger import DTSLOG, getLogger
 from framework.remote_session import PythonShell
 from framework.settings import SETTINGS
+from framework.testbed_model.node import Node
+from framework.testbed_model.port import Port
 
 from .capturing_traffic_generator import (
     CapturingTrafficGenerator,
     _get_default_capture_name,
 )
-from .hw.port import Port
-from .tg_node import TGNode
 
 """
 ========= BEGIN RPC FUNCTIONS =========
@@ -191,15 +190,9 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator):
     session: PythonShell
     rpc_server_proxy: xmlrpc.client.ServerProxy
     _config: ScapyTrafficGeneratorConfig
-    _tg_node: TGNode
-    _logger: DTSLOG
-
-    def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig):
-        self._config = config
-        self._tg_node = tg_node
-        self._logger = getLogger(
-            f"{self._tg_node.name} {self._config.traffic_generator_type}"
-        )
+
+    def __init__(self, tg_node: Node, config: ScapyTrafficGeneratorConfig):
+        super().__init__(tg_node, config)
 
         assert (
             self._tg_node.config.os == OS.linux
diff --git a/dts/framework/testbed_model/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
similarity index 80%
rename from dts/framework/testbed_model/traffic_generator.py
rename to dts/framework/testbed_model/traffic_generator/traffic_generator.py
index 28c35d3ce4..ea7c3963da 100644
--- a/dts/framework/testbed_model/traffic_generator.py
+++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
@@ -12,11 +12,12 @@
 
 from scapy.packet import Packet  # type: ignore[import]
 
-from framework.logger import DTSLOG
+from framework.config import TrafficGeneratorConfig
+from framework.logger import DTSLOG, getLogger
+from framework.testbed_model.node import Node
+from framework.testbed_model.port import Port
 from framework.utils import get_packet_summaries
 
-from .hw.port import Port
-
 
 class TrafficGenerator(ABC):
     """The base traffic generator.
@@ -24,8 +25,17 @@ class TrafficGenerator(ABC):
     Defines the few basic methods that each traffic generator must implement.
     """
 
+    _config: TrafficGeneratorConfig
+    _tg_node: Node
     _logger: DTSLOG
 
+    def __init__(self, tg_node: Node, config: TrafficGeneratorConfig):
+        self._config = config
+        self._tg_node = tg_node
+        self._logger = getLogger(
+            f"{self._tg_node.name} {self._config.traffic_generator_type}"
+        )
+
     def send_packet(self, packet: Packet, port: Port) -> None:
         """Send a packet and block until it is fully sent.
 
diff --git a/dts/framework/testbed_model/hw/virtual_device.py b/dts/framework/testbed_model/virtual_device.py
similarity index 100%
rename from dts/framework/testbed_model/hw/virtual_device.py
rename to dts/framework/testbed_model/virtual_device.py
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index d27c2c5b5f..07e2d1c076 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -7,7 +7,6 @@
 import json
 import os
 import subprocess
-import sys
 from enum import Enum
 from pathlib import Path
 from subprocess import SubprocessError
@@ -16,6 +15,8 @@
 
 from .exception import ConfigurationError
 
+REGEX_FOR_PCI_ADDRESS = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/"
+
 
 class StrEnum(Enum):
     @staticmethod
@@ -28,25 +29,6 @@ def __str__(self) -> str:
         return self.name
 
 
-REGEX_FOR_PCI_ADDRESS = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/"
-
-
-def check_dts_python_version() -> None:
-    if sys.version_info.major < 3 or (
-        sys.version_info.major == 3 and sys.version_info.minor < 10
-    ):
-        print(
-            RED(
-                (
-                    "WARNING: DTS execution node's python version is lower than"
-                    "python 3.10, is deprecated and will not work in future releases."
-                )
-            ),
-            file=sys.stderr,
-        )
-        print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
-
-
 def expand_range(range_str: str) -> list[int]:
     """
     Process range string into a list of integers. There are two possible formats:
@@ -77,37 +59,6 @@ def get_packet_summaries(packets: list[Packet]):
     return f"Packet contents: \n{packet_summaries}"
 
 
-def RED(text: str) -> str:
-    return f"\u001B[31;1m{str(text)}\u001B[0m"
-
-
-class MesonArgs(object):
-    """
-    Aggregate the arguments needed to build DPDK:
-    default_library: Default library type, Meson allows "shared", "static" and "both".
-               Defaults to None, in which case the argument won't be used.
-    Keyword arguments: The arguments found in meson_options.txt in root DPDK directory.
-               Do not use -D with them, for example:
-               meson_args = MesonArgs(enable_kmods=True).
-    """
-
-    _default_library: str
-
-    def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
-        self._default_library = (
-            f"--default-library={default_library}" if default_library else ""
-        )
-        self._dpdk_args = " ".join(
-            (
-                f"-D{dpdk_arg_name}={dpdk_arg_value}"
-                for dpdk_arg_name, dpdk_arg_value in dpdk_args.items()
-            )
-        )
-
-    def __str__(self) -> str:
-        return " ".join(f"{self._default_library} {self._dpdk_args}".split())
-
-
 class _TarCompressionFormat(StrEnum):
     """Compression formats that tar can use.
 
diff --git a/dts/main.py b/dts/main.py
index 43311fa847..060ff1b19a 100755
--- a/dts/main.py
+++ b/dts/main.py
@@ -10,10 +10,11 @@
 
 import logging
 
-from framework import dts
+from framework import dts, settings
 
 
 def main() -> None:
+    settings.set_settings()
     dts.run_all()
 
 
-- 
2.34.1
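
A note on the settings.py changes above: with "const" set, the option
becomes a value-less flag, so DTS_VERBOSE and DTS_SKIP_SETUP only need
to be present in the environment instead of being set to "Y". Below is
a minimal standalone sketch of the same argparse pattern; the names and
values are illustrative only, not taken from the patch:

    import argparse
    import os

    parser = argparse.ArgumentParser()
    # If the environment variable is present, the default is already
    # True; passing the flag on the command line stores the const.
    parser.add_argument(
        "--verbose",
        action="store_const",
        const=True,
        default="DTS_VERBOSE" in os.environ,
    )
    print(parser.parse_args(["--verbose"]).verbose)  # True
    print(parser.parse_args([]).verbose)  # True only if DTS_VERBOSE is set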


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [RFC PATCH v4 2/4] dts: add doc generation dependencies
  2023-08-31 10:04     ` [RFC PATCH v4 " Juraj Linkeš
  2023-08-31 10:04       ` [RFC PATCH v4 1/4] dts: code adjustments for sphinx Juraj Linkeš
@ 2023-08-31 10:04       ` Juraj Linkeš
  2023-10-27 15:27         ` Yoan Picchi
  2023-08-31 10:04       ` [RFC PATCH v4 3/4] dts: add doc generation Juraj Linkeš
                         ` (2 subsequent siblings)
  4 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-08-31 10:04 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	jspewock, probb
  Cc: dev, Juraj Linkeš

Sphinx imports every Python module when generating documentation from
docstrings, meaning all DTS dependencies, including the Python version,
must be satisfied.
By adding Sphinx to the DTS dependencies we make sure that the proper
Python version and dependencies are used when Sphinx is executed.
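
A minimal sketch of why this matters (assuming it is run from the dts
directory inside a "poetry shell", using two modules from this series):

    import importlib
    import sys

    # Sphinx's autodoc imports each documented module, so a missing
    # dependency or an unsupported Python version fails here, just as
    # it would when running DTS itself.
    assert sys.version_info >= (3, 10)
    for name in ("framework.settings", "framework.testbed_model.node"):
        importlib.import_module(name)  # ImportError if a dependency is absent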

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/poetry.lock    | 447 ++++++++++++++++++++++++++++++++++++++++++++-
 dts/pyproject.toml |   7 +
 2 files changed, 453 insertions(+), 1 deletion(-)

diff --git a/dts/poetry.lock b/dts/poetry.lock
index f7b3b6d602..91afe5231a 100644
--- a/dts/poetry.lock
+++ b/dts/poetry.lock
@@ -1,5 +1,16 @@
 # This file is automatically @generated by Poetry 1.5.1 and should not be changed by hand.
 
+[[package]]
+name = "alabaster"
+version = "0.7.13"
+description = "A configurable sidebar-enabled Sphinx theme"
+optional = false
+python-versions = ">=3.6"
+files = [
+    {file = "alabaster-0.7.13-py3-none-any.whl", hash = "sha256:1ee19aca801bbabb5ba3f5f258e4422dfa86f82f3e9cefb0859b283cdd7f62a3"},
+    {file = "alabaster-0.7.13.tar.gz", hash = "sha256:a27a4a084d5e690e16e01e03ad2b2e552c61a65469419b907243193de1a84ae2"},
+]
+
 [[package]]
 name = "attrs"
 version = "23.1.0"
@@ -18,6 +29,17 @@ docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-
 tests = ["attrs[tests-no-zope]", "zope-interface"]
 tests-no-zope = ["cloudpickle", "hypothesis", "mypy (>=1.1.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
 
+[[package]]
+name = "babel"
+version = "2.12.1"
+description = "Internationalization utilities"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "Babel-2.12.1-py3-none-any.whl", hash = "sha256:b4246fb7677d3b98f501a39d43396d3cafdc8eadb045f4a31be01863f655c610"},
+    {file = "Babel-2.12.1.tar.gz", hash = "sha256:cc2d99999cd01d44420ae725a21c9e3711b3aadc7976d6147f622d8581963455"},
+]
+
 [[package]]
 name = "bcrypt"
 version = "4.0.1"
@@ -86,6 +108,17 @@ d = ["aiohttp (>=3.7.4)"]
 jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
 uvloop = ["uvloop (>=0.15.2)"]
 
+[[package]]
+name = "certifi"
+version = "2023.7.22"
+description = "Python package for providing Mozilla's CA Bundle."
+optional = false
+python-versions = ">=3.6"
+files = [
+    {file = "certifi-2023.7.22-py3-none-any.whl", hash = "sha256:92d6037539857d8206b8f6ae472e8b77db8058fec5937a1ef3f54304089edbb9"},
+    {file = "certifi-2023.7.22.tar.gz", hash = "sha256:539cc1d13202e33ca466e88b2807e29f4c13049d6d87031a3c110744495cb082"},
+]
+
 [[package]]
 name = "cffi"
 version = "1.15.1"
@@ -162,6 +195,90 @@ files = [
 [package.dependencies]
 pycparser = "*"
 
+[[package]]
+name = "charset-normalizer"
+version = "3.2.0"
+description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
+optional = false
+python-versions = ">=3.7.0"
+files = [
+    {file = "charset-normalizer-3.2.0.tar.gz", hash = "sha256:3bb3d25a8e6c0aedd251753a79ae98a093c7e7b471faa3aa9a93a81431987ace"},
+    {file = "charset_normalizer-3.2.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:0b87549028f680ca955556e3bd57013ab47474c3124dc069faa0b6545b6c9710"},
+    {file = "charset_normalizer-3.2.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:7c70087bfee18a42b4040bb9ec1ca15a08242cf5867c58726530bdf3945672ed"},
+    {file = "charset_normalizer-3.2.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:a103b3a7069b62f5d4890ae1b8f0597618f628b286b03d4bc9195230b154bfa9"},
+    {file = "charset_normalizer-3.2.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:94aea8eff76ee6d1cdacb07dd2123a68283cb5569e0250feab1240058f53b623"},
+    {file = "charset_normalizer-3.2.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:db901e2ac34c931d73054d9797383d0f8009991e723dab15109740a63e7f902a"},
+    {file = "charset_normalizer-3.2.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b0dac0ff919ba34d4df1b6131f59ce95b08b9065233446be7e459f95554c0dc8"},
+    {file = "charset_normalizer-3.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:193cbc708ea3aca45e7221ae58f0fd63f933753a9bfb498a3b474878f12caaad"},
+    {file = "charset_normalizer-3.2.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:09393e1b2a9461950b1c9a45d5fd251dc7c6f228acab64da1c9c0165d9c7765c"},
+    {file = "charset_normalizer-3.2.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:baacc6aee0b2ef6f3d308e197b5d7a81c0e70b06beae1f1fcacffdbd124fe0e3"},
+    {file = "charset_normalizer-3.2.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:bf420121d4c8dce6b889f0e8e4ec0ca34b7f40186203f06a946fa0276ba54029"},
+    {file = "charset_normalizer-3.2.0-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:c04a46716adde8d927adb9457bbe39cf473e1e2c2f5d0a16ceb837e5d841ad4f"},
+    {file = "charset_normalizer-3.2.0-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:aaf63899c94de41fe3cf934601b0f7ccb6b428c6e4eeb80da72c58eab077b19a"},
+    {file = "charset_normalizer-3.2.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:d62e51710986674142526ab9f78663ca2b0726066ae26b78b22e0f5e571238dd"},
+    {file = "charset_normalizer-3.2.0-cp310-cp310-win32.whl", hash = "sha256:04e57ab9fbf9607b77f7d057974694b4f6b142da9ed4a199859d9d4d5c63fe96"},
+    {file = "charset_normalizer-3.2.0-cp310-cp310-win_amd64.whl", hash = "sha256:48021783bdf96e3d6de03a6e39a1171ed5bd7e8bb93fc84cc649d11490f87cea"},
+    {file = "charset_normalizer-3.2.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:4957669ef390f0e6719db3613ab3a7631e68424604a7b448f079bee145da6e09"},
+    {file = "charset_normalizer-3.2.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:46fb8c61d794b78ec7134a715a3e564aafc8f6b5e338417cb19fe9f57a5a9bf2"},
+    {file = "charset_normalizer-3.2.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:f779d3ad205f108d14e99bb3859aa7dd8e9c68874617c72354d7ecaec2a054ac"},
+    {file = "charset_normalizer-3.2.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f25c229a6ba38a35ae6e25ca1264621cc25d4d38dca2942a7fce0b67a4efe918"},
+    {file = "charset_normalizer-3.2.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:2efb1bd13885392adfda4614c33d3b68dee4921fd0ac1d3988f8cbb7d589e72a"},
+    {file = "charset_normalizer-3.2.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1f30b48dd7fa1474554b0b0f3fdfdd4c13b5c737a3c6284d3cdc424ec0ffff3a"},
+    {file = "charset_normalizer-3.2.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:246de67b99b6851627d945db38147d1b209a899311b1305dd84916f2b88526c6"},
+    {file = "charset_normalizer-3.2.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9bd9b3b31adcb054116447ea22caa61a285d92e94d710aa5ec97992ff5eb7cf3"},
+    {file = "charset_normalizer-3.2.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:8c2f5e83493748286002f9369f3e6607c565a6a90425a3a1fef5ae32a36d749d"},
+    {file = "charset_normalizer-3.2.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:3170c9399da12c9dc66366e9d14da8bf7147e1e9d9ea566067bbce7bb74bd9c2"},
+    {file = "charset_normalizer-3.2.0-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:7a4826ad2bd6b07ca615c74ab91f32f6c96d08f6fcc3902ceeedaec8cdc3bcd6"},
+    {file = "charset_normalizer-3.2.0-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:3b1613dd5aee995ec6d4c69f00378bbd07614702a315a2cf6c1d21461fe17c23"},
+    {file = "charset_normalizer-3.2.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:9e608aafdb55eb9f255034709e20d5a83b6d60c054df0802fa9c9883d0a937aa"},
+    {file = "charset_normalizer-3.2.0-cp311-cp311-win32.whl", hash = "sha256:f2a1d0fd4242bd8643ce6f98927cf9c04540af6efa92323e9d3124f57727bfc1"},
+    {file = "charset_normalizer-3.2.0-cp311-cp311-win_amd64.whl", hash = "sha256:681eb3d7e02e3c3655d1b16059fbfb605ac464c834a0c629048a30fad2b27489"},
+    {file = "charset_normalizer-3.2.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:c57921cda3a80d0f2b8aec7e25c8aa14479ea92b5b51b6876d975d925a2ea346"},
+    {file = "charset_normalizer-3.2.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41b25eaa7d15909cf3ac4c96088c1f266a9a93ec44f87f1d13d4a0e86c81b982"},
+    {file = "charset_normalizer-3.2.0-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f058f6963fd82eb143c692cecdc89e075fa0828db2e5b291070485390b2f1c9c"},
+    {file = "charset_normalizer-3.2.0-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a7647ebdfb9682b7bb97e2a5e7cb6ae735b1c25008a70b906aecca294ee96cf4"},
+    {file = "charset_normalizer-3.2.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:eef9df1eefada2c09a5e7a40991b9fc6ac6ef20b1372abd48d2794a316dc0449"},
+    {file = "charset_normalizer-3.2.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e03b8895a6990c9ab2cdcd0f2fe44088ca1c65ae592b8f795c3294af00a461c3"},
+    {file = "charset_normalizer-3.2.0-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:ee4006268ed33370957f55bf2e6f4d263eaf4dc3cfc473d1d90baff6ed36ce4a"},
+    {file = "charset_normalizer-3.2.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:c4983bf937209c57240cff65906b18bb35e64ae872da6a0db937d7b4af845dd7"},
+    {file = "charset_normalizer-3.2.0-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:3bb7fda7260735efe66d5107fb7e6af6a7c04c7fce9b2514e04b7a74b06bf5dd"},
+    {file = "charset_normalizer-3.2.0-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:72814c01533f51d68702802d74f77ea026b5ec52793c791e2da806a3844a46c3"},
+    {file = "charset_normalizer-3.2.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:70c610f6cbe4b9fce272c407dd9d07e33e6bf7b4aa1b7ffb6f6ded8e634e3592"},
+    {file = "charset_normalizer-3.2.0-cp37-cp37m-win32.whl", hash = "sha256:a401b4598e5d3f4a9a811f3daf42ee2291790c7f9d74b18d75d6e21dda98a1a1"},
+    {file = "charset_normalizer-3.2.0-cp37-cp37m-win_amd64.whl", hash = "sha256:c0b21078a4b56965e2b12f247467b234734491897e99c1d51cee628da9786959"},
+    {file = "charset_normalizer-3.2.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:95eb302ff792e12aba9a8b8f8474ab229a83c103d74a750ec0bd1c1eea32e669"},
+    {file = "charset_normalizer-3.2.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:1a100c6d595a7f316f1b6f01d20815d916e75ff98c27a01ae817439ea7726329"},
+    {file = "charset_normalizer-3.2.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:6339d047dab2780cc6220f46306628e04d9750f02f983ddb37439ca47ced7149"},
+    {file = "charset_normalizer-3.2.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e4b749b9cc6ee664a3300bb3a273c1ca8068c46be705b6c31cf5d276f8628a94"},
+    {file = "charset_normalizer-3.2.0-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a38856a971c602f98472050165cea2cdc97709240373041b69030be15047691f"},
+    {file = "charset_normalizer-3.2.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f87f746ee241d30d6ed93969de31e5ffd09a2961a051e60ae6bddde9ec3583aa"},
+    {file = "charset_normalizer-3.2.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:89f1b185a01fe560bc8ae5f619e924407efca2191b56ce749ec84982fc59a32a"},
+    {file = "charset_normalizer-3.2.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e1c8a2f4c69e08e89632defbfabec2feb8a8d99edc9f89ce33c4b9e36ab63037"},
+    {file = "charset_normalizer-3.2.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:2f4ac36d8e2b4cc1aa71df3dd84ff8efbe3bfb97ac41242fbcfc053c67434f46"},
+    {file = "charset_normalizer-3.2.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:a386ebe437176aab38c041de1260cd3ea459c6ce5263594399880bbc398225b2"},
+    {file = "charset_normalizer-3.2.0-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:ccd16eb18a849fd8dcb23e23380e2f0a354e8daa0c984b8a732d9cfaba3a776d"},
+    {file = "charset_normalizer-3.2.0-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:e6a5bf2cba5ae1bb80b154ed68a3cfa2fa00fde979a7f50d6598d3e17d9ac20c"},
+    {file = "charset_normalizer-3.2.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:45de3f87179c1823e6d9e32156fb14c1927fcc9aba21433f088fdfb555b77c10"},
+    {file = "charset_normalizer-3.2.0-cp38-cp38-win32.whl", hash = "sha256:1000fba1057b92a65daec275aec30586c3de2401ccdcd41f8a5c1e2c87078706"},
+    {file = "charset_normalizer-3.2.0-cp38-cp38-win_amd64.whl", hash = "sha256:8b2c760cfc7042b27ebdb4a43a4453bd829a5742503599144d54a032c5dc7e9e"},
+    {file = "charset_normalizer-3.2.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:855eafa5d5a2034b4621c74925d89c5efef61418570e5ef9b37717d9c796419c"},
+    {file = "charset_normalizer-3.2.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:203f0c8871d5a7987be20c72442488a0b8cfd0f43b7973771640fc593f56321f"},
+    {file = "charset_normalizer-3.2.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e857a2232ba53ae940d3456f7533ce6ca98b81917d47adc3c7fd55dad8fab858"},
+    {file = "charset_normalizer-3.2.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5e86d77b090dbddbe78867a0275cb4df08ea195e660f1f7f13435a4649e954e5"},
+    {file = "charset_normalizer-3.2.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c4fb39a81950ec280984b3a44f5bd12819953dc5fa3a7e6fa7a80db5ee853952"},
+    {file = "charset_normalizer-3.2.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2dee8e57f052ef5353cf608e0b4c871aee320dd1b87d351c28764fc0ca55f9f4"},
+    {file = "charset_normalizer-3.2.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8700f06d0ce6f128de3ccdbc1acaea1ee264d2caa9ca05daaf492fde7c2a7200"},
+    {file = "charset_normalizer-3.2.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1920d4ff15ce893210c1f0c0e9d19bfbecb7983c76b33f046c13a8ffbd570252"},
+    {file = "charset_normalizer-3.2.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:c1c76a1743432b4b60ab3358c937a3fe1341c828ae6194108a94c69028247f22"},
+    {file = "charset_normalizer-3.2.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:f7560358a6811e52e9c4d142d497f1a6e10103d3a6881f18d04dbce3729c0e2c"},
+    {file = "charset_normalizer-3.2.0-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:c8063cf17b19661471ecbdb3df1c84f24ad2e389e326ccaf89e3fb2484d8dd7e"},
+    {file = "charset_normalizer-3.2.0-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:cd6dbe0238f7743d0efe563ab46294f54f9bc8f4b9bcf57c3c666cc5bc9d1299"},
+    {file = "charset_normalizer-3.2.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:1249cbbf3d3b04902ff081ffbb33ce3377fa6e4c7356f759f3cd076cc138d020"},
+    {file = "charset_normalizer-3.2.0-cp39-cp39-win32.whl", hash = "sha256:6c409c0deba34f147f77efaa67b8e4bb83d2f11c8806405f76397ae5b8c0d1c9"},
+    {file = "charset_normalizer-3.2.0-cp39-cp39-win_amd64.whl", hash = "sha256:7095f6fbfaa55defb6b733cfeb14efaae7a29f0b59d8cf213be4e7ca0b857b80"},
+    {file = "charset_normalizer-3.2.0-py3-none-any.whl", hash = "sha256:8e098148dd37b4ce3baca71fb394c81dc5d9c7728c95df695d2dca218edf40e6"},
+]
+
 [[package]]
 name = "click"
 version = "8.1.6"
@@ -232,6 +349,17 @@ ssh = ["bcrypt (>=3.1.5)"]
 test = ["pretend", "pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-xdist"]
 test-randomorder = ["pytest-randomly"]
 
+[[package]]
+name = "docutils"
+version = "0.18.1"
+description = "Docutils -- Python Documentation Utilities"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
+files = [
+    {file = "docutils-0.18.1-py2.py3-none-any.whl", hash = "sha256:23010f129180089fbcd3bc08cfefccb3b890b0050e1ca00c867036e9d161b98c"},
+    {file = "docutils-0.18.1.tar.gz", hash = "sha256:679987caf361a7539d76e584cbeddc311e3aee937877c87346f31debc63e9d06"},
+]
+
 [[package]]
 name = "fabric"
 version = "2.7.1"
@@ -252,6 +380,28 @@ pathlib2 = "*"
 pytest = ["mock (>=2.0.0,<3.0)", "pytest (>=3.2.5,<4.0)"]
 testing = ["mock (>=2.0.0,<3.0)"]
 
+[[package]]
+name = "idna"
+version = "3.4"
+description = "Internationalized Domain Names in Applications (IDNA)"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "idna-3.4-py3-none-any.whl", hash = "sha256:90b77e79eaa3eba6de819a0c442c0b4ceefc341a7a2ab77d7562bf49f425c5c2"},
+    {file = "idna-3.4.tar.gz", hash = "sha256:814f528e8dead7d329833b91c5faa87d60bf71824cd12a7530b5526063d02cb4"},
+]
+
+[[package]]
+name = "imagesize"
+version = "1.4.1"
+description = "Getting image size from png/jpeg/jpeg2000/gif file"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+files = [
+    {file = "imagesize-1.4.1-py2.py3-none-any.whl", hash = "sha256:0d8d18d08f840c19d0ee7ca1fd82490fdc3729b7ac93f49870406ddde8ef8d8b"},
+    {file = "imagesize-1.4.1.tar.gz", hash = "sha256:69150444affb9cb0d5cc5a92b3676f0b2fb7cd9ae39e947a5e11a36b4497cd4a"},
+]
+
 [[package]]
 name = "invoke"
 version = "1.7.3"
@@ -280,6 +430,23 @@ pipfile-deprecated-finder = ["pip-shims (>=0.5.2)", "pipreqs", "requirementslib"
 plugins = ["setuptools"]
 requirements-deprecated-finder = ["pip-api", "pipreqs"]
 
+[[package]]
+name = "jinja2"
+version = "3.1.2"
+description = "A very fast and expressive template engine."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"},
+    {file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"},
+]
+
+[package.dependencies]
+MarkupSafe = ">=2.0"
+
+[package.extras]
+i18n = ["Babel (>=2.7)"]
+
 [[package]]
 name = "jsonpatch"
 version = "1.33"
@@ -340,6 +507,65 @@ files = [
 [package.dependencies]
 referencing = ">=0.28.0"
 
+[[package]]
+name = "markupsafe"
+version = "2.1.3"
+description = "Safely add untrusted strings to HTML/XML markup."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "MarkupSafe-2.1.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:cd0f502fe016460680cd20aaa5a76d241d6f35a1c3350c474bac1273803893fa"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e09031c87a1e51556fdcb46e5bd4f59dfb743061cf93c4d6831bf894f125eb57"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:68e78619a61ecf91e76aa3e6e8e33fc4894a2bebe93410754bd28fce0a8a4f9f"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:65c1a9bcdadc6c28eecee2c119465aebff8f7a584dd719facdd9e825ec61ab52"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:525808b8019e36eb524b8c68acdd63a37e75714eac50e988180b169d64480a00"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:962f82a3086483f5e5f64dbad880d31038b698494799b097bc59c2edf392fce6"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:aa7bd130efab1c280bed0f45501b7c8795f9fdbeb02e965371bbef3523627779"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:c9c804664ebe8f83a211cace637506669e7890fec1b4195b505c214e50dd4eb7"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-win32.whl", hash = "sha256:10bbfe99883db80bdbaff2dcf681dfc6533a614f700da1287707e8a5d78a8431"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-win_amd64.whl", hash = "sha256:1577735524cdad32f9f694208aa75e422adba74f1baee7551620e43a3141f559"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:ad9e82fb8f09ade1c3e1b996a6337afac2b8b9e365f926f5a61aacc71adc5b3c"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3c0fae6c3be832a0a0473ac912810b2877c8cb9d76ca48de1ed31e1c68386575"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b076b6226fb84157e3f7c971a47ff3a679d837cf338547532ab866c57930dbee"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bfce63a9e7834b12b87c64d6b155fdd9b3b96191b6bd334bf37db7ff1fe457f2"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:338ae27d6b8745585f87218a3f23f1512dbf52c26c28e322dbe54bcede54ccb9"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e4dd52d80b8c83fdce44e12478ad2e85c64ea965e75d66dbeafb0a3e77308fcc"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:df0be2b576a7abbf737b1575f048c23fb1d769f267ec4358296f31c2479db8f9"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:5bbe06f8eeafd38e5d0a4894ffec89378b6c6a625ff57e3028921f8ff59318ac"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-win32.whl", hash = "sha256:dd15ff04ffd7e05ffcb7fe79f1b98041b8ea30ae9234aed2a9168b5797c3effb"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl", hash = "sha256:134da1eca9ec0ae528110ccc9e48041e0828d79f24121a1a146161103c76e686"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:8e254ae696c88d98da6555f5ace2279cf7cd5b3f52be2b5cf97feafe883b58d2"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cb0932dc158471523c9637e807d9bfb93e06a95cbf010f1a38b98623b929ef2b"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9402b03f1a1b4dc4c19845e5c749e3ab82d5078d16a2a4c2cd2df62d57bb0707"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ca379055a47383d02a5400cb0d110cef0a776fc644cda797db0c5696cfd7e18e"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:b7ff0f54cb4ff66dd38bebd335a38e2c22c41a8ee45aa608efc890ac3e3931bc"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:c011a4149cfbcf9f03994ec2edffcb8b1dc2d2aede7ca243746df97a5d41ce48"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:56d9f2ecac662ca1611d183feb03a3fa4406469dafe241673d521dd5ae92a155"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-win32.whl", hash = "sha256:8758846a7e80910096950b67071243da3e5a20ed2546e6392603c096778d48e0"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-win_amd64.whl", hash = "sha256:787003c0ddb00500e49a10f2844fac87aa6ce977b90b0feaaf9de23c22508b24"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:2ef12179d3a291be237280175b542c07a36e7f60718296278d8593d21ca937d4"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:2c1b19b3aaacc6e57b7e25710ff571c24d6c3613a45e905b1fde04d691b98ee0"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8afafd99945ead6e075b973fefa56379c5b5c53fd8937dad92c662da5d8fd5ee"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8c41976a29d078bb235fea9b2ecd3da465df42a562910f9022f1a03107bd02be"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d080e0a5eb2529460b30190fcfcc4199bd7f827663f858a226a81bc27beaa97e"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:69c0f17e9f5a7afdf2cc9fb2d1ce6aabdb3bafb7f38017c0b77862bcec2bbad8"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:504b320cd4b7eff6f968eddf81127112db685e81f7e36e75f9f84f0df46041c3"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:42de32b22b6b804f42c5d98be4f7e5e977ecdd9ee9b660fda1a3edf03b11792d"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-win32.whl", hash = "sha256:ceb01949af7121f9fc39f7d27f91be8546f3fb112c608bc4029aef0bab86a2a5"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-win_amd64.whl", hash = "sha256:1b40069d487e7edb2676d3fbdb2b0829ffa2cd63a2ec26c4938b2d34391b4ecc"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8023faf4e01efadfa183e863fefde0046de576c6f14659e8782065bcece22198"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6b2b56950d93e41f33b4223ead100ea0fe11f8e6ee5f641eb753ce4b77a7042b"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9dcdfd0eaf283af041973bff14a2e143b8bd64e069f4c383416ecd79a81aab58"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:05fb21170423db021895e1ea1e1f3ab3adb85d1c2333cbc2310f2a26bc77272e"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:282c2cb35b5b673bbcadb33a585408104df04f14b2d9b01d4c345a3b92861c2c"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:ab4a0df41e7c16a1392727727e7998a467472d0ad65f3ad5e6e765015df08636"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:7ef3cb2ebbf91e330e3bb937efada0edd9003683db6b57bb108c4001f37a02ea"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:0a4e4a1aff6c7ac4cd55792abf96c915634c2b97e3cc1c7129578aa68ebd754e"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-win32.whl", hash = "sha256:fec21693218efe39aa7f8599346e90c705afa52c5b31ae019b2e57e8f6542bb2"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-win_amd64.whl", hash = "sha256:3fd4abcb888d15a94f32b75d8fd18ee162ca0c064f35b11134be77050296d6ba"},
+    {file = "MarkupSafe-2.1.3.tar.gz", hash = "sha256:af598ed32d6ae86f1b747b82783958b1a4ab8f617b06fe68795c7f026abbdcad"},
+]
+
 [[package]]
 name = "mccabe"
 version = "0.7.0"
@@ -404,6 +630,17 @@ files = [
     {file = "mypy_extensions-1.0.0.tar.gz", hash = "sha256:75dbf8955dc00442a438fc4d0666508a9a97b6bd41aa2f0ffe9d2f2725af0782"},
 ]
 
+[[package]]
+name = "packaging"
+version = "23.1"
+description = "Core utilities for Python packages"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "packaging-23.1-py3-none-any.whl", hash = "sha256:994793af429502c4ea2ebf6bf664629d07c1a9fe974af92966e4b8d2df7edc61"},
+    {file = "packaging-23.1.tar.gz", hash = "sha256:a392980d2b6cffa644431898be54b0045151319d1e7ec34f0cfed48767dd334f"},
+]
+
 [[package]]
 name = "paramiko"
 version = "3.2.0"
@@ -515,6 +752,20 @@ files = [
     {file = "pyflakes-2.5.0.tar.gz", hash = "sha256:491feb020dca48ccc562a8c0cbe8df07ee13078df59813b83959cbdada312ea3"},
 ]
 
+[[package]]
+name = "pygments"
+version = "2.15.1"
+description = "Pygments is a syntax highlighting package written in Python."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "Pygments-2.15.1-py3-none-any.whl", hash = "sha256:db2db3deb4b4179f399a09054b023b6a586b76499d36965813c71aa8ed7b5fd1"},
+    {file = "Pygments-2.15.1.tar.gz", hash = "sha256:8ace4d3c1dd481894b2005f560ead0f9f19ee64fe983366be1a21e171d12775c"},
+]
+
+[package.extras]
+plugins = ["importlib-metadata"]
+
 [[package]]
 name = "pylama"
 version = "8.4.1"
@@ -632,6 +883,27 @@ files = [
 attrs = ">=22.2.0"
 rpds-py = ">=0.7.0"
 
+[[package]]
+name = "requests"
+version = "2.31.0"
+description = "Python HTTP for Humans."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "requests-2.31.0-py3-none-any.whl", hash = "sha256:58cd2187c01e70e6e26505bca751777aa9f2ee0b7f4300988b709f44e013003f"},
+    {file = "requests-2.31.0.tar.gz", hash = "sha256:942c5a758f98d790eaed1a29cb6eefc7ffb0d1cf7af05c3d2791656dbd6ad1e1"},
+]
+
+[package.dependencies]
+certifi = ">=2017.4.17"
+charset-normalizer = ">=2,<4"
+idna = ">=2.5,<4"
+urllib3 = ">=1.21.1,<3"
+
+[package.extras]
+socks = ["PySocks (>=1.5.6,!=1.5.7)"]
+use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"]
+
 [[package]]
 name = "rpds-py"
 version = "0.9.2"
@@ -775,6 +1047,162 @@ files = [
     {file = "snowballstemmer-2.2.0.tar.gz", hash = "sha256:09b16deb8547d3412ad7b590689584cd0fe25ec8db3be37788be3810cbf19cb1"},
 ]
 
+[[package]]
+name = "sphinx"
+version = "6.2.1"
+description = "Python documentation generator"
+optional = false
+python-versions = ">=3.8"
+files = [
+    {file = "Sphinx-6.2.1.tar.gz", hash = "sha256:6d56a34697bb749ffa0152feafc4b19836c755d90a7c59b72bc7dfd371b9cc6b"},
+    {file = "sphinx-6.2.1-py3-none-any.whl", hash = "sha256:97787ff1fa3256a3eef9eda523a63dbf299f7b47e053cfcf684a1c2a8380c912"},
+]
+
+[package.dependencies]
+alabaster = ">=0.7,<0.8"
+babel = ">=2.9"
+colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""}
+docutils = ">=0.18.1,<0.20"
+imagesize = ">=1.3"
+Jinja2 = ">=3.0"
+packaging = ">=21.0"
+Pygments = ">=2.13"
+requests = ">=2.25.0"
+snowballstemmer = ">=2.0"
+sphinxcontrib-applehelp = "*"
+sphinxcontrib-devhelp = "*"
+sphinxcontrib-htmlhelp = ">=2.0.0"
+sphinxcontrib-jsmath = "*"
+sphinxcontrib-qthelp = "*"
+sphinxcontrib-serializinghtml = ">=1.1.5"
+
+[package.extras]
+docs = ["sphinxcontrib-websupport"]
+lint = ["docutils-stubs", "flake8 (>=3.5.0)", "flake8-simplify", "isort", "mypy (>=0.990)", "ruff", "sphinx-lint", "types-requests"]
+test = ["cython", "filelock", "html5lib", "pytest (>=4.6)"]
+
+[[package]]
+name = "sphinx-rtd-theme"
+version = "1.2.2"
+description = "Read the Docs theme for Sphinx"
+optional = false
+python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7"
+files = [
+    {file = "sphinx_rtd_theme-1.2.2-py2.py3-none-any.whl", hash = "sha256:6a7e7d8af34eb8fc57d52a09c6b6b9c46ff44aea5951bc831eeb9245378f3689"},
+    {file = "sphinx_rtd_theme-1.2.2.tar.gz", hash = "sha256:01c5c5a72e2d025bd23d1f06c59a4831b06e6ce6c01fdd5ebfe9986c0a880fc7"},
+]
+
+[package.dependencies]
+docutils = "<0.19"
+sphinx = ">=1.6,<7"
+sphinxcontrib-jquery = ">=4,<5"
+
+[package.extras]
+dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client", "wheel"]
+
+[[package]]
+name = "sphinxcontrib-applehelp"
+version = "1.0.4"
+description = "sphinxcontrib-applehelp is a Sphinx extension which outputs Apple help books"
+optional = false
+python-versions = ">=3.8"
+files = [
+    {file = "sphinxcontrib-applehelp-1.0.4.tar.gz", hash = "sha256:828f867945bbe39817c210a1abfd1bc4895c8b73fcaade56d45357a348a07d7e"},
+    {file = "sphinxcontrib_applehelp-1.0.4-py3-none-any.whl", hash = "sha256:29d341f67fb0f6f586b23ad80e072c8e6ad0b48417db2bde114a4c9746feb228"},
+]
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-devhelp"
+version = "1.0.2"
+description = "sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp document."
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "sphinxcontrib-devhelp-1.0.2.tar.gz", hash = "sha256:ff7f1afa7b9642e7060379360a67e9c41e8f3121f2ce9164266f61b9f4b338e4"},
+    {file = "sphinxcontrib_devhelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:8165223f9a335cc1af7ffe1ed31d2871f325254c0423bc0c4c7cd1c1e4734a2e"},
+]
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-htmlhelp"
+version = "2.0.1"
+description = "sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files"
+optional = false
+python-versions = ">=3.8"
+files = [
+    {file = "sphinxcontrib-htmlhelp-2.0.1.tar.gz", hash = "sha256:0cbdd302815330058422b98a113195c9249825d681e18f11e8b1f78a2f11efff"},
+    {file = "sphinxcontrib_htmlhelp-2.0.1-py3-none-any.whl", hash = "sha256:c38cb46dccf316c79de6e5515e1770414b797162b23cd3d06e67020e1d2a6903"},
+]
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["html5lib", "pytest"]
+
+[[package]]
+name = "sphinxcontrib-jquery"
+version = "4.1"
+description = "Extension to include jQuery on newer Sphinx releases"
+optional = false
+python-versions = ">=2.7"
+files = [
+    {file = "sphinxcontrib-jquery-4.1.tar.gz", hash = "sha256:1620739f04e36a2c779f1a131a2dfd49b2fd07351bf1968ced074365933abc7a"},
+    {file = "sphinxcontrib_jquery-4.1-py2.py3-none-any.whl", hash = "sha256:f936030d7d0147dd026a4f2b5a57343d233f1fc7b363f68b3d4f1cb0993878ae"},
+]
+
+[package.dependencies]
+Sphinx = ">=1.8"
+
+[[package]]
+name = "sphinxcontrib-jsmath"
+version = "1.0.1"
+description = "A sphinx extension which renders display math in HTML via JavaScript"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "sphinxcontrib-jsmath-1.0.1.tar.gz", hash = "sha256:a9925e4a4587247ed2191a22df5f6970656cb8ca2bd6284309578f2153e0c4b8"},
+    {file = "sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl", hash = "sha256:2ec2eaebfb78f3f2078e73666b1415417a116cc848b72e5172e596c871103178"},
+]
+
+[package.extras]
+test = ["flake8", "mypy", "pytest"]
+
+[[package]]
+name = "sphinxcontrib-qthelp"
+version = "1.0.3"
+description = "sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp document."
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "sphinxcontrib-qthelp-1.0.3.tar.gz", hash = "sha256:4c33767ee058b70dba89a6fc5c1892c0d57a54be67ddd3e7875a18d14cba5a72"},
+    {file = "sphinxcontrib_qthelp-1.0.3-py2.py3-none-any.whl", hash = "sha256:bd9fc24bcb748a8d51fd4ecaade681350aa63009a347a8c14e637895444dfab6"},
+]
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-serializinghtml"
+version = "1.1.5"
+description = "sphinxcontrib-serializinghtml is a sphinx extension which outputs \"serialized\" HTML files (json and pickle)."
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "sphinxcontrib-serializinghtml-1.1.5.tar.gz", hash = "sha256:aa5f6de5dfdf809ef505c4895e51ef5c9eac17d0f287933eb49ec495280b6952"},
+    {file = "sphinxcontrib_serializinghtml-1.1.5-py2.py3-none-any.whl", hash = "sha256:352a9a00ae864471d3a7ead8d7d79f5fc0b57e8b3f95e9867eb9eb28999b92fd"},
+]
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
 [[package]]
 name = "toml"
 version = "0.10.2"
@@ -819,6 +1247,23 @@ files = [
     {file = "typing_extensions-4.7.1.tar.gz", hash = "sha256:b75ddc264f0ba5615db7ba217daeb99701ad295353c45f9e95963337ceeeffb2"},
 ]
 
+[[package]]
+name = "urllib3"
+version = "2.0.4"
+description = "HTTP library with thread-safe connection pooling, file post, and more."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "urllib3-2.0.4-py3-none-any.whl", hash = "sha256:de7df1803967d2c2a98e4b11bb7d6bd9210474c46e8a0401514e3a42a75ebde4"},
+    {file = "urllib3-2.0.4.tar.gz", hash = "sha256:8d22f86aae8ef5e410d4f539fde9ce6b2113a001bb4d189e0aed70642d602b11"},
+]
+
+[package.extras]
+brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)"]
+secure = ["certifi", "cryptography (>=1.9)", "idna (>=2.0.0)", "pyopenssl (>=17.1.0)", "urllib3-secure-extra"]
+socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"]
+zstd = ["zstandard (>=0.18.0)"]
+
 [[package]]
 name = "warlock"
 version = "2.0.1"
@@ -837,4 +1282,4 @@ jsonschema = ">=4,<5"
 [metadata]
 lock-version = "2.0"
 python-versions = "^3.10"
-content-hash = "0b1e4a1cb8323e17e5ee5951c97e74bde6e60d0413d7b25b1803d5b2bab39639"
+content-hash = "fea1a3eddd1286d2ccd3bdb61c6ce085403f31567dbe4f55b6775bcf1e325372"
diff --git a/dts/pyproject.toml b/dts/pyproject.toml
index 6762edfa6b..159940ce02 100644
--- a/dts/pyproject.toml
+++ b/dts/pyproject.toml
@@ -34,6 +34,13 @@ pylama = "^8.4.1"
 pyflakes = "^2.5.0"
 toml = "^0.10.2"
 
+[tool.poetry.group.docs]
+optional = true
+
+[tool.poetry.group.docs.dependencies]
+sphinx = "<7"
+sphinx-rtd-theme = "^1.2.2"
+
 [build-system]
 requires = ["poetry-core>=1.0.0"]
 build-backend = "poetry.core.masonry.api"
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [RFC PATCH v4 3/4] dts: add doc generation
  2023-08-31 10:04     ` [RFC PATCH v4 " Juraj Linkeš
  2023-08-31 10:04       ` [RFC PATCH v4 1/4] dts: code adjustments for sphinx Juraj Linkeš
  2023-08-31 10:04       ` [RFC PATCH v4 2/4] dts: add doc generation dependencies Juraj Linkeš
@ 2023-08-31 10:04       ` Juraj Linkeš
  2023-09-20  7:08         ` Juraj Linkeš
  2023-10-26 16:43         ` Yoan Picchi
  2023-08-31 10:04       ` [RFC PATCH v4 4/4] dts: format docstrings to google format Juraj Linkeš
  2023-11-06 17:15       ` [PATCH v5 00/23] dts: add dts api docs Juraj Linkeš
  4 siblings, 2 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-08-31 10:04 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	jspewock, probb
  Cc: dev, Juraj Linkeš

The tool used to generate developer docs is Sphinx, which is already
used in DPDK. The configuration is kept the same to preserve the style.

Sphinx generates the documentation from Python docstrings. The docstring
format most suitable for DTS seems to be the Google format [0], which
requires the sphinx.ext.napoleon extension.

There are two requirements for building DTS docs:
* The same Python version as DTS or higher, because Sphinx imports the
  code.
* The same Python packages as DTS, for the same reason.

[0] https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings
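
To illustrate, a minimal Google-format docstring that sphinx.ext.napoleon
understands looks like this (a sketch with a hypothetical function, not
taken from the DTS code):

    def sort_cores(cores: list[int], ascending: bool = True) -> list[int]:
        """Sort the usable core ids.

        Args:
            cores: The core ids to sort.
            ascending: If True, sort in ascending order,
                otherwise in descending order.

        Returns:
            The sorted core ids.
        """
        return sorted(cores, reverse=not ascending)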

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 buildtools/call-sphinx-build.py | 29 ++++++++++++-------
 doc/api/meson.build             |  1 +
 doc/guides/conf.py              | 32 +++++++++++++++++----
 doc/guides/meson.build          |  1 +
 doc/guides/tools/dts.rst        | 29 +++++++++++++++++++
 dts/doc/doc-index.rst           | 17 +++++++++++
 dts/doc/meson.build             | 50 +++++++++++++++++++++++++++++++++
 dts/meson.build                 | 16 +++++++++++
 meson.build                     |  1 +
 9 files changed, 161 insertions(+), 15 deletions(-)
 create mode 100644 dts/doc/doc-index.rst
 create mode 100644 dts/doc/meson.build
 create mode 100644 dts/meson.build

diff --git a/buildtools/call-sphinx-build.py b/buildtools/call-sphinx-build.py
index 39a60d09fa..c2f3acfb1d 100755
--- a/buildtools/call-sphinx-build.py
+++ b/buildtools/call-sphinx-build.py
@@ -3,37 +3,46 @@
 # Copyright(c) 2019 Intel Corporation
 #
 
+import argparse
 import sys
 import os
 from os.path import join
 from subprocess import run, PIPE, STDOUT
 from packaging.version import Version
 
-# assign parameters to variables
-(sphinx, version, src, dst, *extra_args) = sys.argv[1:]
+parser = argparse.ArgumentParser()
+parser.add_argument('sphinx')
+parser.add_argument('version')
+parser.add_argument('src')
+parser.add_argument('dst')
+parser.add_argument('--dts-root', default='.')
+args, extra_args = parser.parse_known_args()
 
 # set the version in environment for sphinx to pick up
-os.environ['DPDK_VERSION'] = version
+os.environ['DPDK_VERSION'] = args.version
+os.environ['DTS_ROOT'] = args.dts_root
 
 # for sphinx version >= 1.7 add parallelism using "-j auto"
-ver = run([sphinx, '--version'], stdout=PIPE,
+ver = run([args.sphinx, '--version'], stdout=PIPE,
           stderr=STDOUT).stdout.decode().split()[-1]
-sphinx_cmd = [sphinx] + extra_args
+sphinx_cmd = [args.sphinx] + extra_args
 if Version(ver) >= Version('1.7'):
     sphinx_cmd += ['-j', 'auto']
 
 # find all the files sphinx will process so we can write them as dependencies
 srcfiles = []
-for root, dirs, files in os.walk(src):
+for root, dirs, files in os.walk(args.src):
     srcfiles.extend([join(root, f) for f in files])
 
 # run sphinx, putting the html output in a "html" directory
-with open(join(dst, 'sphinx_html.out'), 'w') as out:
-    process = run(sphinx_cmd + ['-b', 'html', src, join(dst, 'html')],
-                  stdout=out)
+with open(join(args.dst, 'sphinx_html.out'), 'w') as out:
+    process = run(
+        sphinx_cmd + ['-b', 'html', args.src, join(args.dst, 'html')],
+        stdout=out
+    )
 
 # create a gcc format .d file giving all the dependencies of this doc build
-with open(join(dst, '.html.d'), 'w') as d:
+with open(join(args.dst, '.html.d'), 'w') as d:
     d.write('html: ' + ' '.join(srcfiles) + '\n')
 
 sys.exit(process.returncode)
diff --git a/doc/api/meson.build b/doc/api/meson.build
index 2876a78a7e..1f0c725a94 100644
--- a/doc/api/meson.build
+++ b/doc/api/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Luca Boccassi <bluca@debian.org>
 
+doc_api_build_dir = meson.current_build_dir()
 doxygen = find_program('doxygen', required: get_option('enable_docs'))
 
 if not doxygen.found()
diff --git a/doc/guides/conf.py b/doc/guides/conf.py
index 0f7ff5282d..737e5a5688 100644
--- a/doc/guides/conf.py
+++ b/doc/guides/conf.py
@@ -7,10 +7,9 @@
 from sphinx import __version__ as sphinx_version
 from os import listdir
 from os import environ
-from os.path import basename
-from os.path import dirname
+from os.path import basename, dirname
 from os.path import join as path_join
-from sys import argv, stderr
+from sys import argv, stderr, path
 
 import configparser
 
@@ -24,6 +23,29 @@
           file=stderr)
     pass
 
+extensions = ['sphinx.ext.napoleon']
+
+# Python docstring options
+autodoc_default_options = {
+    'members': True,
+    'member-order': 'bysource',
+    'show-inheritance': True,
+}
+autodoc_typehints = 'both'
+autodoc_typehints_format = 'short'
+napoleon_numpy_docstring = False
+napoleon_attr_annotations = True
+napoleon_use_ivar = True
+napoleon_use_rtype = False
+add_module_names = False
+toc_object_entries = False
+
+# Sidebar config
+html_theme_options = {
+    'collapse_navigation': False,
+    'navigation_depth': -1,
+}
+
 stop_on_error = ('-W' in argv)
 
 project = 'Data Plane Development Kit'
@@ -35,8 +57,8 @@
 html_show_copyright = False
 highlight_language = 'none'
 
-release = environ.setdefault('DPDK_VERSION', "None")
-version = release
+path.append(environ.get('DTS_ROOT'))
+version = environ.setdefault('DPDK_VERSION', "None")
 
 master_doc = 'index'
 
diff --git a/doc/guides/meson.build b/doc/guides/meson.build
index 51f81da2e3..8933d75f6b 100644
--- a/doc/guides/meson.build
+++ b/doc/guides/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Intel Corporation
 
+doc_guides_source_dir = meson.current_source_dir()
 sphinx = find_program('sphinx-build', required: get_option('enable_docs'))
 
 if not sphinx.found()
diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
index 32c18ee472..98923b1467 100644
--- a/doc/guides/tools/dts.rst
+++ b/doc/guides/tools/dts.rst
@@ -335,3 +335,32 @@ There are three tools used in DTS to help with code checking, style and formatti
 These three tools are all used in ``devtools/dts-check-format.sh``,
 the DTS code check and format script.
 Refer to the script for usage: ``devtools/dts-check-format.sh -h``.
+
+
+Building DTS API docs
+---------------------
+
+To build DTS API docs, install the dependencies with Poetry, then enter its shell:
+
+.. code-block:: console
+
+   poetry install --with docs
+   poetry shell
+
+
+Build commands
+~~~~~~~~~~~~~~
+
+The documentation is built using the standard DPDK build system.
+
+After entering Poetry's shell, build the documentation with:
+
+.. code-block:: console
+
+   ninja -C build dts/doc
+
+The output is generated in ``build/doc/api/dts/html``.
+
+.. Note::
+
+   Make sure to fix any Sphinx warnings when adding or updating docstrings.
diff --git a/dts/doc/doc-index.rst b/dts/doc/doc-index.rst
new file mode 100644
index 0000000000..f5dcd553f2
--- /dev/null
+++ b/dts/doc/doc-index.rst
@@ -0,0 +1,17 @@
+.. DPDK Test Suite documentation.
+
+Welcome to DPDK Test Suite's documentation!
+===========================================
+
+.. toctree::
+   :titlesonly:
+   :caption: Contents:
+
+   framework
+
+Indices and tables
+==================
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`
diff --git a/dts/doc/meson.build b/dts/doc/meson.build
new file mode 100644
index 0000000000..8e70eabc51
--- /dev/null
+++ b/dts/doc/meson.build
@@ -0,0 +1,50 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+sphinx = find_program('sphinx-build', required: false)
+sphinx_apidoc = find_program('sphinx-apidoc', required: false)
+
+if not sphinx.found() or not sphinx_apidoc.found()
+    subdir_done()
+endif
+
+dts_api_framework_dir = join_paths(dts_dir, 'framework')
+dts_api_build_dir = join_paths(doc_api_build_dir, 'dts')
+dts_api_src = custom_target('dts_api_src',
+        output: 'modules.rst',
+        command: [sphinx_apidoc, '--append-syspath', '--force',
+            '--module-first', '--separate', '-V', meson.project_version(),
+            '-o', dts_api_build_dir, '--no-toc', '--implicit-namespaces',
+            dts_api_framework_dir],
+        build_by_default: false)
+doc_targets += dts_api_src
+doc_target_names += 'DTS_API_sphinx_sources'
+
+cp = find_program('cp')
+cp_index = custom_target('cp_index',
+        input: 'doc-index.rst',
+        output: 'index.rst',
+        depends: dts_api_src,
+        command: [cp, '@INPUT@', join_paths(dts_api_build_dir, 'index.rst')],
+        build_by_default: false)
+doc_targets += cp_index
+doc_target_names += 'DTS_API_sphinx_index'
+
+extra_sphinx_args = ['-a', '-c', doc_guides_source_dir]
+if get_option('werror')
+    extra_sphinx_args += '-W'
+endif
+
+htmldir = join_paths(get_option('datadir'), 'doc', 'dpdk')
+dts_api_html = custom_target('dts_api_html',
+        output: 'html',
+        depends: cp_index,
+        command: ['DTS_ROOT=@0@'.format(dts_dir),
+            sphinx_wrapper, sphinx, meson.project_version(),
+            dts_api_build_dir, dts_api_build_dir,
+            '--dts-root', dts_dir, extra_sphinx_args],
+        build_by_default: false,
+        install: false,
+        install_dir: htmldir)
+doc_targets += dts_api_html
+doc_target_names += 'DTS_API_HTML'
diff --git a/dts/meson.build b/dts/meson.build
new file mode 100644
index 0000000000..17bda07636
--- /dev/null
+++ b/dts/meson.build
@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+doc_targets = []
+doc_target_names = []
+dts_dir = meson.current_source_dir()
+
+subdir('doc')
+
+if doc_targets.length() == 0
+    message = 'No docs targets found'
+else
+    message = 'Built docs:'
+endif
+run_target('dts/doc', command: [echo, message, doc_target_names],
+    depends: doc_targets)
diff --git a/meson.build b/meson.build
index 39cb73846d..4d34dc531c 100644
--- a/meson.build
+++ b/meson.build
@@ -85,6 +85,7 @@ subdir('app')
 
 # build docs
 subdir('doc')
+subdir('dts')
 
 # build any examples explicitly requested - useful for developers - and
 # install any example code into the appropriate install path
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [RFC PATCH v4 4/4] dts: format docstrings to google format
  2023-08-31 10:04     ` [RFC PATCH v4 " Juraj Linkeš
                         ` (2 preceding siblings ...)
  2023-08-31 10:04       ` [RFC PATCH v4 3/4] dts: add doc generation Juraj Linkeš
@ 2023-08-31 10:04       ` Juraj Linkeš
  2023-09-01 17:02         ` Jeremy Spewock
  2023-10-31 12:10         ` Yoan Picchi
  2023-11-06 17:15       ` [PATCH v5 00/23] dts: add dts api docs Juraj Linkeš
  4 siblings, 2 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-08-31 10:04 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	jspewock, probb
  Cc: dev, Juraj Linkeš

WIP: only one module is reformatted to serve as a demonstration.

The google format is documented here [0].

[0]: https://google.github.io/styleguide/pyguide.html

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/node.py | 171 +++++++++++++++++++---------
 1 file changed, 118 insertions(+), 53 deletions(-)

diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index 23efa79c50..619743ebe7 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -3,8 +3,13 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
-"""
-A node is a generic host that DTS connects to and manages.
+"""Common functionality for node management.
+
+There's a base class, Node, that's supposed to be extended by other classes
+with functionality specific to that node type.
+The only part that can be used standalone is the Node.skip_setup static method,
+which is a decorator used to skip method execution if skip_setup is passed
+by the user on the command line or in an environment variable.
 """
 
 from abc import ABC
@@ -35,10 +40,26 @@
 
 
 class Node(ABC):
-    """
-    Basic class for node management. This class implements methods that
-    manage a node, such as information gathering (of CPU/PCI/NIC) and
-    environment setup.
+    """The base class for node management.
+
+    It shouldn't be instantiated, but rather extended.
+    It implements common methods to manage any node:
+
+       * connection to the node
+       * information gathering of CPU
+       * hugepages setup
+
+    Arguments:
+        node_config: The config from the input configuration file.
+
+    Attributes:
+        main_session: The primary OS-agnostic remote session used
+            to communicate with the node.
+        config: The configuration used to create the node.
+        name: The name of the node.
+        lcores: The list of logical cores that DTS can use on the node.
+            It's derived from logical cores present on the node and user configuration.
+        ports: The ports of this node specified in user configuration.
     """
 
     main_session: OSSession
@@ -77,9 +98,14 @@ def _init_ports(self) -> None:
             self.configure_port_state(port)
 
     def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
-        """
-        Perform the execution setup that will be done for each execution
-        this node is part of.
+        """Execution setup steps.
+
+        Configure hugepages and call self._set_up_execution where
+        the rest of the configuration steps (if any) are implemented.
+
+        Args:
+            execution_config: The execution configuration according to which
+                the setup steps will be taken.
         """
         self._setup_hugepages()
         self._set_up_execution(execution_config)
@@ -88,58 +114,78 @@ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
             self.virtual_devices.append(VirtualDevice(vdev))
 
     def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional execution setup steps for derived classes.
+
+        Derived classes should overwrite this
+        if they want to add additional execution setup steps.
         """
 
     def tear_down_execution(self) -> None:
-        """
-        Perform the execution teardown that will be done after each execution
-        this node is part of concludes.
+        """Execution teardown steps.
+
+        There are currently no execution teardown steps
+        common to all DTS node types.
         """
         self.virtual_devices = []
         self._tear_down_execution()
 
     def _tear_down_execution(self) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional execution teardown steps for derived classes.
+
+        Derived classes should overwrite this
+        if they want to add additional execution teardown steps.
         """
 
     def set_up_build_target(
         self, build_target_config: BuildTargetConfiguration
     ) -> None:
-        """
-        Perform the build target setup that will be done for each build target
-        tested on this node.
+        """Build target setup steps.
+
+        There are currently no build target setup steps
+        common to all DTS node types.
+
+        Args:
+            build_target_config: The build target configuration according to which
+                the setup steps will be taken.
         """
         self._set_up_build_target(build_target_config)
 
     def _set_up_build_target(
         self, build_target_config: BuildTargetConfiguration
     ) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional build target setup steps for derived classes.
+
+        Derived classes should overwrite this
+        if they want to add additional build target setup steps.
         """
 
     def tear_down_build_target(self) -> None:
-        """
-        Perform the build target teardown that will be done after each build target
-        tested on this node.
+        """Build target teardown steps.
+
+        There are currently no build target teardown steps
+        common to all DTS node types.
         """
         self._tear_down_build_target()
 
     def _tear_down_build_target(self) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional build target teardown steps for derived classes.
+
+        Derived classes should overwrite this
+        if they want to add additional build target teardown steps.
         """
 
     def create_session(self, name: str) -> OSSession:
-        """
-        Create and return a new OSSession tailored to the remote OS.
+        """Create and return a new OS-agnostic remote session.
+
+        The returned session won't be used by the node creating it.
+        The session must be used by the caller.
+        It will be cleaned up automatically when the node closes.
+
+        Args:
+            name: The name of the session.
+
+        Returns:
+            A new OS-agnostic remote session.
         """
         session_name = f"{self.name} {name}"
         connection = create_session(
@@ -186,14 +232,24 @@ def filter_lcores(
         filter_specifier: LogicalCoreCount | LogicalCoreList,
         ascending: bool = True,
     ) -> list[LogicalCore]:
-        """
-        Filter the LogicalCores found on the Node according to
-        a LogicalCoreCount or a LogicalCoreList.
+        """Filter the node's logical cores that DTS can use.
 
-        If ascending is True, use cores with the lowest numerical id first
-        and continue in ascending order. If False, start with the highest
-        id and continue in descending order. This ordering affects which
-        sockets to consider first as well.
+        Logical cores that DTS can use are ones that are present on the node,
+        but filtered according to user config.
+        The filter_specifier will filter cores from those logical cores.
+
+        Args:
+            filter_specifier: Two different filters can be used, one that specifies
+                the number of logical cores per core, cores per socket and
+                the number of sockets,
+                the other that specifies a logical core list.
+            ascending: If True, use cores with the lowest numerical id first
+                and continue in ascending order. If False, start with the highest
+                id and continue in descending order. This ordering affects which
+                sockets to consider first as well.
+
+        Returns:
+            A list of logical cores.
         """
         self._logger.debug(f"Filtering {filter_specifier} from {self.lcores}.")
         return lcore_filter(
@@ -203,17 +259,14 @@ def filter_lcores(
         ).filter()
 
     def _get_remote_cpus(self) -> None:
-        """
-        Scan CPUs in the remote OS and store a list of LogicalCores.
-        """
+        """Scan CPUs in the remote OS and store a list of LogicalCores."""
         self._logger.info("Getting CPU information.")
         self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
 
     def _setup_hugepages(self):
-        """
-        Setup hugepages on the Node. Different architectures can supply different
-        amounts of memory for hugepages and numa-based hugepage allocation may need
-        to be considered.
+        """Setup hugepages on the Node.
+
+        Configure the hugepages only if they're specified in user configuration.
         """
         if self.config.hugepages:
             self.main_session.setup_hugepages(
@@ -221,8 +274,11 @@ def _setup_hugepages(self):
             )
 
     def configure_port_state(self, port: Port, enable: bool = True) -> None:
-        """
-        Enable/disable port.
+        """Enable/disable port.
+
+        Args:
+            port: The port to enable/disable.
+            enable: True to enable, False to disable.
         """
         self.main_session.configure_port_state(port, enable)
 
@@ -232,15 +288,19 @@ def configure_port_ip_address(
         port: Port,
         delete: bool = False,
     ) -> None:
-        """
-        Configure the IP address of a port on this node.
+        """Add an IP address to a port on this node.
+
+        Args:
+            address: The IP address with mask in CIDR format.
+                Can be either IPv4 or IPv6.
+            port: The port to which to add the address.
+            delete: If True, will delete the address from the port
+                instead of adding it.
         """
         self.main_session.configure_port_ip_address(address, port, delete)
 
     def close(self) -> None:
-        """
-        Close all connections and free other resources.
-        """
+        """Close all connections and free other resources."""
         if self.main_session:
             self.main_session.close()
         for session in self._other_sessions:
@@ -249,6 +309,11 @@ def close(self) -> None:
 
     @staticmethod
     def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
+        """A decorator that skips the decorated function.
+
+        When used, the decorator executes an empty lambda function
+        instead of the decorated function.
+        """
         if SETTINGS.skip_setup:
             return lambda *args: None
         else:
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v4 4/4] dts: format docstrings to google format
  2023-08-31 10:04       ` [RFC PATCH v4 4/4] dts: format docstrings to google format Juraj Linkeš
@ 2023-09-01 17:02         ` Jeremy Spewock
  2023-10-31 12:10         ` Yoan Picchi
  1 sibling, 0 replies; 255+ messages in thread
From: Jeremy Spewock @ 2023-09-01 17:02 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, probb, dev

On Thu, Aug 31, 2023 at 6:04 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
wrote:

> WIP: only one module is reformatted to serve as a demonstration.
>
> The google format is documented here [0].
>
> [0]: https://google.github.io/styleguide/pyguide.html
>
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>  dts/framework/testbed_model/node.py | 171 +++++++++++++++++++---------
>  1 file changed, 118 insertions(+), 53 deletions(-)
>
> diff --git a/dts/framework/testbed_model/node.py
> b/dts/framework/testbed_model/node.py
> index 23efa79c50..619743ebe7 100644
> --- a/dts/framework/testbed_model/node.py
> +++ b/dts/framework/testbed_model/node.py
> @@ -3,8 +3,13 @@
>  # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
>  # Copyright(c) 2022-2023 University of New Hampshire
>
> -"""
> -A node is a generic host that DTS connects to and manages.
> +"""Common functionality for node management.
> +
> +There's a base class, Node, that's supposed to be extended by other
> classes
> +with functionality specific to that node type.
> +The only part that can be used standalone is the Node.skip_setup static
> method,
> +which is a decorator used to skip method execution if skip_setup is passed
> +by the user on the command line or in an environment variable.
>  """
>
>  from abc import ABC
> @@ -35,10 +40,26 @@
>
>
>  class Node(ABC):
> -    """
> -    Basic class for node management. This class implements methods that
> -    manage a node, such as information gathering (of CPU/PCI/NIC) and
> -    environment setup.
> +    """The base class for node management.
> +
> +    It shouldn't be instantiated, but rather extended.
> +    It implements common methods to manage any node:
> +
> +       * connection to the node
> +       * information gathering of CPU
> +       * hugepages setup
> +
> +    Arguments:
>

My only comment would be that we might want to make this "Args" instead
of "Arguments", just to line up with the rest of the comments, but like
you said, this is just a proof of concept really.
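
For reference, napoleon treats "Args:", "Arguments:" and "Parameters:"
as aliases of the same section, so both spellings render identically and
this is purely a consistency nit. A minimal sketch with a hypothetical
function, assuming the sphinx.ext.napoleon defaults:

    def connect(node_config: dict) -> None:
        """Connect to the node.

        Arguments:
            node_config: The config from the input configuration file.
        """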

Acked-by: Jeremy Spewock <jspewock@iol.unh.edu>


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v4 3/4] dts: add doc generation
  2023-08-31 10:04       ` [RFC PATCH v4 3/4] dts: add doc generation Juraj Linkeš
@ 2023-09-20  7:08         ` Juraj Linkeš
  2023-10-26 16:43         ` Yoan Picchi
  1 sibling, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-09-20  7:08 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	jspewock, probb
  Cc: dev

<snip>

> diff --git a/doc/guides/conf.py b/doc/guides/conf.py
> index 0f7ff5282d..737e5a5688 100644
> --- a/doc/guides/conf.py
> +++ b/doc/guides/conf.py
> @@ -7,10 +7,9 @@
>  from sphinx import __version__ as sphinx_version
>  from os import listdir
>  from os import environ
> -from os.path import basename
> -from os.path import dirname
> +from os.path import basename, dirname
>  from os.path import join as path_join
> -from sys import argv, stderr
> +from sys import argv, stderr, path
>
>  import configparser
>
> @@ -24,6 +23,29 @@
>            file=stderr)
>      pass
>
> +extensions = ['sphinx.ext.napoleon']
> +
> +# Python docstring options
> +autodoc_default_options = {
> +    'members': True,
> +    'member-order': 'bysource',
> +    'show-inheritance': True,
> +}
> +autodoc_typehints = 'both'
> +autodoc_typehints_format = 'short'
> +napoleon_numpy_docstring = False
> +napoleon_attr_annotations = True
> +napoleon_use_ivar = True
> +napoleon_use_rtype = False
> +add_module_names = False
> +toc_object_entries = False
> +
> +# Sidebar config
> +html_theme_options = {
> +    'collapse_navigation': False,
> +    'navigation_depth': -1,
> +}
> +

Thomas, Bruce,

I've added this configuration which modifies the sidebar a bit. This
affects the DPDK docs so I'd like to know whether this is permissible.
I think the sidebar works better this way even with DPDK docs, but
that may be a personal preference.

Let me know what you think.

>  stop_on_error = ('-W' in argv)
>
>  project = 'Data Plane Development Kit'

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v4 1/4] dts: code adjustments for sphinx
  2023-08-31 10:04       ` [RFC PATCH v4 1/4] dts: code adjustments for sphinx Juraj Linkeš
@ 2023-10-22 14:30         ` Yoan Picchi
  2023-10-23  6:44           ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Yoan Picchi @ 2023-10-22 14:30 UTC (permalink / raw)
  To: Juraj Linkeš,
	thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	jspewock, probb
  Cc: dev

On 8/31/23 11:04, Juraj Linkeš wrote:
> sphinx-build only imports the Python modules when building the
> documentation; it doesn't run DTS. This requires changes that make the
> code importable without running it. This means:
> * properly guarding argument parsing in the if __name__ == '__main__'
>    block.
> * the logger used by DTS runner underwent the same treatment so that it
>    doesn't create unnecessary log files.
> * however, DTS uses the arguments to construct an object holding global
>    variables. The defaults for the global variables needed to be moved
>    out of argument parsing.
> * importing the remote_session module from framework resulted in
>    circular imports because of one module trying to import another
>    module. This is fixed by more granular imports.
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>   dts/framework/config/__init__.py              |  3 -
>   dts/framework/dts.py                          | 34 ++++++-
>   dts/framework/remote_session/__init__.py      | 41 ++++-----
>   .../interactive_remote_session.py             |  0
>   .../{remote => }/interactive_shell.py         |  0
>   .../{remote => }/python_shell.py              |  0
>   .../remote_session/remote/__init__.py         | 27 ------
>   .../{remote => }/remote_session.py            |  0
>   .../{remote => }/ssh_session.py               |  0
>   .../{remote => }/testpmd_shell.py             |  0
>   dts/framework/settings.py                     | 92 +++++++++++--------
>   dts/framework/test_suite.py                   |  3 +-
>   dts/framework/testbed_model/__init__.py       | 12 +--
>   dts/framework/testbed_model/common.py         | 29 ++++++
>   dts/framework/testbed_model/{hw => }/cpu.py   | 13 +++
>   dts/framework/testbed_model/hw/__init__.py    | 27 ------
>   .../linux_session.py                          |  4 +-
>   dts/framework/testbed_model/node.py           | 22 ++++-
>   .../os_session.py                             | 14 +--
>   dts/framework/testbed_model/{hw => }/port.py  |  0
>   .../posix_session.py                          |  2 +-
>   dts/framework/testbed_model/sut_node.py       |  8 +-
>   dts/framework/testbed_model/tg_node.py        | 30 +-----
>   .../traffic_generator/__init__.py             | 24 +++++
>   .../capturing_traffic_generator.py            |  2 +-
>   .../{ => traffic_generator}/scapy.py          | 17 +---
>   .../traffic_generator.py                      | 16 +++-
>   .../testbed_model/{hw => }/virtual_device.py  |  0
>   dts/framework/utils.py                        | 53 +----------
>   dts/main.py                                   |  3 +-
>   30 files changed, 229 insertions(+), 247 deletions(-)
>   rename dts/framework/remote_session/{remote => }/interactive_remote_session.py (100%)
>   rename dts/framework/remote_session/{remote => }/interactive_shell.py (100%)
>   rename dts/framework/remote_session/{remote => }/python_shell.py (100%)
>   delete mode 100644 dts/framework/remote_session/remote/__init__.py
>   rename dts/framework/remote_session/{remote => }/remote_session.py (100%)
>   rename dts/framework/remote_session/{remote => }/ssh_session.py (100%)
>   rename dts/framework/remote_session/{remote => }/testpmd_shell.py (100%)
>   create mode 100644 dts/framework/testbed_model/common.py
>   rename dts/framework/testbed_model/{hw => }/cpu.py (95%)
>   delete mode 100644 dts/framework/testbed_model/hw/__init__.py
>   rename dts/framework/{remote_session => testbed_model}/linux_session.py (98%)
>   rename dts/framework/{remote_session => testbed_model}/os_session.py (97%)
>   rename dts/framework/testbed_model/{hw => }/port.py (100%)
>   rename dts/framework/{remote_session => testbed_model}/posix_session.py (99%)
>   create mode 100644 dts/framework/testbed_model/traffic_generator/__init__.py
>   rename dts/framework/testbed_model/{ => traffic_generator}/capturing_traffic_generator.py (99%)
>   rename dts/framework/testbed_model/{ => traffic_generator}/scapy.py (96%)
>   rename dts/framework/testbed_model/{ => traffic_generator}/traffic_generator.py (80%)
>   rename dts/framework/testbed_model/{hw => }/virtual_device.py (100%)
> 
> diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
> index cb7e00ba34..5de8b54bcf 100644
> --- a/dts/framework/config/__init__.py
> +++ b/dts/framework/config/__init__.py
> @@ -324,6 +324,3 @@ def load_config() -> Configuration:
>       config: dict[str, Any] = warlock.model_factory(schema, name="_Config")(config_data)
>       config_obj: Configuration = Configuration.from_dict(dict(config))
>       return config_obj
> -
> -
> -CONFIGURATION = load_config()
> diff --git a/dts/framework/dts.py b/dts/framework/dts.py
> index f773f0c38d..925a212210 100644
> --- a/dts/framework/dts.py
> +++ b/dts/framework/dts.py
> @@ -3,22 +3,23 @@
>   # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
>   # Copyright(c) 2022-2023 University of New Hampshire
>   
> +import logging
>   import sys
>   
>   from .config import (
> -    CONFIGURATION,
>       BuildTargetConfiguration,
>       ExecutionConfiguration,
>       TestSuiteConfig,
> +    load_config,
>   )
>   from .exception import BlockingTestSuiteError
>   from .logger import DTSLOG, getLogger
>   from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result
>   from .test_suite import get_test_suites
>   from .testbed_model import SutNode, TGNode
> -from .utils import check_dts_python_version
>   
> -dts_logger: DTSLOG = getLogger("DTSRunner")
> +# dummy defaults to satisfy linters
> +dts_logger: DTSLOG | logging.Logger = logging.getLogger("DTSRunner")
>   result: DTSResult = DTSResult(dts_logger)
>   
>   
> @@ -30,14 +31,18 @@ def run_all() -> None:
>       global dts_logger
>       global result
>   
> +    # create a regular DTS logger and create a new result with it
> +    dts_logger = getLogger("DTSRunner")
> +    result = DTSResult(dts_logger)
> +
>       # check the python version of the server that run dts
> -    check_dts_python_version()
> +    _check_dts_python_version()
>   
>       sut_nodes: dict[str, SutNode] = {}
>       tg_nodes: dict[str, TGNode] = {}
>       try:
>           # for all Execution sections
> -        for execution in CONFIGURATION.executions:
> +        for execution in load_config().executions:
>               sut_node = sut_nodes.get(execution.system_under_test_node.name)
>               tg_node = tg_nodes.get(execution.traffic_generator_node.name)
>   
> @@ -82,6 +87,25 @@ def run_all() -> None:
>       _exit_dts()
>   
>   
> +def _check_dts_python_version() -> None:
> +    def RED(text: str) -> str:
> +        return f"\u001B[31;1m{str(text)}\u001B[0m"
> +
> +    if sys.version_info.major < 3 or (
> +        sys.version_info.major == 3 and sys.version_info.minor < 10
> +    ):
> +        print(
> +            RED(
> +                (
> +                    "WARNING: DTS execution node's python version is lower than"
> +                    "python 3.10, is deprecated and will not work in future releases."
> +                )
> +            ),
> +            file=sys.stderr,
> +        )
> +        print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
> +
> +
>   def _run_execution(
>       sut_node: SutNode,
>       tg_node: TGNode,
> diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
> index 00b6d1f03a..5e7ddb2b05 100644
> --- a/dts/framework/remote_session/__init__.py
> +++ b/dts/framework/remote_session/__init__.py
> @@ -12,29 +12,24 @@
>   
>   # pylama:ignore=W0611
>   
> -from framework.config import OS, NodeConfiguration
> -from framework.exception import ConfigurationError
> +from framework.config import NodeConfiguration
>   from framework.logger import DTSLOG
>   
> -from .linux_session import LinuxSession
> -from .os_session import InteractiveShellType, OSSession
> -from .remote import (
> -    CommandResult,
> -    InteractiveRemoteSession,
> -    InteractiveShell,
> -    PythonShell,
> -    RemoteSession,
> -    SSHSession,
> -    TestPmdDevice,
> -    TestPmdShell,
> -)
> -
> -
> -def create_session(
> +from .interactive_remote_session import InteractiveRemoteSession
> +from .interactive_shell import InteractiveShell
> +from .python_shell import PythonShell
> +from .remote_session import CommandResult, RemoteSession
> +from .ssh_session import SSHSession
> +from .testpmd_shell import TestPmdShell
> +
> +
> +def create_remote_session(
>       node_config: NodeConfiguration, name: str, logger: DTSLOG
> -) -> OSSession:
> -    match node_config.os:
> -        case OS.linux:
> -            return LinuxSession(node_config, name, logger)
> -        case _:
> -            raise ConfigurationError(f"Unsupported OS {node_config.os}")
> +) -> RemoteSession:
> +    return SSHSession(node_config, name, logger)
> +
> +
> +def create_interactive_session(
> +    node_config: NodeConfiguration, logger: DTSLOG
> +) -> InteractiveRemoteSession:
> +    return InteractiveRemoteSession(node_config, logger)
> diff --git a/dts/framework/remote_session/remote/interactive_remote_session.py b/dts/framework/remote_session/interactive_remote_session.py
> similarity index 100%
> rename from dts/framework/remote_session/remote/interactive_remote_session.py
> rename to dts/framework/remote_session/interactive_remote_session.py
> diff --git a/dts/framework/remote_session/remote/interactive_shell.py b/dts/framework/remote_session/interactive_shell.py
> similarity index 100%
> rename from dts/framework/remote_session/remote/interactive_shell.py
> rename to dts/framework/remote_session/interactive_shell.py
> diff --git a/dts/framework/remote_session/remote/python_shell.py b/dts/framework/remote_session/python_shell.py
> similarity index 100%
> rename from dts/framework/remote_session/remote/python_shell.py
> rename to dts/framework/remote_session/python_shell.py
> diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
> deleted file mode 100644
> index 06403691a5..0000000000
> --- a/dts/framework/remote_session/remote/__init__.py
> +++ /dev/null
> @@ -1,27 +0,0 @@
> -# SPDX-License-Identifier: BSD-3-Clause
> -# Copyright(c) 2023 PANTHEON.tech s.r.o.
> -# Copyright(c) 2023 University of New Hampshire
> -
> -# pylama:ignore=W0611
> -
> -from framework.config import NodeConfiguration
> -from framework.logger import DTSLOG
> -
> -from .interactive_remote_session import InteractiveRemoteSession
> -from .interactive_shell import InteractiveShell
> -from .python_shell import PythonShell
> -from .remote_session import CommandResult, RemoteSession
> -from .ssh_session import SSHSession
> -from .testpmd_shell import TestPmdDevice, TestPmdShell
> -
> -
> -def create_remote_session(
> -    node_config: NodeConfiguration, name: str, logger: DTSLOG
> -) -> RemoteSession:
> -    return SSHSession(node_config, name, logger)
> -
> -
> -def create_interactive_session(
> -    node_config: NodeConfiguration, logger: DTSLOG
> -) -> InteractiveRemoteSession:
> -    return InteractiveRemoteSession(node_config, logger)
> diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote_session.py
> similarity index 100%
> rename from dts/framework/remote_session/remote/remote_session.py
> rename to dts/framework/remote_session/remote_session.py
> diff --git a/dts/framework/remote_session/remote/ssh_session.py b/dts/framework/remote_session/ssh_session.py
> similarity index 100%
> rename from dts/framework/remote_session/remote/ssh_session.py
> rename to dts/framework/remote_session/ssh_session.py
> diff --git a/dts/framework/remote_session/remote/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py
> similarity index 100%
> rename from dts/framework/remote_session/remote/testpmd_shell.py
> rename to dts/framework/remote_session/testpmd_shell.py
> diff --git a/dts/framework/settings.py b/dts/framework/settings.py
> index cfa39d011b..bf86861efb 100644
> --- a/dts/framework/settings.py
> +++ b/dts/framework/settings.py
> @@ -6,7 +6,7 @@
>   import argparse
>   import os
>   from collections.abc import Callable, Iterable, Sequence
> -from dataclasses import dataclass
> +from dataclasses import dataclass, field
>   from pathlib import Path
>   from typing import Any, TypeVar
>   
> @@ -22,7 +22,7 @@ def __init__(
>               option_strings: Sequence[str],
>               dest: str,
>               nargs: str | int | None = None,
> -            const: str | None = None,
> +            const: bool | None = None,
>               default: str = None,
>               type: Callable[[str], _T | argparse.FileType | None] = None,
>               choices: Iterable[_T] | None = None,
> @@ -32,6 +32,12 @@ def __init__(
>           ) -> None:
>               env_var_value = os.environ.get(env_var)
>               default = env_var_value or default
> +            if const is not None:
> +                nargs = 0
> +                default = const if env_var_value else default
> +                type = None
> +                choices = None
> +                metavar = None
>               super(_EnvironmentArgument, self).__init__(
>                   option_strings,
>                   dest,
> @@ -52,22 +58,28 @@ def __call__(
>               values: Any,
>               option_string: str = None,
>           ) -> None:
> -            setattr(namespace, self.dest, values)
> +            if self.const is not None:
> +                setattr(namespace, self.dest, self.const)
> +            else:
> +                setattr(namespace, self.dest, values)
>   
>       return _EnvironmentArgument
>   
>   
> -@dataclass(slots=True, frozen=True)
> +@dataclass(slots=True)
>   class _Settings:
> -    config_file_path: str
> -    output_dir: str
> -    timeout: float
> -    verbose: bool
> -    skip_setup: bool
> -    dpdk_tarball_path: Path
> -    compile_timeout: float
> -    test_cases: list
> -    re_run: int
> +    config_file_path: Path = Path(__file__).parent.parent.joinpath("conf.yaml")
> +    output_dir: str = "output"
> +    timeout: float = 15
> +    verbose: bool = False
> +    skip_setup: bool = False
> +    dpdk_tarball_path: Path | str = "dpdk.tar.xz"
> +    compile_timeout: float = 1200
> +    test_cases: list[str] = field(default_factory=list)
> +    re_run: int = 0
> +
> +
> +SETTINGS: _Settings = _Settings()
>   
>   
>   def _get_parser() -> argparse.ArgumentParser:
> @@ -81,7 +93,8 @@ def _get_parser() -> argparse.ArgumentParser:
>       parser.add_argument(
>           "--config-file",
>           action=_env_arg("DTS_CFG_FILE"),
> -        default="conf.yaml",
> +        default=SETTINGS.config_file_path,
> +        type=Path,
>           help="[DTS_CFG_FILE] configuration file that describes the test cases, SUTs "
>           "and targets.",
>       )
> @@ -90,7 +103,7 @@ def _get_parser() -> argparse.ArgumentParser:
>           "--output-dir",
>           "--output",
>           action=_env_arg("DTS_OUTPUT_DIR"),
> -        default="output",
> +        default=SETTINGS.output_dir,
>           help="[DTS_OUTPUT_DIR] Output directory where dts logs and results are saved.",
>       )
>   
> @@ -98,7 +111,7 @@ def _get_parser() -> argparse.ArgumentParser:
>           "-t",
>           "--timeout",
>           action=_env_arg("DTS_TIMEOUT"),
> -        default=15,
> +        default=SETTINGS.timeout,
>           type=float,
>           help="[DTS_TIMEOUT] The default timeout for all DTS operations except for "
>           "compiling DPDK.",
> @@ -108,8 +121,9 @@ def _get_parser() -> argparse.ArgumentParser:
>           "-v",
>           "--verbose",
>           action=_env_arg("DTS_VERBOSE"),
> -        default="N",
> -        help="[DTS_VERBOSE] Set to 'Y' to enable verbose output, logging all messages "
> +        default=SETTINGS.verbose,
> +        const=True,
> +        help="[DTS_VERBOSE] Specify to enable verbose output, logging all messages "
>           "to the console.",
>       )
>   
> @@ -117,8 +131,8 @@ def _get_parser() -> argparse.ArgumentParser:
>           "-s",
>           "--skip-setup",
>           action=_env_arg("DTS_SKIP_SETUP"),
> -        default="N",
> -        help="[DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT and TG nodes.",
> +        const=True,
> +        help="[DTS_SKIP_SETUP] Specify to skip all setup steps on SUT and TG nodes.",
>       )
>   
>       parser.add_argument(
> @@ -126,7 +140,7 @@ def _get_parser() -> argparse.ArgumentParser:
>           "--snapshot",
>           "--git-ref",
>           action=_env_arg("DTS_DPDK_TARBALL"),
> -        default="dpdk.tar.xz",
> +        default=SETTINGS.dpdk_tarball_path,
>           type=Path,
>           help="[DTS_DPDK_TARBALL] Path to DPDK source code tarball or a git commit ID, "
>           "tag ID or tree ID to test. To test local changes, first commit them, "
> @@ -136,7 +150,7 @@ def _get_parser() -> argparse.ArgumentParser:
>       parser.add_argument(
>           "--compile-timeout",
>           action=_env_arg("DTS_COMPILE_TIMEOUT"),
> -        default=1200,
> +        default=SETTINGS.compile_timeout,
>           type=float,
>           help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
>       )
> @@ -153,7 +167,7 @@ def _get_parser() -> argparse.ArgumentParser:
>           "--re-run",
>           "--re_run",
>           action=_env_arg("DTS_RERUN"),
> -        default=0,
> +        default=SETTINGS.re_run,
>           type=int,
>           help="[DTS_RERUN] Re-run each test case the specified amount of times "
>           "if a test failure occurs",
> @@ -162,23 +176,21 @@ def _get_parser() -> argparse.ArgumentParser:
>       return parser
>   
>   
> -def _get_settings() -> _Settings:
> +def set_settings() -> None:
>       parsed_args = _get_parser().parse_args()
> -    return _Settings(
> -        config_file_path=parsed_args.config_file,
> -        output_dir=parsed_args.output_dir,
> -        timeout=parsed_args.timeout,
> -        verbose=(parsed_args.verbose == "Y"),
> -        skip_setup=(parsed_args.skip_setup == "Y"),
> -        dpdk_tarball_path=Path(
> -            DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir)
> -        )
> +    global SETTINGS
> +    SETTINGS.config_file_path = parsed_args.config_file
> +    SETTINGS.output_dir = parsed_args.output_dir
> +    SETTINGS.timeout = parsed_args.timeout
> +    SETTINGS.verbose = parsed_args.verbose
> +    SETTINGS.skip_setup = parsed_args.skip_setup
> +    SETTINGS.dpdk_tarball_path = (
> +        Path(DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir))
>           if not os.path.exists(parsed_args.tarball)
> -        else Path(parsed_args.tarball),
> -        compile_timeout=parsed_args.compile_timeout,
> -        test_cases=parsed_args.test_cases.split(",") if parsed_args.test_cases else [],
> -        re_run=parsed_args.re_run,
> +        else Path(parsed_args.tarball)
>       )
> -
> -
> -SETTINGS: _Settings = _get_settings()
> +    SETTINGS.compile_timeout = parsed_args.compile_timeout
> +    SETTINGS.test_cases = (
> +        parsed_args.test_cases.split(",") if parsed_args.test_cases else []
> +    )
> +    SETTINGS.re_run = parsed_args.re_run
> diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
> index 3b890c0451..b381990d98 100644
> --- a/dts/framework/test_suite.py
> +++ b/dts/framework/test_suite.py
> @@ -26,8 +26,7 @@
>   from .logger import DTSLOG, getLogger
>   from .settings import SETTINGS
>   from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult
> -from .testbed_model import SutNode, TGNode
> -from .testbed_model.hw.port import Port, PortLink
> +from .testbed_model import Port, PortLink, SutNode, TGNode
>   from .utils import get_packet_summaries
>   
>   
> diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
> index 5cbb859e47..8ced05653b 100644
> --- a/dts/framework/testbed_model/__init__.py
> +++ b/dts/framework/testbed_model/__init__.py
> @@ -9,15 +9,9 @@
>   
>   # pylama:ignore=W0611
>   
> -from .hw import (
> -    LogicalCore,
> -    LogicalCoreCount,
> -    LogicalCoreCountFilter,
> -    LogicalCoreList,
> -    LogicalCoreListFilter,
> -    VirtualDevice,
> -    lcore_filter,
> -)
> +from .cpu import LogicalCoreCount, LogicalCoreCountFilter, LogicalCoreList
>   from .node import Node
> +from .port import Port, PortLink
>   from .sut_node import SutNode
>   from .tg_node import TGNode
> +from .virtual_device import VirtualDevice
> diff --git a/dts/framework/testbed_model/common.py b/dts/framework/testbed_model/common.py
> new file mode 100644
> index 0000000000..9222f57847
> --- /dev/null
> +++ b/dts/framework/testbed_model/common.py
> @@ -0,0 +1,29 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2023 PANTHEON.tech s.r.o.
> +
> +
> +class MesonArgs(object):
> +    """
> +    Aggregate the arguments needed to build DPDK:
> +    default_library: Default library type, Meson allows "shared", "static" and "both".
> +               Defaults to None, in which case the argument won't be used.
> +    Keyword arguments: The arguments found in meson_options.txt in root DPDK directory.
> +               Do not use -D with them, for example:
> +               meson_args = MesonArgs(enable_kmods=True).
> +    """
> +
> +    _default_library: str
> +
> +    def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
> +        self._default_library = (
> +            f"--default-library={default_library}" if default_library else ""
> +        )
> +        self._dpdk_args = " ".join(
> +            (
> +                f"-D{dpdk_arg_name}={dpdk_arg_value}"
> +                for dpdk_arg_name, dpdk_arg_value in dpdk_args.items()
> +            )
> +        )
> +
> +    def __str__(self) -> str:
> +        return " ".join(f"{self._default_library} {self._dpdk_args}".split())
> diff --git a/dts/framework/testbed_model/hw/cpu.py b/dts/framework/testbed_model/cpu.py
> similarity index 95%
> rename from dts/framework/testbed_model/hw/cpu.py
> rename to dts/framework/testbed_model/cpu.py
> index d1918a12dc..8fe785dfe4 100644
> --- a/dts/framework/testbed_model/hw/cpu.py
> +++ b/dts/framework/testbed_model/cpu.py
> @@ -272,3 +272,16 @@ def filter(self) -> list[LogicalCore]:
>               )
>   
>           return filtered_lcores
> +
> +
> +def lcore_filter(
> +    core_list: list[LogicalCore],
> +    filter_specifier: LogicalCoreCount | LogicalCoreList,
> +    ascending: bool,
> +) -> LogicalCoreFilter:
> +    if isinstance(filter_specifier, LogicalCoreList):
> +        return LogicalCoreListFilter(core_list, filter_specifier, ascending)
> +    elif isinstance(filter_specifier, LogicalCoreCount):
> +        return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
> +    else:
> +        raise ValueError(f"Unsupported filter {filter_specifier}")
> diff --git a/dts/framework/testbed_model/hw/__init__.py b/dts/framework/testbed_model/hw/__init__.py
> deleted file mode 100644
> index 88ccac0b0e..0000000000
> --- a/dts/framework/testbed_model/hw/__init__.py
> +++ /dev/null
> @@ -1,27 +0,0 @@
> -# SPDX-License-Identifier: BSD-3-Clause
> -# Copyright(c) 2023 PANTHEON.tech s.r.o.
> -
> -# pylama:ignore=W0611
> -
> -from .cpu import (
> -    LogicalCore,
> -    LogicalCoreCount,
> -    LogicalCoreCountFilter,
> -    LogicalCoreFilter,
> -    LogicalCoreList,
> -    LogicalCoreListFilter,
> -)
> -from .virtual_device import VirtualDevice
> -
> -
> -def lcore_filter(
> -    core_list: list[LogicalCore],
> -    filter_specifier: LogicalCoreCount | LogicalCoreList,
> -    ascending: bool,
> -) -> LogicalCoreFilter:
> -    if isinstance(filter_specifier, LogicalCoreList):
> -        return LogicalCoreListFilter(core_list, filter_specifier, ascending)
> -    elif isinstance(filter_specifier, LogicalCoreCount):
> -        return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
> -    else:
> -        raise ValueError(f"Unsupported filter r{filter_specifier}")
> diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/testbed_model/linux_session.py
> similarity index 98%
> rename from dts/framework/remote_session/linux_session.py
> rename to dts/framework/testbed_model/linux_session.py
> index a3f1a6bf3b..7b60b5353f 100644
> --- a/dts/framework/remote_session/linux_session.py
> +++ b/dts/framework/testbed_model/linux_session.py
> @@ -9,10 +9,10 @@
>   from typing_extensions import NotRequired
>   
>   from framework.exception import RemoteCommandExecutionError
> -from framework.testbed_model import LogicalCore
> -from framework.testbed_model.hw.port import Port
>   from framework.utils import expand_range
>   
> +from .cpu import LogicalCore
> +from .port import Port
>   from .posix_session import PosixSession
>   
>   
> diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
> index fc01e0bf8e..23efa79c50 100644
> --- a/dts/framework/testbed_model/node.py
> +++ b/dts/framework/testbed_model/node.py
> @@ -12,23 +12,26 @@
>   from typing import Any, Callable, Type, Union
>   
>   from framework.config import (
> +    OS,
>       BuildTargetConfiguration,
>       ExecutionConfiguration,
>       NodeConfiguration,
>   )
> +from framework.exception import ConfigurationError
>   from framework.logger import DTSLOG, getLogger
> -from framework.remote_session import InteractiveShellType, OSSession, create_session
>   from framework.settings import SETTINGS
>   
> -from .hw import (
> +from .cpu import (
>       LogicalCore,
>       LogicalCoreCount,
>       LogicalCoreList,
>       LogicalCoreListFilter,
> -    VirtualDevice,
>       lcore_filter,
>   )
> -from .hw.port import Port
> +from .linux_session import LinuxSession
> +from .os_session import InteractiveShellType, OSSession
> +from .port import Port
> +from .virtual_device import VirtualDevice
>   
>   
>   class Node(ABC):
> @@ -69,6 +72,7 @@ def __init__(self, node_config: NodeConfiguration):
>       def _init_ports(self) -> None:
>           self.ports = [Port(self.name, port_config) for port_config in self.config.ports]
>           self.main_session.update_ports(self.ports)
> +
>           for port in self.ports:
>               self.configure_port_state(port)
>   
> @@ -249,3 +253,13 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
>               return lambda *args: None
>           else:
>               return func
> +
> +
> +def create_session(
> +    node_config: NodeConfiguration, name: str, logger: DTSLOG
> +) -> OSSession:
> +    match node_config.os:
> +        case OS.linux:
> +            return LinuxSession(node_config, name, logger)
> +        case _:
> +            raise ConfigurationError(f"Unsupported OS {node_config.os}")
> diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/testbed_model/os_session.py
> similarity index 97%
> rename from dts/framework/remote_session/os_session.py
> rename to dts/framework/testbed_model/os_session.py
> index 8a709eac1c..19ba9a69d5 100644
> --- a/dts/framework/remote_session/os_session.py
> +++ b/dts/framework/testbed_model/os_session.py
> @@ -10,19 +10,19 @@
>   
>   from framework.config import Architecture, NodeConfiguration, NodeInfo
>   from framework.logger import DTSLOG
> -from framework.remote_session.remote import InteractiveShell
> -from framework.settings import SETTINGS
> -from framework.testbed_model import LogicalCore
> -from framework.testbed_model.hw.port import Port
> -from framework.utils import MesonArgs
> -
> -from .remote import (
> +from framework.remote_session import (
>       CommandResult,
>       InteractiveRemoteSession,
> +    InteractiveShell,
>       RemoteSession,
>       create_interactive_session,
>       create_remote_session,
>   )
> +from framework.settings import SETTINGS
> +
> +from .common import MesonArgs
> +from .cpu import LogicalCore
> +from .port import Port
>   
>   InteractiveShellType = TypeVar("InteractiveShellType", bound=InteractiveShell)
>   
> diff --git a/dts/framework/testbed_model/hw/port.py b/dts/framework/testbed_model/port.py
> similarity index 100%
> rename from dts/framework/testbed_model/hw/port.py
> rename to dts/framework/testbed_model/port.py
> diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/testbed_model/posix_session.py
> similarity index 99%
> rename from dts/framework/remote_session/posix_session.py
> rename to dts/framework/testbed_model/posix_session.py
> index 5da0516e05..9a95baa353 100644
> --- a/dts/framework/remote_session/posix_session.py
> +++ b/dts/framework/testbed_model/posix_session.py
> @@ -9,8 +9,8 @@
>   from framework.config import Architecture, NodeInfo
>   from framework.exception import DPDKBuildError, RemoteCommandExecutionError
>   from framework.settings import SETTINGS
> -from framework.utils import MesonArgs
>   
> +from .common import MesonArgs
>   from .os_session import OSSession
>   
>   
> diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
> index 202aebfd06..2b7e104dcd 100644
> --- a/dts/framework/testbed_model/sut_node.py
> +++ b/dts/framework/testbed_model/sut_node.py
> @@ -15,12 +15,14 @@
>       NodeInfo,
>       SutNodeConfiguration,
>   )
> -from framework.remote_session import CommandResult, InteractiveShellType, OSSession
> +from framework.remote_session import CommandResult
>   from framework.settings import SETTINGS
> -from framework.utils import MesonArgs
>   
> -from .hw import LogicalCoreCount, LogicalCoreList, VirtualDevice
> +from .common import MesonArgs
> +from .cpu import LogicalCoreCount, LogicalCoreList
>   from .node import Node
> +from .os_session import InteractiveShellType, OSSession
> +from .virtual_device import VirtualDevice
>   
>   
>   class EalParameters(object):
> diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
> index 27025cfa31..166eb8430e 100644
> --- a/dts/framework/testbed_model/tg_node.py
> +++ b/dts/framework/testbed_model/tg_node.py
> @@ -16,16 +16,11 @@
>   
>   from scapy.packet import Packet  # type: ignore[import]
>   
> -from framework.config import (
> -    ScapyTrafficGeneratorConfig,
> -    TGNodeConfiguration,
> -    TrafficGeneratorType,
> -)
> -from framework.exception import ConfigurationError
> -
> -from .capturing_traffic_generator import CapturingTrafficGenerator
> -from .hw.port import Port
> +from framework.config import TGNodeConfiguration
> +
>   from .node import Node
> +from .port import Port
> +from .traffic_generator import CapturingTrafficGenerator, create_traffic_generator
>   
>   
>   class TGNode(Node):
> @@ -80,20 +75,3 @@ def close(self) -> None:
>           """Free all resources used by the node"""
>           self.traffic_generator.close()
>           super(TGNode, self).close()
> -
> -
> -def create_traffic_generator(
> -    tg_node: TGNode, traffic_generator_config: ScapyTrafficGeneratorConfig
> -) -> CapturingTrafficGenerator:
> -    """A factory function for creating traffic generator object from user config."""
> -
> -    from .scapy import ScapyTrafficGenerator
> -
> -    match traffic_generator_config.traffic_generator_type:
> -        case TrafficGeneratorType.SCAPY:
> -            return ScapyTrafficGenerator(tg_node, traffic_generator_config)
> -        case _:
> -            raise ConfigurationError(
> -                "Unknown traffic generator: "
> -                f"{traffic_generator_config.traffic_generator_type}"
> -            )
> diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
> new file mode 100644
> index 0000000000..11bfa1ee0f
> --- /dev/null
> +++ b/dts/framework/testbed_model/traffic_generator/__init__.py
> @@ -0,0 +1,24 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2023 PANTHEON.tech s.r.o.
> +
> +from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType
> +from framework.exception import ConfigurationError
> +from framework.testbed_model.node import Node
> +
> +from .capturing_traffic_generator import CapturingTrafficGenerator
> +from .scapy import ScapyTrafficGenerator
> +
> +
> +def create_traffic_generator(
> +    tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig
> +) -> CapturingTrafficGenerator:
> +    """A factory function for creating traffic generator object from user config."""
> +
> +    match traffic_generator_config.traffic_generator_type:
> +        case TrafficGeneratorType.SCAPY:
> +            return ScapyTrafficGenerator(tg_node, traffic_generator_config)
> +        case _:
> +            raise ConfigurationError(
> +                "Unknown traffic generator: "
> +                f"{traffic_generator_config.traffic_generator_type}"
> +            )
> diff --git a/dts/framework/testbed_model/capturing_traffic_generator.py b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
> similarity index 99%
> rename from dts/framework/testbed_model/capturing_traffic_generator.py
> rename to dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
> index ab98987f8e..765378fb4a 100644
> --- a/dts/framework/testbed_model/capturing_traffic_generator.py
> +++ b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
> @@ -16,9 +16,9 @@
>   from scapy.packet import Packet  # type: ignore[import]
>   
>   from framework.settings import SETTINGS
> +from framework.testbed_model.port import Port
>   from framework.utils import get_packet_summaries
>   
> -from .hw.port import Port
>   from .traffic_generator import TrafficGenerator
>   
>   
> diff --git a/dts/framework/testbed_model/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py
> similarity index 96%
> rename from dts/framework/testbed_model/scapy.py
> rename to dts/framework/testbed_model/traffic_generator/scapy.py
> index af0d4dbb25..395d7e9bc0 100644
> --- a/dts/framework/testbed_model/scapy.py
> +++ b/dts/framework/testbed_model/traffic_generator/scapy.py
> @@ -24,16 +24,15 @@
>   from scapy.packet import Packet  # type: ignore[import]
>   
>   from framework.config import OS, ScapyTrafficGeneratorConfig
> -from framework.logger import DTSLOG, getLogger
>   from framework.remote_session import PythonShell
>   from framework.settings import SETTINGS
> +from framework.testbed_model.node import Node
> +from framework.testbed_model.port import Port
>   
>   from .capturing_traffic_generator import (
>       CapturingTrafficGenerator,
>       _get_default_capture_name,
>   )
> -from .hw.port import Port
> -from .tg_node import TGNode
>   
>   """
>   ========= BEGIN RPC FUNCTIONS =========
> @@ -191,15 +190,9 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator):
>       session: PythonShell
>       rpc_server_proxy: xmlrpc.client.ServerProxy
>       _config: ScapyTrafficGeneratorConfig
> -    _tg_node: TGNode
> -    _logger: DTSLOG
> -
> -    def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig):
> -        self._config = config
> -        self._tg_node = tg_node
> -        self._logger = getLogger(
> -            f"{self._tg_node.name} {self._config.traffic_generator_type}"
> -        )
> +
> +    def __init__(self, tg_node: Node, config: ScapyTrafficGeneratorConfig):
> +        super().__init__(tg_node, config)
>   
>           assert (
>               self._tg_node.config.os == OS.linux
> diff --git a/dts/framework/testbed_model/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
> similarity index 80%
> rename from dts/framework/testbed_model/traffic_generator.py
> rename to dts/framework/testbed_model/traffic_generator/traffic_generator.py
> index 28c35d3ce4..ea7c3963da 100644
> --- a/dts/framework/testbed_model/traffic_generator.py
> +++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
> @@ -12,11 +12,12 @@
>   
>   from scapy.packet import Packet  # type: ignore[import]
>   
> -from framework.logger import DTSLOG
> +from framework.config import TrafficGeneratorConfig
> +from framework.logger import DTSLOG, getLogger
> +from framework.testbed_model.node import Node
> +from framework.testbed_model.port import Port
>   from framework.utils import get_packet_summaries
>   
> -from .hw.port import Port
> -
>   
>   class TrafficGenerator(ABC):
>       """The base traffic generator.
> @@ -24,8 +25,17 @@ class TrafficGenerator(ABC):
>       Defines the few basic methods that each traffic generator must implement.
>       """
>   
> +    _config: TrafficGeneratorConfig
> +    _tg_node: Node
>       _logger: DTSLOG
>   
> +    def __init__(self, tg_node: Node, config: TrafficGeneratorConfig):
> +        self._config = config
> +        self._tg_node = tg_node
> +        self._logger = getLogger(
> +            f"{self._tg_node.name} {self._config.traffic_generator_type}"
> +        )
> +
>       def send_packet(self, packet: Packet, port: Port) -> None:
>           """Send a packet and block until it is fully sent.
>   
> diff --git a/dts/framework/testbed_model/hw/virtual_device.py b/dts/framework/testbed_model/virtual_device.py
> similarity index 100%
> rename from dts/framework/testbed_model/hw/virtual_device.py
> rename to dts/framework/testbed_model/virtual_device.py
> diff --git a/dts/framework/utils.py b/dts/framework/utils.py
> index d27c2c5b5f..07e2d1c076 100644
> --- a/dts/framework/utils.py
> +++ b/dts/framework/utils.py
> @@ -7,7 +7,6 @@
>   import json
>   import os
>   import subprocess
> -import sys
>   from enum import Enum
>   from pathlib import Path
>   from subprocess import SubprocessError
> @@ -16,6 +15,8 @@
>   
>   from .exception import ConfigurationError
>   
> +REGEX_FOR_PCI_ADDRESS = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/"
> +
>   
>   class StrEnum(Enum):
>       @staticmethod
> @@ -28,25 +29,6 @@ def __str__(self) -> str:
>           return self.name
>   
>   
> -REGEX_FOR_PCI_ADDRESS = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/"
> -
> -
> -def check_dts_python_version() -> None:
> -    if sys.version_info.major < 3 or (
> -        sys.version_info.major == 3 and sys.version_info.minor < 10
> -    ):
> -        print(
> -            RED(
> -                (
> -                    "WARNING: DTS execution node's python version is lower than"
> -                    "python 3.10, is deprecated and will not work in future releases."
> -                )
> -            ),
> -            file=sys.stderr,
> -        )
> -        print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
> -
> -
>   def expand_range(range_str: str) -> list[int]:
>       """
>       Process range string into a list of integers. There are two possible formats:
> @@ -77,37 +59,6 @@ def get_packet_summaries(packets: list[Packet]):
>       return f"Packet contents: \n{packet_summaries}"
>   
>   
> -def RED(text: str) -> str:
> -    return f"\u001B[31;1m{str(text)}\u001B[0m"
> -
> -
> -class MesonArgs(object):
> -    """
> -    Aggregate the arguments needed to build DPDK:
> -    default_library: Default library type, Meson allows "shared", "static" and "both".
> -               Defaults to None, in which case the argument won't be used.
> -    Keyword arguments: The arguments found in meson_options.txt in root DPDK directory.
> -               Do not use -D with them, for example:
> -               meson_args = MesonArgs(enable_kmods=True).
> -    """
> -
> -    _default_library: str
> -
> -    def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
> -        self._default_library = (
> -            f"--default-library={default_library}" if default_library else ""
> -        )
> -        self._dpdk_args = " ".join(
> -            (
> -                f"-D{dpdk_arg_name}={dpdk_arg_value}"
> -                for dpdk_arg_name, dpdk_arg_value in dpdk_args.items()
> -            )
> -        )
> -
> -    def __str__(self) -> str:
> -        return " ".join(f"{self._default_library} {self._dpdk_args}".split())
> -
> -
>   class _TarCompressionFormat(StrEnum):
>       """Compression formats that tar can use.
>   
> diff --git a/dts/main.py b/dts/main.py
> index 43311fa847..060ff1b19a 100755
> --- a/dts/main.py
> +++ b/dts/main.py
> @@ -10,10 +10,11 @@
>   
>   import logging
>   
> -from framework import dts
> +from framework import dts, settings
>   
>   
>   def main() -> None:
> +    settings.set_settings()
>       dts.run_all()
>   
>   

My only nitpick comment would be on the name of the file common.py, which
only contains the MesonArgs class. Looks good otherwise.

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v4 1/4] dts: code adjustments for sphinx
  2023-10-22 14:30         ` Yoan Picchi
@ 2023-10-23  6:44           ` Juraj Linkeš
  2023-10-23 11:52             ` Yoan Picchi
  0 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-10-23  6:44 UTC (permalink / raw)
  To: Yoan Picchi
  Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	jspewock, probb, dev

<snip>

>
> My only nitpick comment would be on the name of the file common.py, which
> only contains the MesonArgs class. Looks good otherwise.

Could you elaborate a bit more, Yoan? The common.py module is supposed
to be extended with code common to all other modules in the
testbed_model package. Right now we only have MesonArgs which fits in
common.py, but we could also move something else into common.py. We
also could rename common.py to something else, but then the above
purpose would not be clear.

I'm finishing the docstrings soon so expect a new version where things
like these will be clearer. :-)

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v4 1/4] dts: code adjustments for sphinx
  2023-10-23  6:44           ` Juraj Linkeš
@ 2023-10-23 11:52             ` Yoan Picchi
  2023-10-24  6:39               ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Yoan Picchi @ 2023-10-23 11:52 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	jspewock, probb, dev

On 10/23/23 07:44, Juraj Linkeš wrote:
> <snip>
> 
>>
>> My only nitpick comment would be on the name of the file common.py, which
>> only contains the MesonArgs class. Looks good otherwise.
> 
> Could you elaborate a bit more, Yoan? The common.py module is supposed
> to be extended with code common to all other modules in the
> testbed_model package. Right now we only have MesonArgs which fits in
> common.py, but we could also move something else into common.py. We
> also could rename common.py to something else, but then the above
> purpose would not be clear.
> 
> I'm finishing the docstrings soon so expect a new version where things
> like these will be clearer. :-)

My issue with the name is that it isn't clear what the purpose of 
this file is. It only tells, to some extent, how it is used.
If we start adding more things in this file, then I see two options:
	- Either this is related to the current class, and thus the file could 
be named meson_arg or something along those lines.
	- Or it is unrelated to the current class, and we end up with a file 
coalescing random bits of code, which is usually a bit dirty in OOP.

Like I said, it's a bit of a nitpick, but given it's an RFC I hope 
you'll give it a thought in the next version.

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v4 1/4] dts: code adjustments for sphinx
  2023-10-23 11:52             ` Yoan Picchi
@ 2023-10-24  6:39               ` Juraj Linkeš
  2023-10-24 12:21                 ` Yoan Picchi
  0 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-10-24  6:39 UTC (permalink / raw)
  To: Yoan Picchi
  Cc: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb, dev

On Mon, Oct 23, 2023 at 1:52 PM Yoan Picchi <yoan.picchi@foss.arm.com> wrote:
>
> On 10/23/23 07:44, Juraj Linkeš wrote:
> > <snip>
> >
> >>
> >> My only nitpick comment would be on the name of the file common.py, which
> >> only contains the MesonArgs class. Looks good otherwise.
> >
> > Could you elaborate a bit more, Yoan? The common.py module is supposed
> > to be extended with code common to all other modules in the
> > testbed_model package. Right now we only have MesonArgs which fits in
> > common.py, but we could also move something else into common.py. We
> > also could rename common.py to something else, but then the above
> > purpose would not be clear.
> >
> > I'm finishing the docstrings soon so expect a new version where things
> > like these will be clearer. :-)
>
> My issue with the name is that it isn't clear what the purpose of
> this file is. It only tells, to some extent, how it is used.

Well, the name suggests it's code that's common to other modules, as
in code that other modules use, just like the MesonArgs class, which
is used in three different modules. I've chosen common.py as that's
what some of the DPDK libs (such as EAL) seem to be using for this
purpose. Maybe there's a better name though or we could move the class
elsewhere.

> If we start adding more things in this file, then I see two options:
>         - Either this is related to the current class, and thus the file could
> be named meson_arg or something along those lines.
>         - Or it is unrelated to the current class, and we end up with a file
> coalescing random bits of code, which is usually a bit dirty in OOP.
>

It's code that's reused in multiple places; I'm not sure whether that
qualifies as random bits of code. It could be in os_session.py (as
that would work import-wise), but that's not a good place to put it,
as the logic is actually utilized in sut_node.py. But putting it into
sut_node.py doesn't work because of imports. Maybe we could just put
it into utils.py in the framework dir, which is a very similar file,
if not the same. My original thoughts were to have a file with common
code in each package (if needed), depending on where the code is used
(package level-wise), but it looks like we can just have this sort of
common utilities on the top level.
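
A minimal sketch of the import cycle (illustrative module layout; only
the import direction matters):

    # testbed_model/os_session.py
    # if MesonArgs lived in sut_node.py, we'd need:
    from .sut_node import MesonArgs

    # testbed_model/sut_node.py
    # but sut_node.py already needs:
    from .os_session import OSSession

    # importing either module would then typically fail with
    # "ImportError: cannot import name ... (most likely due to a
    # circular import)"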

> Like I said, it's a bit of a nitpick, but given it's an RFC I hope
> you'll give it a thought in the next version.

I thought a lot about this before submitting this RFC, but I wanted
someone to have a look at this exact thing - whether the common.py
file makes sense and what is the better name, common.py or utils.py
(which is why I have both in this patch). I'll move the MesonArgs
class to the top level utils.py and remove the common.py file as that
makes the most sense to me now.

If you have any recommendations we may be able to make this even better.

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v4 1/4] dts: code adjustments for sphinx
  2023-10-24  6:39               ` Juraj Linkeš
@ 2023-10-24 12:21                 ` Yoan Picchi
  0 siblings, 0 replies; 255+ messages in thread
From: Yoan Picchi @ 2023-10-24 12:21 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb, dev

On 10/24/23 07:39, Juraj Linkeš wrote:
> On Mon, Oct 23, 2023 at 1:52 PM Yoan Picchi <yoan.picchi@foss.arm.com> wrote:
>>
>> On 10/23/23 07:44, Juraj Linkeš wrote:
>>> <snip>
>>>
>>>>
>>>> My only nitpick comment would be on the name of the file common.py, which
>>>> only contains the MesonArgs class. Looks good otherwise.
>>>
>>> Could you elaborate a bit more, Yoan? The common.py module is supposed
>>> to be extended with code common to all other modules in the
>>> testbed_model package. Right now we only have MesonArgs which fits in
>>> common.py, but we could also move something else into common.py. We
>>> also could rename common.py to something else, but then the above
>>> purpose would not be clear.
>>>
>>> I'm finishing the docstrings soon so expect a new version where things
>>> like these will be clearer. :-)
>>
>> My issue with the name is that it isn't clear what the purpose of
>> this file is. It only tells, to some extent, how it is used.
> 
> Well, the name suggests it's code that's common to other modules, as
> in code that other modules use, just like the MesonArgs class, which
> is used in three different modules. I've chosen common.py as that's
> what some of the DPDK libs (such as EAL) seem to be using for this
> purpose. Maybe there's a better name though or we could move the class
> elsewhere.
> 
>> If we start adding more things in this file, then I see two options:
>>          - Either this is related to the current class, and thus the file could
>> be named meson_arg or something along those lines.
>>          - Or it is unrelated to the current class, and we end up with a file
>> coalescing random bits of code, which is usually a bit dirty in OOP.
>>
> 
> It's code that's reused in multiple places; I'm not sure whether that
> qualifies as random bits of code. It could be in os_session.py (as
> that would work import-wise), but that's not a good place to put it,
> as the logic is actually utilized in sut_node.py. But putting it into
> sut_node.py doesn't work because of imports. Maybe we could just put
> it into utils.py in the framework dir, which is a very similar file,
> if not the same. My original thoughts were to have a file with common
> code in each package (if needed), depending on where the code is used
> (package level-wise), but it looks like we can just have this sort of
> common utilities on the top level.
> 
>> Like I said, it's a bit of a nitpick, but given it's an RFC I hope
>> you'll give it a thought in the next version.
> 
> I thought a lot about this before submitting this RFC, but I wanted
> someone to have a look at this exact thing - whether the common.py
> file makes sense and what is the better name, common.py or utils.py
> (which is why I have both in this patch). I'll move the MesonArgs
> class to the top level utils.py and remove the common.py file as that
> makes the most sense to me now.
> 
> If you have any recommendations we may be able to make this even better.

I didn't mean to imply you did not think a lot about it; sorry if it 
came across that way.
I prefer the idea of utils.py to a common.py, be it at package level or 
above. There might also be the option of __init__.py, but I'm not sure 
about it.
That being said, I'm relatively new to DPDK and didn't know common.py 
was a common thing in EAL, so I'll leave it up to you.

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v4 3/4] dts: add doc generation
  2023-08-31 10:04       ` [RFC PATCH v4 3/4] dts: add doc generation Juraj Linkeš
  2023-09-20  7:08         ` Juraj Linkeš
@ 2023-10-26 16:43         ` Yoan Picchi
  2023-10-27  9:52           ` Juraj Linkeš
  1 sibling, 1 reply; 255+ messages in thread
From: Yoan Picchi @ 2023-10-26 16:43 UTC (permalink / raw)
  To: Juraj Linkeš,
	thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	jspewock, probb
  Cc: dev

On 8/31/23 11:04, Juraj Linkeš wrote:
> The tool used to generate developer docs is sphinx, which is already
> used in DPDK. The configuration is kept the same to preserve the style.
> 
> Sphinx generates the documentation from Python docstrings. The docstring
> format most suitable for DTS seems to be the Google format [0] which
> requires the sphinx.ext.napoleon extension.
> 
> There are two requirements for building DTS docs:
> * The same Python version as DTS or higher, because Sphinx imports the
>    code.
> * Also the same Python packages as DTS, for the same reason.
> 
> [0] https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>   buildtools/call-sphinx-build.py | 29 ++++++++++++-------
>   doc/api/meson.build             |  1 +
>   doc/guides/conf.py              | 32 +++++++++++++++++----
>   doc/guides/meson.build          |  1 +
>   doc/guides/tools/dts.rst        | 29 +++++++++++++++++++
>   dts/doc/doc-index.rst           | 17 +++++++++++
>   dts/doc/meson.build             | 50 +++++++++++++++++++++++++++++++++
>   dts/meson.build                 | 16 +++++++++++
>   meson.build                     |  1 +
>   9 files changed, 161 insertions(+), 15 deletions(-)
>   create mode 100644 dts/doc/doc-index.rst
>   create mode 100644 dts/doc/meson.build
>   create mode 100644 dts/meson.build
> 
> diff --git a/buildtools/call-sphinx-build.py b/buildtools/call-sphinx-build.py
> index 39a60d09fa..c2f3acfb1d 100755
> --- a/buildtools/call-sphinx-build.py
> +++ b/buildtools/call-sphinx-build.py
> @@ -3,37 +3,46 @@
>   # Copyright(c) 2019 Intel Corporation
>   #
>   
> +import argparse
>   import sys
>   import os
>   from os.path import join
>   from subprocess import run, PIPE, STDOUT
>   from packaging.version import Version
>   
> -# assign parameters to variables
> -(sphinx, version, src, dst, *extra_args) = sys.argv[1:]
> +parser = argparse.ArgumentParser()
> +parser.add_argument('sphinx')
> +parser.add_argument('version')
> +parser.add_argument('src')
> +parser.add_argument('dst')
> +parser.add_argument('--dts-root', default='.')
> +args, extra_args = parser.parse_known_args()
>   
>   # set the version in environment for sphinx to pick up
> -os.environ['DPDK_VERSION'] = version
> +os.environ['DPDK_VERSION'] = args.version
> +os.environ['DTS_ROOT'] = args.dts_root
>   
>   # for sphinx version >= 1.7 add parallelism using "-j auto"
> -ver = run([sphinx, '--version'], stdout=PIPE,
> +ver = run([args.sphinx, '--version'], stdout=PIPE,
>             stderr=STDOUT).stdout.decode().split()[-1]
> -sphinx_cmd = [sphinx] + extra_args
> +sphinx_cmd = [args.sphinx] + extra_args
>   if Version(ver) >= Version('1.7'):
>       sphinx_cmd += ['-j', 'auto']
>   
>   # find all the files sphinx will process so we can write them as dependencies
>   srcfiles = []
> -for root, dirs, files in os.walk(src):
> +for root, dirs, files in os.walk(args.src):
>       srcfiles.extend([join(root, f) for f in files])
>   
>   # run sphinx, putting the html output in a "html" directory
> -with open(join(dst, 'sphinx_html.out'), 'w') as out:
> -    process = run(sphinx_cmd + ['-b', 'html', src, join(dst, 'html')],
> -                  stdout=out)
> +with open(join(args.dst, 'sphinx_html.out'), 'w') as out:
> +    process = run(
> +        sphinx_cmd + ['-b', 'html', args.src, join(args.dst, 'html')],
> +        stdout=out
> +    )
>   
>   # create a gcc format .d file giving all the dependencies of this doc build
> -with open(join(dst, '.html.d'), 'w') as d:
> +with open(join(args.dst, '.html.d'), 'w') as d:
>       d.write('html: ' + ' '.join(srcfiles) + '\n')
>   
>   sys.exit(process.returncode)
> diff --git a/doc/api/meson.build b/doc/api/meson.build
> index 2876a78a7e..1f0c725a94 100644
> --- a/doc/api/meson.build
> +++ b/doc/api/meson.build
> @@ -1,6 +1,7 @@
>   # SPDX-License-Identifier: BSD-3-Clause
>   # Copyright(c) 2018 Luca Boccassi <bluca@debian.org>
>   
> +doc_api_build_dir = meson.current_build_dir()
>   doxygen = find_program('doxygen', required: get_option('enable_docs'))
>   
>   if not doxygen.found()
> diff --git a/doc/guides/conf.py b/doc/guides/conf.py
> index 0f7ff5282d..737e5a5688 100644
> --- a/doc/guides/conf.py
> +++ b/doc/guides/conf.py
> @@ -7,10 +7,9 @@
>   from sphinx import __version__ as sphinx_version
>   from os import listdir
>   from os import environ
> -from os.path import basename
> -from os.path import dirname
> +from os.path import basename, dirname
>   from os.path import join as path_join
> -from sys import argv, stderr
> +from sys import argv, stderr, path
>   
>   import configparser
>   
> @@ -24,6 +23,29 @@
>             file=stderr)
>       pass
>   
> +extensions = ['sphinx.ext.napoleon']
> +
> +# Python docstring options
> +autodoc_default_options = {
> +    'members': True,
> +    'member-order': 'bysource',
> +    'show-inheritance': True,
> +}
> +autodoc_typehints = 'both'
> +autodoc_typehints_format = 'short'
> +napoleon_numpy_docstring = False
> +napoleon_attr_annotations = True
> +napoleon_use_ivar = True
> +napoleon_use_rtype = False
> +add_module_names = False
> +toc_object_entries = False
> +
> +# Sidebar config
> +html_theme_options = {
> +    'collapse_navigation': False,
> +    'navigation_depth': -1,
> +}
> +
>   stop_on_error = ('-W' in argv)
>   
>   project = 'Data Plane Development Kit'
> @@ -35,8 +57,8 @@
>   html_show_copyright = False
>   highlight_language = 'none'
>   
> -release = environ.setdefault('DPDK_VERSION', "None")
> -version = release
> +path.append(environ.get('DTS_ROOT'))
> +version = environ.setdefault('DPDK_VERSION', "None")
>   
>   master_doc = 'index'
>   
> diff --git a/doc/guides/meson.build b/doc/guides/meson.build
> index 51f81da2e3..8933d75f6b 100644
> --- a/doc/guides/meson.build
> +++ b/doc/guides/meson.build
> @@ -1,6 +1,7 @@
>   # SPDX-License-Identifier: BSD-3-Clause
>   # Copyright(c) 2018 Intel Corporation
>   
> +doc_guides_source_dir = meson.current_source_dir()
>   sphinx = find_program('sphinx-build', required: get_option('enable_docs'))
>   
>   if not sphinx.found()
> diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
> index 32c18ee472..98923b1467 100644
> --- a/doc/guides/tools/dts.rst
> +++ b/doc/guides/tools/dts.rst
> @@ -335,3 +335,32 @@ There are three tools used in DTS to help with code checking, style and formatti
>   These three tools are all used in ``devtools/dts-check-format.sh``,
>   the DTS code check and format script.
>   Refer to the script for usage: ``devtools/dts-check-format.sh -h``.
> +
> +
> +Building DTS API docs
> +---------------------
> +
> +To build DTS API docs, install the dependencies with Poetry, then enter its shell:
> +
> +   .. code-block:: console

I believe the code-block line should not be indented. As it is, it did 
not generate properly when I built the doc. Same for the other code 
block below.
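
For reference, this shape renders as intended in reST (directive at the
paragraph level, its content indented under it):

    To build DTS API docs, install the dependencies with Poetry, then enter its shell:

    .. code-block:: console

       poetry install --with docs
       poetry shell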

> +
> +   poetry install --with docs
> +   poetry shell
> +
> +
> +Build commands
> +~~~~~~~~~~~~~~
> +
> +The documentation is built using the standard DPDK build system.
> +
> +After entering Poetry's shell, build the documentation with:
> +
> +   .. code-block:: console
> +
> +   ninja -C build dts/doc
> +
> +The output is generated in ``build/doc/api/dts/html``.
> +
> +.. Note::
> +
> +   Make sure to fix any Sphinx warnings when adding or updating docstrings.
> diff --git a/dts/doc/doc-index.rst b/dts/doc/doc-index.rst
> new file mode 100644
> index 0000000000..f5dcd553f2
> --- /dev/null
> +++ b/dts/doc/doc-index.rst
> @@ -0,0 +1,17 @@
> +.. DPDK Test Suite documentation.
> +
> +Welcome to DPDK Test Suite's documentation!
> +===========================================
> +
> +.. toctree::
> +   :titlesonly:
> +   :caption: Contents:
> +
> +   framework
> +
> +Indices and tables
> +==================
> +
> +* :ref:`genindex`
> +* :ref:`modindex`
> +* :ref:`search`
> diff --git a/dts/doc/meson.build b/dts/doc/meson.build
> new file mode 100644
> index 0000000000..8e70eabc51
> --- /dev/null
> +++ b/dts/doc/meson.build
> @@ -0,0 +1,50 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2023 PANTHEON.tech s.r.o.
> +
> +sphinx = find_program('sphinx-build')
> +sphinx_apidoc = find_program('sphinx-apidoc')
> +
> +if not sphinx.found() or not sphinx_apidoc.found()
> +    subdir_done()
> +endif
> +
> +dts_api_framework_dir = join_paths(dts_dir, 'framework')
> +dts_api_build_dir = join_paths(doc_api_build_dir, 'dts')
> +dts_api_src = custom_target('dts_api_src',
> +        output: 'modules.rst',
> +        command: [sphinx_apidoc, '--append-syspath', '--force',
> +            '--module-first', '--separate', '-V', meson.project_version(),
> +            '-o', dts_api_build_dir, '--no-toc', '--implicit-namespaces',
> +            dts_api_framework_dir],
> +        build_by_default: false)
> +doc_targets += dts_api_src
> +doc_target_names += 'DTS_API_sphinx_sources'
> +
> +cp = find_program('cp')
> +cp_index = custom_target('cp_index',
> +        input: 'doc-index.rst',
> +        output: 'index.rst',
> +        depends: dts_api_src,
> +        command: [cp, '@INPUT@', join_paths(dts_api_build_dir, 'index.rst')],
> +        build_by_default: false)
> +doc_targets += cp_index
> +doc_target_names += 'DTS_API_sphinx_index'
> +
> +extra_sphinx_args = ['-a', '-c', doc_guides_source_dir]
> +if get_option('werror')
> +    extra_sphinx_args += '-W'
> +endif
> +
> +htmldir = join_paths(get_option('datadir'), 'doc', 'dpdk')
> +dts_api_html = custom_target('dts_api_html',
> +        output: 'html',
> +        depends: cp_index,
> +        command: ['DTS_ROOT=@0@'.format(dts_dir),
> +            sphinx_wrapper, sphinx, meson.project_version(),
> +            dts_api_build_dir, dts_api_build_dir,
> +            '--dts-root', dts_dir, extra_sphinx_args],
> +        build_by_default: false,
> +        install: false,
> +        install_dir: htmldir)

I don't entirely understand what this command does. I suspect it's the one 
that creates and populates the html directory in build/doc/api/dts/.
The main thing that confuses me is this htmldir. Mine seemed to point to 
share/doc/dpdk. Such a directory doesn't seem to be built (or I could 
not find where). That might be intended given install is set to false. 
But in that case, why bother setting it?
The thing that worries me more is that dpdk/doc/guides/meson.build does 
build some html and has the same htmldir. If we were to enable the 
install in both, wouldn't the html files overwrite each other (in particular 
index.html)?
If my understanding is correct, I believe htmldir should either be 
removed or set to a different value (<datadir>/doc/dpdk/dts ?)
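
A minimal sketch of that last option, reusing the existing variables:

    htmldir = join_paths(get_option('datadir'), 'doc', 'dpdk', 'dts')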

> +doc_targets += dts_api_html
> +doc_target_names += 'DTS_API_HTML'
> diff --git a/dts/meson.build b/dts/meson.build
> new file mode 100644
> index 0000000000..17bda07636
> --- /dev/null
> +++ b/dts/meson.build
> @@ -0,0 +1,16 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2023 PANTHEON.tech s.r.o.
> +
> +doc_targets = []
> +doc_target_names = []
> +dts_dir = meson.current_source_dir()
> +
> +subdir('doc')
> +
> +if doc_targets.length() == 0
> +    message = 'No docs targets found'
> +else
> +    message = 'Built docs:'
> +endif
> +run_target('dts/doc', command: [echo, message, doc_target_names],
> +    depends: doc_targets)
> diff --git a/meson.build b/meson.build
> index 39cb73846d..4d34dc531c 100644
> --- a/meson.build
> +++ b/meson.build
> @@ -85,6 +85,7 @@ subdir('app')
>   
>   # build docs
>   subdir('doc')
> +subdir('dts')
>   
>   # build any examples explicitly requested - useful for developers - and
>   # install any example code into the appropriate install path

When building from scratch, I had several warnings/errors.

$ meson build
[usual meson output block...]
WARNING: Target "dts/doc" has a path separator in its name.
This is not supported, it can cause unexpected failures and will become
a hard error in the future.
Configuring rte_build_config.h using configuration
Message:
=================
Applications Enabled
=================
[...]

If you change the target name, remember to change it in the doc too.



$ ninja -C build dts/doc
[create all the rst...]
[3/4] Generating dts_api_html with a custom command.
dpdk/dts/framework/remote_session/interactive_shell.py:docstring of 
framework.remote_session.interactive_shell.InteractiveShell:25: WARNING: 
Definition list ends without a blank line; unexpected unindent.

dpdk/dts/framework/remote_session/interactive_shell.py:docstring of 
framework.remote_session.interactive_shell.InteractiveShell:27: ERROR: 
Unexpected indentation.

dpdk/dts/framework/testbed_model/common.py:docstring of 
framework.testbed_model.common.MesonArgs:3: ERROR: Unexpected indentation.

dpdk/dts/framework/testbed_model/common.py:docstring of 
framework.testbed_model.common.MesonArgs:4: WARNING: Block quote ends 
without a blank line; unexpected unindent.

dpdk/dts/framework/testbed_model/linux_session.py:docstring of 
framework.testbed_model.linux_session.LshwOutput:10: ERROR: Unexpected 
indentation.

dpdk/dts/framework/testbed_model/linux_session.py:docstring of 
framework.testbed_model.linux_session.LshwOutput:13: WARNING: Block 
quote ends without a blank line; unexpected unindent.

dpdk/dts/framework/testbed_model/sut_node.py:docstring of 
framework.testbed_model.sut_node.SutNode.create_eal_parameters:3: ERROR: 
Unexpected indentation.

dpdk/dts/framework/testbed_model/sut_node.py:docstring of 
framework.testbed_model.sut_node.SutNode.create_eal_parameters:6: 
WARNING: Block quote ends without a blank line; unexpected unindent.

dpdk/dts/framework/testbed_model/sut_node.py:docstring of 
framework.testbed_model.sut_node.SutNode.create_eal_parameters:18: 
ERROR: Unexpected indentation.

dpdk/dts/framework/testbed_model/sut_node.py:docstring of 
framework.testbed_model.sut_node.SutNode.create_eal_parameters:20: 
WARNING: Block quote ends without a blank line; unexpected unindent.

dpdk/dts/framework/test_suite.py:docstring of 
framework.test_suite.TestSuite:1: ERROR: Unknown target name: "test".

dpdk/dts/framework/test_suite.py:docstring of 
framework.test_suite.TestSuite:1: ERROR: Unknown target name: "test_perf".

If I then try to rerun ninja, those errors don't appear, so it seems to 
happen mostly on a fresh build.


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v4 3/4] dts: add doc generation
  2023-10-26 16:43         ` Yoan Picchi
@ 2023-10-27  9:52           ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-10-27  9:52 UTC (permalink / raw)
  To: Yoan Picchi
  Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	jspewock, probb, dev

Thanks for the comments, Yoan.

> > diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
> > index 32c18ee472..98923b1467 100644
> > --- a/doc/guides/tools/dts.rst
> > +++ b/doc/guides/tools/dts.rst
> > @@ -335,3 +335,32 @@ There are three tools used in DTS to help with code checking, style and formatti
> >   These three tools are all used in ``devtools/dts-check-format.sh``,
> >   the DTS code check and format script.
> >   Refer to the script for usage: ``devtools/dts-check-format.sh -h``.
> > +
> > +
> > +Building DTS API docs
> > +---------------------
> > +
> > +To build DTS API docs, install the dependencies with Poetry, then enter its shell:
> > +
> > +   .. code-block:: console
>
> I believe the code-block line should not be indented. As it is, it did
> not generate properly when I built the doc. Same for the other code
> block below.
>

Good catch, thanks.

> > diff --git a/dts/doc/meson.build b/dts/doc/meson.build
> > new file mode 100644
> > index 0000000000..8e70eabc51
> > --- /dev/null
> > +++ b/dts/doc/meson.build
> > @@ -0,0 +1,50 @@
> > +# SPDX-License-Identifier: BSD-3-Clause
> > +# Copyright(c) 2023 PANTHEON.tech s.r.o.
> > +
> > +sphinx = find_program('sphinx-build')
> > +sphinx_apidoc = find_program('sphinx-apidoc')
> > +
> > +if not sphinx.found() or not sphinx_apidoc.found()
> > +    subdir_done()
> > +endif
> > +
> > +dts_api_framework_dir = join_paths(dts_dir, 'framework')
> > +dts_api_build_dir = join_paths(doc_api_build_dir, 'dts')
> > +dts_api_src = custom_target('dts_api_src',
> > +        output: 'modules.rst',
> > +        command: [sphinx_apidoc, '--append-syspath', '--force',
> > +            '--module-first', '--separate', '-V', meson.project_version(),
> > +            '-o', dts_api_build_dir, '--no-toc', '--implicit-namespaces',
> > +            dts_api_framework_dir],
> > +        build_by_default: false)
> > +doc_targets += dts_api_src
> > +doc_target_names += 'DTS_API_sphinx_sources'
> > +
> > +cp = find_program('cp')
> > +cp_index = custom_target('cp_index',
> > +        input: 'doc-index.rst',
> > +        output: 'index.rst',
> > +        depends: dts_api_src,
> > +        command: [cp, '@INPUT@', join_paths(dts_api_build_dir, 'index.rst')],
> > +        build_by_default: false)
> > +doc_targets += cp_index
> > +doc_target_names += 'DTS_API_sphinx_index'
> > +
> > +extra_sphinx_args = ['-a', '-c', doc_guides_source_dir]
> > +if get_option('werror')
> > +    extra_sphinx_args += '-W'
> > +endif
> > +
> > +htmldir = join_paths(get_option('datadir'), 'doc', 'dpdk')
> > +dts_api_html = custom_target('dts_api_html',
> > +        output: 'html',
> > +        depends: cp_index,
> > +        command: ['DTS_ROOT=@0@'.format(dts_dir),
> > +            sphinx_wrapper, sphinx, meson.project_version(),
> > +            dts_api_build_dir, dts_api_build_dir,
> > +            '--dts-root', dts_dir, extra_sphinx_args],
> > +        build_by_default: false,
> > +        install: false,
> > +        install_dir: htmldir)
>
> I don't entirely understand what this command does. I suspect it's the one
> that creates and populates the html directory in build/doc/api/dts/.

Yes, this one generates the sphinx html docs from the .rst files
generated in the first step.
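
Roughly the equivalent of running these two commands by hand, with
DTS_ROOT set to the dts dir (<build> is the meson build dir):

    sphinx-apidoc --append-syspath --force --module-first --separate \
        -V <version> -o <build>/doc/api/dts --no-toc \
        --implicit-namespaces dts/framework
    sphinx-build -a -c doc/guides -b html \
        <build>/doc/api/dts <build>/doc/api/dts/html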

> The main thing that confuses me is this htmldir. Mine seemed to point to
> share/doc/dpdk. Such a directory doesn't seem to be built (or I could
> not find where). That might be intended given install is set to false.
> But in that case, why bother setting it?
> The thing that worries me more is that dpdk/doc/guides/meson.build does
> build some html and has the same htmldir. If we were to enable the
> install in both, wouldn't the html files overwrite each other (in particular
> index.html)?
> If my understanding is correct, I believe htmldir should either be
> removed or set to a different value (<datadir>/doc/dpdk/dts ?)

These install-related configs are basically a placeholder (install:
false) and a copy-paste from the doxygen API generation, because meson
required these to be filled in (IIRC).
I was hoping Bruce would give me some guidance on this.

Bruce, how should we install the DTS API docs? I'm not really sure how
exactly the meson install procedure works, what's going to be copied
and so on. I don't really know what to do with this.
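
If they're not needed while install stays false, a sketch of the target
with the install kwargs simply dropped (untested; I'd have to check
whether our minimum meson version accepts this):

    dts_api_html = custom_target('dts_api_html',
            output: 'html',
            depends: cp_index,
            command: ['DTS_ROOT=@0@'.format(dts_dir),
                sphinx_wrapper, sphinx, meson.project_version(),
                dts_api_build_dir, dts_api_build_dir,
                '--dts-root', dts_dir, extra_sphinx_args],
            build_by_default: false)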

>
> > +doc_targets += dts_api_html
> > +doc_target_names += 'DTS_API_HTML'
> > diff --git a/dts/meson.build b/dts/meson.build
> > new file mode 100644
> > index 0000000000..17bda07636
> > --- /dev/null
> > +++ b/dts/meson.build
> > @@ -0,0 +1,16 @@
> > +# SPDX-License-Identifier: BSD-3-Clause
> > +# Copyright(c) 2023 PANTHEON.tech s.r.o.
> > +
> > +doc_targets = []
> > +doc_target_names = []
> > +dts_dir = meson.current_source_dir()
> > +
> > +subdir('doc')
> > +
> > +if doc_targets.length() == 0
> > +    message = 'No docs targets found'
> > +else
> > +    message = 'Built docs:'
> > +endif
> > +run_target('dts/doc', command: [echo, message, doc_target_names],
> > +    depends: doc_targets)
> > diff --git a/meson.build b/meson.build
> > index 39cb73846d..4d34dc531c 100644
> > --- a/meson.build
> > +++ b/meson.build
> > @@ -85,6 +85,7 @@ subdir('app')
> >
> >   # build docs
> >   subdir('doc')
> > +subdir('dts')
> >
> >   # build any examples explicitly requested - useful for developers - and
> >   # install any example code into the appropriate install path
>
> When building from scratch, I had several warnings/errors.
>
> $ meson build
> [usual meson output block...]
> WARNING: Target "dts/doc" has a path separator in its name.
> This is not supported, it can cause unexpected failures and will become
> a hard error in the future.

This is likely due to a newer meson version than the one I'm using,
which is 0.53.2, because that's what's used in CI. We can rename the
target to satisfy newer versions.
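
For example (the name is illustrative), in dts/meson.build:

    run_target('dts-doc', command: [echo, message, doc_target_names],
        depends: doc_targets)

with the build command in the guide updated to "ninja -C build dts-doc".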

> Configuring rte_build_config.h using configuration
> Message:
> =================
> Applications Enabled
> =================
> [...]
>
> If you change the target name, remember to change it in the doc too.
>
>
>
> $ ninja -C build dts/doc
> [create all the rst...]
> [3/4] Generating dts_api_html with a custom command.
> dpdk/dts/framework/remote_session/interactive_shell.py:docstring of
> framework.remote_session.interactive_shell.InteractiveShell:25: WARNING:
> Definition list ends without a blank line; unexpected unindent.
>
> dpdk/dts/framework/remote_session/interactive_shell.py:docstring of
> framework.remote_session.interactive_shell.InteractiveShell:27: ERROR:
> Unexpected indentation.
>
> dpdk/dts/framework/testbed_model/common.py:docstring of
> framework.testbed_model.common.MesonArgs:3: ERROR: Unexpected indentation.
>
> dpdk/dts/framework/testbed_model/common.py:docstring of
> framework.testbed_model.common.MesonArgs:4: WARNING: Block quote ends
> without a blank line; unexpected unindent.
>
> dpdk/dts/framework/testbed_model/linux_session.py:docstring of
> framework.testbed_model.linux_session.LshwOutput:10: ERROR: Unexpected
> indentation.
>
> dpdk/dts/framework/testbed_model/linux_session.py:docstring of
> framework.testbed_model.linux_session.LshwOutput:13: WARNING: Block
> quote ends without a blank line; unexpected unindent.
>
> dpdk/dts/framework/testbed_model/sut_node.py:docstring of
> framework.testbed_model.sut_node.SutNode.create_eal_parameters:3: ERROR:
> Unexpected indentation.
>
> dpdk/dts/framework/testbed_model/sut_node.py:docstring of
> framework.testbed_model.sut_node.SutNode.create_eal_parameters:6:
> WARNING: Block quote ends without a blank line; unexpected unindent.
>
> dpdk/dts/framework/testbed_model/sut_node.py:docstring of
> framework.testbed_model.sut_node.SutNode.create_eal_parameters:18:
> ERROR: Unexpected indentation.
>
> dpdk/dts/framework/testbed_model/sut_node.py:docstring of
> framework.testbed_model.sut_node.SutNode.create_eal_parameters:20:
> WARNING: Block quote ends without a blank line; unexpected unindent.
>
> dpdk/dts/framework/test_suite.py:docstring of
> framework.test_suite.TestSuite:1: ERROR: Unknown target name: "test".
>
> dpdk/dts/framework/test_suite.py:docstring of
> framework.test_suite.TestSuite:1: ERROR: Unknown target name: "test_perf".
>

These errors are here because the docstrings are either incomplete or
not yet reformatted. They'll be addressed in a new version that's
coming soon.
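
To illustrate the kind of fix involved (a made-up minimal example, not
the actual docstring), Sphinx needs a blank line before an indented
section in these docstrings:

def create_eal_parameters(self) -> str:
    """Compose the EAL parameter string.

    Example:
        An indented block like this must be preceded by a blank line,
        otherwise Sphinx reports 'Unexpected indentation'.
    """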

> If I then try to rerun ninja, those errors don't appear, so it seems to
> happen mostly on fresh build.
>

Yes, that is expected.

This made me look into how we're running sphinx-build. We're using:
  -a                write all files (default: only write new and changed files)

in the hope of forcing a full rebuild, but that doesn't seem to be
working properly, if at all. What I actually want sphinx to do is to
update the .html files after a rebuild, and -a doesn't help with that.

I've tried the -E option instead and that seems to work: it updates
the modified .html files, so I'll replace -a with -E in the new
version.
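
For reference, the change amounts to something like this in the
sphinx-build invocation (the paths are placeholders):

sphinx-build -E -b html <dts_doc_sources> <output_dir>

Per the sphinx-build help text, -E discards the saved environment and
re-reads all source files, while -a only forces all output files to be
written.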

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v4 2/4] dts: add doc generation dependencies
  2023-08-31 10:04       ` [RFC PATCH v4 2/4] dts: add doc generation dependencies Juraj Linkeš
@ 2023-10-27 15:27         ` Yoan Picchi
  0 siblings, 0 replies; 255+ messages in thread
From: Yoan Picchi @ 2023-10-27 15:27 UTC (permalink / raw)
  To: Juraj Linkeš,
	thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	jspewock, probb
  Cc: dev

On 8/31/23 11:04, Juraj Linkeš wrote:
> Sphinx imports every Python module when generating documentation from
> docstrings, meaning all dts dependencies, including Python version,
> must be satisfied.
> By adding Sphinx to dts dependencies we make sure that the proper
> Python version and dependencies are used when Sphinx is executed.
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>   dts/poetry.lock    | 447 ++++++++++++++++++++++++++++++++++++++++++++-
>   dts/pyproject.toml |   7 +
>   2 files changed, 453 insertions(+), 1 deletion(-)
> 
> diff --git a/dts/poetry.lock b/dts/poetry.lock
> index f7b3b6d602..91afe5231a 100644
> --- a/dts/poetry.lock
> +++ b/dts/poetry.lock
> @@ -1,5 +1,16 @@
>   # This file is automatically @generated by Poetry 1.5.1 and should not be changed by hand.
>   
> +[[package]]
> +name = "alabaster"
> +version = "0.7.13"
> +description = "A configurable sidebar-enabled Sphinx theme"
> +optional = false
> +python-versions = ">=3.6"
> +files = [
> +    {file = "alabaster-0.7.13-py3-none-any.whl", hash = "sha256:1ee19aca801bbabb5ba3f5f258e4422dfa86f82f3e9cefb0859b283cdd7f62a3"},
> +    {file = "alabaster-0.7.13.tar.gz", hash = "sha256:a27a4a084d5e690e16e01e03ad2b2e552c61a65469419b907243193de1a84ae2"},
> +]
> +
>   [[package]]
>   name = "attrs"
>   version = "23.1.0"
> @@ -18,6 +29,17 @@ docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-
>   tests = ["attrs[tests-no-zope]", "zope-interface"]
>   tests-no-zope = ["cloudpickle", "hypothesis", "mypy (>=1.1.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
>   
> +[[package]]
> +name = "babel"
> +version = "2.12.1"
> +description = "Internationalization utilities"
> +optional = false
> +python-versions = ">=3.7"
> +files = [
> +    {file = "Babel-2.12.1-py3-none-any.whl", hash = "sha256:b4246fb7677d3b98f501a39d43396d3cafdc8eadb045f4a31be01863f655c610"},
> +    {file = "Babel-2.12.1.tar.gz", hash = "sha256:cc2d99999cd01d44420ae725a21c9e3711b3aadc7976d6147f622d8581963455"},
> +]
> +
>   [[package]]
>   name = "bcrypt"
>   version = "4.0.1"
> @@ -86,6 +108,17 @@ d = ["aiohttp (>=3.7.4)"]
>   jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
>   uvloop = ["uvloop (>=0.15.2)"]
>   
> +[[package]]
> +name = "certifi"
> +version = "2023.7.22"
> +description = "Python package for providing Mozilla's CA Bundle."
> +optional = false
> +python-versions = ">=3.6"
> +files = [
> +    {file = "certifi-2023.7.22-py3-none-any.whl", hash = "sha256:92d6037539857d8206b8f6ae472e8b77db8058fec5937a1ef3f54304089edbb9"},
> +    {file = "certifi-2023.7.22.tar.gz", hash = "sha256:539cc1d13202e33ca466e88b2807e29f4c13049d6d87031a3c110744495cb082"},
> +]
> +
>   [[package]]
>   name = "cffi"
>   version = "1.15.1"
> @@ -162,6 +195,90 @@ files = [
>   [package.dependencies]
>   pycparser = "*"
>   
> +[[package]]
> +name = "charset-normalizer"
> +version = "3.2.0"
> +description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
> +optional = false
> +python-versions = ">=3.7.0"
> +files = [
> +    {file = "charset-normalizer-3.2.0.tar.gz", hash = "sha256:3bb3d25a8e6c0aedd251753a79ae98a093c7e7b471faa3aa9a93a81431987ace"},
> +    {file = "charset_normalizer-3.2.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:0b87549028f680ca955556e3bd57013ab47474c3124dc069faa0b6545b6c9710"},
> +    {file = "charset_normalizer-3.2.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:7c70087bfee18a42b4040bb9ec1ca15a08242cf5867c58726530bdf3945672ed"},
> +    {file = "charset_normalizer-3.2.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:a103b3a7069b62f5d4890ae1b8f0597618f628b286b03d4bc9195230b154bfa9"},
> +    {file = "charset_normalizer-3.2.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:94aea8eff76ee6d1cdacb07dd2123a68283cb5569e0250feab1240058f53b623"},
> +    {file = "charset_normalizer-3.2.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:db901e2ac34c931d73054d9797383d0f8009991e723dab15109740a63e7f902a"},
> +    {file = "charset_normalizer-3.2.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b0dac0ff919ba34d4df1b6131f59ce95b08b9065233446be7e459f95554c0dc8"},
> +    {file = "charset_normalizer-3.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:193cbc708ea3aca45e7221ae58f0fd63f933753a9bfb498a3b474878f12caaad"},
> +    {file = "charset_normalizer-3.2.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:09393e1b2a9461950b1c9a45d5fd251dc7c6f228acab64da1c9c0165d9c7765c"},
> +    {file = "charset_normalizer-3.2.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:baacc6aee0b2ef6f3d308e197b5d7a81c0e70b06beae1f1fcacffdbd124fe0e3"},
> +    {file = "charset_normalizer-3.2.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:bf420121d4c8dce6b889f0e8e4ec0ca34b7f40186203f06a946fa0276ba54029"},
> +    {file = "charset_normalizer-3.2.0-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:c04a46716adde8d927adb9457bbe39cf473e1e2c2f5d0a16ceb837e5d841ad4f"},
> +    {file = "charset_normalizer-3.2.0-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:aaf63899c94de41fe3cf934601b0f7ccb6b428c6e4eeb80da72c58eab077b19a"},
> +    {file = "charset_normalizer-3.2.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:d62e51710986674142526ab9f78663ca2b0726066ae26b78b22e0f5e571238dd"},
> +    {file = "charset_normalizer-3.2.0-cp310-cp310-win32.whl", hash = "sha256:04e57ab9fbf9607b77f7d057974694b4f6b142da9ed4a199859d9d4d5c63fe96"},
> +    {file = "charset_normalizer-3.2.0-cp310-cp310-win_amd64.whl", hash = "sha256:48021783bdf96e3d6de03a6e39a1171ed5bd7e8bb93fc84cc649d11490f87cea"},
> +    {file = "charset_normalizer-3.2.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:4957669ef390f0e6719db3613ab3a7631e68424604a7b448f079bee145da6e09"},
> +    {file = "charset_normalizer-3.2.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:46fb8c61d794b78ec7134a715a3e564aafc8f6b5e338417cb19fe9f57a5a9bf2"},
> +    {file = "charset_normalizer-3.2.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:f779d3ad205f108d14e99bb3859aa7dd8e9c68874617c72354d7ecaec2a054ac"},
> +    {file = "charset_normalizer-3.2.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f25c229a6ba38a35ae6e25ca1264621cc25d4d38dca2942a7fce0b67a4efe918"},
> +    {file = "charset_normalizer-3.2.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:2efb1bd13885392adfda4614c33d3b68dee4921fd0ac1d3988f8cbb7d589e72a"},
> +    {file = "charset_normalizer-3.2.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1f30b48dd7fa1474554b0b0f3fdfdd4c13b5c737a3c6284d3cdc424ec0ffff3a"},
> +    {file = "charset_normalizer-3.2.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:246de67b99b6851627d945db38147d1b209a899311b1305dd84916f2b88526c6"},
> +    {file = "charset_normalizer-3.2.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9bd9b3b31adcb054116447ea22caa61a285d92e94d710aa5ec97992ff5eb7cf3"},
> +    {file = "charset_normalizer-3.2.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:8c2f5e83493748286002f9369f3e6607c565a6a90425a3a1fef5ae32a36d749d"},
> +    {file = "charset_normalizer-3.2.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:3170c9399da12c9dc66366e9d14da8bf7147e1e9d9ea566067bbce7bb74bd9c2"},
> +    {file = "charset_normalizer-3.2.0-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:7a4826ad2bd6b07ca615c74ab91f32f6c96d08f6fcc3902ceeedaec8cdc3bcd6"},
> +    {file = "charset_normalizer-3.2.0-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:3b1613dd5aee995ec6d4c69f00378bbd07614702a315a2cf6c1d21461fe17c23"},
> +    {file = "charset_normalizer-3.2.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:9e608aafdb55eb9f255034709e20d5a83b6d60c054df0802fa9c9883d0a937aa"},
> +    {file = "charset_normalizer-3.2.0-cp311-cp311-win32.whl", hash = "sha256:f2a1d0fd4242bd8643ce6f98927cf9c04540af6efa92323e9d3124f57727bfc1"},
> +    {file = "charset_normalizer-3.2.0-cp311-cp311-win_amd64.whl", hash = "sha256:681eb3d7e02e3c3655d1b16059fbfb605ac464c834a0c629048a30fad2b27489"},
> +    {file = "charset_normalizer-3.2.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:c57921cda3a80d0f2b8aec7e25c8aa14479ea92b5b51b6876d975d925a2ea346"},
> +    {file = "charset_normalizer-3.2.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41b25eaa7d15909cf3ac4c96088c1f266a9a93ec44f87f1d13d4a0e86c81b982"},
> +    {file = "charset_normalizer-3.2.0-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f058f6963fd82eb143c692cecdc89e075fa0828db2e5b291070485390b2f1c9c"},
> +    {file = "charset_normalizer-3.2.0-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a7647ebdfb9682b7bb97e2a5e7cb6ae735b1c25008a70b906aecca294ee96cf4"},
> +    {file = "charset_normalizer-3.2.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:eef9df1eefada2c09a5e7a40991b9fc6ac6ef20b1372abd48d2794a316dc0449"},
> +    {file = "charset_normalizer-3.2.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e03b8895a6990c9ab2cdcd0f2fe44088ca1c65ae592b8f795c3294af00a461c3"},
> +    {file = "charset_normalizer-3.2.0-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:ee4006268ed33370957f55bf2e6f4d263eaf4dc3cfc473d1d90baff6ed36ce4a"},
> +    {file = "charset_normalizer-3.2.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:c4983bf937209c57240cff65906b18bb35e64ae872da6a0db937d7b4af845dd7"},
> +    {file = "charset_normalizer-3.2.0-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:3bb7fda7260735efe66d5107fb7e6af6a7c04c7fce9b2514e04b7a74b06bf5dd"},
> +    {file = "charset_normalizer-3.2.0-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:72814c01533f51d68702802d74f77ea026b5ec52793c791e2da806a3844a46c3"},
> +    {file = "charset_normalizer-3.2.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:70c610f6cbe4b9fce272c407dd9d07e33e6bf7b4aa1b7ffb6f6ded8e634e3592"},
> +    {file = "charset_normalizer-3.2.0-cp37-cp37m-win32.whl", hash = "sha256:a401b4598e5d3f4a9a811f3daf42ee2291790c7f9d74b18d75d6e21dda98a1a1"},
> +    {file = "charset_normalizer-3.2.0-cp37-cp37m-win_amd64.whl", hash = "sha256:c0b21078a4b56965e2b12f247467b234734491897e99c1d51cee628da9786959"},
> +    {file = "charset_normalizer-3.2.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:95eb302ff792e12aba9a8b8f8474ab229a83c103d74a750ec0bd1c1eea32e669"},
> +    {file = "charset_normalizer-3.2.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:1a100c6d595a7f316f1b6f01d20815d916e75ff98c27a01ae817439ea7726329"},
> +    {file = "charset_normalizer-3.2.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:6339d047dab2780cc6220f46306628e04d9750f02f983ddb37439ca47ced7149"},
> +    {file = "charset_normalizer-3.2.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e4b749b9cc6ee664a3300bb3a273c1ca8068c46be705b6c31cf5d276f8628a94"},
> +    {file = "charset_normalizer-3.2.0-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a38856a971c602f98472050165cea2cdc97709240373041b69030be15047691f"},
> +    {file = "charset_normalizer-3.2.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f87f746ee241d30d6ed93969de31e5ffd09a2961a051e60ae6bddde9ec3583aa"},
> +    {file = "charset_normalizer-3.2.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:89f1b185a01fe560bc8ae5f619e924407efca2191b56ce749ec84982fc59a32a"},
> +    {file = "charset_normalizer-3.2.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e1c8a2f4c69e08e89632defbfabec2feb8a8d99edc9f89ce33c4b9e36ab63037"},
> +    {file = "charset_normalizer-3.2.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:2f4ac36d8e2b4cc1aa71df3dd84ff8efbe3bfb97ac41242fbcfc053c67434f46"},
> +    {file = "charset_normalizer-3.2.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:a386ebe437176aab38c041de1260cd3ea459c6ce5263594399880bbc398225b2"},
> +    {file = "charset_normalizer-3.2.0-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:ccd16eb18a849fd8dcb23e23380e2f0a354e8daa0c984b8a732d9cfaba3a776d"},
> +    {file = "charset_normalizer-3.2.0-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:e6a5bf2cba5ae1bb80b154ed68a3cfa2fa00fde979a7f50d6598d3e17d9ac20c"},
> +    {file = "charset_normalizer-3.2.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:45de3f87179c1823e6d9e32156fb14c1927fcc9aba21433f088fdfb555b77c10"},
> +    {file = "charset_normalizer-3.2.0-cp38-cp38-win32.whl", hash = "sha256:1000fba1057b92a65daec275aec30586c3de2401ccdcd41f8a5c1e2c87078706"},
> +    {file = "charset_normalizer-3.2.0-cp38-cp38-win_amd64.whl", hash = "sha256:8b2c760cfc7042b27ebdb4a43a4453bd829a5742503599144d54a032c5dc7e9e"},
> +    {file = "charset_normalizer-3.2.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:855eafa5d5a2034b4621c74925d89c5efef61418570e5ef9b37717d9c796419c"},
> +    {file = "charset_normalizer-3.2.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:203f0c8871d5a7987be20c72442488a0b8cfd0f43b7973771640fc593f56321f"},
> +    {file = "charset_normalizer-3.2.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e857a2232ba53ae940d3456f7533ce6ca98b81917d47adc3c7fd55dad8fab858"},
> +    {file = "charset_normalizer-3.2.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5e86d77b090dbddbe78867a0275cb4df08ea195e660f1f7f13435a4649e954e5"},
> +    {file = "charset_normalizer-3.2.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c4fb39a81950ec280984b3a44f5bd12819953dc5fa3a7e6fa7a80db5ee853952"},
> +    {file = "charset_normalizer-3.2.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2dee8e57f052ef5353cf608e0b4c871aee320dd1b87d351c28764fc0ca55f9f4"},
> +    {file = "charset_normalizer-3.2.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8700f06d0ce6f128de3ccdbc1acaea1ee264d2caa9ca05daaf492fde7c2a7200"},
> +    {file = "charset_normalizer-3.2.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1920d4ff15ce893210c1f0c0e9d19bfbecb7983c76b33f046c13a8ffbd570252"},
> +    {file = "charset_normalizer-3.2.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:c1c76a1743432b4b60ab3358c937a3fe1341c828ae6194108a94c69028247f22"},
> +    {file = "charset_normalizer-3.2.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:f7560358a6811e52e9c4d142d497f1a6e10103d3a6881f18d04dbce3729c0e2c"},
> +    {file = "charset_normalizer-3.2.0-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:c8063cf17b19661471ecbdb3df1c84f24ad2e389e326ccaf89e3fb2484d8dd7e"},
> +    {file = "charset_normalizer-3.2.0-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:cd6dbe0238f7743d0efe563ab46294f54f9bc8f4b9bcf57c3c666cc5bc9d1299"},
> +    {file = "charset_normalizer-3.2.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:1249cbbf3d3b04902ff081ffbb33ce3377fa6e4c7356f759f3cd076cc138d020"},
> +    {file = "charset_normalizer-3.2.0-cp39-cp39-win32.whl", hash = "sha256:6c409c0deba34f147f77efaa67b8e4bb83d2f11c8806405f76397ae5b8c0d1c9"},
> +    {file = "charset_normalizer-3.2.0-cp39-cp39-win_amd64.whl", hash = "sha256:7095f6fbfaa55defb6b733cfeb14efaae7a29f0b59d8cf213be4e7ca0b857b80"},
> +    {file = "charset_normalizer-3.2.0-py3-none-any.whl", hash = "sha256:8e098148dd37b4ce3baca71fb394c81dc5d9c7728c95df695d2dca218edf40e6"},
> +]
> +
>   [[package]]
>   name = "click"
>   version = "8.1.6"
> @@ -232,6 +349,17 @@ ssh = ["bcrypt (>=3.1.5)"]
>   test = ["pretend", "pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-xdist"]
>   test-randomorder = ["pytest-randomly"]
>   
> +[[package]]
> +name = "docutils"
> +version = "0.18.1"
> +description = "Docutils -- Python Documentation Utilities"
> +optional = false
> +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
> +files = [
> +    {file = "docutils-0.18.1-py2.py3-none-any.whl", hash = "sha256:23010f129180089fbcd3bc08cfefccb3b890b0050e1ca00c867036e9d161b98c"},
> +    {file = "docutils-0.18.1.tar.gz", hash = "sha256:679987caf361a7539d76e584cbeddc311e3aee937877c87346f31debc63e9d06"},
> +]
> +
>   [[package]]
>   name = "fabric"
>   version = "2.7.1"
> @@ -252,6 +380,28 @@ pathlib2 = "*"
>   pytest = ["mock (>=2.0.0,<3.0)", "pytest (>=3.2.5,<4.0)"]
>   testing = ["mock (>=2.0.0,<3.0)"]
>   
> +[[package]]
> +name = "idna"
> +version = "3.4"
> +description = "Internationalized Domain Names in Applications (IDNA)"
> +optional = false
> +python-versions = ">=3.5"
> +files = [
> +    {file = "idna-3.4-py3-none-any.whl", hash = "sha256:90b77e79eaa3eba6de819a0c442c0b4ceefc341a7a2ab77d7562bf49f425c5c2"},
> +    {file = "idna-3.4.tar.gz", hash = "sha256:814f528e8dead7d329833b91c5faa87d60bf71824cd12a7530b5526063d02cb4"},
> +]
> +
> +[[package]]
> +name = "imagesize"
> +version = "1.4.1"
> +description = "Getting image size from png/jpeg/jpeg2000/gif file"
> +optional = false
> +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
> +files = [
> +    {file = "imagesize-1.4.1-py2.py3-none-any.whl", hash = "sha256:0d8d18d08f840c19d0ee7ca1fd82490fdc3729b7ac93f49870406ddde8ef8d8b"},
> +    {file = "imagesize-1.4.1.tar.gz", hash = "sha256:69150444affb9cb0d5cc5a92b3676f0b2fb7cd9ae39e947a5e11a36b4497cd4a"},
> +]
> +
>   [[package]]
>   name = "invoke"
>   version = "1.7.3"
> @@ -280,6 +430,23 @@ pipfile-deprecated-finder = ["pip-shims (>=0.5.2)", "pipreqs", "requirementslib"
>   plugins = ["setuptools"]
>   requirements-deprecated-finder = ["pip-api", "pipreqs"]
>   
> +[[package]]
> +name = "jinja2"
> +version = "3.1.2"
> +description = "A very fast and expressive template engine."
> +optional = false
> +python-versions = ">=3.7"
> +files = [
> +    {file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"},
> +    {file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"},
> +]
> +
> +[package.dependencies]
> +MarkupSafe = ">=2.0"
> +
> +[package.extras]
> +i18n = ["Babel (>=2.7)"]
> +
>   [[package]]
>   name = "jsonpatch"
>   version = "1.33"
> @@ -340,6 +507,65 @@ files = [
>   [package.dependencies]
>   referencing = ">=0.28.0"
>   
> +[[package]]
> +name = "markupsafe"
> +version = "2.1.3"
> +description = "Safely add untrusted strings to HTML/XML markup."
> +optional = false
> +python-versions = ">=3.7"
> +files = [
> +    {file = "MarkupSafe-2.1.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:cd0f502fe016460680cd20aaa5a76d241d6f35a1c3350c474bac1273803893fa"},
> +    {file = "MarkupSafe-2.1.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e09031c87a1e51556fdcb46e5bd4f59dfb743061cf93c4d6831bf894f125eb57"},
> +    {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:68e78619a61ecf91e76aa3e6e8e33fc4894a2bebe93410754bd28fce0a8a4f9f"},
> +    {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:65c1a9bcdadc6c28eecee2c119465aebff8f7a584dd719facdd9e825ec61ab52"},
> +    {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:525808b8019e36eb524b8c68acdd63a37e75714eac50e988180b169d64480a00"},
> +    {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:962f82a3086483f5e5f64dbad880d31038b698494799b097bc59c2edf392fce6"},
> +    {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:aa7bd130efab1c280bed0f45501b7c8795f9fdbeb02e965371bbef3523627779"},
> +    {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:c9c804664ebe8f83a211cace637506669e7890fec1b4195b505c214e50dd4eb7"},
> +    {file = "MarkupSafe-2.1.3-cp310-cp310-win32.whl", hash = "sha256:10bbfe99883db80bdbaff2dcf681dfc6533a614f700da1287707e8a5d78a8431"},
> +    {file = "MarkupSafe-2.1.3-cp310-cp310-win_amd64.whl", hash = "sha256:1577735524cdad32f9f694208aa75e422adba74f1baee7551620e43a3141f559"},
> +    {file = "MarkupSafe-2.1.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:ad9e82fb8f09ade1c3e1b996a6337afac2b8b9e365f926f5a61aacc71adc5b3c"},
> +    {file = "MarkupSafe-2.1.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3c0fae6c3be832a0a0473ac912810b2877c8cb9d76ca48de1ed31e1c68386575"},
> +    {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b076b6226fb84157e3f7c971a47ff3a679d837cf338547532ab866c57930dbee"},
> +    {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bfce63a9e7834b12b87c64d6b155fdd9b3b96191b6bd334bf37db7ff1fe457f2"},
> +    {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:338ae27d6b8745585f87218a3f23f1512dbf52c26c28e322dbe54bcede54ccb9"},
> +    {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e4dd52d80b8c83fdce44e12478ad2e85c64ea965e75d66dbeafb0a3e77308fcc"},
> +    {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:df0be2b576a7abbf737b1575f048c23fb1d769f267ec4358296f31c2479db8f9"},
> +    {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:5bbe06f8eeafd38e5d0a4894ffec89378b6c6a625ff57e3028921f8ff59318ac"},
> +    {file = "MarkupSafe-2.1.3-cp311-cp311-win32.whl", hash = "sha256:dd15ff04ffd7e05ffcb7fe79f1b98041b8ea30ae9234aed2a9168b5797c3effb"},
> +    {file = "MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl", hash = "sha256:134da1eca9ec0ae528110ccc9e48041e0828d79f24121a1a146161103c76e686"},
> +    {file = "MarkupSafe-2.1.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:8e254ae696c88d98da6555f5ace2279cf7cd5b3f52be2b5cf97feafe883b58d2"},
> +    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cb0932dc158471523c9637e807d9bfb93e06a95cbf010f1a38b98623b929ef2b"},
> +    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9402b03f1a1b4dc4c19845e5c749e3ab82d5078d16a2a4c2cd2df62d57bb0707"},
> +    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ca379055a47383d02a5400cb0d110cef0a776fc644cda797db0c5696cfd7e18e"},
> +    {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:b7ff0f54cb4ff66dd38bebd335a38e2c22c41a8ee45aa608efc890ac3e3931bc"},
> +    {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:c011a4149cfbcf9f03994ec2edffcb8b1dc2d2aede7ca243746df97a5d41ce48"},
> +    {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:56d9f2ecac662ca1611d183feb03a3fa4406469dafe241673d521dd5ae92a155"},
> +    {file = "MarkupSafe-2.1.3-cp37-cp37m-win32.whl", hash = "sha256:8758846a7e80910096950b67071243da3e5a20ed2546e6392603c096778d48e0"},
> +    {file = "MarkupSafe-2.1.3-cp37-cp37m-win_amd64.whl", hash = "sha256:787003c0ddb00500e49a10f2844fac87aa6ce977b90b0feaaf9de23c22508b24"},
> +    {file = "MarkupSafe-2.1.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:2ef12179d3a291be237280175b542c07a36e7f60718296278d8593d21ca937d4"},
> +    {file = "MarkupSafe-2.1.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:2c1b19b3aaacc6e57b7e25710ff571c24d6c3613a45e905b1fde04d691b98ee0"},
> +    {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8afafd99945ead6e075b973fefa56379c5b5c53fd8937dad92c662da5d8fd5ee"},
> +    {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8c41976a29d078bb235fea9b2ecd3da465df42a562910f9022f1a03107bd02be"},
> +    {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d080e0a5eb2529460b30190fcfcc4199bd7f827663f858a226a81bc27beaa97e"},
> +    {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:69c0f17e9f5a7afdf2cc9fb2d1ce6aabdb3bafb7f38017c0b77862bcec2bbad8"},
> +    {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:504b320cd4b7eff6f968eddf81127112db685e81f7e36e75f9f84f0df46041c3"},
> +    {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:42de32b22b6b804f42c5d98be4f7e5e977ecdd9ee9b660fda1a3edf03b11792d"},
> +    {file = "MarkupSafe-2.1.3-cp38-cp38-win32.whl", hash = "sha256:ceb01949af7121f9fc39f7d27f91be8546f3fb112c608bc4029aef0bab86a2a5"},
> +    {file = "MarkupSafe-2.1.3-cp38-cp38-win_amd64.whl", hash = "sha256:1b40069d487e7edb2676d3fbdb2b0829ffa2cd63a2ec26c4938b2d34391b4ecc"},
> +    {file = "MarkupSafe-2.1.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8023faf4e01efadfa183e863fefde0046de576c6f14659e8782065bcece22198"},
> +    {file = "MarkupSafe-2.1.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6b2b56950d93e41f33b4223ead100ea0fe11f8e6ee5f641eb753ce4b77a7042b"},
> +    {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9dcdfd0eaf283af041973bff14a2e143b8bd64e069f4c383416ecd79a81aab58"},
> +    {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:05fb21170423db021895e1ea1e1f3ab3adb85d1c2333cbc2310f2a26bc77272e"},
> +    {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:282c2cb35b5b673bbcadb33a585408104df04f14b2d9b01d4c345a3b92861c2c"},
> +    {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:ab4a0df41e7c16a1392727727e7998a467472d0ad65f3ad5e6e765015df08636"},
> +    {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:7ef3cb2ebbf91e330e3bb937efada0edd9003683db6b57bb108c4001f37a02ea"},
> +    {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:0a4e4a1aff6c7ac4cd55792abf96c915634c2b97e3cc1c7129578aa68ebd754e"},
> +    {file = "MarkupSafe-2.1.3-cp39-cp39-win32.whl", hash = "sha256:fec21693218efe39aa7f8599346e90c705afa52c5b31ae019b2e57e8f6542bb2"},
> +    {file = "MarkupSafe-2.1.3-cp39-cp39-win_amd64.whl", hash = "sha256:3fd4abcb888d15a94f32b75d8fd18ee162ca0c064f35b11134be77050296d6ba"},
> +    {file = "MarkupSafe-2.1.3.tar.gz", hash = "sha256:af598ed32d6ae86f1b747b82783958b1a4ab8f617b06fe68795c7f026abbdcad"},
> +]
> +
>   [[package]]
>   name = "mccabe"
>   version = "0.7.0"
> @@ -404,6 +630,17 @@ files = [
>       {file = "mypy_extensions-1.0.0.tar.gz", hash = "sha256:75dbf8955dc00442a438fc4d0666508a9a97b6bd41aa2f0ffe9d2f2725af0782"},
>   ]
>   
> +[[package]]
> +name = "packaging"
> +version = "23.1"
> +description = "Core utilities for Python packages"
> +optional = false
> +python-versions = ">=3.7"
> +files = [
> +    {file = "packaging-23.1-py3-none-any.whl", hash = "sha256:994793af429502c4ea2ebf6bf664629d07c1a9fe974af92966e4b8d2df7edc61"},
> +    {file = "packaging-23.1.tar.gz", hash = "sha256:a392980d2b6cffa644431898be54b0045151319d1e7ec34f0cfed48767dd334f"},
> +]
> +
>   [[package]]
>   name = "paramiko"
>   version = "3.2.0"
> @@ -515,6 +752,20 @@ files = [
>       {file = "pyflakes-2.5.0.tar.gz", hash = "sha256:491feb020dca48ccc562a8c0cbe8df07ee13078df59813b83959cbdada312ea3"},
>   ]
>   
> +[[package]]
> +name = "pygments"
> +version = "2.15.1"
> +description = "Pygments is a syntax highlighting package written in Python."
> +optional = false
> +python-versions = ">=3.7"
> +files = [
> +    {file = "Pygments-2.15.1-py3-none-any.whl", hash = "sha256:db2db3deb4b4179f399a09054b023b6a586b76499d36965813c71aa8ed7b5fd1"},
> +    {file = "Pygments-2.15.1.tar.gz", hash = "sha256:8ace4d3c1dd481894b2005f560ead0f9f19ee64fe983366be1a21e171d12775c"},
> +]
> +
> +[package.extras]
> +plugins = ["importlib-metadata"]
> +
>   [[package]]
>   name = "pylama"
>   version = "8.4.1"
> @@ -632,6 +883,27 @@ files = [
>   attrs = ">=22.2.0"
>   rpds-py = ">=0.7.0"
>   
> +[[package]]
> +name = "requests"
> +version = "2.31.0"
> +description = "Python HTTP for Humans."
> +optional = false
> +python-versions = ">=3.7"
> +files = [
> +    {file = "requests-2.31.0-py3-none-any.whl", hash = "sha256:58cd2187c01e70e6e26505bca751777aa9f2ee0b7f4300988b709f44e013003f"},
> +    {file = "requests-2.31.0.tar.gz", hash = "sha256:942c5a758f98d790eaed1a29cb6eefc7ffb0d1cf7af05c3d2791656dbd6ad1e1"},
> +]
> +
> +[package.dependencies]
> +certifi = ">=2017.4.17"
> +charset-normalizer = ">=2,<4"
> +idna = ">=2.5,<4"
> +urllib3 = ">=1.21.1,<3"
> +
> +[package.extras]
> +socks = ["PySocks (>=1.5.6,!=1.5.7)"]
> +use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"]
> +
>   [[package]]
>   name = "rpds-py"
>   version = "0.9.2"
> @@ -775,6 +1047,162 @@ files = [
>       {file = "snowballstemmer-2.2.0.tar.gz", hash = "sha256:09b16deb8547d3412ad7b590689584cd0fe25ec8db3be37788be3810cbf19cb1"},
>   ]
>   
> +[[package]]
> +name = "sphinx"
> +version = "6.2.1"
> +description = "Python documentation generator"
> +optional = false
> +python-versions = ">=3.8"
> +files = [
> +    {file = "Sphinx-6.2.1.tar.gz", hash = "sha256:6d56a34697bb749ffa0152feafc4b19836c755d90a7c59b72bc7dfd371b9cc6b"},
> +    {file = "sphinx-6.2.1-py3-none-any.whl", hash = "sha256:97787ff1fa3256a3eef9eda523a63dbf299f7b47e053cfcf684a1c2a8380c912"},
> +]
> +
> +[package.dependencies]
> +alabaster = ">=0.7,<0.8"
> +babel = ">=2.9"
> +colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""}
> +docutils = ">=0.18.1,<0.20"
> +imagesize = ">=1.3"
> +Jinja2 = ">=3.0"
> +packaging = ">=21.0"
> +Pygments = ">=2.13"
> +requests = ">=2.25.0"
> +snowballstemmer = ">=2.0"
> +sphinxcontrib-applehelp = "*"
> +sphinxcontrib-devhelp = "*"
> +sphinxcontrib-htmlhelp = ">=2.0.0"
> +sphinxcontrib-jsmath = "*"
> +sphinxcontrib-qthelp = "*"
> +sphinxcontrib-serializinghtml = ">=1.1.5"
> +
> +[package.extras]
> +docs = ["sphinxcontrib-websupport"]
> +lint = ["docutils-stubs", "flake8 (>=3.5.0)", "flake8-simplify", "isort", "mypy (>=0.990)", "ruff", "sphinx-lint", "types-requests"]
> +test = ["cython", "filelock", "html5lib", "pytest (>=4.6)"]
> +
> +[[package]]
> +name = "sphinx-rtd-theme"
> +version = "1.2.2"
> +description = "Read the Docs theme for Sphinx"
> +optional = false
> +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7"
> +files = [
> +    {file = "sphinx_rtd_theme-1.2.2-py2.py3-none-any.whl", hash = "sha256:6a7e7d8af34eb8fc57d52a09c6b6b9c46ff44aea5951bc831eeb9245378f3689"},
> +    {file = "sphinx_rtd_theme-1.2.2.tar.gz", hash = "sha256:01c5c5a72e2d025bd23d1f06c59a4831b06e6ce6c01fdd5ebfe9986c0a880fc7"},
> +]
> +
> +[package.dependencies]
> +docutils = "<0.19"
> +sphinx = ">=1.6,<7"
> +sphinxcontrib-jquery = ">=4,<5"
> +
> +[package.extras]
> +dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client", "wheel"]
> +
> +[[package]]
> +name = "sphinxcontrib-applehelp"
> +version = "1.0.4"
> +description = "sphinxcontrib-applehelp is a Sphinx extension which outputs Apple help books"
> +optional = false
> +python-versions = ">=3.8"
> +files = [
> +    {file = "sphinxcontrib-applehelp-1.0.4.tar.gz", hash = "sha256:828f867945bbe39817c210a1abfd1bc4895c8b73fcaade56d45357a348a07d7e"},
> +    {file = "sphinxcontrib_applehelp-1.0.4-py3-none-any.whl", hash = "sha256:29d341f67fb0f6f586b23ad80e072c8e6ad0b48417db2bde114a4c9746feb228"},
> +]
> +
> +[package.extras]
> +lint = ["docutils-stubs", "flake8", "mypy"]
> +test = ["pytest"]
> +
> +[[package]]
> +name = "sphinxcontrib-devhelp"
> +version = "1.0.2"
> +description = "sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp document."
> +optional = false
> +python-versions = ">=3.5"
> +files = [
> +    {file = "sphinxcontrib-devhelp-1.0.2.tar.gz", hash = "sha256:ff7f1afa7b9642e7060379360a67e9c41e8f3121f2ce9164266f61b9f4b338e4"},
> +    {file = "sphinxcontrib_devhelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:8165223f9a335cc1af7ffe1ed31d2871f325254c0423bc0c4c7cd1c1e4734a2e"},
> +]
> +
> +[package.extras]
> +lint = ["docutils-stubs", "flake8", "mypy"]
> +test = ["pytest"]
> +
> +[[package]]
> +name = "sphinxcontrib-htmlhelp"
> +version = "2.0.1"
> +description = "sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files"
> +optional = false
> +python-versions = ">=3.8"
> +files = [
> +    {file = "sphinxcontrib-htmlhelp-2.0.1.tar.gz", hash = "sha256:0cbdd302815330058422b98a113195c9249825d681e18f11e8b1f78a2f11efff"},
> +    {file = "sphinxcontrib_htmlhelp-2.0.1-py3-none-any.whl", hash = "sha256:c38cb46dccf316c79de6e5515e1770414b797162b23cd3d06e67020e1d2a6903"},
> +]
> +
> +[package.extras]
> +lint = ["docutils-stubs", "flake8", "mypy"]
> +test = ["html5lib", "pytest"]
> +
> +[[package]]
> +name = "sphinxcontrib-jquery"
> +version = "4.1"
> +description = "Extension to include jQuery on newer Sphinx releases"
> +optional = false
> +python-versions = ">=2.7"
> +files = [
> +    {file = "sphinxcontrib-jquery-4.1.tar.gz", hash = "sha256:1620739f04e36a2c779f1a131a2dfd49b2fd07351bf1968ced074365933abc7a"},
> +    {file = "sphinxcontrib_jquery-4.1-py2.py3-none-any.whl", hash = "sha256:f936030d7d0147dd026a4f2b5a57343d233f1fc7b363f68b3d4f1cb0993878ae"},
> +]
> +
> +[package.dependencies]
> +Sphinx = ">=1.8"
> +
> +[[package]]
> +name = "sphinxcontrib-jsmath"
> +version = "1.0.1"
> +description = "A sphinx extension which renders display math in HTML via JavaScript"
> +optional = false
> +python-versions = ">=3.5"
> +files = [
> +    {file = "sphinxcontrib-jsmath-1.0.1.tar.gz", hash = "sha256:a9925e4a4587247ed2191a22df5f6970656cb8ca2bd6284309578f2153e0c4b8"},
> +    {file = "sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl", hash = "sha256:2ec2eaebfb78f3f2078e73666b1415417a116cc848b72e5172e596c871103178"},
> +]
> +
> +[package.extras]
> +test = ["flake8", "mypy", "pytest"]
> +
> +[[package]]
> +name = "sphinxcontrib-qthelp"
> +version = "1.0.3"
> +description = "sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp document."
> +optional = false
> +python-versions = ">=3.5"
> +files = [
> +    {file = "sphinxcontrib-qthelp-1.0.3.tar.gz", hash = "sha256:4c33767ee058b70dba89a6fc5c1892c0d57a54be67ddd3e7875a18d14cba5a72"},
> +    {file = "sphinxcontrib_qthelp-1.0.3-py2.py3-none-any.whl", hash = "sha256:bd9fc24bcb748a8d51fd4ecaade681350aa63009a347a8c14e637895444dfab6"},
> +]
> +
> +[package.extras]
> +lint = ["docutils-stubs", "flake8", "mypy"]
> +test = ["pytest"]
> +
> +[[package]]
> +name = "sphinxcontrib-serializinghtml"
> +version = "1.1.5"
> +description = "sphinxcontrib-serializinghtml is a sphinx extension which outputs \"serialized\" HTML files (json and pickle)."
> +optional = false
> +python-versions = ">=3.5"
> +files = [
> +    {file = "sphinxcontrib-serializinghtml-1.1.5.tar.gz", hash = "sha256:aa5f6de5dfdf809ef505c4895e51ef5c9eac17d0f287933eb49ec495280b6952"},
> +    {file = "sphinxcontrib_serializinghtml-1.1.5-py2.py3-none-any.whl", hash = "sha256:352a9a00ae864471d3a7ead8d7d79f5fc0b57e8b3f95e9867eb9eb28999b92fd"},
> +]
> +
> +[package.extras]
> +lint = ["docutils-stubs", "flake8", "mypy"]
> +test = ["pytest"]
> +
>   [[package]]
>   name = "toml"
>   version = "0.10.2"
> @@ -819,6 +1247,23 @@ files = [
>       {file = "typing_extensions-4.7.1.tar.gz", hash = "sha256:b75ddc264f0ba5615db7ba217daeb99701ad295353c45f9e95963337ceeeffb2"},
>   ]
>   
> +[[package]]
> +name = "urllib3"
> +version = "2.0.4"
> +description = "HTTP library with thread-safe connection pooling, file post, and more."
> +optional = false
> +python-versions = ">=3.7"
> +files = [
> +    {file = "urllib3-2.0.4-py3-none-any.whl", hash = "sha256:de7df1803967d2c2a98e4b11bb7d6bd9210474c46e8a0401514e3a42a75ebde4"},
> +    {file = "urllib3-2.0.4.tar.gz", hash = "sha256:8d22f86aae8ef5e410d4f539fde9ce6b2113a001bb4d189e0aed70642d602b11"},
> +]
> +
> +[package.extras]
> +brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)"]
> +secure = ["certifi", "cryptography (>=1.9)", "idna (>=2.0.0)", "pyopenssl (>=17.1.0)", "urllib3-secure-extra"]
> +socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"]
> +zstd = ["zstandard (>=0.18.0)"]
> +
>   [[package]]
>   name = "warlock"
>   version = "2.0.1"
> @@ -837,4 +1282,4 @@ jsonschema = ">=4,<5"
>   [metadata]
>   lock-version = "2.0"
>   python-versions = "^3.10"
> -content-hash = "0b1e4a1cb8323e17e5ee5951c97e74bde6e60d0413d7b25b1803d5b2bab39639"
> +content-hash = "fea1a3eddd1286d2ccd3bdb61c6ce085403f31567dbe4f55b6775bcf1e325372"
> diff --git a/dts/pyproject.toml b/dts/pyproject.toml
> index 6762edfa6b..159940ce02 100644
> --- a/dts/pyproject.toml
> +++ b/dts/pyproject.toml
> @@ -34,6 +34,13 @@ pylama = "^8.4.1"
>   pyflakes = "^2.5.0"
>   toml = "^0.10.2"
>   
> +[tool.poetry.group.docs]
> +optional = true
> +
> +[tool.poetry.group.docs.dependencies]
> +sphinx = "<7"
> +sphinx-rtd-theme = "^1.2.2"
> +
>   [build-system]
>   requires = ["poetry-core>=1.0.0"]
>   build-backend = "poetry.core.masonry.api"
Reviewed-by: Yoan Picchi <yoan.picchi@arm.com>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v4 4/4] dts: format docstrigs to google format
  2023-08-31 10:04       ` [RFC PATCH v4 4/4] dts: format docstrigs to google format Juraj Linkeš
  2023-09-01 17:02         ` Jeremy Spewock
@ 2023-10-31 12:10         ` Yoan Picchi
  2023-11-02 10:17           ` Juraj Linkeš
  1 sibling, 1 reply; 255+ messages in thread
From: Yoan Picchi @ 2023-10-31 12:10 UTC (permalink / raw)
  To: Juraj Linkeš,
	thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	jspewock, probb
  Cc: dev

On 8/31/23 11:04, Juraj Linkeš wrote:
> WIP: only one module is reformatted to serve as a demonstration.
> 
> The google format is documented here [0].
> 
> [0]: https://google.github.io/styleguide/pyguide.html
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> Acked-by: Jeremy Spweock <jspweock@iol.unh.edu>
> ---
>   dts/framework/testbed_model/node.py | 171 +++++++++++++++++++---------
>   1 file changed, 118 insertions(+), 53 deletions(-)
> 
> diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
> index 23efa79c50..619743ebe7 100644
> --- a/dts/framework/testbed_model/node.py
> +++ b/dts/framework/testbed_model/node.py
> @@ -3,8 +3,13 @@
>   # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
>   # Copyright(c) 2022-2023 University of New Hampshire
>   
> -"""
> -A node is a generic host that DTS connects to and manages.
> +"""Common functionality for node management.
> +
> +There's a base class, Node, that's supposed to be extended by other classes

This comment and the following ones are all something of a nitpick.
This sounds too passive. Why not have something simpler like:
The virtual class Node is meant to be extended by other classes
with functionality specific to that node type.

> +with functionality specific to that node type.
> +The only part that can be used standalone is the Node.skip_setup static method,
> +which is a decorator used to skip method execution if skip_setup is passed
> +by the user on the cmdline or in an env variable.

I'd expand env to the full word, as this is meant to go in the documentation.

>   """
>   
>   from abc import ABC
> @@ -35,10 +40,26 @@
>   
>   
>   class Node(ABC):
> -    """
> -    Basic class for node management. This class implements methods that
> -    manage a node, such as information gathering (of CPU/PCI/NIC) and
> -    environment setup.
> +    """The base class for node management.
> +
> +    It shouldn't be instantiated, but rather extended.
> +    It implements common methods to manage any node:
> +
> +       * connection to the node
> +       * information gathering of CPU
> +       * hugepages setup
> +
> +    Arguments:
> +        node_config: The config from the input configuration file.
> +
> +    Attributes:
> +        main_session: The primary OS-agnostic remote session used
> +            to communicate with the node.
> +        config: The configuration used to create the node.
> +        name: The name of the node.
> +        lcores: The list of logical cores that DTS can use on the node.
> +            It's derived from logical cores present on the node and user configuration.
> +        ports: The ports of this node specified in user configuration.
>       """
>   
>       main_session: OSSession
> @@ -77,9 +98,14 @@ def _init_ports(self) -> None:
>               self.configure_port_state(port)
>   
>       def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
> -        """
> -        Perform the execution setup that will be done for each execution
> -        this node is part of.
> +        """Execution setup steps.
> +
> +        Configure hugepages and call self._set_up_execution where
> +        the rest of the configuration steps (if any) are implemented.
> +
> +        Args:
> +            execution_config: The execution configuration according to which
> +                the setup steps will be taken.
>           """
>           self._setup_hugepages()
>           self._set_up_execution(execution_config)
> @@ -88,58 +114,78 @@ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
>               self.virtual_devices.append(VirtualDevice(vdev))
>   
>       def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
> -        """
> -        This method exists to be optionally overwritten by derived classes and
> -        is not decorated so that the derived class doesn't have to use the decorator.
> +        """Optional additional execution setup steps for derived classes.
> +
> +        Derived classes should overwrite this
> +        if they want to add additional execution setup steps.

I'd probably use "need" or "require" instead of "want" (it's the dev
that wants, not the class).

>           """
>   
>       def tear_down_execution(self) -> None:
> -        """
> -        Perform the execution teardown that will be done after each execution
> -        this node is part of concludes.
> +        """Execution teardown steps.
> +
> +        There are currently no common execution teardown steps
> +        common to all DTS node types.
>           """
>           self.virtual_devices = []
>           self._tear_down_execution()
>   
>       def _tear_down_execution(self) -> None:
> -        """
> -        This method exists to be optionally overwritten by derived classes and
> -        is not decorated so that the derived class doesn't have to use the decorator.
> +        """Optional additional execution teardown steps for derived classes.
> +
> +        Derived classes should overwrite this
> +        if they want to add additional execution teardown steps.
>           """
>   
>       def set_up_build_target(
>           self, build_target_config: BuildTargetConfiguration
>       ) -> None:
> -        """
> -        Perform the build target setup that will be done for each build target
> -        tested on this node.
> +        """Build target setup steps.
> +
> +        There are currently no common build target setup steps
> +        common to all DTS node types.
> +
> +        Args:
> +            build_target_config: The build target configuration according to which
> +                the setup steps will be taken.
>           """
>           self._set_up_build_target(build_target_config)
>   
>       def _set_up_build_target(
>           self, build_target_config: BuildTargetConfiguration
>       ) -> None:
> -        """
> -        This method exists to be optionally overwritten by derived classes and
> -        is not decorated so that the derived class doesn't have to use the decorator.
> +        """Optional additional build target setup steps for derived classes.
> +
> +        Derived classes should optionally overwrite this

Is there a reason for the "optionally" here when it's absent in all the 
other functions?

> +        if they want to add additional build target setup steps.
>           """
>   
>       def tear_down_build_target(self) -> None:
> -        """
> -        Perform the build target teardown that will be done after each build target
> -        tested on this node.
> +        """Build target teardown steps.
> +
> +        There are currently no common build target teardown steps
> +        common to all DTS node types.
>           """
>           self._tear_down_build_target()
>   
>       def _tear_down_build_target(self) -> None:
> -        """
> -        This method exists to be optionally overwritten by derived classes and
> -        is not decorated so that the derived class doesn't have to use the decorator.
> +        """Optional additional build target teardown steps for derived classes.
> +
> +        Derived classes should overwrite this
> +        if they want to add additional build target teardown steps.
>           """
>   
>       def create_session(self, name: str) -> OSSession:
> -        """
> -        Create and return a new OSSession tailored to the remote OS.
> +        """Create and return a new OS-agnostic remote session.
> +
> +        The returned session won't be used by the node creating it.
> +        The session must be used by the caller.
> +        Will be cleaned up automatically.

I did a double take reading this before I understood that the subject
was the previously mentioned session. I'd recommend either adding "it"
or "the session". Also, it will be cleaned up automatically... when?
When I create a new session? When the node is deleted?

> +
> +        Args:
> +            name: The name of the session.
> +
> +        Returns:
> +            A new OS-agnostic remote session.
>           """
>           session_name = f"{self.name} {name}"
>           connection = create_session(
> @@ -186,14 +232,24 @@ def filter_lcores(
>           filter_specifier: LogicalCoreCount | LogicalCoreList,
>           ascending: bool = True,
>       ) -> list[LogicalCore]:
> -        """
> -        Filter the LogicalCores found on the Node according to
> -        a LogicalCoreCount or a LogicalCoreList.
> +        """Filter the node's logical cores that DTS can use.
>   
> -        If ascending is True, use cores with the lowest numerical id first
> -        and continue in ascending order. If False, start with the highest
> -        id and continue in descending order. This ordering affects which
> -        sockets to consider first as well.
> +        Logical cores that DTS can use are ones that are present on the node,
> +        but filtered according to user config.
> +        The filter_specifier will filter cores from those logical cores.
> +
> +        Args:
> +            filter_specifier: Two different filters can be used, one that specifies
> +                the number of logical cores per core, cores per socket and
> +                the number of sockets,
> +                the other that specifies a logical core list.
> +            ascending: If True, use cores with the lowest numerical id first
> +                and continue in ascending order. If False, start with the highest
> +                id and continue in descending order. This ordering affects which
> +                sockets to consider first as well.
> +
> +        Returns:
> +            A list of logical cores.
>           """
>           self._logger.debug(f"Filtering {filter_specifier} from {self.lcores}.")
>           return lcore_filter(
> @@ -203,17 +259,14 @@ def filter_lcores(
>           ).filter()
>   
>       def _get_remote_cpus(self) -> None:
> -        """
> -        Scan CPUs in the remote OS and store a list of LogicalCores.
> -        """
> +        """Scan CPUs in the remote OS and store a list of LogicalCores."""
>           self._logger.info("Getting CPU information.")
>           self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
>   
>       def _setup_hugepages(self):
> -        """
> -        Setup hugepages on the Node. Different architectures can supply different
> -        amounts of memory for hugepages and numa-based hugepage allocation may need
> -        to be considered.
> +        """Setup hugepages on the Node.
> +
> +        Configure the hugepages only if they're specified in user configuration.
>           """
>           if self.config.hugepages:
>               self.main_session.setup_hugepages(
> @@ -221,8 +274,11 @@ def _setup_hugepages(self):
>               )
>   
>       def configure_port_state(self, port: Port, enable: bool = True) -> None:
> -        """
> -        Enable/disable port.
> +        """Enable/disable port.
> +
> +        Args:
> +            port: The port to enable/disable.
> +            enable: True to enable, false to disable.
>           """
>           self.main_session.configure_port_state(port, enable)
>   
> @@ -232,15 +288,19 @@ def configure_port_ip_address(
>           port: Port,
>           delete: bool = False,
>       ) -> None:
> -        """
> -        Configure the IP address of a port on this node.
> +        """Add an IP address to a port on this node.
> +
> +        Args:
> +            address: The IP address with mask in CIDR format.
> +                Can be either IPv4 or IPv6.
> +            port: The port to which to add the address.
> +            delete: If True, will delete the address from the port
> +                instead of adding it.
>           """
>           self.main_session.configure_port_ip_address(address, port, delete)
>   
>       def close(self) -> None:
> -        """
> -        Close all connections and free other resources.
> -        """
> +        """Close all connections and free other resources."""
>           if self.main_session:
>               self.main_session.close()
>           for session in self._other_sessions:
> @@ -249,6 +309,11 @@ def close(self) -> None:
>   
>       @staticmethod
>       def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
> +        """A decorator that skips the decorated function.
> +
> +        When used, the decorator executes an empty lambda function
> +        instead of the decorated function.
> +        """
>           if SETTINGS.skip_setup:
>               return lambda *args: None
>           else:
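
For context, a minimal sketch of how this decorator ends up being used
(the method below is made up for illustration):

    @Node.skip_setup
    def _do_expensive_setup(self) -> None:
        # When skip_setup is set (on the command line or via an
        # environment variable), an empty lambda runs in place of
        # this method, so the body is skipped entirely.
        ...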


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [RFC PATCH v4 4/4] dts: format docstrigs to google format
  2023-10-31 12:10         ` Yoan Picchi
@ 2023-11-02 10:17           ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-02 10:17 UTC (permalink / raw)
  To: Yoan Picchi
  Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
	jspewock, probb, dev

On Tue, Oct 31, 2023 at 1:10 PM Yoan Picchi <yoan.picchi@foss.arm.com> wrote:
>
> On 8/31/23 11:04, Juraj Linkeš wrote:
> > WIP: only one module is reformatted to serve as a demonstration.
> >
> > The google format is documented here [0].
> >
> > [0]: https://google.github.io/styleguide/pyguide.html
> >
> > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > Acked-by: Jeremy Spweock <jspweock@iol.unh.edu>
> > ---
> >   dts/framework/testbed_model/node.py | 171 +++++++++++++++++++---------
> >   1 file changed, 118 insertions(+), 53 deletions(-)
> >
> > diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
> > index 23efa79c50..619743ebe7 100644
> > --- a/dts/framework/testbed_model/node.py
> > +++ b/dts/framework/testbed_model/node.py
> > @@ -3,8 +3,13 @@
> >   # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
> >   # Copyright(c) 2022-2023 University of New Hampshire
> >
> > -"""
> > -A node is a generic host that DTS connects to and manages.
> > +"""Common functionality for node management.
> > +
> > +There's a base class, Node, that's supposed to be extended by other classes
>
> This comment and the following ones are all something of a nitpick.

I think they're pretty helpful.

> This sounds too passive. Why not have something simpler like:
> The virtual class Node is meant to be extended by other classes
> with functionality specific to that node type.
>

Okay, makes sense.

> > +with functionality specific to that node type.
> > +The only part that can be used standalone is the Node.skip_setup static method,
> > +which is a decorator used to skip method execution if skip_setup is passed
> > +by the user on the cmdline or in an env variable.
>
> I'd expand env to the full word, as this is meant to go in the documentation.
>

I'll try to do this everywhere in the docs.

> >   """
> >
> >   from abc import ABC
> > @@ -35,10 +40,26 @@
> >
> >
> >   class Node(ABC):
> > -    """
> > -    Basic class for node management. This class implements methods that
> > -    manage a node, such as information gathering (of CPU/PCI/NIC) and
> > -    environment setup.
> > +    """The base class for node management.
> > +
> > +    It shouldn't be instantiated, but rather extended.
> > +    It implements common methods to manage any node:
> > +
> > +       * connection to the node
> > +       * information gathering of CPU
> > +       * hugepages setup
> > +
> > +    Arguments:
> > +        node_config: The config from the input configuration file.
> > +
> > +    Attributes:
> > +        main_session: The primary OS-agnostic remote session used
> > +            to communicate with the node.
> > +        config: The configuration used to create the node.
> > +        name: The name of the node.
> > +        lcores: The list of logical cores that DTS can use on the node.
> > +            It's derived from logical cores present on the node and user configuration.
> > +        ports: The ports of this node specified in user configuration.
> >       """
> >
> >       main_session: OSSession
> > @@ -77,9 +98,14 @@ def _init_ports(self) -> None:
> >               self.configure_port_state(port)
> >
> >       def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
> > -        """
> > -        Perform the execution setup that will be done for each execution
> > -        this node is part of.
> > +        """Execution setup steps.
> > +
> > +        Configure hugepages and call self._set_up_execution where
> > +        the rest of the configuration steps (if any) are implemented.
> > +
> > +        Args:
> > +            execution_config: The execution configuration according to which
> > +                the setup steps will be taken.
> >           """
> >           self._setup_hugepages()
> >           self._set_up_execution(execution_config)
> > @@ -88,58 +114,78 @@ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
> >               self.virtual_devices.append(VirtualDevice(vdev))
> >
> >       def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
> > -        """
> > -        This method exists to be optionally overwritten by derived classes and
> > -        is not decorated so that the derived class doesn't have to use the decorator.
> > +        """Optional additional execution setup steps for derived classes.
> > +
> > +        Derived classes should overwrite this
> > +        if they want to add additional execution setup steps.
>
> I'd probably use need or require instead of want (it's the dev that
> wants, not the class)
>

Good point, that's a better wording.

> >           """
> >
> >       def tear_down_execution(self) -> None:
> > -        """
> > -        Perform the execution teardown that will be done after each execution
> > -        this node is part of concludes.
> > +        """Execution teardown steps.
> > +
> > +        There are currently no execution teardown steps
> > +        common to all DTS node types.
> >           """
> >           self.virtual_devices = []
> >           self._tear_down_execution()
> >
> >       def _tear_down_execution(self) -> None:
> > -        """
> > -        This method exists to be optionally overwritten by derived classes and
> > -        is not decorated so that the derived class doesn't have to use the decorator.
> > +        """Optional additional execution teardown steps for derived classes.
> > +
> > +        Derived classes should overwrite this
> > +        if they want to add additional execution teardown steps.
> >           """
> >
> >       def set_up_build_target(
> >           self, build_target_config: BuildTargetConfiguration
> >       ) -> None:
> > -        """
> > -        Perform the build target setup that will be done for each build target
> > -        tested on this node.
> > +        """Build target setup steps.
> > +
> > +        There are currently no build target setup steps
> > +        common to all DTS node types.
> > +
> > +        Args:
> > +            build_target_config: The build target configuration according to which
> > +                the setup steps will be taken.
> >           """
> >           self._set_up_build_target(build_target_config)
> >
> >       def _set_up_build_target(
> >           self, build_target_config: BuildTargetConfiguration
> >       ) -> None:
> > -        """
> > -        This method exists to be optionally overwritten by derived classes and
> > -        is not decorated so that the derived class doesn't have to use the decorator.
> > +        """Optional additional build target setup steps for derived classes.
> > +
> > +        Derived classes should optionally overwrite this
>
> Is there a reason for the "optionally" here when it's absent in all the
> other functions?
>

I've actually caught this as well when addressing the previous
comment. I unified the wording (without optionally, it's redundant).

> > +        if they want to add additional build target setup steps.
> >           """
> >
> >       def tear_down_build_target(self) -> None:
> > -        """
> > -        Perform the build target teardown that will be done after each build target
> > -        tested on this node.
> > +        """Build target teardown steps.
> > +
> > +        There are currently no build target teardown steps
> > +        common to all DTS node types.
> >           """
> >           self._tear_down_build_target()
> >
> >       def _tear_down_build_target(self) -> None:
> > -        """
> > -        This method exists to be optionally overwritten by derived classes and
> > -        is not decorated so that the derived class doesn't have to use the decorator.
> > +        """Optional additional build target teardown steps for derived classes.
> > +
> > +        Derived classes should overwrite this
> > +        if they want to add additional build target teardown steps.
> >           """
> >
> >       def create_session(self, name: str) -> OSSession:
> > -        """
> > -        Create and return a new OSSession tailored to the remote OS.
> > +        """Create and return a new OS-agnostic remote session.
> > +
> > +        The returned session won't be used by the node creating it.
> > +        The session must be used by the caller.
> > +        Will be cleaned up automatically.
>
> I did a double take reading this before I understood that the subject
> was the previously mentioned session. I'd recommend either adding "it"
> or "the session". Also, it will be cleaned up automatically... when?
> When I create a new session? When the node is deleted?
>

The session will be cleaned up as part of the node's cleanup. What
about this new wording:

The returned session won't be used by the node creating it. The session
must be used by the caller. The session will be maintained for the
entire lifecycle of the node object, at the end of which the session
will be cleaned up automatically.

I've added a note:

Note:
    Any number of these supplementary sessions may be created.
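
To make the lifecycle concrete (a minimal sketch; the command is
arbitrary and _other_sessions is the node's internal session list, as
in the close() hunk below):

    session = node.create_session("extra")  # tracked in _other_sessions
    session.send_command("uname -a")        # the caller drives the session
    node.close()  # closes main_session and every supplementary session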


> > +
> > +        Args:
> > +            name: The name of the session.
> > +
> > +        Returns:
> > +            A new OS-agnostic remote session.
> >           """
> >           session_name = f"{self.name} {name}"
> >           connection = create_session(
> > @@ -186,14 +232,24 @@ def filter_lcores(
> >           filter_specifier: LogicalCoreCount | LogicalCoreList,
> >           ascending: bool = True,
> >       ) -> list[LogicalCore]:
> > -        """
> > -        Filter the LogicalCores found on the Node according to
> > -        a LogicalCoreCount or a LogicalCoreList.
> > +        """Filter the node's logical cores that DTS can use.
> >
> > -        If ascending is True, use cores with the lowest numerical id first
> > -        and continue in ascending order. If False, start with the highest
> > -        id and continue in descending order. This ordering affects which
> > -        sockets to consider first as well.
> > +        Logical cores that DTS can use are ones that are present on the node,
> > +        but filtered according to user config.
> > +        The filter_specifier will filter cores from those logical cores.
> > +
> > +        Args:
> > +            filter_specifier: Two different filters can be used, one that specifies
> > +                the number of logical cores per core, cores per socket and
> > +                the number of sockets,
> > +                the other that specifies a logical core list.
> > +            ascending: If True, use cores with the lowest numerical id first
> > +                and continue in ascending order. If False, start with the highest
> > +                id and continue in descending order. This ordering affects which
> > +                sockets to consider first as well.
> > +
> > +        Returns:
> > +            A list of logical cores.
> >           """
> >           self._logger.debug(f"Filtering {filter_specifier} from {self.lcores}.")
> >           return lcore_filter(
> > @@ -203,17 +259,14 @@ def filter_lcores(
> >           ).filter()
> >
> >       def _get_remote_cpus(self) -> None:
> > -        """
> > -        Scan CPUs in the remote OS and store a list of LogicalCores.
> > -        """
> > +        """Scan CPUs in the remote OS and store a list of LogicalCores."""
> >           self._logger.info("Getting CPU information.")
> >           self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
> >
> >       def _setup_hugepages(self):
> > -        """
> > -        Setup hugepages on the Node. Different architectures can supply different
> > -        amounts of memory for hugepages and numa-based hugepage allocation may need
> > -        to be considered.
> > +        """Setup hugepages on the Node.
> > +
> > +        Configure the hugepages only if they're specified in user configuration.
> >           """
> >           if self.config.hugepages:
> >               self.main_session.setup_hugepages(
> > @@ -221,8 +274,11 @@ def _setup_hugepages(self):
> >               )
> >
> >       def configure_port_state(self, port: Port, enable: bool = True) -> None:
> > -        """
> > -        Enable/disable port.
> > +        """Enable/disable port.
> > +
> > +        Args:
> > +            port: The port to enable/disable.
> > +            enable: True to enable, False to disable.
> >           """
> >           self.main_session.configure_port_state(port, enable)
> >
> > @@ -232,15 +288,19 @@ def configure_port_ip_address(
> >           port: Port,
> >           delete: bool = False,
> >       ) -> None:
> > -        """
> > -        Configure the IP address of a port on this node.
> > +        """Add an IP address to a port on this node.
> > +
> > +        Args:
> > +            address: The IP address with mask in CIDR format.
> > +                Can be either IPv4 or IPv6.
> > +            port: The port to which to add the address.
> > +            delete: If True, will delete the address from the port
> > +                instead of adding it.
> >           """
> >           self.main_session.configure_port_ip_address(address, port, delete)
> >
> >       def close(self) -> None:
> > -        """
> > -        Close all connections and free other resources.
> > -        """
> > +        """Close all connections and free other resources."""
> >           if self.main_session:
> >               self.main_session.close()
> >           for session in self._other_sessions:
> > @@ -249,6 +309,11 @@ def close(self) -> None:
> >
> >       @staticmethod
> >       def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
> > +        """A decorator that skips the decorated function.
> > +
> > +        When used, the decorator executes an empty lambda function
> > +        instead of the decorated function.
> > +        """
> >           if SETTINGS.skip_setup:
> >               return lambda *args: None
> >           else:
>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v5 00/23] dts: add dts api docs
  2023-08-31 10:04     ` [RFC PATCH v4 " Juraj Linkeš
                         ` (3 preceding siblings ...)
  2023-08-31 10:04       ` [RFC PATCH v4 4/4] dts: format docstrigs to google format Juraj Linkeš
@ 2023-11-06 17:15       ` Juraj Linkeš
  2023-11-06 17:15         ` [PATCH v5 01/23] dts: code adjustments for doc generation Juraj Linkeš
                           ` (23 more replies)
  4 siblings, 24 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-06 17:15 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

The commits can be split into groups.

The first commit makes changes to the code. These mainly restructure
the code so that the actual API docs generation works. There are also
some code changes which get reflected in the documentation, such as
making functions/methods/attributes private or public.

The second set of commits (2-21) deals with the actual docstring
documentation (from which the API docs are generated). The format of
docstrings is the Google format [0] with PEP257 [1] and some guidelines
captured in the last commit of this group, covering what the Google
format doesn't specify.
The docstring updates are split into many commits to make review
possible. When accepted, these may be squashed (commits 4-21).
The docstrings have been composed in anticipation of [2], adhering to
a maximum line length of 100. We don't have a tool for automatic
docstring formatting, hence the use of 100 right away to save time.
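
For illustration, a docstring in this format looks something like this
(a made-up function, not taken from DTS):

    def send_command(command: str, timeout: float = 15) -> str:
        """Send a command to the remote host.

        Args:
            command: The shell command to execute.
            timeout: Seconds to wait for the command to finish.

        Returns:
            The command's standard output.
        """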

NOTE: The logger.py module is not fully documented, as it's being
refactored and the refactor will be submitted in the near future.
Documenting it now seems unnecessary.

The last two commits comprise the final group, enabling the actual
generation of documentation.
The generation is done with Sphinx, which DPDK already uses, with a
slightly modified configuration (the sidebar: unlimited depth and
better collapsing; I'd welcome comments on this - see the sketch below).

The first two groups are the most important to merge, as future
development can't proceed without them. The third group may be
finished/accepted at a later date, as it's fairly independent.

The build requires the same Python version and dependencies as DTS,
because Sphinx imports the Python modules. The modules are imported
individually, requiring the code refactoring mentioned above.
Dependencies are installed using Poetry from the dts directory:

poetry install --with docs

After installing, enter the Poetry shell:

poetry shell

And then run the build:
ninja -C <meson_build_dir> dts-doc

[0] https://google.github.io/styleguide/pyguide.html#s3.8.4-comments-in-classes
[1] https://peps.python.org/pep-0257/
[2] https://patches.dpdk.org/project/dpdk/list/?series=29844

Juraj Linkeš (23):
  dts: code adjustments for doc generation
  dts: add docstring checker
  dts: add basic developer docs
  dts: exceptions docstring update
  dts: settings docstring update
  dts: logger and settings docstring update
  dts: dts runner and main docstring update
  dts: test suite docstring update
  dts: test result docstring update
  dts: config docstring update
  dts: remote session docstring update
  dts: interactive remote session docstring update
  dts: port and virtual device docstring update
  dts: cpu docstring update
  dts: os session docstring update
  dts: posix and linux sessions docstring update
  dts: node docstring update
  dts: sut and tg nodes docstring update
  dts: base traffic generators docstring update
  dts: scapy tg docstring update
  dts: test suites docstring update
  dts: add doc generation dependencies
  dts: add doc generation

 buildtools/call-sphinx-build.py               |  29 +-
 doc/api/meson.build                           |   1 +
 doc/guides/conf.py                            |  34 +-
 doc/guides/meson.build                        |   1 +
 doc/guides/tools/dts.rst                      | 103 ++++
 dts/doc/conf_yaml_schema.json                 |   1 +
 dts/doc/index.rst                             |  17 +
 dts/doc/meson.build                           |  49 ++
 dts/framework/__init__.py                     |  12 +-
 dts/framework/config/__init__.py              | 379 ++++++++++---
 dts/framework/config/types.py                 | 132 +++++
 dts/framework/dts.py                          | 161 +++++-
 dts/framework/exception.py                    | 156 +++---
 dts/framework/logger.py                       |  72 ++-
 dts/framework/remote_session/__init__.py      |  80 ++-
 .../interactive_remote_session.py             |  36 +-
 .../remote_session/interactive_shell.py       | 152 ++++++
 dts/framework/remote_session/os_session.py    | 284 ----------
 dts/framework/remote_session/python_shell.py  |  32 ++
 .../remote_session/remote/__init__.py         |  27 -
 .../remote/interactive_shell.py               | 133 -----
 .../remote_session/remote/python_shell.py     |  12 -
 .../remote_session/remote/remote_session.py   | 172 ------
 .../remote_session/remote/testpmd_shell.py    |  49 --
 .../remote_session/remote_session.py          | 232 ++++++++
 .../{remote => }/ssh_session.py               |  28 +-
 dts/framework/remote_session/testpmd_shell.py |  86 +++
 dts/framework/settings.py                     | 188 +++++--
 dts/framework/test_result.py                  | 296 +++++++---
 dts/framework/test_suite.py                   | 230 ++++++--
 dts/framework/testbed_model/__init__.py       |  28 +-
 dts/framework/testbed_model/{hw => }/cpu.py   | 209 +++++--
 dts/framework/testbed_model/hw/__init__.py    |  27 -
 dts/framework/testbed_model/hw/port.py        |  60 ---
 .../testbed_model/hw/virtual_device.py        |  16 -
 .../linux_session.py                          |  69 ++-
 dts/framework/testbed_model/node.py           | 217 +++++---
 dts/framework/testbed_model/os_session.py     | 425 +++++++++++++++
 dts/framework/testbed_model/port.py           |  93 ++++
 .../posix_session.py                          |  85 ++-
 dts/framework/testbed_model/sut_node.py       | 227 +++++---
 dts/framework/testbed_model/tg_node.py        |  70 +--
 .../testbed_model/traffic_generator.py        |  72 ---
 .../traffic_generator/__init__.py             |  44 ++
 .../capturing_traffic_generator.py            |  52 +-
 .../{ => traffic_generator}/scapy.py          | 114 ++--
 .../traffic_generator/traffic_generator.py    |  87 +++
 dts/framework/testbed_model/virtual_device.py |  29 +
 dts/framework/utils.py                        | 136 +++--
 dts/main.py                                   |  17 +-
 dts/meson.build                               |  16 +
 dts/poetry.lock                               | 509 +++++++++++++++++-
 dts/pyproject.toml                            |  13 +-
 dts/tests/TestSuite_hello_world.py            |  16 +-
 dts/tests/TestSuite_os_udp.py                 |  16 +-
 dts/tests/TestSuite_smoke_tests.py            |  53 +-
 meson.build                                   |   1 +
 57 files changed, 4181 insertions(+), 1704 deletions(-)
 create mode 120000 dts/doc/conf_yaml_schema.json
 create mode 100644 dts/doc/index.rst
 create mode 100644 dts/doc/meson.build
 create mode 100644 dts/framework/config/types.py
 rename dts/framework/remote_session/{remote => }/interactive_remote_session.py (76%)
 create mode 100644 dts/framework/remote_session/interactive_shell.py
 delete mode 100644 dts/framework/remote_session/os_session.py
 create mode 100644 dts/framework/remote_session/python_shell.py
 delete mode 100644 dts/framework/remote_session/remote/__init__.py
 delete mode 100644 dts/framework/remote_session/remote/interactive_shell.py
 delete mode 100644 dts/framework/remote_session/remote/python_shell.py
 delete mode 100644 dts/framework/remote_session/remote/remote_session.py
 delete mode 100644 dts/framework/remote_session/remote/testpmd_shell.py
 create mode 100644 dts/framework/remote_session/remote_session.py
 rename dts/framework/remote_session/{remote => }/ssh_session.py (83%)
 create mode 100644 dts/framework/remote_session/testpmd_shell.py
 rename dts/framework/testbed_model/{hw => }/cpu.py (50%)
 delete mode 100644 dts/framework/testbed_model/hw/__init__.py
 delete mode 100644 dts/framework/testbed_model/hw/port.py
 delete mode 100644 dts/framework/testbed_model/hw/virtual_device.py
 rename dts/framework/{remote_session => testbed_model}/linux_session.py (79%)
 create mode 100644 dts/framework/testbed_model/os_session.py
 create mode 100644 dts/framework/testbed_model/port.py
 rename dts/framework/{remote_session => testbed_model}/posix_session.py (74%)
 delete mode 100644 dts/framework/testbed_model/traffic_generator.py
 create mode 100644 dts/framework/testbed_model/traffic_generator/__init__.py
 rename dts/framework/testbed_model/{ => traffic_generator}/capturing_traffic_generator.py (66%)
 rename dts/framework/testbed_model/{ => traffic_generator}/scapy.py (71%)
 create mode 100644 dts/framework/testbed_model/traffic_generator/traffic_generator.py
 create mode 100644 dts/framework/testbed_model/virtual_device.py
 create mode 100644 dts/meson.build

-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v5 01/23] dts: code adjustments for doc generation
  2023-11-06 17:15       ` [PATCH v5 00/23] dts: add dts api docs Juraj Linkeš
@ 2023-11-06 17:15         ` Juraj Linkeš
  2023-11-08 13:35           ` Yoan Picchi
  2023-11-06 17:15         ` [PATCH v5 02/23] dts: add docstring checker Juraj Linkeš
                           ` (22 subsequent siblings)
  23 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-06 17:15 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

The standard Python tool for generating API documentation, Sphinx,
imports modules one-by-one when generating the documentation. This
requires code changes:
* properly guarding argument parsing in the if __name__ == '__main__'
  block (a sketch of the pattern is at the end of this message),
* the logger used by the DTS runner underwent the same treatment so
  that it doesn't create log files outside of a DTS run,
* however, DTS uses the arguments to construct an object holding global
  variables. The defaults for the global variables needed to be moved
  out of argument parsing and defined elsewhere,
* importing the remote_session module from framework resulted in
  circular imports because one module tried to import another. This is
  fixed by reorganizing the code,
* some code reorganization was done because the resulting structure
  makes more sense, improving documentation clarity.

There are some other changes which are documentation-related:
* added missing type annotations so they appear in the generated docs,
* reordered arguments in some methods,
* removed superfluous arguments and attributes,
* changed functions/methods/attributes from public to private and
  vice versa.

All of the above appear in the generated documentation and improve it.
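
For clarity, the __main__ guard pattern referenced above (a minimal
sketch, not the exact DTS code):

    # dts/main.py (sketch): Sphinx can import this module without
    # parsing arguments or starting a DTS run.
    from framework.settings import get_settings

    def main() -> None:
        settings = get_settings()  # parse arguments only on a real run
        ...

    if __name__ == "__main__":
        main()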

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/config/__init__.py              | 10 ++-
 dts/framework/dts.py                          | 33 +++++--
 dts/framework/exception.py                    | 54 +++++-------
 dts/framework/remote_session/__init__.py      | 41 ++++-----
 .../interactive_remote_session.py             |  0
 .../{remote => }/interactive_shell.py         |  0
 .../{remote => }/python_shell.py              |  0
 .../remote_session/remote/__init__.py         | 27 ------
 .../{remote => }/remote_session.py            |  0
 .../{remote => }/ssh_session.py               | 12 +--
 .../{remote => }/testpmd_shell.py             |  0
 dts/framework/settings.py                     | 87 +++++++++++--------
 dts/framework/test_result.py                  |  4 +-
 dts/framework/test_suite.py                   |  7 +-
 dts/framework/testbed_model/__init__.py       | 12 +--
 dts/framework/testbed_model/{hw => }/cpu.py   | 13 +++
 dts/framework/testbed_model/hw/__init__.py    | 27 ------
 .../linux_session.py                          |  6 +-
 dts/framework/testbed_model/node.py           | 26 ++++--
 .../os_session.py                             | 22 ++---
 dts/framework/testbed_model/{hw => }/port.py  |  0
 .../posix_session.py                          |  4 +-
 dts/framework/testbed_model/sut_node.py       |  8 +-
 dts/framework/testbed_model/tg_node.py        | 30 +------
 .../traffic_generator/__init__.py             | 24 +++++
 .../capturing_traffic_generator.py            |  6 +-
 .../{ => traffic_generator}/scapy.py          | 23 ++---
 .../traffic_generator.py                      | 16 +++-
 .../testbed_model/{hw => }/virtual_device.py  |  0
 dts/framework/utils.py                        | 46 +++-------
 dts/main.py                                   |  9 +-
 31 files changed, 259 insertions(+), 288 deletions(-)
 rename dts/framework/remote_session/{remote => }/interactive_remote_session.py (100%)
 rename dts/framework/remote_session/{remote => }/interactive_shell.py (100%)
 rename dts/framework/remote_session/{remote => }/python_shell.py (100%)
 delete mode 100644 dts/framework/remote_session/remote/__init__.py
 rename dts/framework/remote_session/{remote => }/remote_session.py (100%)
 rename dts/framework/remote_session/{remote => }/ssh_session.py (91%)
 rename dts/framework/remote_session/{remote => }/testpmd_shell.py (100%)
 rename dts/framework/testbed_model/{hw => }/cpu.py (95%)
 delete mode 100644 dts/framework/testbed_model/hw/__init__.py
 rename dts/framework/{remote_session => testbed_model}/linux_session.py (97%)
 rename dts/framework/{remote_session => testbed_model}/os_session.py (95%)
 rename dts/framework/testbed_model/{hw => }/port.py (100%)
 rename dts/framework/{remote_session => testbed_model}/posix_session.py (98%)
 create mode 100644 dts/framework/testbed_model/traffic_generator/__init__.py
 rename dts/framework/testbed_model/{ => traffic_generator}/capturing_traffic_generator.py (96%)
 rename dts/framework/testbed_model/{ => traffic_generator}/scapy.py (95%)
 rename dts/framework/testbed_model/{ => traffic_generator}/traffic_generator.py (80%)
 rename dts/framework/testbed_model/{hw => }/virtual_device.py (100%)

diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index cb7e00ba34..2044c82611 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -17,6 +17,7 @@
 import warlock  # type: ignore[import]
 import yaml
 
+from framework.exception import ConfigurationError
 from framework.settings import SETTINGS
 from framework.utils import StrEnum
 
@@ -89,7 +90,7 @@ class TrafficGeneratorConfig:
     traffic_generator_type: TrafficGeneratorType
 
     @staticmethod
-    def from_dict(d: dict):
+    def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig":
         # This looks useless now, but is designed to allow expansion to traffic
         # generators that require more configuration later.
         match TrafficGeneratorType(d["type"]):
@@ -97,6 +98,10 @@ def from_dict(d: dict):
                 return ScapyTrafficGeneratorConfig(
                     traffic_generator_type=TrafficGeneratorType.SCAPY
                 )
+            case _:
+                raise ConfigurationError(
+                    f'Unknown traffic generator type "{d["type"]}".'
+                )
 
 
 @dataclass(slots=True, frozen=True)
@@ -324,6 +329,3 @@ def load_config() -> Configuration:
     config: dict[str, Any] = warlock.model_factory(schema, name="_Config")(config_data)
     config_obj: Configuration = Configuration.from_dict(dict(config))
     return config_obj
-
-
-CONFIGURATION = load_config()
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index f773f0c38d..4c7fb0c40a 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -6,19 +6,19 @@
 import sys
 
 from .config import (
-    CONFIGURATION,
     BuildTargetConfiguration,
     ExecutionConfiguration,
     TestSuiteConfig,
+    load_config,
 )
 from .exception import BlockingTestSuiteError
 from .logger import DTSLOG, getLogger
 from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result
 from .test_suite import get_test_suites
 from .testbed_model import SutNode, TGNode
-from .utils import check_dts_python_version
 
-dts_logger: DTSLOG = getLogger("DTSRunner")
+# dummy defaults to satisfy linters
+dts_logger: DTSLOG = None  # type: ignore[assignment]
 result: DTSResult = DTSResult(dts_logger)
 
 
@@ -30,14 +30,18 @@ def run_all() -> None:
     global dts_logger
     global result
 
+    # create a regular DTS logger and create a new result with it
+    dts_logger = getLogger("DTSRunner")
+    result = DTSResult(dts_logger)
+
     # check the python version of the server that run dts
-    check_dts_python_version()
+    _check_dts_python_version()
 
     sut_nodes: dict[str, SutNode] = {}
     tg_nodes: dict[str, TGNode] = {}
     try:
         # for all Execution sections
-        for execution in CONFIGURATION.executions:
+        for execution in load_config().executions:
             sut_node = sut_nodes.get(execution.system_under_test_node.name)
             tg_node = tg_nodes.get(execution.traffic_generator_node.name)
 
@@ -82,6 +86,25 @@ def run_all() -> None:
     _exit_dts()
 
 
+def _check_dts_python_version() -> None:
+    def RED(text: str) -> str:
+        return f"\u001B[31;1m{str(text)}\u001B[0m"
+
+    if sys.version_info.major < 3 or (
+        sys.version_info.major == 3 and sys.version_info.minor < 10
+    ):
+        print(
+            RED(
+                (
+                    "WARNING: DTS execution node's python version is lower than"
+                    "python 3.10, is deprecated and will not work in future releases."
+                )
+            ),
+            file=sys.stderr,
+        )
+        print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
+
+
 def _run_execution(
     sut_node: SutNode,
     tg_node: TGNode,
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index 001a5a5496..7489c03570 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -42,19 +42,14 @@ class SSHTimeoutError(DTSError):
     Command execution timeout.
     """
 
-    command: str
-    output: str
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
+    _command: str
 
-    def __init__(self, command: str, output: str):
-        self.command = command
-        self.output = output
+    def __init__(self, command: str):
+        self._command = command
 
     def __str__(self) -> str:
-        return f"TIMEOUT on {self.command}"
-
-    def get_output(self) -> str:
-        return self.output
+        return f"TIMEOUT on {self._command}"
 
 
 class SSHConnectionError(DTSError):
@@ -62,18 +57,18 @@ class SSHConnectionError(DTSError):
     SSH connection error.
     """
 
-    host: str
-    errors: list[str]
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
+    _host: str
+    _errors: list[str]
 
     def __init__(self, host: str, errors: list[str] | None = None):
-        self.host = host
-        self.errors = [] if errors is None else errors
+        self._host = host
+        self._errors = [] if errors is None else errors
 
     def __str__(self) -> str:
-        message = f"Error trying to connect with {self.host}."
-        if self.errors:
-            message += f" Errors encountered while retrying: {', '.join(self.errors)}"
+        message = f"Error trying to connect with {self._host}."
+        if self._errors:
+            message += f" Errors encountered while retrying: {', '.join(self._errors)}"
 
         return message
 
@@ -84,14 +79,14 @@ class SSHSessionDeadError(DTSError):
     It can no longer be used.
     """
 
-    host: str
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
+    _host: str
 
     def __init__(self, host: str):
-        self.host = host
+        self._host = host
 
     def __str__(self) -> str:
-        return f"SSH session with {self.host} has died"
+        return f"SSH session with {self._host} has died"
 
 
 class ConfigurationError(DTSError):
@@ -107,18 +102,18 @@ class RemoteCommandExecutionError(DTSError):
     Raised when a command executed on a Node returns a non-zero exit status.
     """
 
-    command: str
-    command_return_code: int
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
+    command: str
+    _command_return_code: int
 
     def __init__(self, command: str, command_return_code: int):
         self.command = command
-        self.command_return_code = command_return_code
+        self._command_return_code = command_return_code
 
     def __str__(self) -> str:
         return (
             f"Command {self.command} returned a non-zero exit code: "
-            f"{self.command_return_code}"
+            f"{self._command_return_code}"
         )
 
 
@@ -143,22 +138,15 @@ class TestCaseVerifyError(DTSError):
     Used in test cases to verify the expected behavior.
     """
 
-    value: str
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR
 
-    def __init__(self, value: str):
-        self.value = value
-
-    def __str__(self) -> str:
-        return repr(self.value)
-
 
 class BlockingTestSuiteError(DTSError):
-    suite_name: str
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR
+    _suite_name: str
 
     def __init__(self, suite_name: str) -> None:
-        self.suite_name = suite_name
+        self._suite_name = suite_name
 
     def __str__(self) -> str:
-        return f"Blocking suite {self.suite_name} failed."
+        return f"Blocking suite {self._suite_name} failed."
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index 00b6d1f03a..5e7ddb2b05 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -12,29 +12,24 @@
 
 # pylama:ignore=W0611
 
-from framework.config import OS, NodeConfiguration
-from framework.exception import ConfigurationError
+from framework.config import NodeConfiguration
 from framework.logger import DTSLOG
 
-from .linux_session import LinuxSession
-from .os_session import InteractiveShellType, OSSession
-from .remote import (
-    CommandResult,
-    InteractiveRemoteSession,
-    InteractiveShell,
-    PythonShell,
-    RemoteSession,
-    SSHSession,
-    TestPmdDevice,
-    TestPmdShell,
-)
-
-
-def create_session(
+from .interactive_remote_session import InteractiveRemoteSession
+from .interactive_shell import InteractiveShell
+from .python_shell import PythonShell
+from .remote_session import CommandResult, RemoteSession
+from .ssh_session import SSHSession
+from .testpmd_shell import TestPmdShell
+
+
+def create_remote_session(
     node_config: NodeConfiguration, name: str, logger: DTSLOG
-) -> OSSession:
-    match node_config.os:
-        case OS.linux:
-            return LinuxSession(node_config, name, logger)
-        case _:
-            raise ConfigurationError(f"Unsupported OS {node_config.os}")
+) -> RemoteSession:
+    return SSHSession(node_config, name, logger)
+
+
+def create_interactive_session(
+    node_config: NodeConfiguration, logger: DTSLOG
+) -> InteractiveRemoteSession:
+    return InteractiveRemoteSession(node_config, logger)
diff --git a/dts/framework/remote_session/remote/interactive_remote_session.py b/dts/framework/remote_session/interactive_remote_session.py
similarity index 100%
rename from dts/framework/remote_session/remote/interactive_remote_session.py
rename to dts/framework/remote_session/interactive_remote_session.py
diff --git a/dts/framework/remote_session/remote/interactive_shell.py b/dts/framework/remote_session/interactive_shell.py
similarity index 100%
rename from dts/framework/remote_session/remote/interactive_shell.py
rename to dts/framework/remote_session/interactive_shell.py
diff --git a/dts/framework/remote_session/remote/python_shell.py b/dts/framework/remote_session/python_shell.py
similarity index 100%
rename from dts/framework/remote_session/remote/python_shell.py
rename to dts/framework/remote_session/python_shell.py
diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
deleted file mode 100644
index 06403691a5..0000000000
--- a/dts/framework/remote_session/remote/__init__.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2023 PANTHEON.tech s.r.o.
-# Copyright(c) 2023 University of New Hampshire
-
-# pylama:ignore=W0611
-
-from framework.config import NodeConfiguration
-from framework.logger import DTSLOG
-
-from .interactive_remote_session import InteractiveRemoteSession
-from .interactive_shell import InteractiveShell
-from .python_shell import PythonShell
-from .remote_session import CommandResult, RemoteSession
-from .ssh_session import SSHSession
-from .testpmd_shell import TestPmdDevice, TestPmdShell
-
-
-def create_remote_session(
-    node_config: NodeConfiguration, name: str, logger: DTSLOG
-) -> RemoteSession:
-    return SSHSession(node_config, name, logger)
-
-
-def create_interactive_session(
-    node_config: NodeConfiguration, logger: DTSLOG
-) -> InteractiveRemoteSession:
-    return InteractiveRemoteSession(node_config, logger)
diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote_session.py
similarity index 100%
rename from dts/framework/remote_session/remote/remote_session.py
rename to dts/framework/remote_session/remote_session.py
diff --git a/dts/framework/remote_session/remote/ssh_session.py b/dts/framework/remote_session/ssh_session.py
similarity index 91%
rename from dts/framework/remote_session/remote/ssh_session.py
rename to dts/framework/remote_session/ssh_session.py
index 8d127f1601..cee11d14d6 100644
--- a/dts/framework/remote_session/remote/ssh_session.py
+++ b/dts/framework/remote_session/ssh_session.py
@@ -18,9 +18,7 @@
     SSHException,
 )
 
-from framework.config import NodeConfiguration
 from framework.exception import SSHConnectionError, SSHSessionDeadError, SSHTimeoutError
-from framework.logger import DTSLOG
 
 from .remote_session import CommandResult, RemoteSession
 
@@ -45,14 +43,6 @@ class SSHSession(RemoteSession):
 
     session: Connection
 
-    def __init__(
-        self,
-        node_config: NodeConfiguration,
-        session_name: str,
-        logger: DTSLOG,
-    ):
-        super(SSHSession, self).__init__(node_config, session_name, logger)
-
     def _connect(self) -> None:
         errors = []
         retry_attempts = 10
@@ -117,7 +107,7 @@ def _send_command(
 
         except CommandTimedOut as e:
             self._logger.exception(e)
-            raise SSHTimeoutError(command, e.result.stderr) from e
+            raise SSHTimeoutError(command) from e
 
         return CommandResult(
             self.name, command, output.stdout, output.stderr, output.return_code
diff --git a/dts/framework/remote_session/remote/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py
similarity index 100%
rename from dts/framework/remote_session/remote/testpmd_shell.py
rename to dts/framework/remote_session/testpmd_shell.py
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index cfa39d011b..7f5841d073 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -6,7 +6,7 @@
 import argparse
 import os
 from collections.abc import Callable, Iterable, Sequence
-from dataclasses import dataclass
+from dataclasses import dataclass, field
 from pathlib import Path
 from typing import Any, TypeVar
 
@@ -22,8 +22,8 @@ def __init__(
             option_strings: Sequence[str],
             dest: str,
             nargs: str | int | None = None,
-            const: str | None = None,
-            default: str = None,
+            const: bool | None = None,
+            default: Any = None,
             type: Callable[[str], _T | argparse.FileType | None] = None,
             choices: Iterable[_T] | None = None,
             required: bool = False,
@@ -32,6 +32,12 @@ def __init__(
         ) -> None:
             env_var_value = os.environ.get(env_var)
             default = env_var_value or default
+            if const is not None:
+                nargs = 0
+                default = const if env_var_value else default
+                type = None
+                choices = None
+                metavar = None
             super(_EnvironmentArgument, self).__init__(
                 option_strings,
                 dest,
@@ -52,22 +58,28 @@ def __call__(
             values: Any,
             option_string: str = None,
         ) -> None:
-            setattr(namespace, self.dest, values)
+            if self.const is not None:
+                setattr(namespace, self.dest, self.const)
+            else:
+                setattr(namespace, self.dest, values)
 
     return _EnvironmentArgument
 
 
-@dataclass(slots=True, frozen=True)
-class _Settings:
-    config_file_path: str
-    output_dir: str
-    timeout: float
-    verbose: bool
-    skip_setup: bool
-    dpdk_tarball_path: Path
-    compile_timeout: float
-    test_cases: list
-    re_run: int
+@dataclass(slots=True)
+class Settings:
+    config_file_path: Path = Path(__file__).parent.parent.joinpath("conf.yaml")
+    output_dir: str = "output"
+    timeout: float = 15
+    verbose: bool = False
+    skip_setup: bool = False
+    dpdk_tarball_path: Path | str = "dpdk.tar.xz"
+    compile_timeout: float = 1200
+    test_cases: list[str] = field(default_factory=list)
+    re_run: int = 0
+
+
+SETTINGS: Settings = Settings()
 
 
 def _get_parser() -> argparse.ArgumentParser:
@@ -81,7 +93,8 @@ def _get_parser() -> argparse.ArgumentParser:
     parser.add_argument(
         "--config-file",
         action=_env_arg("DTS_CFG_FILE"),
-        default="conf.yaml",
+        default=SETTINGS.config_file_path,
+        type=Path,
         help="[DTS_CFG_FILE] configuration file that describes the test cases, SUTs "
         "and targets.",
     )
@@ -90,7 +103,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--output-dir",
         "--output",
         action=_env_arg("DTS_OUTPUT_DIR"),
-        default="output",
+        default=SETTINGS.output_dir,
         help="[DTS_OUTPUT_DIR] Output directory where dts logs and results are saved.",
     )
 
@@ -98,7 +111,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "-t",
         "--timeout",
         action=_env_arg("DTS_TIMEOUT"),
-        default=15,
+        default=SETTINGS.timeout,
         type=float,
         help="[DTS_TIMEOUT] The default timeout for all DTS operations except for "
         "compiling DPDK.",
@@ -108,8 +121,9 @@ def _get_parser() -> argparse.ArgumentParser:
         "-v",
         "--verbose",
         action=_env_arg("DTS_VERBOSE"),
-        default="N",
-        help="[DTS_VERBOSE] Set to 'Y' to enable verbose output, logging all messages "
+        default=SETTINGS.verbose,
+        const=True,
+        help="[DTS_VERBOSE] Specify to enable verbose output, logging all messages "
         "to the console.",
     )
 
@@ -117,8 +131,8 @@ def _get_parser() -> argparse.ArgumentParser:
         "-s",
         "--skip-setup",
         action=_env_arg("DTS_SKIP_SETUP"),
-        default="N",
-        help="[DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT and TG nodes.",
+        const=True,
+        help="[DTS_SKIP_SETUP] Specify to skip all setup steps on SUT and TG nodes.",
     )
 
     parser.add_argument(
@@ -126,7 +140,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--snapshot",
         "--git-ref",
         action=_env_arg("DTS_DPDK_TARBALL"),
-        default="dpdk.tar.xz",
+        default=SETTINGS.dpdk_tarball_path,
         type=Path,
         help="[DTS_DPDK_TARBALL] Path to DPDK source code tarball or a git commit ID, "
         "tag ID or tree ID to test. To test local changes, first commit them, "
@@ -136,7 +150,7 @@ def _get_parser() -> argparse.ArgumentParser:
     parser.add_argument(
         "--compile-timeout",
         action=_env_arg("DTS_COMPILE_TIMEOUT"),
-        default=1200,
+        default=SETTINGS.compile_timeout,
         type=float,
         help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
     )
@@ -153,7 +167,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--re-run",
         "--re_run",
         action=_env_arg("DTS_RERUN"),
-        default=0,
+        default=SETTINGS.re_run,
         type=int,
         help="[DTS_RERUN] Re-run each test case the specified amount of times "
         "if a test failure occurs",
@@ -162,23 +176,22 @@ def _get_parser() -> argparse.ArgumentParser:
     return parser
 
 
-def _get_settings() -> _Settings:
+def get_settings() -> Settings:
     parsed_args = _get_parser().parse_args()
-    return _Settings(
+    return Settings(
         config_file_path=parsed_args.config_file,
         output_dir=parsed_args.output_dir,
         timeout=parsed_args.timeout,
-        verbose=(parsed_args.verbose == "Y"),
-        skip_setup=(parsed_args.skip_setup == "Y"),
+        verbose=parsed_args.verbose,
+        skip_setup=parsed_args.skip_setup,
         dpdk_tarball_path=Path(
-            DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir)
-        )
-        if not os.path.exists(parsed_args.tarball)
-        else Path(parsed_args.tarball),
+            Path(DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir))
+            if not os.path.exists(parsed_args.tarball)
+            else Path(parsed_args.tarball)
+        ),
         compile_timeout=parsed_args.compile_timeout,
-        test_cases=parsed_args.test_cases.split(",") if parsed_args.test_cases else [],
+        test_cases=(
+            parsed_args.test_cases.split(",") if parsed_args.test_cases else []
+        ),
         re_run=parsed_args.re_run,
     )
-
-
-SETTINGS: _Settings = _get_settings()
diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
index f0fbe80f6f..603e18872c 100644
--- a/dts/framework/test_result.py
+++ b/dts/framework/test_result.py
@@ -254,7 +254,7 @@ def add_build_target(
         self._inner_results.append(build_target_result)
         return build_target_result
 
-    def add_sut_info(self, sut_info: NodeInfo):
+    def add_sut_info(self, sut_info: NodeInfo) -> None:
         self.sut_os_name = sut_info.os_name
         self.sut_os_version = sut_info.os_version
         self.sut_kernel_version = sut_info.kernel_version
@@ -297,7 +297,7 @@ def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult:
         self._inner_results.append(execution_result)
         return execution_result
 
-    def add_error(self, error) -> None:
+    def add_error(self, error: Exception) -> None:
         self._errors.append(error)
 
     def process(self) -> None:
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index 3b890c0451..d53553bf34 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -11,7 +11,7 @@
 import re
 from ipaddress import IPv4Interface, IPv6Interface, ip_interface
 from types import MethodType
-from typing import Union
+from typing import Any, Union
 
 from scapy.layers.inet import IP  # type: ignore[import]
 from scapy.layers.l2 import Ether  # type: ignore[import]
@@ -26,8 +26,7 @@
 from .logger import DTSLOG, getLogger
 from .settings import SETTINGS
 from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult
-from .testbed_model import SutNode, TGNode
-from .testbed_model.hw.port import Port, PortLink
+from .testbed_model import Port, PortLink, SutNode, TGNode
 from .utils import get_packet_summaries
 
 
@@ -453,7 +452,7 @@ def _execute_test_case(
 
 
 def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
-    def is_test_suite(object) -> bool:
+    def is_test_suite(object: Any) -> bool:
         try:
             if issubclass(object, TestSuite) and object is not TestSuite:
                 return True
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index 5cbb859e47..8ced05653b 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -9,15 +9,9 @@
 
 # pylama:ignore=W0611
 
-from .hw import (
-    LogicalCore,
-    LogicalCoreCount,
-    LogicalCoreCountFilter,
-    LogicalCoreList,
-    LogicalCoreListFilter,
-    VirtualDevice,
-    lcore_filter,
-)
+from .cpu import LogicalCoreCount, LogicalCoreCountFilter, LogicalCoreList
 from .node import Node
+from .port import Port, PortLink
 from .sut_node import SutNode
 from .tg_node import TGNode
+from .virtual_device import VirtualDevice
diff --git a/dts/framework/testbed_model/hw/cpu.py b/dts/framework/testbed_model/cpu.py
similarity index 95%
rename from dts/framework/testbed_model/hw/cpu.py
rename to dts/framework/testbed_model/cpu.py
index d1918a12dc..8fe785dfe4 100644
--- a/dts/framework/testbed_model/hw/cpu.py
+++ b/dts/framework/testbed_model/cpu.py
@@ -272,3 +272,16 @@ def filter(self) -> list[LogicalCore]:
             )
 
         return filtered_lcores
+
+
+def lcore_filter(
+    core_list: list[LogicalCore],
+    filter_specifier: LogicalCoreCount | LogicalCoreList,
+    ascending: bool,
+) -> LogicalCoreFilter:
+    if isinstance(filter_specifier, LogicalCoreList):
+        return LogicalCoreListFilter(core_list, filter_specifier, ascending)
+    elif isinstance(filter_specifier, LogicalCoreCount):
+        return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
+    else:
+        raise ValueError(f"Unsupported filter {filter_specifier}")
diff --git a/dts/framework/testbed_model/hw/__init__.py b/dts/framework/testbed_model/hw/__init__.py
deleted file mode 100644
index 88ccac0b0e..0000000000
--- a/dts/framework/testbed_model/hw/__init__.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2023 PANTHEON.tech s.r.o.
-
-# pylama:ignore=W0611
-
-from .cpu import (
-    LogicalCore,
-    LogicalCoreCount,
-    LogicalCoreCountFilter,
-    LogicalCoreFilter,
-    LogicalCoreList,
-    LogicalCoreListFilter,
-)
-from .virtual_device import VirtualDevice
-
-
-def lcore_filter(
-    core_list: list[LogicalCore],
-    filter_specifier: LogicalCoreCount | LogicalCoreList,
-    ascending: bool,
-) -> LogicalCoreFilter:
-    if isinstance(filter_specifier, LogicalCoreList):
-        return LogicalCoreListFilter(core_list, filter_specifier, ascending)
-    elif isinstance(filter_specifier, LogicalCoreCount):
-        return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
-    else:
-        raise ValueError(f"Unsupported filter r{filter_specifier}")
diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/testbed_model/linux_session.py
similarity index 97%
rename from dts/framework/remote_session/linux_session.py
rename to dts/framework/testbed_model/linux_session.py
index a3f1a6bf3b..f472bb8f0f 100644
--- a/dts/framework/remote_session/linux_session.py
+++ b/dts/framework/testbed_model/linux_session.py
@@ -9,10 +9,10 @@
 from typing_extensions import NotRequired
 
 from framework.exception import RemoteCommandExecutionError
-from framework.testbed_model import LogicalCore
-from framework.testbed_model.hw.port import Port
 from framework.utils import expand_range
 
+from .cpu import LogicalCore
+from .port import Port
 from .posix_session import PosixSession
 
 
@@ -64,7 +64,7 @@ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
             lcores.append(LogicalCore(lcore, core, socket, node))
         return lcores
 
-    def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
+    def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         return dpdk_prefix
 
     def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index fc01e0bf8e..7571e7b98d 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -12,23 +12,26 @@
 from typing import Any, Callable, Type, Union
 
 from framework.config import (
+    OS,
     BuildTargetConfiguration,
     ExecutionConfiguration,
     NodeConfiguration,
 )
+from framework.exception import ConfigurationError
 from framework.logger import DTSLOG, getLogger
-from framework.remote_session import InteractiveShellType, OSSession, create_session
 from framework.settings import SETTINGS
 
-from .hw import (
+from .cpu import (
     LogicalCore,
     LogicalCoreCount,
     LogicalCoreList,
     LogicalCoreListFilter,
-    VirtualDevice,
     lcore_filter,
 )
-from .hw.port import Port
+from .linux_session import LinuxSession
+from .os_session import InteractiveShellType, OSSession
+from .port import Port
+from .virtual_device import VirtualDevice
 
 
 class Node(ABC):
@@ -69,6 +72,7 @@ def __init__(self, node_config: NodeConfiguration):
     def _init_ports(self) -> None:
         self.ports = [Port(self.name, port_config) for port_config in self.config.ports]
         self.main_session.update_ports(self.ports)
+
         for port in self.ports:
             self.configure_port_state(port)
 
@@ -172,9 +176,9 @@ def create_interactive_shell(
 
         return self.main_session.create_interactive_shell(
             shell_cls,
-            app_args,
             timeout,
             privileged,
+            app_args,
         )
 
     def filter_lcores(
@@ -205,7 +209,7 @@ def _get_remote_cpus(self) -> None:
         self._logger.info("Getting CPU information.")
         self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
 
-    def _setup_hugepages(self):
+    def _setup_hugepages(self) -> None:
         """
         Setup hugepages on the Node. Different architectures can supply different
         amounts of memory for hugepages and numa-based hugepage allocation may need
@@ -249,3 +253,13 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
             return lambda *args: None
         else:
             return func
+
+
+def create_session(
+    node_config: NodeConfiguration, name: str, logger: DTSLOG
+) -> OSSession:
+    match node_config.os:
+        case OS.linux:
+            return LinuxSession(node_config, name, logger)
+        case _:
+            raise ConfigurationError(f"Unsupported OS {node_config.os}")
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/testbed_model/os_session.py
similarity index 95%
rename from dts/framework/remote_session/os_session.py
rename to dts/framework/testbed_model/os_session.py
index 8a709eac1c..76e595a518 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/testbed_model/os_session.py
@@ -10,19 +10,19 @@
 
 from framework.config import Architecture, NodeConfiguration, NodeInfo
 from framework.logger import DTSLOG
-from framework.remote_session.remote import InteractiveShell
-from framework.settings import SETTINGS
-from framework.testbed_model import LogicalCore
-from framework.testbed_model.hw.port import Port
-from framework.utils import MesonArgs
-
-from .remote import (
+from framework.remote_session import (
     CommandResult,
     InteractiveRemoteSession,
+    InteractiveShell,
     RemoteSession,
     create_interactive_session,
     create_remote_session,
 )
+from framework.settings import SETTINGS
+from framework.utils import MesonArgs
+
+from .cpu import LogicalCore
+from .port import Port
 
 InteractiveShellType = TypeVar("InteractiveShellType", bound=InteractiveShell)
 
@@ -85,9 +85,9 @@ def send_command(
     def create_interactive_shell(
         self,
         shell_cls: Type[InteractiveShellType],
-        eal_parameters: str,
         timeout: float,
         privileged: bool,
+        app_args: str,
     ) -> InteractiveShellType:
         """
         See "create_interactive_shell" in SutNode
@@ -96,7 +96,7 @@ def create_interactive_shell(
             self.interactive_session.session,
             self._logger,
             self._get_privileged_command if privileged else None,
-            eal_parameters,
+            app_args,
             timeout,
         )
 
@@ -113,7 +113,7 @@ def _get_privileged_command(command: str) -> str:
         """
 
     @abstractmethod
-    def guess_dpdk_remote_dir(self, remote_dir) -> PurePath:
+    def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePath:
         """
         Try to find DPDK remote dir in remote_dir.
         """
@@ -227,7 +227,7 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
         """
 
     @abstractmethod
-    def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
+    def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         """
         Get the DPDK file prefix that will be used when running DPDK apps.
         """
diff --git a/dts/framework/testbed_model/hw/port.py b/dts/framework/testbed_model/port.py
similarity index 100%
rename from dts/framework/testbed_model/hw/port.py
rename to dts/framework/testbed_model/port.py
diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/testbed_model/posix_session.py
similarity index 98%
rename from dts/framework/remote_session/posix_session.py
rename to dts/framework/testbed_model/posix_session.py
index 5da0516e05..1d1d5b1b26 100644
--- a/dts/framework/remote_session/posix_session.py
+++ b/dts/framework/testbed_model/posix_session.py
@@ -32,7 +32,7 @@ def combine_short_options(**opts: bool) -> str:
 
         return ret_opts
 
-    def guess_dpdk_remote_dir(self, remote_dir) -> PurePosixPath:
+    def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePosixPath:
         remote_guess = self.join_remote_path(remote_dir, "dpdk-*")
         result = self.send_command(f"ls -d {remote_guess} | tail -1")
         return PurePosixPath(result.stdout)
@@ -219,7 +219,7 @@ def _remove_dpdk_runtime_dirs(
         for dpdk_runtime_dir in dpdk_runtime_dirs:
             self.remove_remote_dir(dpdk_runtime_dir)
 
-    def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
+    def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         return ""
 
     def get_compiler_version(self, compiler_name: str) -> str:
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 202aebfd06..4e33cf02ea 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -15,12 +15,14 @@
     NodeInfo,
     SutNodeConfiguration,
 )
-from framework.remote_session import CommandResult, InteractiveShellType, OSSession
+from framework.remote_session import CommandResult
 from framework.settings import SETTINGS
 from framework.utils import MesonArgs
 
-from .hw import LogicalCoreCount, LogicalCoreList, VirtualDevice
+from .cpu import LogicalCoreCount, LogicalCoreList
 from .node import Node
+from .os_session import InteractiveShellType, OSSession
+from .virtual_device import VirtualDevice
 
 
 class EalParameters(object):
@@ -289,7 +291,7 @@ def create_eal_parameters(
         prefix: str = "dpdk",
         append_prefix_timestamp: bool = True,
         no_pci: bool = False,
-        vdevs: list[VirtualDevice] = None,
+        vdevs: list[VirtualDevice] | None = None,
         other_eal_param: str = "",
     ) -> "EalParameters":
         """
diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
index 27025cfa31..166eb8430e 100644
--- a/dts/framework/testbed_model/tg_node.py
+++ b/dts/framework/testbed_model/tg_node.py
@@ -16,16 +16,11 @@
 
 from scapy.packet import Packet  # type: ignore[import]
 
-from framework.config import (
-    ScapyTrafficGeneratorConfig,
-    TGNodeConfiguration,
-    TrafficGeneratorType,
-)
-from framework.exception import ConfigurationError
-
-from .capturing_traffic_generator import CapturingTrafficGenerator
-from .hw.port import Port
+from framework.config import TGNodeConfiguration
+
 from .node import Node
+from .port import Port
+from .traffic_generator import CapturingTrafficGenerator, create_traffic_generator
 
 
 class TGNode(Node):
@@ -80,20 +75,3 @@ def close(self) -> None:
         """Free all resources used by the node"""
         self.traffic_generator.close()
         super(TGNode, self).close()
-
-
-def create_traffic_generator(
-    tg_node: TGNode, traffic_generator_config: ScapyTrafficGeneratorConfig
-) -> CapturingTrafficGenerator:
-    """A factory function for creating traffic generator object from user config."""
-
-    from .scapy import ScapyTrafficGenerator
-
-    match traffic_generator_config.traffic_generator_type:
-        case TrafficGeneratorType.SCAPY:
-            return ScapyTrafficGenerator(tg_node, traffic_generator_config)
-        case _:
-            raise ConfigurationError(
-                "Unknown traffic generator: "
-                f"{traffic_generator_config.traffic_generator_type}"
-            )
diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
new file mode 100644
index 0000000000..11bfa1ee0f
--- /dev/null
+++ b/dts/framework/testbed_model/traffic_generator/__init__.py
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType
+from framework.exception import ConfigurationError
+from framework.testbed_model.node import Node
+
+from .capturing_traffic_generator import CapturingTrafficGenerator
+from .scapy import ScapyTrafficGenerator
+
+
+def create_traffic_generator(
+    tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig
+) -> CapturingTrafficGenerator:
+    """A factory function for creating traffic generator object from user config."""
+
+    match traffic_generator_config.traffic_generator_type:
+        case TrafficGeneratorType.SCAPY:
+            return ScapyTrafficGenerator(tg_node, traffic_generator_config)
+        case _:
+            raise ConfigurationError(
+                "Unknown traffic generator: "
+                f"{traffic_generator_config.traffic_generator_type}"
+            )
diff --git a/dts/framework/testbed_model/capturing_traffic_generator.py b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
similarity index 96%
rename from dts/framework/testbed_model/capturing_traffic_generator.py
rename to dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
index ab98987f8e..e521211ef0 100644
--- a/dts/framework/testbed_model/capturing_traffic_generator.py
+++ b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
@@ -16,9 +16,9 @@
 from scapy.packet import Packet  # type: ignore[import]
 
 from framework.settings import SETTINGS
+from framework.testbed_model.port import Port
 from framework.utils import get_packet_summaries
 
-from .hw.port import Port
 from .traffic_generator import TrafficGenerator
 
 
@@ -130,7 +130,9 @@ def _send_packets_and_capture(
         for the specified duration. It must be able to handle no received packets.
         """
 
-    def _write_capture_from_packets(self, capture_name: str, packets: list[Packet]):
+    def _write_capture_from_packets(
+        self, capture_name: str, packets: list[Packet]
+    ) -> None:
         file_name = f"{SETTINGS.output_dir}/{capture_name}.pcap"
         self._logger.debug(f"Writing packets to {file_name}.")
         scapy.utils.wrpcap(file_name, packets)
diff --git a/dts/framework/testbed_model/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py
similarity index 95%
rename from dts/framework/testbed_model/scapy.py
rename to dts/framework/testbed_model/traffic_generator/scapy.py
index af0d4dbb25..51864b6e6b 100644
--- a/dts/framework/testbed_model/scapy.py
+++ b/dts/framework/testbed_model/traffic_generator/scapy.py
@@ -24,16 +24,15 @@
 from scapy.packet import Packet  # type: ignore[import]
 
 from framework.config import OS, ScapyTrafficGeneratorConfig
-from framework.logger import DTSLOG, getLogger
 from framework.remote_session import PythonShell
 from framework.settings import SETTINGS
+from framework.testbed_model.node import Node
+from framework.testbed_model.port import Port
 
 from .capturing_traffic_generator import (
     CapturingTrafficGenerator,
     _get_default_capture_name,
 )
-from .hw.port import Port
-from .tg_node import TGNode
 
 """
 ========= BEGIN RPC FUNCTIONS =========
@@ -146,7 +145,7 @@ def quit(self) -> None:
         self._BaseServer__shutdown_request = True
         return None
 
-    def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary):
+    def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> None:
         """Add a function to the server.
 
         This is meant to be executed remotely.
@@ -191,15 +190,9 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator):
     session: PythonShell
     rpc_server_proxy: xmlrpc.client.ServerProxy
     _config: ScapyTrafficGeneratorConfig
-    _tg_node: TGNode
-    _logger: DTSLOG
-
-    def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig):
-        self._config = config
-        self._tg_node = tg_node
-        self._logger = getLogger(
-            f"{self._tg_node.name} {self._config.traffic_generator_type}"
-        )
+
+    def __init__(self, tg_node: Node, config: ScapyTrafficGeneratorConfig):
+        super().__init__(tg_node, config)
 
         assert (
             self._tg_node.config.os == OS.linux
@@ -235,7 +228,7 @@ def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig):
             function_bytes = marshal.dumps(function.__code__)
             self.rpc_server_proxy.add_rpc_function(function.__name__, function_bytes)
 
-    def _start_xmlrpc_server_in_remote_python(self, listen_port: int):
+    def _start_xmlrpc_server_in_remote_python(self, listen_port: int) -> None:
         # load the source of the function
         src = inspect.getsource(QuittableXMLRPCServer)
         # Lines with only whitespace break the repl if in the middle of a function
@@ -280,7 +273,7 @@ def _send_packets_and_capture(
         scapy_packets = [Ether(packet.data) for packet in xmlrpc_packets]
         return scapy_packets
 
-    def close(self):
+    def close(self) -> None:
         try:
             self.rpc_server_proxy.quit()
         except ConnectionRefusedError:
diff --git a/dts/framework/testbed_model/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
similarity index 80%
rename from dts/framework/testbed_model/traffic_generator.py
rename to dts/framework/testbed_model/traffic_generator/traffic_generator.py
index 28c35d3ce4..ea7c3963da 100644
--- a/dts/framework/testbed_model/traffic_generator.py
+++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
@@ -12,11 +12,12 @@
 
 from scapy.packet import Packet  # type: ignore[import]
 
-from framework.logger import DTSLOG
+from framework.config import TrafficGeneratorConfig
+from framework.logger import DTSLOG, getLogger
+from framework.testbed_model.node import Node
+from framework.testbed_model.port import Port
 from framework.utils import get_packet_summaries
 
-from .hw.port import Port
-
 
 class TrafficGenerator(ABC):
     """The base traffic generator.
@@ -24,8 +25,17 @@ class TrafficGenerator(ABC):
     Defines the few basic methods that each traffic generator must implement.
     """
 
+    _config: TrafficGeneratorConfig
+    _tg_node: Node
     _logger: DTSLOG
 
+    def __init__(self, tg_node: Node, config: TrafficGeneratorConfig):
+        self._config = config
+        self._tg_node = tg_node
+        self._logger = getLogger(
+            f"{self._tg_node.name} {self._config.traffic_generator_type}"
+        )
+
     def send_packet(self, packet: Packet, port: Port) -> None:
         """Send a packet and block until it is fully sent.
 
diff --git a/dts/framework/testbed_model/hw/virtual_device.py b/dts/framework/testbed_model/virtual_device.py
similarity index 100%
rename from dts/framework/testbed_model/hw/virtual_device.py
rename to dts/framework/testbed_model/virtual_device.py
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index d27c2c5b5f..f0c916471c 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -7,7 +7,6 @@
 import json
 import os
 import subprocess
-import sys
 from enum import Enum
 from pathlib import Path
 from subprocess import SubprocessError
@@ -16,35 +15,7 @@
 
 from .exception import ConfigurationError
 
-
-class StrEnum(Enum):
-    @staticmethod
-    def _generate_next_value_(
-        name: str, start: int, count: int, last_values: object
-    ) -> str:
-        return name
-
-    def __str__(self) -> str:
-        return self.name
-
-
-REGEX_FOR_PCI_ADDRESS = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/"
-
-
-def check_dts_python_version() -> None:
-    if sys.version_info.major < 3 or (
-        sys.version_info.major == 3 and sys.version_info.minor < 10
-    ):
-        print(
-            RED(
-                (
-                    "WARNING: DTS execution node's python version is lower than"
-                    "python 3.10, is deprecated and will not work in future releases."
-                )
-            ),
-            file=sys.stderr,
-        )
-        print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
+REGEX_FOR_PCI_ADDRESS: str = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/"
 
 
 def expand_range(range_str: str) -> list[int]:
@@ -67,7 +38,7 @@ def expand_range(range_str: str) -> list[int]:
     return expanded_range
 
 
-def get_packet_summaries(packets: list[Packet]):
+def get_packet_summaries(packets: list[Packet]) -> str:
     if len(packets) == 1:
         packet_summaries = packets[0].summary()
     else:
@@ -77,8 +48,15 @@ def get_packet_summaries(packets: list[Packet]):
     return f"Packet contents: \n{packet_summaries}"
 
 
-def RED(text: str) -> str:
-    return f"\u001B[31;1m{str(text)}\u001B[0m"
+class StrEnum(Enum):
+    @staticmethod
+    def _generate_next_value_(
+        name: str, start: int, count: int, last_values: object
+    ) -> str:
+        return name
+
+    def __str__(self) -> str:
+        return self.name
 
 
 class MesonArgs(object):
@@ -225,5 +203,5 @@ def _delete_tarball(self) -> None:
         if self._tarball_path and os.path.exists(self._tarball_path):
             os.remove(self._tarball_path)
 
-    def __fspath__(self):
+    def __fspath__(self) -> str:
         return str(self._tarball_path)
diff --git a/dts/main.py b/dts/main.py
index 43311fa847..5d4714b0c3 100755
--- a/dts/main.py
+++ b/dts/main.py
@@ -10,10 +10,17 @@
 
 import logging
 
-from framework import dts
+from framework import settings
 
 
 def main() -> None:
+    """Set DTS settings, then run DTS.
+
+    The DTS settings are taken from the command line arguments and the environment variables.
+    """
+    settings.SETTINGS = settings.get_settings()
+    from framework import dts
+
     dts.run_all()
 
 
-- 
2.34.1


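[Editorial note on the main.py hunk above] The deferred ``from framework import dts`` in main() sidesteps a stale-binding pitfall: a name imported with ``from module import name`` is bound once, so a later reassignment of the module attribute is invisible to it. A minimal, self-contained sketch of the difference (illustrative only, not part of the patch):

    import types

    settings = types.SimpleNamespace(SETTINGS=None)  # stands in for framework.settings

    # "from settings import SETTINGS" would copy the current value (None)
    stale = settings.SETTINGS

    settings.SETTINGS = "parsed settings"  # what main() does before importing dts

    print(stale)              # None - the early binding never sees the update
    print(settings.SETTINGS)  # parsed settings - attribute lookup at call time sees it
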
^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v5 02/23] dts: add docstring checker
  2023-11-06 17:15       ` [PATCH v5 00/23] dts: add dts api docs Juraj Linkeš
  2023-11-06 17:15         ` [PATCH v5 01/23] dts: code adjustments for doc generation Juraj Linkeš
@ 2023-11-06 17:15         ` Juraj Linkeš
  2023-11-07 17:38           ` Yoan Picchi
  2023-11-06 17:15         ` [PATCH v5 03/23] dts: add basic developer docs Juraj Linkeš
                           ` (21 subsequent siblings)
  23 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-06 17:15 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Python docstrings are the in-code way to document the code. The
docstring checker of choice is pydocstyle which we're executing from
Pylama, but the current latest versions are not compatible due to [0],
so pin the pydocstyle version to the latest working version.

[0] https://github.com/klen/pylama/issues/232
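
For illustration, a docstring that the checker's Google convention accepts
looks like this (a sketch; the function and its names are made up for the
example):

    def guess_os(node_name: str) -> str:
        """Guess the operating system of `node_name`.

        Args:
            node_name: The name of the node to inspect.

        Returns:
            The name of the detected operating system.
        """
        return "linux"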

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/poetry.lock    | 12 ++++++------
 dts/pyproject.toml |  6 +++++-
 2 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/dts/poetry.lock b/dts/poetry.lock
index f7b3b6d602..a734fa71f0 100644
--- a/dts/poetry.lock
+++ b/dts/poetry.lock
@@ -489,20 +489,20 @@ files = [
 
 [[package]]
 name = "pydocstyle"
-version = "6.3.0"
+version = "6.1.1"
 description = "Python docstring style checker"
 optional = false
 python-versions = ">=3.6"
 files = [
-    {file = "pydocstyle-6.3.0-py3-none-any.whl", hash = "sha256:118762d452a49d6b05e194ef344a55822987a462831ade91ec5c06fd2169d019"},
-    {file = "pydocstyle-6.3.0.tar.gz", hash = "sha256:7ce43f0c0ac87b07494eb9c0b462c0b73e6ff276807f204d6b53edc72b7e44e1"},
+    {file = "pydocstyle-6.1.1-py3-none-any.whl", hash = "sha256:6987826d6775056839940041beef5c08cc7e3d71d63149b48e36727f70144dc4"},
+    {file = "pydocstyle-6.1.1.tar.gz", hash = "sha256:1d41b7c459ba0ee6c345f2eb9ae827cab14a7533a88c5c6f7e94923f72df92dc"},
 ]
 
 [package.dependencies]
-snowballstemmer = ">=2.2.0"
+snowballstemmer = "*"
 
 [package.extras]
-toml = ["tomli (>=1.2.3)"]
+toml = ["toml"]
 
 [[package]]
 name = "pyflakes"
@@ -837,4 +837,4 @@ jsonschema = ">=4,<5"
 [metadata]
 lock-version = "2.0"
 python-versions = "^3.10"
-content-hash = "0b1e4a1cb8323e17e5ee5951c97e74bde6e60d0413d7b25b1803d5b2bab39639"
+content-hash = "3501e97b3dadc19fe8ae179fe21b1edd2488001da9a8e86ff2bca0b86b99b89b"
diff --git a/dts/pyproject.toml b/dts/pyproject.toml
index 6762edfa6b..3943c87c87 100644
--- a/dts/pyproject.toml
+++ b/dts/pyproject.toml
@@ -25,6 +25,7 @@ PyYAML = "^6.0"
 types-PyYAML = "^6.0.8"
 fabric = "^2.7.1"
 scapy = "^2.5.0"
+pydocstyle = "6.1.1"
 
 [tool.poetry.group.dev.dependencies]
 mypy = "^0.961"
@@ -39,10 +40,13 @@ requires = ["poetry-core>=1.0.0"]
 build-backend = "poetry.core.masonry.api"
 
 [tool.pylama]
-linters = "mccabe,pycodestyle,pyflakes"
+linters = "mccabe,pycodestyle,pydocstyle,pyflakes"
 format = "pylint"
 max_line_length = 88 # https://black.readthedocs.io/en/stable/the_black_code_style/current_style.html#line-length
 
+[tool.pylama.linter.pydocstyle]
+convention = "google"
+
 [tool.mypy]
 python_version = "3.10"
 enable_error_code = ["ignore-without-code"]
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v5 03/23] dts: add basic developer docs
  2023-11-06 17:15       ` [PATCH v5 00/23] dts: add dts api docs Juraj Linkeš
  2023-11-06 17:15         ` [PATCH v5 01/23] dts: code adjustments for doc generation Juraj Linkeš
  2023-11-06 17:15         ` [PATCH v5 02/23] dts: add docstring checker Juraj Linkeš
@ 2023-11-06 17:15         ` Juraj Linkeš
  2023-11-07 14:39           ` Yoan Picchi
  2023-11-06 17:15         ` [PATCH v5 04/23] dts: exceptions docstring update Juraj Linkeš
                           ` (20 subsequent siblings)
  23 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-06 17:15 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Expand the framework contribution guidelines and add how to document the
code with Python docstrings.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 doc/guides/tools/dts.rst | 73 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 73 insertions(+)

diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
index 32c18ee472..b1e99107c3 100644
--- a/doc/guides/tools/dts.rst
+++ b/doc/guides/tools/dts.rst
@@ -264,6 +264,65 @@ which can be changed with the ``--output-dir`` command line argument.
 The results contain basic statistics of passed/failed test cases and DPDK version.
 
 
+Contributing to DTS
+-------------------
+
+There are two areas of contribution: The DTS framework and DTS test suites.
+
+The framework contains the logic needed to run test cases, such as connecting to nodes,
+running DPDK apps and collecting results.
+
+The test cases call APIs from the framework to test their scenarios. Adding test cases may
+require adding code to the framework as well.
+
+
+Framework Coding Guidelines
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+When adding code to the DTS framework, pay attention to the rest of the code
+and try not to divert much from it. The :ref:`DTS developer tools <dts_dev_tools>` will issue
+warnings when some of the basics are not met.
+
+The code must be properly documented with docstrings. The style must conform to
+the `Google style <https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings>`_.
+See an example of the style
+`here <https://www.sphinx-doc.org/en/master/usage/extensions/example_google.html>`_.
+For cases which are not covered by the Google style, refer
+to `PEP 257 <https://peps.python.org/pep-0257/>`_. There are some cases which are not covered by
+the two style guides, where we deviate or where some additional clarification is helpful:
+
+   * The __init__() methods of classes are documented separately from the docstring of the class
+     itself.
+   * The docstrings of implemented abstract methods should refer to the superclass's definition
+     if there's no deviation.
+   * Instance variables/attributes should be documented in the docstring of the class
+     in the ``Attributes:`` section.
+   * The dataclass.dataclass decorator changes how the attributes are processed. The dataclass
+     attributes which result in instance variables/attributes should also be recorded
+     in the ``Attributes:`` section.
+   * Class variables/attributes, on the other hand, should be documented with ``#:`` above
+     the type annotated line. The description may be omitted if the meaning is obvious.
+   * The Enum and TypedDict also process the attributes in particular ways and should be documented
+     with ``#:`` as well. This is mainly so that the autogenerated docs contain the assigned value.
+   * When referencing a parameter of a function or a method in their docstring, don't use
+     any articles and put the parameter into single backticks. This mimics the style of
+     `Python's documentation <https://docs.python.org/3/index.html>`_.
+   * When specifying a value, use double backticks::
+
+        def foo(greet: bool) -> None:
+            """Demonstration of single and double backticks.
+
+            `greet` controls whether ``Hello World`` is printed.
+
+            Args:
+               greet: Whether to print the ``Hello World`` message.
+            """
+            if greet:
+               print("Hello World")
+
+   * The docstring maximum line length is the same as the code maximum line length.
+
+
 How To Write a Test Suite
 -------------------------
 
@@ -293,6 +352,18 @@ There are four types of methods that comprise a test suite:
    | These methods don't need to be implemented if there's no need for them in a test suite.
     In that case, nothing will happen when they're executed.
 
+#. **Configuration, traffic and other logic**
+
+   The ``TestSuite`` class contains a variety of methods for anything that
+   a test suite setup, a test suite teardown or a test case may need to do.
+
+   The test suites also frequently use a DPDK app, such as testpmd, in interactive mode
+   and use the interactive shell instances directly.
+
+   These are the two main ways to call the framework logic in test suites. If there's any
+   functionality or logic missing from the framework, it should be implemented so that
+   the test suites can use one of these two ways.
+
 #. **Test case verification**
 
    Test case verification should be done with the ``verify`` method, which records the result.
@@ -308,6 +379,8 @@ There are four types of methods that comprise a test suite:
    and used by the test suite via the ``sut_node`` field.
 
 
+.. _dts_dev_tools:
+
 DTS Developer Tools
 -------------------
 
-- 
2.34.1

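[Editorial note] To illustrate the class variable convention from the guidelines above (``#:`` comments for class variables, an ``Attributes:`` section for instance attributes), a hypothetical class might be documented like this (a sketch; the names are illustrative):

    from typing import ClassVar


    class RemoteApp:
        """A remotely running application (illustrative).

        Attributes:
            pid: The process ID of the running application.
        """

        #: The timeout used when the caller doesn't specify one.
        default_timeout: ClassVar[float] = 15.0

        pid: int

        def __init__(self, pid: int):
            """Store the process ID.

            Args:
                pid: The process ID of the running application.
            """
            self.pid = pid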

^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v5 04/23] dts: exceptions docstring update
  2023-11-06 17:15       ` [PATCH v5 00/23] dts: add dts api docs Juraj Linkeš
                           ` (2 preceding siblings ...)
  2023-11-06 17:15         ` [PATCH v5 03/23] dts: add basic developer docs Juraj Linkeš
@ 2023-11-06 17:15         ` Juraj Linkeš
  2023-11-06 17:15         ` [PATCH v5 05/23] dts: settings " Juraj Linkeš
                           ` (19 subsequent siblings)
  23 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-06 17:15 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/__init__.py  |  12 ++++-
 dts/framework/exception.py | 106 +++++++++++++++++++++++++------------
 2 files changed, 83 insertions(+), 35 deletions(-)

diff --git a/dts/framework/__init__.py b/dts/framework/__init__.py
index d551ad4bf0..662e6ccad2 100644
--- a/dts/framework/__init__.py
+++ b/dts/framework/__init__.py
@@ -1,3 +1,13 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022 University of New Hampshire
+
+"""Libraries and utilities for running DPDK Test Suite (DTS).
+
+The various modules in the DTS framework offer:
+
+* Connections to nodes, both interactive and non-interactive,
+* A straightforward way to add support for different operating systems of remote nodes,
+* Test suite setup, execution and teardown, along with test case setup, execution and teardown,
+* Pre-test suite setup and post-test suite teardown.
+"""
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index 7489c03570..ee1562c672 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -3,8 +3,10 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
-"""
-User-defined exceptions used across the framework.
+"""DTS exceptions.
+
+The exceptions all have different severities expressed as an integer.
+The highest severity of all raised exceptions is used as the exit code of DTS.
 """
 
 from enum import IntEnum, unique
@@ -13,59 +15,79 @@
 
 @unique
 class ErrorSeverity(IntEnum):
-    """
-    The severity of errors that occur during DTS execution.
+    """The severity of errors that occur during DTS execution.
+
     All exceptions are caught and the most severe error is used as return code.
     """
 
+    #:
     NO_ERR = 0
+    #:
     GENERIC_ERR = 1
+    #:
     CONFIG_ERR = 2
+    #:
     REMOTE_CMD_EXEC_ERR = 3
+    #:
     SSH_ERR = 4
+    #:
     DPDK_BUILD_ERR = 10
+    #:
     TESTCASE_VERIFY_ERR = 20
+    #:
     BLOCKING_TESTSUITE_ERR = 25
 
 
 class DTSError(Exception):
-    """
-    The base exception from which all DTS exceptions are derived.
-    Stores error severity.
+    """The base exception from which all DTS exceptions are subclassed.
+
+    Do not use this exception, only use subclassed exceptions.
     """
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.GENERIC_ERR
 
 
 class SSHTimeoutError(DTSError):
-    """
-    Command execution timeout.
-    """
+    """The SSH execution of a command timed out."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
     _command: str
 
     def __init__(self, command: str):
+        """Define the meaning of the first argument.
+
+        Args:
+            command: The executed command.
+        """
         self._command = command
 
     def __str__(self) -> str:
-        return f"TIMEOUT on {self._command}"
+        """Add some context to the string representation."""
+        return f"{self._command} execution timed out."
 
 
 class SSHConnectionError(DTSError):
-    """
-    SSH connection error.
-    """
+    """An unsuccessful SSH connection."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
     _host: str
     _errors: list[str]
 
     def __init__(self, host: str, errors: list[str] | None = None):
+        """Define the meaning of the first two arguments.
+
+        Args:
+            host: The hostname to which we're trying to connect.
+            errors: Any errors that occurred during the connection attempt.
+        """
         self._host = host
         self._errors = [] if errors is None else errors
 
     def __str__(self) -> str:
+        """Include the errors in the string representation."""
         message = f"Error trying to connect with {self._host}."
         if self._errors:
             message += f" Errors encountered while retrying: {', '.join(self._errors)}"
@@ -74,43 +96,53 @@ def __str__(self) -> str:
 
 
 class SSHSessionDeadError(DTSError):
-    """
-    SSH session is not alive.
-    It can no longer be used.
-    """
+    """The SSH session is no longer alive."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
     _host: str
 
     def __init__(self, host: str):
+        """Define the meaning of the first argument.
+
+        Args:
+            host: The hostname of the disconnected node.
+        """
         self._host = host
 
     def __str__(self) -> str:
-        return f"SSH session with {self._host} has died"
+        """Add some context to the string representation."""
+        return f"SSH session with {self._host} has died."
 
 
 class ConfigurationError(DTSError):
-    """
-    Raised when an invalid configuration is encountered.
-    """
+    """An invalid configuration."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.CONFIG_ERR
 
 
 class RemoteCommandExecutionError(DTSError):
-    """
-    Raised when a command executed on a Node returns a non-zero exit status.
-    """
+    """An unsuccessful execution of a remote command."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
+    #: The executed command.
     command: str
     _command_return_code: int
 
     def __init__(self, command: str, command_return_code: int):
+        """Define the meaning of the first two arguments.
+
+        Args:
+            command: The executed command.
+            command_return_code: The return code of the executed command.
+        """
         self.command = command
         self._command_return_code = command_return_code
 
     def __str__(self) -> str:
+        """Include both the command and return code in the string representation."""
         return (
             f"Command {self.command} returned a non-zero exit code: "
             f"{self._command_return_code}"
@@ -118,35 +150,41 @@ def __str__(self) -> str:
 
 
 class RemoteDirectoryExistsError(DTSError):
-    """
-    Raised when a remote directory to be created already exists.
-    """
+    """A directory that exists on a remote node."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
 
 
 class DPDKBuildError(DTSError):
-    """
-    Raised when DPDK build fails for any reason.
-    """
+    """A DPDK build failure."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.DPDK_BUILD_ERR
 
 
 class TestCaseVerifyError(DTSError):
-    """
-    Used in test cases to verify the expected behavior.
-    """
+    """A test case failure."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR
 
 
 class BlockingTestSuiteError(DTSError):
+    """A failure in a blocking test suite."""
+
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR
     _suite_name: str
 
     def __init__(self, suite_name: str) -> None:
+        """Define the meaning of the first argument.
+
+        Args:
+            suite_name: The blocking test suite.
+        """
         self._suite_name = suite_name
 
     def __str__(self) -> str:
+        """Add some context to the string representation."""
         return f"Blocking suite {self._suite_name} failed."
-- 
2.34.1

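[Editorial note] The severity mechanics described above reduce to keeping the maximum severity of all raised exceptions; a minimal sketch of the aggregation (illustrative only - the real bookkeeping lives in the result classes):

    from enum import IntEnum


    class ErrorSeverity(IntEnum):  # trimmed copy of the enum in the patch
        NO_ERR = 0
        GENERIC_ERR = 1
        SSH_ERR = 4
        TESTCASE_VERIFY_ERR = 20


    def return_code(raised: list[ErrorSeverity]) -> int:
        # The exit code is the most severe error seen during the whole run.
        return int(max(raised, default=ErrorSeverity.NO_ERR))


    assert return_code([]) == 0
    assert return_code([ErrorSeverity.GENERIC_ERR, ErrorSeverity.SSH_ERR]) == 4
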

^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v5 05/23] dts: settings docstring update
  2023-11-06 17:15       ` [PATCH v5 00/23] dts: add dts api docs Juraj Linkeš
                           ` (3 preceding siblings ...)
  2023-11-06 17:15         ` [PATCH v5 04/23] dts: exceptions docstring update Juraj Linkeš
@ 2023-11-06 17:15         ` Juraj Linkeš
  2023-11-06 17:15         ` [PATCH v5 06/23] dts: logger and " Juraj Linkeš
                           ` (18 subsequent siblings)
  23 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-06 17:15 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/settings.py | 101 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 100 insertions(+), 1 deletion(-)

diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 7f5841d073..787db7c198 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -3,6 +3,70 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022 University of New Hampshire
 
+"""Environment variables and command line arguments parsing.
+
+This is a simple module utilizing the built-in argparse module to parse command line arguments,
+augment them with values from environment variables and make them available across the framework.
+
+The command line value takes precedence, followed by the environment variable value,
+followed by the default value defined in this module.
+
+The command line arguments along with the supported environment variables are:
+
+.. option:: --config-file
+.. envvar:: DTS_CFG_FILE
+
+    The path to the YAML test run configuration file.
+
+.. option:: --output-dir, --output
+.. envvar:: DTS_OUTPUT_DIR
+
+    The directory where DTS logs and results are saved.
+
+.. option:: --compile-timeout
+.. envvar:: DTS_COMPILE_TIMEOUT
+
+    The timeout for compiling DPDK.
+
+.. option:: -t, --timeout
+.. envvar:: DTS_TIMEOUT
+
+    The timeout for all DTS operation except for compiling DPDK.
+
+.. option:: -v, --verbose
+.. envvar:: DTS_VERBOSE
+
+    Set to any value to enable logging everything to the console.
+
+.. option:: -s, --skip-setup
+.. envvar:: DTS_SKIP_SETUP
+
+    Set to any value to skip building DPDK.
+
+.. option:: --tarball, --snapshot, --git-ref
+.. envvar:: DTS_DPDK_TARBALL
+
+    The path to a DPDK tarball, git commit ID, tag ID or tree ID to test.
+
+.. option:: --test-cases
+.. envvar:: DTS_TESTCASES
+
+    A comma-separated list of test cases to execute. Unknown test cases will be silently ignored.
+
+.. option:: --re-run, --re_run
+.. envvar:: DTS_RERUN
+
+    Re-run each test case this many times in case of a failure.
+
+Attributes:
+    SETTINGS: The module level variable storing framework-wide DTS settings.
+
+Typical usage example::
+
+  from framework.settings import SETTINGS
+  foo = SETTINGS.foo
+"""
+
 import argparse
 import os
 from collections.abc import Callable, Iterable, Sequence
@@ -16,6 +80,23 @@
 
 
 def _env_arg(env_var: str) -> Any:
+    """A helper method augmenting the argparse Action with environment variables.
+
+    If the supplied environment variable is defined, then the default value
+    of the argument is modified. This satisfies the priority order of
+    command line argument > environment variable > default value.
+
+    Arguments with no values (flags) should be defined using the const keyword argument
+    (True or False). When the argument is specified, it will be set to const; if not specified,
+    the default will be stored (possibly modified by the corresponding environment variable).
+
+    Other arguments work the same as default argparse arguments, that is, they use
+    the default 'store' action.
+
+    Returns:
+          The modified argparse.Action.
+    """
+
     class _EnvironmentArgument(argparse.Action):
         def __init__(
             self,
@@ -68,14 +149,28 @@ def __call__(
 
 @dataclass(slots=True)
 class Settings:
+    """Default framework-wide user settings.
+
+    The defaults may be modified at the start of the run.
+    """
+
+    #:
     config_file_path: Path = Path(__file__).parent.parent.joinpath("conf.yaml")
+    #:
     output_dir: str = "output"
+    #:
     timeout: float = 15
+    #:
     verbose: bool = False
+    #:
     skip_setup: bool = False
+    #:
     dpdk_tarball_path: Path | str = "dpdk.tar.xz"
+    #:
     compile_timeout: float = 1200
+    #:
     test_cases: list[str] = field(default_factory=list)
+    #:
     re_run: int = 0
 
 
@@ -169,7 +264,7 @@ def _get_parser() -> argparse.ArgumentParser:
         action=_env_arg("DTS_RERUN"),
         default=SETTINGS.re_run,
         type=int,
-        help="[DTS_RERUN] Re-run each test case the specified amount of times "
+        help="[DTS_RERUN] Re-run each test case the specified number of times "
         "if a test failure occurs",
     )
 
@@ -177,6 +272,10 @@ def _get_parser() -> argparse.ArgumentParser:
 
 
 def get_settings() -> Settings:
+    """Create new settings with inputs from the user.
+
+    The inputs are taken from the command line and from environment variables.
+    """
     parsed_args = _get_parser().parse_args()
     return Settings(
         config_file_path=parsed_args.config_file,
-- 
2.34.1

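[Editorial note] The priority order documented above (command line > environment variable > default) is implemented by adjusting the argparse default from the environment before parsing. A condensed sketch of the idea (the real _EnvironmentArgument also handles flag arguments via const, as described in the docstring):

    import argparse
    import os
    from typing import Any


    def _env_arg(env_var: str) -> Any:
        class _EnvironmentArgument(argparse.Action):
            def __init__(self, *args: Any, **kwargs: Any) -> None:
                env_value = os.environ.get(env_var)
                if env_value is not None:
                    kwargs["default"] = env_value  # env var beats the hardcoded default
                super().__init__(*args, **kwargs)

            def __call__(self, parser, namespace, values, option_string=None) -> None:
                setattr(namespace, self.dest, values)  # command line beats everything

        return _EnvironmentArgument


    parser = argparse.ArgumentParser()
    parser.add_argument("--output-dir", action=_env_arg("DTS_OUTPUT_DIR"), default="output")
    print(parser.parse_args([]).output_dir)  # "output" unless DTS_OUTPUT_DIR is set
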

^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v5 06/23] dts: logger and settings docstring update
  2023-11-06 17:15       ` [PATCH v5 00/23] dts: add dts api docs Juraj Linkeš
                           ` (4 preceding siblings ...)
  2023-11-06 17:15         ` [PATCH v5 05/23] dts: settings " Juraj Linkeš
@ 2023-11-06 17:15         ` Juraj Linkeš
  2023-11-06 17:15         ` [PATCH v5 07/23] dts: dts runner and main " Juraj Linkeš
                           ` (17 subsequent siblings)
  23 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-06 17:15 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/logger.py | 72 +++++++++++++++++++++----------
 dts/framework/utils.py  | 96 ++++++++++++++++++++++++++++++-----------
 2 files changed, 121 insertions(+), 47 deletions(-)

diff --git a/dts/framework/logger.py b/dts/framework/logger.py
index bb2991e994..d3eb75a4e4 100644
--- a/dts/framework/logger.py
+++ b/dts/framework/logger.py
@@ -3,9 +3,9 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
-"""
-DTS logger module with several log level. DTS framework and TestSuite logs
-are saved in different log files.
+"""DTS logger module.
+
+DTS framework and TestSuite logs are saved in different log files.
 """
 
 import logging
@@ -18,19 +18,21 @@
 stream_fmt = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
 
 
-class LoggerDictType(TypedDict):
-    logger: "DTSLOG"
-    name: str
-    node: str
-
+class DTSLOG(logging.LoggerAdapter):
+    """DTS logger adapter class for framework and testsuites.
 
-# List for saving all using loggers
-Loggers: list[LoggerDictType] = []
+    The :option:`--verbose` command line argument and the :envvar:`DTS_VERBOSE` environment
+    variable control the verbosity of output. If enabled, all messages will be emitted to the
+    console.
 
+    The :option:`--output` command line argument and the :envvar:`DTS_OUTPUT_DIR` environment
+    variable modify the directory where the logs will be stored.
 
-class DTSLOG(logging.LoggerAdapter):
-    """
-    DTS log class for framework and testsuite.
+    Attributes:
+        node: The additional identifier. Currently unused.
+        sh: The handler which emits logs to console.
+        fh: The handler which emits logs to a file.
+        verbose_fh: Just like fh, but logs with a different, more verbose format.
     """
 
     _logger: logging.Logger
@@ -40,6 +42,15 @@ class DTSLOG(logging.LoggerAdapter):
     verbose_fh: logging.FileHandler
 
     def __init__(self, logger: logging.Logger, node: str = "suite"):
+        """Extend the constructor with additional handlers.
+
+        One handler logs to the console, the other one to a file, with either a regular or verbose
+        format.
+
+        Args:
+            logger: The logger from which to create the logger adapter.
+            node: An additional identifier. Currently unused.
+        """
         self._logger = logger
         # 1 means log everything, this will be used by file handlers if their level
         # is not set
@@ -92,26 +103,43 @@ def __init__(self, logger: logging.Logger, node: str = "suite"):
         super(DTSLOG, self).__init__(self._logger, dict(node=self.node))
 
     def logger_exit(self) -> None:
-        """
-        Remove stream handler and logfile handler.
-        """
+        """Remove the stream handler and the logfile handler."""
         for handler in (self.sh, self.fh, self.verbose_fh):
             handler.flush()
             self._logger.removeHandler(handler)
 
 
+class _LoggerDictType(TypedDict):
+    logger: DTSLOG
+    name: str
+    node: str
+
+
+# List for saving all loggers in use
+_Loggers: list[_LoggerDictType] = []
+
+
 def getLogger(name: str, node: str = "suite") -> DTSLOG:
+    """Get DTS logger adapter identified by name and node.
+
+    An existing logger will be returned if one with the exact name and node already exists.
+    A new one will be created and stored otherwise.
+
+    Args:
+        name: The name of the logger.
+        node: An additional identifier for the logger.
+
+    Returns:
+        A logger uniquely identified by both name and node.
     """
-    Get logger handler and if there's no handler for specified Node will create one.
-    """
-    global Loggers
+    global _Loggers
     # return saved logger
-    logger: LoggerDictType
-    for logger in Loggers:
+    logger: _LoggerDictType
+    for logger in _Loggers:
         if logger["name"] == name and logger["node"] == node:
             return logger["logger"]
 
     # return new logger
     dts_logger: DTSLOG = DTSLOG(logging.getLogger(name), node)
-    Loggers.append({"logger": dts_logger, "name": name, "node": node})
+    _Loggers.append({"logger": dts_logger, "name": name, "node": node})
     return dts_logger
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index f0c916471c..0613adf7ad 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -3,6 +3,16 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
+"""Various utility classes and functions.
+
+These are used in multiple modules across the framework. They're here because
+they provide some non-specific functionality, greatly simplify imports or just don't
+fit elsewhere.
+
+Attributes:
+    REGEX_FOR_PCI_ADDRESS: The regex representing a PCI address, e.g. ``0000:00:08.0``.
+"""
+
 import atexit
 import json
 import os
@@ -19,12 +29,20 @@
 
 
 def expand_range(range_str: str) -> list[int]:
-    """
-    Process range string into a list of integers. There are two possible formats:
-    n - a single integer
-    n-m - a range of integers
+    """Process `range_str` into a list of integers.
+
+    There are two possible formats of `range_str`:
+
+        * ``n`` - a single integer,
+        * ``n-m`` - a range of integers.
 
-    The returned range includes both n and m. Empty string returns an empty list.
+    The returned range includes both ``n`` and ``m``. Empty string returns an empty list.
+
+    Args:
+        range_str: The range to expand.
+
+    Returns:
+        All the numbers from the range.
     """
     expanded_range: list[int] = []
     if range_str:
@@ -39,6 +57,14 @@ def expand_range(range_str: str) -> list[int]:
 
 
 def get_packet_summaries(packets: list[Packet]) -> str:
+    """Format a string summary from `packets`.
+
+    Args:
+        packets: The packets to format.
+
+    Returns:
+        The summary of `packets`.
+    """
     if len(packets) == 1:
         packet_summaries = packets[0].summary()
     else:
@@ -49,6 +75,8 @@ def get_packet_summaries(packets: list[Packet]) -> str:
 
 
 class StrEnum(Enum):
+    """Enum with members stored as strings."""
+
     @staticmethod
     def _generate_next_value_(
         name: str, start: int, count: int, last_values: object
@@ -56,22 +84,29 @@ def _generate_next_value_(
         return name
 
     def __str__(self) -> str:
+        """The string representation is the name of the member."""
         return self.name
 
 
 class MesonArgs(object):
-    """
-    Aggregate the arguments needed to build DPDK:
-    default_library: Default library type, Meson allows "shared", "static" and "both".
-               Defaults to None, in which case the argument won't be used.
-    Keyword arguments: The arguments found in meson_options.txt in root DPDK directory.
-               Do not use -D with them, for example:
-               meson_args = MesonArgs(enable_kmods=True).
-    """
+    """Aggregate the arguments needed to build DPDK."""
 
     _default_library: str
 
     def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
+        """Initialize the meson arguments.
+
+        Args:
+            default_library: The default library type, Meson supports ``shared``, ``static`` and
+                ``both``. Defaults to :data:`None`, in which case the argument won't be used.
+            dpdk_args: The arguments found in ``meson_options.txt`` in root DPDK directory.
+                Do not use ``-D`` with them.
+
+        Example:
+            ::
+
+                meson_args = MesonArgs(enable_kmods=True)
+        """
         self._default_library = (
             f"--default-library={default_library}" if default_library else ""
         )
@@ -83,6 +118,7 @@ def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
         )
 
     def __str__(self) -> str:
+        """The actual args."""
         return " ".join(f"{self._default_library} {self._dpdk_args}".split())
 
 
@@ -93,35 +129,33 @@ class _TarCompressionFormat(StrEnum):
     and Enum values are the associated file extensions.
     """
 
+    #:
     gzip = "gz"
+    #:
     compress = "Z"
+    #:
     bzip2 = "bz2"
+    #:
     lzip = "lz"
+    #:
     lzma = "lzma"
+    #:
     lzop = "lzo"
+    #:
     xz = "xz"
+    #:
     zstd = "zst"
 
 
 class DPDKGitTarball(object):
-    """Create a compressed tarball of DPDK from the repository.
-
-    The DPDK version is specified with git object git_ref.
-    The tarball will be compressed with _TarCompressionFormat,
-    which must be supported by the DTS execution environment.
-    The resulting tarball will be put into output_dir.
+    """Compressed tarball of DPDK from the repository.
 
-    The class supports the os.PathLike protocol,
+    The class supports the :class:`os.PathLike` protocol,
     which is used to get the Path of the tarball::
 
         from pathlib import Path
         tarball = DPDKGitTarball("HEAD", "output")
         tarball_path = Path(tarball)
-
-    Arguments:
-        git_ref: A git commit ID, tag ID or tree ID.
-        output_dir: The directory where to put the resulting tarball.
-        tar_compression_format: The compression format to use.
     """
 
     _git_ref: str
@@ -136,6 +170,17 @@ def __init__(
         output_dir: str,
         tar_compression_format: _TarCompressionFormat = _TarCompressionFormat.xz,
     ):
+        """Create the tarball during initialization.
+
+        The DPDK version is specified with `git_ref`. The tarball will be compressed with
+        `tar_compression_format`, which must be supported by the DTS execution environment.
+        The resulting tarball will be put into `output_dir`.
+
+        Args:
+            git_ref: A git commit ID, tag ID or tree ID.
+            output_dir: The directory where to put the resulting tarball.
+            tar_compression_format: The compression format to use.
+        """
         self._git_ref = git_ref
         self._tar_compression_format = tar_compression_format
 
@@ -204,4 +249,5 @@ def _delete_tarball(self) -> None:
             os.remove(self._tarball_path)
 
     def __fspath__(self) -> str:
+        """The os.PathLike protocol implementation."""
         return str(self._tarball_path)
-- 
2.34.1

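[Editorial note] Usage examples of the helpers documented above, relying only on the documented behavior (a sketch; it assumes a set-up DTS environment, since getLogger creates log files under the output directory):

    from enum import auto

    from framework.logger import getLogger
    from framework.utils import StrEnum, expand_range

    assert expand_range("") == []
    assert expand_range("4") == [4]
    assert expand_range("0-3") == [0, 1, 2, 3]  # both ends are included


    class Color(StrEnum):
        red = auto()


    assert str(Color.red) == "red"  # members stringify to their names

    # The same (name, node) pair returns the same cached adapter.
    assert getLogger("demo", node="sut1") is getLogger("demo", node="sut1")
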

^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v5 07/23] dts: dts runner and main docstring update
  2023-11-06 17:15       ` [PATCH v5 00/23] dts: add dts api docs Juraj Linkeš
                           ` (5 preceding siblings ...)
  2023-11-06 17:15         ` [PATCH v5 06/23] dts: logger and " Juraj Linkeš
@ 2023-11-06 17:15         ` Juraj Linkeš
  2023-11-06 17:15         ` [PATCH v5 08/23] dts: test suite " Juraj Linkeš
                           ` (16 subsequent siblings)
  23 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-06 17:15 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/dts.py | 128 ++++++++++++++++++++++++++++++++++++-------
 dts/main.py          |   8 ++-
 2 files changed, 112 insertions(+), 24 deletions(-)

diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index 4c7fb0c40a..331fed7dc4 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -3,6 +3,33 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
+r"""Test suite runner module.
+
+A DTS run is split into stages:
+
+    #. Execution stage,
+    #. Build target stage,
+    #. Test suite stage,
+    #. Test case stage.
+
+The module is responsible for running tests on testbeds defined in the test run configuration.
+Each setup or teardown of each stage is recorded in a :class:`~framework.test_result.DTSResult` or
+one of its subclasses. The test case results are also recorded.
+
+If an error occurs, the current stage is aborted, the error is recorded and the run continues in
+the next iteration of the same stage. The return code is the highest `severity` of all
+:class:`~framework.exception.DTSError`\s.
+
+Example:
+    An error occurs in a build target setup. The current build target is aborted and the run
+    continues with the next build target. If the errored build target was the last one in the given
+    execution, the next execution begins.
+
+Attributes:
+    dts_logger: The logger instance used in this module.
+    result: The top level result used in the module.
+"""
+
 import sys
 
 from .config import (
@@ -23,9 +50,38 @@
 
 
 def run_all() -> None:
-    """
-    The main process of DTS. Runs all build targets in all executions from the main
-    config file.
+    """Run all build targets in all executions from the test run configuration.
+
+    Before running test suites, executions and build targets are first set up.
+    The executions and build targets defined in the test run configuration are iterated over.
+    The executions define which tests to run and where to run them, and the build targets define
+    the DPDK build setup.
+
+    The test suites are set up for each execution/build target tuple and each scheduled
+    test case within the test suite is set up, executed and torn down. After all test cases
+    have been executed, the test suite is torn down and the next build target will be tested.
+
+    All the nested steps look like this:
+
+        #. Execution setup
+
+            #. Build target setup
+
+                #. Test suite setup
+
+                    #. Test case setup
+                    #. Test case logic
+                    #. Test case teardown
+
+                #. Test suite teardown
+
+            #. Build target teardown
+
+        #. Execution teardown
+
+    The test cases are filtered according to the specification in the test run configuration and
+    the :option:`--test-cases` command line argument or
+    the :envvar:`DTS_TESTCASES` environment variable.
     """
     global dts_logger
     global result
@@ -87,6 +143,8 @@ def run_all() -> None:
 
 
 def _check_dts_python_version() -> None:
+    """Check the required Python version - v3.10."""
+
     def RED(text: str) -> str:
         return f"\u001B[31;1m{str(text)}\u001B[0m"
 
@@ -111,9 +169,16 @@ def _run_execution(
     execution: ExecutionConfiguration,
     result: DTSResult,
 ) -> None:
-    """
-    Run the given execution. This involves running the execution setup as well as
-    running all build targets in the given execution.
+    """Run the given execution.
+
+    This involves running the execution setup as well as running all build targets
+    in the given execution. After that, execution teardown is run.
+
+    Args:
+        sut_node: The execution's SUT node.
+        tg_node: The execution's TG node.
+        execution: An execution's test run configuration.
+        result: The top level result object.
     """
     dts_logger.info(
         f"Running execution with SUT '{execution.system_under_test_node.name}'."
@@ -150,8 +215,18 @@ def _run_build_target(
     execution: ExecutionConfiguration,
     execution_result: ExecutionResult,
 ) -> None:
-    """
-    Run the given build target.
+    """Run the given build target.
+
+    This involves running the build target setup as well as running all test suites
+    in the given execution the build target is defined in.
+    After that, build target teardown is run.
+
+    Args:
+        sut_node: The execution's SUT node.
+        tg_node: The execution's TG node.
+        build_target: A build target's test run configuration.
+        execution: The build target's execution's test run configuration.
+        execution_result: The execution level result object associated with the execution.
     """
     dts_logger.info(f"Running build target '{build_target.name}'.")
     build_target_result = execution_result.add_build_target(build_target)
@@ -183,10 +258,17 @@ def _run_all_suites(
     execution: ExecutionConfiguration,
     build_target_result: BuildTargetResult,
 ) -> None:
-    """
-    Use the given build_target to run execution's test suites
-    with possibly only a subset of test cases.
-    If no subset is specified, run all test cases.
+    """Run the execution's (possibly a subset) test suites using the current build_target.
+
+    The function assumes the build target we're testing has already been built on the SUT node.
+    The current build target thus corresponds to the current DPDK build present on the SUT node.
+
+    Args:
+        sut_node: The execution's SUT node.
+        tg_node: The execution's TG node.
+        execution: The execution's test run configuration associated with the current build target.
+        build_target_result: The build target level result object associated
+            with the current build target.
     """
     end_build_target = False
     if not execution.skip_smoke_tests:
@@ -215,16 +297,22 @@ def _run_single_suite(
     build_target_result: BuildTargetResult,
     test_suite_config: TestSuiteConfig,
 ) -> None:
-    """Runs a single test suite.
+    """Run all test suite in a single test suite module.
+
+    The function assumes the build target we're testing has already been built on the SUT node.
+    The current build target thus corresponds to the current DPDK build present on the SUT node.
 
     Args:
-        sut_node: Node to run tests on.
-        execution: Execution the test case belongs to.
-        build_target_result: Build target configuration test case is run on
-        test_suite_config: Test suite configuration
+        sut_node: The execution's SUT node.
+        tg_node: The execution's TG node.
+        execution: The execution's test run configuration associated with the current build target.
+        build_target_result: The build target level result object associated
+            with the current build target.
+        test_suite_config: Test suite test run configuration specifying the test suite module
+            and possibly a subset of test cases of test suites in that module.
 
     Raises:
-        BlockingTestSuiteError: If a test suite that was marked as blocking fails.
+        BlockingTestSuiteError: If a blocking test suite fails.
     """
     try:
         full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}"
@@ -248,9 +336,7 @@ def _run_single_suite(
 
 
 def _exit_dts() -> None:
-    """
-    Process all errors and exit with the proper exit code.
-    """
+    """Process all errors and exit with the proper exit code."""
     result.process()
 
     if dts_logger:
diff --git a/dts/main.py b/dts/main.py
index 5d4714b0c3..f703615d11 100755
--- a/dts/main.py
+++ b/dts/main.py
@@ -4,9 +4,7 @@
 # Copyright(c) 2022 PANTHEON.tech s.r.o.
 # Copyright(c) 2022 University of New Hampshire
 
-"""
-A test framework for testing DPDK.
-"""
+"""The DTS executable."""
 
 import logging
 
@@ -17,6 +15,10 @@ def main() -> None:
     """Set DTS settings, then run DTS.
 
     The DTS settings are taken from the command line arguments and the environment variables.
+    The settings object is stored in the module-level variable settings.SETTINGS which the entire
+    framework uses. After importing the module (or the variable), any changes to the variable are
+    not going to be reflected without a re-import. This means that the SETTINGS variable must
+    be modified before the settings module is imported anywhere else in the framework.
     """
     settings.SETTINGS = settings.get_settings()
     from framework import dts
-- 
2.34.1

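[Editorial note] The nested stages listed in the run_all() docstring boil down to four nested loops with per-level setup, teardown and error recording; a structural sketch (attribute names are illustrative, error handling elided):

    executions: list = []  # stands in for the parsed test run configuration

    for execution in executions:                      # Execution stage
        # execution setup
        for build_target in execution.build_targets:  # Build target stage
            # build DPDK on the SUT
            for test_suite in execution.test_suites:  # Test suite stage
                # suite setup
                for test_case in test_suite.cases:    # Test case stage
                    # case setup, test logic, case teardown;
                    # each step's result (or error) is recorded
                    pass
                # suite teardown
            # build target teardown
        # execution teardown
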

^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v5 08/23] dts: test suite docstring update
  2023-11-06 17:15       ` [PATCH v5 00/23] dts: add dts api docs Juraj Linkeš
                           ` (6 preceding siblings ...)
  2023-11-06 17:15         ` [PATCH v5 07/23] dts: dts runner and main " Juraj Linkeš
@ 2023-11-06 17:15         ` Juraj Linkeš
  2023-11-06 17:15         ` [PATCH v5 09/23] dts: test result " Juraj Linkeš
                           ` (15 subsequent siblings)
  23 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-06 17:15 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/test_suite.py | 223 +++++++++++++++++++++++++++---------
 1 file changed, 168 insertions(+), 55 deletions(-)

diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index d53553bf34..8daac35818 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -2,8 +2,19 @@
 # Copyright(c) 2010-2014 Intel Corporation
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
-Base class for creating DTS test cases.
+"""Features common to all test suites.
+
+The module defines the :class:`TestSuite` class which doesn't contain any test cases, and as such
+must be extended by subclasses which add test cases. The :class:`TestSuite` contains the basics
+needed by subclasses:
+
+    * Test suite and test case execution flow,
+    * Testbed (SUT, TG) configuration,
+    * Packet sending and verification,
+    * Test case verification.
+
+The module also defines a function, :func:`get_test_suites`,
+for gathering test suites from a Python module.
 """
 
 import importlib
@@ -31,25 +42,44 @@
 
 
 class TestSuite(object):
-    """
-    The base TestSuite class provides methods for handling basic flow of a test suite:
-    * test case filtering and collection
-    * test suite setup/cleanup
-    * test setup/cleanup
-    * test case execution
-    * error handling and results storage
-    Test cases are implemented by derived classes. Test cases are all methods
-    starting with test_, further divided into performance test cases
-    (starting with test_perf_) and functional test cases (all other test cases).
-    By default, all test cases will be executed. A list of testcase str names
-    may be specified in conf.yaml or on the command line
-    to filter which test cases to run.
-    The methods named [set_up|tear_down]_[suite|test_case] should be overridden
-    in derived classes if the appropriate suite/test case fixtures are needed.
+    """The base class with methods for handling the basic flow of a test suite.
+
+        * Test case filtering and collection,
+        * Test suite setup/cleanup,
+        * Test setup/cleanup,
+        * Test case execution,
+        * Error handling and results storage.
+
+    Test cases are implemented by subclasses. Test cases are all methods starting with ``test_``,
+    further divided into performance test cases (starting with ``test_perf_``)
+    and functional test cases (all other test cases).
+
+    By default, all test cases will be executed. A list of test case names may be specified
+    in the YAML test run configuration file and in the :option:`--test-cases` command line argument
+    or in the :envvar:`DTS_TESTCASES` environment variable to filter which test cases to run.
+    The union of both lists will be used. Any unknown test cases from the latter lists
+    will be silently ignored.
+
+    If the :option:`--re-run` command line argument or the :envvar:`DTS_RERUN` environment variable
+    is set, in case of a test case failure, the test case will be executed again until it passes
+    or it fails that many times in addition to the first failure.
+
+    The methods named ``[set_up|tear_down]_[suite|test_case]`` should be overridden in subclasses
+    if the appropriate test suite/test case fixtures are needed.
+
+    The test suite is aware of the testbed (the SUT and TG) it's running on. From this, it can
+    properly choose the IP addresses and other configuration that must be tailored to the testbed.
+
+    Attributes:
+        sut_node: The SUT node where the test suite is running.
+        tg_node: The TG node where the test suite is running.
+        is_blocking: Whether the test suite is blocking. A failure of a blocking test suite
+            will block the execution of all subsequent test suites in the current build target.
     """
 
     sut_node: SutNode
-    is_blocking = False
+    tg_node: TGNode
+    is_blocking: bool = False
     _logger: DTSLOG
     _test_cases_to_run: list[str]
     _func: bool
@@ -72,6 +102,19 @@ def __init__(
         func: bool,
         build_target_result: BuildTargetResult,
     ):
+        """Initialize the test suite testbed information and basic configuration.
+
+        Process what test cases to run, create the associated :class:`TestSuiteResult`,
+        find links between ports and set up default IP addresses to be used when configuring them.
+
+        Args:
+            sut_node: The SUT node where the test suite will run.
+            tg_node: The TG node where the test suite will run.
+            test_cases: The list of test cases to execute.
+                If empty, all test cases will be executed.
+            func: Whether to run functional tests.
+            build_target_result: The build target result this test suite is run in.
+        """
         self.sut_node = sut_node
         self.tg_node = tg_node
         self._logger = getLogger(self.__class__.__name__)
@@ -95,6 +138,7 @@ def __init__(
         self._tg_ip_address_ingress = ip_interface("192.168.101.3/24")
 
     def _process_links(self) -> None:
+        """Construct links between SUT and TG ports."""
         for sut_port in self.sut_node.ports:
             for tg_port in self.tg_node.ports:
                 if (sut_port.identifier, sut_port.peer) == (
@@ -106,27 +150,42 @@ def _process_links(self) -> None:
                     )
 
     def set_up_suite(self) -> None:
-        """
-        Set up test fixtures common to all test cases; this is done before
-        any test case is run.
+        """Set up test fixtures common to all test cases.
+
+        This is done before any test case has been run.
         """
 
     def tear_down_suite(self) -> None:
-        """
-        Tear down the previously created test fixtures common to all test cases.
+        """Tear down the previously created test fixtures common to all test cases.
+
+        This is done after all tests have been run.
         """
 
     def set_up_test_case(self) -> None:
-        """
-        Set up test fixtures before each test case.
+        """Set up test fixtures before each test case.
+
+        This is done before *each* test case.
         """
 
     def tear_down_test_case(self) -> None:
-        """
-        Tear down the previously created test fixtures after each test case.
+        """Tear down the previously created test fixtures after each test case.
+
+        This is done after *each* test case.
         """
 
     def configure_testbed_ipv4(self, restore: bool = False) -> None:
+        """Configure IPv4 addresses on all testbed ports.
+
+        The configured ports are:
+
+        * SUT ingress port,
+        * SUT egress port,
+        * TG ingress port,
+        * TG egress port.
+
+        Args:
+            restore: If :data:`True`, the configuration will be removed instead.
+        """
         delete = True if restore else False
         enable = False if restore else True
         self._configure_ipv4_forwarding(enable)
@@ -153,11 +212,13 @@ def _configure_ipv4_forwarding(self, enable: bool) -> None:
     def send_packet_and_capture(
         self, packet: Packet, duration: float = 1
     ) -> list[Packet]:
-        """
-        Send a packet through the appropriate interface and
-        receive on the appropriate interface.
-        Modify the packet with l3/l2 addresses corresponding
-        to the testbed and desired traffic.
+        """Send and receive `packet` using the associated TG.
+
+        Send `packet` through the appropriate interface and receive on the appropriate interface.
+        Modify the packet with L3/L2 addresses corresponding to the testbed and desired traffic.
+
+        Returns:
+            A list of received packets.
         """
         packet = self._adjust_addresses(packet)
         return self.tg_node.send_packet_and_capture(
@@ -165,13 +226,25 @@ def send_packet_and_capture(
         )
 
     def get_expected_packet(self, packet: Packet) -> Packet:
+        """Inject the proper L2/L3 addresses into `packet`.
+
+        Args:
+            packet: The packet to modify.
+
+        Returns:
+            `packet` with injected L2/L3 addresses.
+        """
         return self._adjust_addresses(packet, expected=True)
 
     def _adjust_addresses(self, packet: Packet, expected: bool = False) -> Packet:
-        """
+        """L2 and L3 address additions in both directions.
+
         Assumptions:
-            Two links between SUT and TG, one link is TG -> SUT,
-            the other SUT -> TG.
+            Two links between SUT and TG, one link is TG -> SUT, the other SUT -> TG.
+
+        Args:
+            packet: The packet to modify.
+            expected: If :data:`True`, the direction is SUT -> TG, otherwise TG -> SUT.
         """
         if expected:
             # The packet enters the TG from SUT
@@ -197,6 +270,19 @@ def _adjust_addresses(self, packet: Packet, expected: bool = False) -> Packet:
         return Ether(packet.build())
 
     def verify(self, condition: bool, failure_description: str) -> None:
+        """Verify `condition` and handle failures.
+
+        When `condition` is :data:`False`, raise an exception and log the last 10 commands
+        executed on both the SUT and TG.
+
+        Args:
+            condition: The condition to check.
+            failure_description: A short description of the failure
+                that will be stored in the raised exception.
+
+        Raises:
+            TestCaseVerifyError: `condition` is :data:`False`.
+        """
         if not condition:
             self._fail_test_case_verify(failure_description)
 
@@ -216,6 +302,19 @@ def _fail_test_case_verify(self, failure_description: str) -> None:
     def verify_packets(
         self, expected_packet: Packet, received_packets: list[Packet]
     ) -> None:
+        """Verify that `expected_packet` has been received.
+
+        Go through `received_packets` and check that `expected_packet` is among them.
+        If not, raise an exception and log the last 10 commands
+        executed on both the SUT and TG.
+
+        Args:
+            expected_packet: The packet we're expecting to receive.
+            received_packets: The packets where we're looking for `expected_packet`.
+
+        Raises:
+            TestCaseVerifyError: `expected_packet` is not among `received_packets`.
+        """
         for received_packet in received_packets:
             if self._compare_packets(expected_packet, received_packet):
                 break
@@ -303,10 +402,14 @@ def _verify_l3_packet(self, received_packet: IP, expected_packet: IP) -> bool:
         return True
 
     def run(self) -> None:
-        """
-        Setup, execute and teardown the whole suite.
-        Suite execution consists of running all test cases scheduled to be executed.
-        A test cast run consists of setup, execution and teardown of said test case.
+        """Set up, execute and tear down the whole suite.
+
+        Test suite execution consists of running all test cases scheduled to be executed.
+        A test case run consists of setup, execution and teardown of said test case.
+
+        Record the results of the setup and the teardown and handle failures.
+
+        The list of scheduled test cases is constructed when creating the :class:`TestSuite` object.
         """
         test_suite_name = self.__class__.__name__
 
@@ -338,9 +441,7 @@ def run(self) -> None:
                 raise BlockingTestSuiteError(test_suite_name)
 
     def _execute_test_suite(self) -> None:
-        """
-        Execute all test cases scheduled to be executed in this suite.
-        """
+        """Execute all test cases scheduled to be executed in this suite."""
         if self._func:
             for test_case_method in self._get_functional_test_cases():
                 test_case_name = test_case_method.__name__
@@ -357,14 +458,18 @@ def _execute_test_suite(self) -> None:
                     self._run_test_case(test_case_method, test_case_result)
 
     def _get_functional_test_cases(self) -> list[MethodType]:
-        """
-        Get all functional test cases.
+        """Get all functional test cases defined in this TestSuite.
+
+        Returns:
+            The list of functional test cases of this TestSuite.
         """
         return self._get_test_cases(r"test_(?!perf_)")
 
     def _get_test_cases(self, test_case_regex: str) -> list[MethodType]:
-        """
-        Return a list of test cases matching test_case_regex.
+        """Return a list of test cases matching test_case_regex.
+
+        Returns:
+            The list of test cases matching test_case_regex of this TestSuite.
         """
         self._logger.debug(f"Searching for test cases in {self.__class__.__name__}.")
         filtered_test_cases = []
@@ -378,9 +483,7 @@ def _get_test_cases(self, test_case_regex: str) -> list[MethodType]:
         return filtered_test_cases
 
     def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool:
-        """
-        Check whether the test case should be executed.
-        """
+        """Check whether the test case should be scheduled to be executed."""
         match = bool(re.match(test_case_regex, test_case_name))
         if self._test_cases_to_run:
             return match and test_case_name in self._test_cases_to_run
@@ -390,9 +493,9 @@ def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool
     def _run_test_case(
         self, test_case_method: MethodType, test_case_result: TestCaseResult
     ) -> None:
-        """
-        Setup, execute and teardown a test case in this suite.
-        Exceptions are caught and recorded in logs and results.
+        """Setup, execute and teardown a test case in this suite.
+
+        Record the result of the setup and the teardown and handle failures.
         """
         test_case_name = test_case_method.__name__
 
@@ -427,9 +530,7 @@ def _run_test_case(
     def _execute_test_case(
         self, test_case_method: MethodType, test_case_result: TestCaseResult
     ) -> None:
-        """
-        Execute one test case and handle failures.
-        """
+        """Execute one test case, record the result and handle failures."""
         test_case_name = test_case_method.__name__
         try:
             self._logger.info(f"Starting test case execution: {test_case_name}")
@@ -452,6 +553,18 @@ def _execute_test_case(
 
 
 def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
+    r"""Find all :class:`TestSuite`\s in a Python module.
+
+    Args:
+        testsuite_module_path: The path to the Python module.
+
+    Returns:
+        The list of :class:`TestSuite`\s found within the Python module.
+
+    Raises:
+        ConfigurationError: The test suite module was not found.
+    """
+
     def is_test_suite(object: Any) -> bool:
         try:
             if issubclass(object, TestSuite) and object is not TestSuite:
-- 
2.34.1
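To make the documented conventions concrete, a hedged sketch of a subclass follows; the
module and class names are hypothetical, only the test_/test_perf_ prefixes, the fixture
hooks and verify() come from the patch above:

    # Hypothetical suite in a module named tests/TestSuite_example.py.
    from framework.test_suite import TestSuite


    class TestExample(TestSuite):
        def set_up_suite(self) -> None:
            # Fixtures shared by all test cases, run once before any of them.
            ...

        def test_basic(self) -> None:
            # A functional test case (no test_perf_ prefix).
            self.verify(1 + 1 == 2, "basic arithmetic failed")

        def test_perf_throughput(self) -> None:
            # A performance test case, selected by the test_perf_ prefix.
            ...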


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v5 09/23] dts: test result docstring update
  2023-11-06 17:15       ` [PATCH v5 00/23] dts: add dts api docs Juraj Linkeš
                           ` (7 preceding siblings ...)
  2023-11-06 17:15         ` [PATCH v5 08/23] dts: test suite " Juraj Linkeš
@ 2023-11-06 17:15         ` Juraj Linkeš
  2023-11-06 17:15         ` [PATCH v5 10/23] dts: config " Juraj Linkeš
                           ` (14 subsequent siblings)
  23 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-06 17:15 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/test_result.py | 292 ++++++++++++++++++++++++++++-------
 1 file changed, 234 insertions(+), 58 deletions(-)

diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
index 603e18872c..f553948454 100644
--- a/dts/framework/test_result.py
+++ b/dts/framework/test_result.py
@@ -2,8 +2,25 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
-"""
-Generic result container and reporters
+r"""Record and process DTS results.
+
+The results are recorded in a hierarchical manner:
+
+    * :class:`DTSResult` contains
+    * :class:`ExecutionResult` contains
+    * :class:`BuildTargetResult` contains
+    * :class:`TestSuiteResult` contains
+    * :class:`TestCaseResult`
+
+Each result may contain multiple lower level results, e.g. there are multiple
+:class:`TestSuiteResult`\s in a :class:`BuildTargetResult`.
+The results have common parts, such as setup and teardown results, captured in :class:`BaseResult`,
+which also defines some common behaviors in its methods.
+
+Each result class has its own idiosyncrasies which it implements in overridden methods.
+
+The :option:`--output` command line argument and the :envvar:`DTS_OUTPUT_DIR` environment
+variable modify the directory where the files with results will be stored.
 """
 
 import os.path
@@ -26,26 +43,34 @@
 
 
 class Result(Enum):
-    """
-    An Enum defining the possible states that
-    a setup, a teardown or a test case may end up in.
-    """
+    """The possible states that a setup, a teardown or a test case may end up in."""
 
+    #:
     PASS = auto()
+    #:
     FAIL = auto()
+    #:
     ERROR = auto()
+    #:
     SKIP = auto()
 
     def __bool__(self) -> bool:
+        """Only PASS is True."""
         return self is self.PASS
 
 
 class FixtureResult(object):
-    """
-    A record that stored the result of a setup or a teardown.
-    The default is FAIL because immediately after creating the object
-    the setup of the corresponding stage will be executed, which also guarantees
-    the execution of teardown.
+    """A record that stores the result of a setup or a teardown.
+
+    FAIL is a sensible default since it prevents false positives
+    (which could happen if the default was :data:`True`).
+
+    Preventing false positives or other false results is preferable since a failure
+    is most likely to be investigated (the other false results may not be investigated at all).
+
+    Attributes:
+        result: The associated result.
+        error: The error in case of a failure.
     """
 
     result: Result
@@ -56,21 +81,32 @@ def __init__(
         result: Result = Result.FAIL,
         error: Exception | None = None,
     ):
+        """Initialize the constructor with the fixture result and store a possible error.
+
+        Args:
+            result: The result to store.
+            error: The error which happened when a failure occurred.
+        """
         self.result = result
         self.error = error
 
     def __bool__(self) -> bool:
+        """A wrapper around the stored :class:`Result`."""
         return bool(self.result)
 
 
 class Statistics(dict):
-    """
-    A helper class used to store the number of test cases by its result
-    along a few other basic information.
-    Using a dict provides a convenient way to format the data.
+    """How many test cases ended in which result state along some other basic information.
+
+    Subclassing :class:`dict` provides a convenient way to format the data.
     """
 
     def __init__(self, dpdk_version: str | None):
+        """Extend the constructor with relevant keys.
+
+        Args:
+            dpdk_version: The version of tested DPDK.
+        """
         super(Statistics, self).__init__()
         for result in Result:
             self[result.name] = 0
@@ -78,8 +114,17 @@ def __init__(self, dpdk_version: str | None):
         self["DPDK VERSION"] = dpdk_version
 
     def __iadd__(self, other: Result) -> "Statistics":
-        """
-        Add a Result to the final count.
+        """Add a Result to the final count.
+
+        Example:
+            stats: Statistics = Statistics(None)  # empty Statistics (no DPDK version yet)
+            stats += Result.PASS  # add a Result to `stats`
+
+        Args:
+            other: The Result to add to this statistics object.
+
+        Returns:
+            The modified statistics object.
         """
         self[other.name] += 1
         self["PASS RATE"] = (
@@ -90,9 +135,7 @@ def __iadd__(self, other: Result) -> "Statistics":
         return self
 
     def __str__(self) -> str:
-        """
-        Provide a string representation of the data.
-        """
+        """Each line contains the formatted key = value pair."""
         stats_str = ""
         for key, value in self.items():
             stats_str += f"{key:<12} = {value}\n"
@@ -102,10 +145,16 @@ def __str__(self) -> str:
 
 
 class BaseResult(object):
-    """
-    The Base class for all results. Stores the results of
-    the setup and teardown portions of the corresponding stage
-    and a list of results from each inner stage in _inner_results.
+    """Common data and behavior of DTS results.
+
+    Stores the results of the setup and teardown portions of the corresponding stage.
+    The hierarchical nature of DTS results is captured recursively in an internal list.
+    A stage is each level in this particular hierarchy (pre-execution or the top-most level,
+    execution, build target, test suite and test case).
+
+    Attributes:
+        setup_result: The result of the setup of the particular stage.
+        teardown_result: The result of the teardown of the particular stage.
     """
 
     setup_result: FixtureResult
@@ -113,15 +162,28 @@ class BaseResult(object):
     _inner_results: MutableSequence["BaseResult"]
 
     def __init__(self):
+        """Initialize the constructor."""
         self.setup_result = FixtureResult()
         self.teardown_result = FixtureResult()
         self._inner_results = []
 
     def update_setup(self, result: Result, error: Exception | None = None) -> None:
+        """Store the setup result.
+
+        Args:
+            result: The result of the setup.
+            error: The error that occurred in case of a failure.
+        """
         self.setup_result.result = result
         self.setup_result.error = error
 
     def update_teardown(self, result: Result, error: Exception | None = None) -> None:
+        """Store the teardown result.
+
+        Args:
+            result: The result of the teardown.
+            error: The error that occurred in case of a failure.
+        """
         self.teardown_result.result = result
         self.teardown_result.error = error
 
@@ -141,27 +203,55 @@ def _get_inner_errors(self) -> list[Exception]:
         ]
 
     def get_errors(self) -> list[Exception]:
+        """Compile errors from the whole result hierarchy.
+
+        Returns:
+            The errors from setup, teardown and all errors found in the whole result hierarchy.
+        """
         return self._get_setup_teardown_errors() + self._get_inner_errors()
 
     def add_stats(self, statistics: Statistics) -> None:
+        """Collate stats from the whole result hierarchy.
+
+        Args:
+            statistics: The :class:`Statistics` object where the stats will be collated.
+        """
         for inner_result in self._inner_results:
             inner_result.add_stats(statistics)
 
 
 class TestCaseResult(BaseResult, FixtureResult):
-    """
-    The test case specific result.
-    Stores the result of the actual test case.
-    Also stores the test case name.
+    r"""The test case specific result.
+
+    Stores the result of the actual test case. This is done by adding :class:`FixtureResult`
+    as an extra superclass. The setup and teardown results are :class:`FixtureResult`\s and
+    the class is itself a record of the test case.
+
+    Attributes:
+        test_case_name: The test case name.
     """
 
     test_case_name: str
 
     def __init__(self, test_case_name: str):
+        """Extend the constructor with `test_case_name`.
+
+        Args:
+            test_case_name: The test case's name.
+        """
         super(TestCaseResult, self).__init__()
         self.test_case_name = test_case_name
 
     def update(self, result: Result, error: Exception | None = None) -> None:
+        """Update the test case result.
+
+        This updates the result of the test case itself and doesn't affect
+        the results of the setup and teardown steps in any way.
+
+        Args:
+            result: The result of the test case.
+            error: The error that occurred in case of a failure.
+        """
         self.result = result
         self.error = error
 
@@ -171,38 +261,66 @@ def _get_inner_errors(self) -> list[Exception]:
         return []
 
     def add_stats(self, statistics: Statistics) -> None:
+        r"""Add the test case result to statistics.
+
+        The base method goes through the hierarchy recursively and this method is here to stop
+        the recursion, as the :class:`TestCaseResult`\s are the leaves of the hierarchy tree.
+
+        Args:
+            statistics: The :class:`Statistics` object where the stats will be added.
+        """
         statistics += self.result
 
     def __bool__(self) -> bool:
+        """The test case passed only if setup, teardown and the test case itself passed."""
         return (
             bool(self.setup_result) and bool(self.teardown_result) and bool(self.result)
         )
 
 
 class TestSuiteResult(BaseResult):
-    """
-    The test suite specific result.
-    The _inner_results list stores results of test cases in a given test suite.
-    Also stores the test suite name.
+    """The test suite specific result.
+
+    The internal list stores the results of all test cases in a given test suite.
+
+    Attributes:
+        suite_name: The test suite name.
     """
 
     suite_name: str
 
     def __init__(self, suite_name: str):
+        """Extend the constructor with `suite_name`.
+
+        Args:
+            suite_name: The test suite's name.
+        """
         super(TestSuiteResult, self).__init__()
         self.suite_name = suite_name
 
     def add_test_case(self, test_case_name: str) -> TestCaseResult:
+        """Add and return the inner result (test case).
+
+        Returns:
+            The test case's result.
+        """
         test_case_result = TestCaseResult(test_case_name)
         self._inner_results.append(test_case_result)
         return test_case_result
 
 
 class BuildTargetResult(BaseResult):
-    """
-    The build target specific result.
-    The _inner_results list stores results of test suites in a given build target.
-    Also stores build target specifics, such as compiler used to build DPDK.
+    """The build target specific result.
+
+    The internal list stores the results of all test suites in a given build target.
+
+    Attributes:
+        arch: The DPDK build target architecture.
+        os: The DPDK build target operating system.
+        cpu: The DPDK build target CPU.
+        compiler: The DPDK build target compiler.
+        compiler_version: The DPDK build target compiler version.
+        dpdk_version: The built DPDK version.
     """
 
     arch: Architecture
@@ -213,6 +331,11 @@ class BuildTargetResult(BaseResult):
     dpdk_version: str | None
 
     def __init__(self, build_target: BuildTargetConfiguration):
+        """Extend the constructor with the `build_target`'s build target config.
+
+        Args:
+            build_target: The build target's test run configuration.
+        """
         super(BuildTargetResult, self).__init__()
         self.arch = build_target.arch
         self.os = build_target.os
@@ -222,20 +345,35 @@ def __init__(self, build_target: BuildTargetConfiguration):
         self.dpdk_version = None
 
     def add_build_target_info(self, versions: BuildTargetInfo) -> None:
+        """Add information about the build target gathered at runtime.
+
+        Args:
+            versions: The additional information.
+        """
         self.compiler_version = versions.compiler_version
         self.dpdk_version = versions.dpdk_version
 
     def add_test_suite(self, test_suite_name: str) -> TestSuiteResult:
+        """Add and return the inner result (test suite).
+
+        Returns:
+            The test suite's result.
+        """
         test_suite_result = TestSuiteResult(test_suite_name)
         self._inner_results.append(test_suite_result)
         return test_suite_result
 
 
 class ExecutionResult(BaseResult):
-    """
-    The execution specific result.
-    The _inner_results list stores results of build targets in a given execution.
-    Also stores the SUT node configuration.
+    """The execution specific result.
+
+    The internal list stores the results of all build targets in a given execution.
+
+    Attributes:
+        sut_node: The SUT node used in the execution.
+        sut_os_name: The operating system of the SUT node.
+        sut_os_version: The operating system version of the SUT node.
+        sut_kernel_version: The operating system kernel version of the SUT node.
     """
 
     sut_node: NodeConfiguration
@@ -244,36 +382,55 @@ class ExecutionResult(BaseResult):
     sut_kernel_version: str
 
     def __init__(self, sut_node: NodeConfiguration):
+        """Extend the constructor with the `sut_node`'s config.
+
+        Args:
+            sut_node: The SUT node's test run configuration used in the execution.
+        """
         super(ExecutionResult, self).__init__()
         self.sut_node = sut_node
 
     def add_build_target(
         self, build_target: BuildTargetConfiguration
     ) -> BuildTargetResult:
+        """Add and return the inner result (build target).
+
+        Args:
+            build_target: The build target's test run configuration.
+
+        Returns:
+            The build target's result.
+        """
         build_target_result = BuildTargetResult(build_target)
         self._inner_results.append(build_target_result)
         return build_target_result
 
     def add_sut_info(self, sut_info: NodeInfo) -> None:
+        """Add SUT information gathered at runtime.
+
+        Args:
+            sut_info: The additional SUT node information.
+        """
         self.sut_os_name = sut_info.os_name
         self.sut_os_version = sut_info.os_version
         self.sut_kernel_version = sut_info.kernel_version
 
 
 class DTSResult(BaseResult):
-    """
-    Stores environment information and test results from a DTS run, which are:
-    * Execution level information, such as SUT and TG hardware.
-    * Build target level information, such as compiler, target OS and cpu.
-    * Test suite results.
-    * All errors that are caught and recorded during DTS execution.
+    """Stores environment information and test results from a DTS run.
 
-    The information is stored in nested objects.
+        * Execution level information, such as the testbed and the test suite list,
+        * Build target level information, such as compiler, target OS and cpu,
+        * Test suite and test case results,
+        * All errors that are caught and recorded during DTS execution.
 
-    The class is capable of computing the return code used to exit DTS with
-    from the stored error.
+    The information is stored hierarchically. This is the first level of the hierarchy
+    and as such is where the data from the whole hierarchy is collated or processed.
 
-    It also provides a brief statistical summary of passed/failed test cases.
+    The internal list stores the results of all executions.
+
+    Attributes:
+        dpdk_version: The DPDK version to record.
     """
 
     dpdk_version: str | None
@@ -284,6 +441,11 @@ class DTSResult(BaseResult):
     _stats_filename: str
 
     def __init__(self, logger: DTSLOG):
+        """Extend the constructor with top-level specifics.
+
+        Args:
+            logger: The logger instance the whole result will use.
+        """
         super(DTSResult, self).__init__()
         self.dpdk_version = None
         self._logger = logger
@@ -293,21 +455,33 @@ def __init__(self, logger: DTSLOG):
         self._stats_filename = os.path.join(SETTINGS.output_dir, "statistics.txt")
 
     def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult:
+        """Add and return the inner result (execution).
+
+        Args:
+            sut_node: The SUT node's test run configuration.
+
+        Returns:
+            The execution's result.
+        """
         execution_result = ExecutionResult(sut_node)
         self._inner_results.append(execution_result)
         return execution_result
 
     def add_error(self, error: Exception) -> None:
+        """Record an error that occurred outside any execution.
+
+        Args:
+            error: The exception to record.
+        """
         self._errors.append(error)
 
     def process(self) -> None:
-        """
-        Process the data after a DTS run.
-        The data is added to nested objects during runtime and this parent object
-        is not updated at that time. This requires us to process the nested data
-        after it's all been gathered.
+        """Process the data after a whole DTS run.
+
+        The data is added to inner objects during runtime and this object is not updated
+        at that time. This requires us to process the inner data after it's all been gathered.
 
-        The processing gathers all errors and the result statistics of test cases.
+        The processing gathers all errors and the statistics of test case results.
         """
         self._errors += self.get_errors()
         if self._errors and self._logger:
@@ -321,8 +495,10 @@ def process(self) -> None:
             stats_file.write(str(self._stats_result))
 
     def get_return_code(self) -> int:
-        """
-        Go through all stored Exceptions and return the highest error code found.
+        """Go through all stored Exceptions and return the final DTS error code.
+
+        Returns:
+            The highest error code found.
         """
         for error in self._errors:
             error_return_code = ErrorSeverity.GENERIC_ERR
-- 
2.34.1
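A short sketch of how the documented hierarchy is assembled in practice; `logger`,
`sut_config` and `build_target_config` are assumed to already exist:

    from framework.test_result import DTSResult, Result

    result = DTSResult(logger)  # top of the hierarchy
    execution_result = result.add_execution(sut_config)
    build_target_result = execution_result.add_build_target(build_target_config)
    suite_result = build_target_result.add_test_suite("TestExample")
    case_result = suite_result.add_test_case("test_basic")
    case_result.update(Result.PASS)  # the case itself; setup/teardown are tracked separately
    result.process()  # collate errors and write out statistics after the run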


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v5 10/23] dts: config docstring update
  2023-11-06 17:15       ` [PATCH v5 00/23] dts: add dts api docs Juraj Linkeš
                           ` (8 preceding siblings ...)
  2023-11-06 17:15         ` [PATCH v5 09/23] dts: test result " Juraj Linkeš
@ 2023-11-06 17:15         ` Juraj Linkeš
  2023-11-06 17:15         ` [PATCH v5 11/23] dts: remote session " Juraj Linkeš
                           ` (13 subsequent siblings)
  23 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-06 17:15 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/config/__init__.py | 371 ++++++++++++++++++++++++++-----
 dts/framework/config/types.py    | 132 +++++++++++
 2 files changed, 446 insertions(+), 57 deletions(-)
 create mode 100644 dts/framework/config/types.py

diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 2044c82611..0aa149a53d 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -3,8 +3,34 @@
 # Copyright(c) 2022-2023 University of New Hampshire
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
-Yaml config parsing methods
+"""Testbed configuration and test suite specification.
+
+This package offers classes that hold real-time information about the testbed and the test run
+configuration describing the tested testbed, as well as a loader function, :func:`load_config`,
+which loads the YAML test run configuration file
+and validates it according to :download:`the schema <conf_yaml_schema.json>`.
+
+The YAML test run configuration file is parsed into a dictionary, parts of which are used throughout
+this package. The allowed keys and types inside this dictionary are defined in
+the :doc:`types <framework.config.types>` module.
+
+The test run configuration has two main sections:
+
+    * The :class:`ExecutionConfiguration` which defines what tests are going to be run
+      and how DPDK will be built. It also references the testbed where these tests and DPDK
+      are going to be run,
+    * The nodes of the testbed are defined in the other section,
+      a :class:`list` of :class:`NodeConfiguration` objects.
+
+The real-time information about the testbed is supposed to be gathered at runtime.
+
+The classes defined in this package make heavy use of :mod:`dataclasses`.
+All of them use slots and are frozen:
+
+    * Slots enable some optimizations by pre-allocating space for the defined
+      attributes in the underlying data structure,
+    * Frozen makes the object immutable. This enables further optimizations,
+      and makes it thread safe should we ever want to move in that direction.
 """
 
 import json
@@ -12,11 +38,20 @@
 import pathlib
 from dataclasses import dataclass
 from enum import auto, unique
-from typing import Any, TypedDict, Union
+from typing import Union
 
 import warlock  # type: ignore[import]
 import yaml
 
+from framework.config.types import (
+    BuildTargetConfigDict,
+    ConfigurationDict,
+    ExecutionConfigDict,
+    NodeConfigDict,
+    PortConfigDict,
+    TestSuiteConfigDict,
+    TrafficGeneratorConfigDict,
+)
 from framework.exception import ConfigurationError
 from framework.settings import SETTINGS
 from framework.utils import StrEnum
@@ -24,55 +59,97 @@
 
 @unique
 class Architecture(StrEnum):
+    r"""The supported architectures of :class:`~framework.testbed_model.node.Node`\s."""
+
+    #:
     i686 = auto()
+    #:
     x86_64 = auto()
+    #:
     x86_32 = auto()
+    #:
     arm64 = auto()
+    #:
     ppc64le = auto()
 
 
 @unique
 class OS(StrEnum):
+    r"""The supported operating systems of :class:`~framework.testbed_model.node.Node`\s."""
+
+    #:
     linux = auto()
+    #:
     freebsd = auto()
+    #:
     windows = auto()
 
 
 @unique
 class CPUType(StrEnum):
+    r"""The supported CPUs of :class:`~framework.testbed_model.node.Node`\s."""
+
+    #:
     native = auto()
+    #:
     armv8a = auto()
+    #:
     dpaa2 = auto()
+    #:
     thunderx = auto()
+    #:
     xgene1 = auto()
 
 
 @unique
 class Compiler(StrEnum):
+    r"""The supported compilers of :class:`~framework.testbed_model.node.Node`\s."""
+
+    #:
     gcc = auto()
+    #:
     clang = auto()
+    #:
     icc = auto()
+    #:
     msvc = auto()
 
 
 @unique
 class TrafficGeneratorType(StrEnum):
+    """The supported traffic generators."""
+
+    #:
     SCAPY = auto()
 
 
-# Slots enables some optimizations, by pre-allocating space for the defined
-# attributes in the underlying data structure.
-#
-# Frozen makes the object immutable. This enables further optimizations,
-# and makes it thread safe should we every want to move in that direction.
 @dataclass(slots=True, frozen=True)
 class HugepageConfiguration:
+    r"""The hugepage configuration of :class:`~framework.testbed_model.node.Node`\s.
+
+    Attributes:
+        amount: The number of hugepages.
+        force_first_numa: If :data:`True`, the hugepages will be configured on the first NUMA node.
+    """
+
     amount: int
     force_first_numa: bool
 
 
 @dataclass(slots=True, frozen=True)
 class PortConfig:
+    r"""The port configuration of :class:`~framework.testbed_model.node.Node`\s.
+
+    Attributes:
+        node: The :class:`~framework.testbed_model.node.Node` where this port exists.
+        pci: The PCI address of the port.
+        os_driver_for_dpdk: The operating system driver name for use with DPDK.
+        os_driver: The operating system driver name when the operating system controls the port.
+        peer_node: The :class:`~framework.testbed_model.node.Node` of the port
+            connected to this port.
+        peer_pci: The PCI address of the port connected to this port.
+    """
+
     node: str
     pci: str
     os_driver_for_dpdk: str
@@ -81,18 +158,44 @@ class PortConfig:
     peer_pci: str
 
     @staticmethod
-    def from_dict(node: str, d: dict) -> "PortConfig":
+    def from_dict(node: str, d: PortConfigDict) -> "PortConfig":
+        """A convenience method that creates the object from fewer inputs.
+
+        Args:
+            node: The node where this port exists.
+            d: The configuration dictionary.
+
+        Returns:
+            The port configuration instance.
+        """
         return PortConfig(node=node, **d)
 
 
 @dataclass(slots=True, frozen=True)
 class TrafficGeneratorConfig:
+    """The configuration of traffic generators.
+
+    The class will be expanded when more configuration is needed.
+
+    Attributes:
+        traffic_generator_type: The type of the traffic generator.
+    """
+
     traffic_generator_type: TrafficGeneratorType
 
     @staticmethod
-    def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig":
-        # This looks useless now, but is designed to allow expansion to traffic
-        # generators that require more configuration later.
+    def from_dict(d: TrafficGeneratorConfigDict) -> "ScapyTrafficGeneratorConfig":
+        """A convenience method that produces traffic generator config of the proper type.
+
+        Args:
+            d: The configuration dictionary.
+
+        Returns:
+            The traffic generator configuration instance.
+
+        Raises:
+            ConfigurationError: An unknown traffic generator type was encountered.
+        """
         match TrafficGeneratorType(d["type"]):
             case TrafficGeneratorType.SCAPY:
                 return ScapyTrafficGeneratorConfig(
@@ -106,11 +209,31 @@ def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig":
 
 @dataclass(slots=True, frozen=True)
 class ScapyTrafficGeneratorConfig(TrafficGeneratorConfig):
+    """Scapy traffic generator specific configuration."""
+
     pass
 
 
 @dataclass(slots=True, frozen=True)
 class NodeConfiguration:
+    r"""The configuration of :class:`~framework.testbed_model.node.Node`\s.
+
+    Attributes:
+        name: The name of the :class:`~framework.testbed_model.node.Node`.
+        hostname: The hostname of the :class:`~framework.testbed_model.node.Node`.
+            Can be an IP or a domain name.
+        user: The name of the user used to connect to
+            the :class:`~framework.testbed_model.node.Node`.
+        password: The password of the user. The use of passwords is heavily discouraged.
+            Please use keys instead.
+        arch: The architecture of the :class:`~framework.testbed_model.node.Node`.
+        os: The operating system of the :class:`~framework.testbed_model.node.Node`.
+        lcores: A comma delimited list of logical cores to use when running DPDK.
+        use_first_core: If :data:`True`, the first logical core will also be used.
+        hugepages: An optional hugepage configuration.
+        ports: The ports that can be used in testing.
+    """
+
     name: str
     hostname: str
     user: str
@@ -123,57 +246,91 @@ class NodeConfiguration:
     ports: list[PortConfig]
 
     @staticmethod
-    def from_dict(d: dict) -> Union["SutNodeConfiguration", "TGNodeConfiguration"]:
-        hugepage_config = d.get("hugepages")
-        if hugepage_config:
-            if "force_first_numa" not in hugepage_config:
-                hugepage_config["force_first_numa"] = False
-            hugepage_config = HugepageConfiguration(**hugepage_config)
-
-        common_config = {
-            "name": d["name"],
-            "hostname": d["hostname"],
-            "user": d["user"],
-            "password": d.get("password"),
-            "arch": Architecture(d["arch"]),
-            "os": OS(d["os"]),
-            "lcores": d.get("lcores", "1"),
-            "use_first_core": d.get("use_first_core", False),
-            "hugepages": hugepage_config,
-            "ports": [PortConfig.from_dict(d["name"], port) for port in d["ports"]],
-        }
-
+    def from_dict(
+        d: NodeConfigDict,
+    ) -> Union["SutNodeConfiguration", "TGNodeConfiguration"]:
+        """A convenience method that processes the inputs before creating a specialized instance.
+
+        Args:
+            d: The configuration dictionary.
+
+        Returns:
+            Either an SUT or TG configuration instance.
+        """
+        hugepage_config = None
+        if "hugepages" in d:
+            hugepage_config_dict = d["hugepages"]
+            if "force_first_numa" not in hugepage_config_dict:
+                hugepage_config_dict["force_first_numa"] = False
+            hugepage_config = HugepageConfiguration(**hugepage_config_dict)
+
+        # The calls here contain duplicated code because Mypy doesn't
+        # properly support dictionary unpacking with TypedDicts.
         if "traffic_generator" in d:
             return TGNodeConfiguration(
+                name=d["name"],
+                hostname=d["hostname"],
+                user=d["user"],
+                password=d.get("password"),
+                arch=Architecture(d["arch"]),
+                os=OS(d["os"]),
+                lcores=d.get("lcores", "1"),
+                use_first_core=d.get("use_first_core", False),
+                hugepages=hugepage_config,
+                ports=[PortConfig.from_dict(d["name"], port) for port in d["ports"]],
                 traffic_generator=TrafficGeneratorConfig.from_dict(
                     d["traffic_generator"]
                 ),
-                **common_config,
             )
         else:
             return SutNodeConfiguration(
-                memory_channels=d.get("memory_channels", 1), **common_config
+                name=d["name"],
+                hostname=d["hostname"],
+                user=d["user"],
+                password=d.get("password"),
+                arch=Architecture(d["arch"]),
+                os=OS(d["os"]),
+                lcores=d.get("lcores", "1"),
+                use_first_core=d.get("use_first_core", False),
+                hugepages=hugepage_config,
+                ports=[PortConfig.from_dict(d["name"], port) for port in d["ports"]],
+                memory_channels=d.get("memory_channels", 1),
             )
 
 
 @dataclass(slots=True, frozen=True)
 class SutNodeConfiguration(NodeConfiguration):
+    """:class:`~framework.testbed_model.sut_node.SutNode` specific configuration.
+
+    Attributes:
+        memory_channels: The number of memory channels to use when running DPDK.
+    """
+
     memory_channels: int
 
 
 @dataclass(slots=True, frozen=True)
 class TGNodeConfiguration(NodeConfiguration):
+    """:class:`~framework.testbed_model.tg_node.TGNode` specific configuration.
+
+    Attributes:
+        traffic_generator: The configuration of the traffic generator present on the TG node.
+    """
+
     traffic_generator: ScapyTrafficGeneratorConfig
 
 
 @dataclass(slots=True, frozen=True)
 class NodeInfo:
-    """Class to hold important versions within the node.
-
-    This class, unlike the NodeConfiguration class, cannot be generated at the start.
-    This is because we need to initialize a connection with the node before we can
-    collect the information needed in this class. Therefore, it cannot be a part of
-    the configuration class above.
+    """Supplemental node information.
+
+    Attributes:
+        os_name: The name of the running operating system of
+            the :class:`~framework.testbed_model.node.Node`.
+        os_version: The version of the running operating system of
+            the :class:`~framework.testbed_model.node.Node`.
+        kernel_version: The kernel version of the running operating system of
+            the :class:`~framework.testbed_model.node.Node`.
     """
 
     os_name: str
@@ -183,6 +340,20 @@ class NodeInfo:
 
 @dataclass(slots=True, frozen=True)
 class BuildTargetConfiguration:
+    """DPDK build configuration.
+
+    The configuration used for building DPDK.
+
+    Attributes:
+        arch: The target architecture to build for.
+        os: The target OS to build for.
+        cpu: The target CPU to build for.
+        compiler: The compiler executable to use.
+        compiler_wrapper: This string will be put in front of the compiler when
+            executing the build. Useful for adding wrapper commands, such as ``ccache``.
+        name: The name of the build target, constructed from the other attributes.
+    """
+
     arch: Architecture
     os: OS
     cpu: CPUType
@@ -191,7 +362,18 @@ class BuildTargetConfiguration:
     name: str
 
     @staticmethod
-    def from_dict(d: dict) -> "BuildTargetConfiguration":
+    def from_dict(d: BuildTargetConfigDict) -> "BuildTargetConfiguration":
+        r"""A convenience method that processes the inputs before creating an instance.
+
+        `arch`, `os`, `cpu` and `compiler` are converted to :class:`Enum`\s and
+        `name` is constructed from `arch`, `os`, `cpu` and `compiler`.
+
+        Args:
+            d: The configuration dictionary.
+
+        Returns:
+            The build target configuration instance.
+        """
         return BuildTargetConfiguration(
             arch=Architecture(d["arch"]),
             os=OS(d["os"]),
@@ -204,23 +386,29 @@ def from_dict(d: dict) -> "BuildTargetConfiguration":
 
 @dataclass(slots=True, frozen=True)
 class BuildTargetInfo:
-    """Class to hold important versions within the build target.
+    """Various versions and other information about a build target.
 
-    This is very similar to the NodeInfo class, it just instead holds information
-    for the build target.
+    Attributes:
+        dpdk_version: The DPDK version that was built.
+        compiler_version: The version of the compiler used to build DPDK.
     """
 
     dpdk_version: str
     compiler_version: str
 
 
-class TestSuiteConfigDict(TypedDict):
-    suite: str
-    cases: list[str]
-
-
 @dataclass(slots=True, frozen=True)
 class TestSuiteConfig:
+    """Test suite configuration.
+
+    Information about a single test suite to be executed.
+
+    Attributes:
+        test_suite: The name of the test suite module without the starting ``TestSuite_``.
+        test_cases: The names of test cases from this test suite to execute.
+            If empty, all test cases will be executed.
+    """
+
     test_suite: str
     test_cases: list[str]
 
@@ -228,6 +416,14 @@ class TestSuiteConfig:
     def from_dict(
         entry: str | TestSuiteConfigDict,
     ) -> "TestSuiteConfig":
+        """Create an instance from two different types.
+
+        Args:
+            entry: Either a suite name or a dictionary containing the config.
+
+        Returns:
+            The test suite configuration instance.
+        """
         if isinstance(entry, str):
             return TestSuiteConfig(test_suite=entry, test_cases=[])
         elif isinstance(entry, dict):
@@ -238,19 +434,49 @@ def from_dict(
 
 @dataclass(slots=True, frozen=True)
 class ExecutionConfiguration:
+    """The configuration of an execution.
+
+    The configuration contains testbed information, what tests to execute
+    and with what DPDK build.
+
+    Attributes:
+        build_targets: A list of DPDK builds to test.
+        perf: Whether to run performance tests.
+        func: Whether to run functional tests.
+        skip_smoke_tests: Whether to skip smoke tests.
+        test_suites: The names of test suites and/or test cases to execute.
+        system_under_test_node: The SUT node to use in this execution.
+        traffic_generator_node: The TG node to use in this execution.
+        vdevs: The names of virtual devices to test.
+    """
+
     build_targets: list[BuildTargetConfiguration]
     perf: bool
     func: bool
+    skip_smoke_tests: bool
     test_suites: list[TestSuiteConfig]
     system_under_test_node: SutNodeConfiguration
     traffic_generator_node: TGNodeConfiguration
     vdevs: list[str]
-    skip_smoke_tests: bool
 
     @staticmethod
     def from_dict(
-        d: dict, node_map: dict[str, Union[SutNodeConfiguration | TGNodeConfiguration]]
+        d: ExecutionConfigDict,
+        node_map: dict[str, Union[SutNodeConfiguration | TGNodeConfiguration]],
     ) -> "ExecutionConfiguration":
+        """A convenience method that processes the inputs before creating an instance.
+
+        The build target and test suite configs are transformed into their respective objects.
+        SUT and TG configurations are taken from `node_map`. The other (:class:`bool`) attributes
+        are just stored.
+
+        Args:
+            d: The configuration dictionary.
+            node_map: A dictionary mapping node names to their config objects.
+
+        Returns:
+            The execution configuration instance.
+        """
         build_targets: list[BuildTargetConfiguration] = list(
             map(BuildTargetConfiguration.from_dict, d["build_targets"])
         )
@@ -291,10 +517,31 @@ def from_dict(
 
 @dataclass(slots=True, frozen=True)
 class Configuration:
+    """DTS testbed and test configuration.
+
+    The node configuration is not stored in this object. Rather, all used node configurations
+    are stored inside the execution configuration where the nodes are actually used.
+
+    Attributes:
+        executions: Execution configurations.
+    """
+
     executions: list[ExecutionConfiguration]
 
     @staticmethod
-    def from_dict(d: dict) -> "Configuration":
+    def from_dict(d: ConfigurationDict) -> "Configuration":
+        """A convenience method that processes the inputs before creating an instance.
+
+        Build target and test suite configs are transformed into their respective objects.
+        The node configurations are processed first and then referenced by the execution
+        configurations built from them.
+
+        Args:
+            d: The configuration dictionary.
+
+        Returns:
+            The whole configuration instance.
+        """
         nodes: list[Union[SutNodeConfiguration | TGNodeConfiguration]] = list(
             map(NodeConfiguration.from_dict, d["nodes"])
         )
@@ -313,9 +560,17 @@ def from_dict(d: dict) -> "Configuration":
 
 
 def load_config() -> Configuration:
-    """
-    Loads the configuration file and the configuration file schema,
-    validates the configuration file, and creates a configuration object.
+    """Load DTS test run configuration from a file.
+
+    Load the YAML test run configuration file
+    and :download:`the configuration file schema <conf_yaml_schema.json>`,
+    validate the test run configuration file, and create a test run configuration object.
+
+    The YAML test run configuration file is specified in the :option:`--config-file` command line
+    argument or the :envvar:`DTS_CFG_FILE` environment variable.
+
+    Returns:
+        The parsed test run configuration.
     """
     with open(SETTINGS.config_file_path, "r") as f:
         config_data = yaml.safe_load(f)
@@ -326,6 +581,8 @@ def load_config() -> Configuration:
 
     with open(schema_path, "r") as f:
         schema = json.load(f)
-    config: dict[str, Any] = warlock.model_factory(schema, name="_Config")(config_data)
-    config_obj: Configuration = Configuration.from_dict(dict(config))
+    config = warlock.model_factory(schema, name="_Config")(config_data)
+    config_obj: Configuration = Configuration.from_dict(
+        dict(config)  # type: ignore[arg-type]
+    )
     return config_obj
diff --git a/dts/framework/config/types.py b/dts/framework/config/types.py
new file mode 100644
index 0000000000..1927910d88
--- /dev/null
+++ b/dts/framework/config/types.py
@@ -0,0 +1,132 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+"""Configuration dictionary contents specification.
+
+These type definitions serve as documentation of the configuration dictionary contents.
+
+The definitions use the built-in :class:`~typing.TypedDict` construct.
+"""
+
+from typing import TypedDict
+
+
+class PortConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    pci: str
+    #:
+    os_driver_for_dpdk: str
+    #:
+    os_driver: str
+    #:
+    peer_node: str
+    #:
+    peer_pci: str
+
+
+class TrafficGeneratorConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    type: str
+
+
+class HugepageConfigurationDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    amount: int
+    #:
+    force_first_numa: bool
+
+
+class NodeConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    hugepages: HugepageConfigurationDict
+    #:
+    name: str
+    #:
+    hostname: str
+    #:
+    user: str
+    #:
+    password: str
+    #:
+    arch: str
+    #:
+    os: str
+    #:
+    lcores: str
+    #:
+    use_first_core: bool
+    #:
+    ports: list[PortConfigDict]
+    #:
+    memory_channels: int
+    #:
+    traffic_generator: TrafficGeneratorConfigDict
+
+
+class BuildTargetConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    arch: str
+    #:
+    os: str
+    #:
+    cpu: str
+    #:
+    compiler: str
+    #:
+    compiler_wrapper: str
+
+
+class TestSuiteConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    suite: str
+    #:
+    cases: list[str]
+
+
+class ExecutionSUTConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    node_name: str
+    #:
+    vdevs: list[str]
+
+
+class ExecutionConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    build_targets: list[BuildTargetConfigDict]
+    #:
+    perf: bool
+    #:
+    func: bool
+    #:
+    skip_smoke_tests: bool
+    #:
+    test_suites: list[TestSuiteConfigDict]
+    #:
+    system_under_test_node: ExecutionSUTConfigDict
+    #:
+    traffic_generator_node: str
+
+
+class ConfigurationDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    nodes: list[NodeConfigDict]
+    #:
+    executions: list[ExecutionConfigDict]
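
As a usage illustration, a dictionary conforming to one of these
definitions type-checks as expected (the values below are illustrative
only):

    from framework.config.types import PortConfigDict

    port: PortConfigDict = {
        "pci": "0000:00:08.0",
        "os_driver_for_dpdk": "vfio-pci",
        "os_driver": "i40e",
        "peer_node": "TG 1",
        "peer_pci": "0000:00:08.1",
    }
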
-- 
2.34.1



* [PATCH v5 11/23] dts: remote session docstring update
  2023-11-06 17:15       ` [PATCH v5 00/23] dts: add dts api docs Juraj Linkeš
                           ` (9 preceding siblings ...)
  2023-11-06 17:15         ` [PATCH v5 10/23] dts: config " Juraj Linkeš
@ 2023-11-06 17:15         ` Juraj Linkeš
  2023-11-06 17:15         ` [PATCH v5 12/23] dts: interactive " Juraj Linkeš
                           ` (12 subsequent siblings)
  23 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-06 17:15 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/remote_session/__init__.py      |  39 +++++-
 .../remote_session/remote_session.py          | 128 +++++++++++++-----
 dts/framework/remote_session/ssh_session.py   |  16 +--
 3 files changed, 135 insertions(+), 48 deletions(-)

diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index 5e7ddb2b05..51a01d6b5e 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -2,12 +2,14 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
-"""
-The package provides modules for managing remote connections to a remote host (node),
-differentiated by OS.
-The package provides a factory function, create_session, that returns the appropriate
-remote connection based on the passed configuration. The differences are in the
-underlying transport protocol (e.g. SSH) and remote OS (e.g. Linux).
+"""Remote interactive and non-interactive sessions.
+
+This package provides modules for managing remote connections to a remote host (node).
+
+The non-interactive sessions send commands and return their output and exit code.
+
+The interactive sessions open a shell which stays open for the session's lifetime,
+allowing data to be sent and received within that particular shell.
 """
 
 # pylama:ignore=W0611
@@ -26,10 +28,35 @@
 def create_remote_session(
     node_config: NodeConfiguration, name: str, logger: DTSLOG
 ) -> RemoteSession:
+    """Factory for non-interactive remote sessions.
+
+    The function returns an SSH session, but will be extended if support
+    for other protocols is added.
+
+    Args:
+        node_config: The test run configuration of the node to connect to.
+        name: The name of the session.
+        logger: The logger instance this session will use.
+
+    Returns:
+        The SSH remote session.
+    """
     return SSHSession(node_config, name, logger)
 
 
 def create_interactive_session(
     node_config: NodeConfiguration, logger: DTSLOG
 ) -> InteractiveRemoteSession:
+    """Factory for interactive remote sessions.
+
+    The function returns an interactive SSH session, but will be extended if support
+    for other protocols is added.
+
+    Args:
+        node_config: The test run configuration of the node to connect to.
+        logger: The logger instance this session will use.
+
+    Returns:
+        The interactive SSH remote session.
+    """
     return InteractiveRemoteSession(node_config, logger)
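
A usage sketch of the two factories (a sketch only; getLogger is assumed
to be the DTS logger factory from framework.logger, and the node config
would come from the parsed test run configuration):

    from framework.config import NodeConfiguration
    from framework.logger import getLogger
    from framework.remote_session import (
        create_interactive_session,
        create_remote_session,
    )

    def open_sessions(node_config: NodeConfiguration) -> None:
        logger = getLogger("sut-main")
        # Non-interactive: each call sends one command and returns its result.
        remote = create_remote_session(node_config, "sut-main", logger)
        print(remote.send_command("uname -a").stdout)
        # Interactive: one continuously open shell, driven by
        # InteractiveShell subclasses rather than used directly.
        interactive = create_interactive_session(node_config, logger)
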
diff --git a/dts/framework/remote_session/remote_session.py b/dts/framework/remote_session/remote_session.py
index 0647d93de4..629c2d7b9c 100644
--- a/dts/framework/remote_session/remote_session.py
+++ b/dts/framework/remote_session/remote_session.py
@@ -3,6 +3,13 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
+"""Base remote session.
+
+This module contains the abstract base class for remote sessions and defines
+the structure of the result of a command execution.
+"""
+
+
 import dataclasses
 from abc import ABC, abstractmethod
 from pathlib import PurePath
@@ -15,8 +22,14 @@
 
 @dataclasses.dataclass(slots=True, frozen=True)
 class CommandResult:
-    """
-    The result of remote execution of a command.
+    """The result of remote execution of a command.
+
+    Attributes:
+        name: The name of the session that executed the command.
+        command: The executed command.
+        stdout: The standard output the command produced.
+        stderr: The standard error output the command produced.
+        return_code: The return code the command exited with.
     """
 
     name: str
@@ -26,6 +39,7 @@ class CommandResult:
     return_code: int
 
     def __str__(self) -> str:
+        """Format the command outputs."""
         return (
             f"stdout: '{self.stdout}'\n"
             f"stderr: '{self.stderr}'\n"
@@ -34,13 +48,24 @@ def __str__(self) -> str:
 
 
 class RemoteSession(ABC):
-    """
-    The base class for defining which methods must be implemented in order to connect
-    to a remote host (node) and maintain a remote session. The derived classes are
-    supposed to implement/use some underlying transport protocol (e.g. SSH) to
-    implement the methods. On top of that, it provides some basic services common to
-    all derived classes, such as keeping history and logging what's being executed
-    on the remote node.
+    """Non-interactive remote session.
+
+    The abstract methods must be implemented in order to connect to a remote host (node)
+    and maintain a remote session.
+    The subclasses must use (or implement) some underlying transport protocol (e.g. SSH)
+    to implement the methods. On top of that, this class provides some basic services common
+    to all subclasses, such as keeping history and logging what's being executed on the remote node.
+
+    Attributes:
+        name: The name of the session.
+        hostname: The node's hostname. Could be an IP (possibly with port, separated by a colon)
+            or a domain name.
+        ip: The IP address of the node or a domain name, whichever was used in `hostname`.
+        port: The port of the node, if given in `hostname`.
+        username: The username used in the connection.
+        password: The password used in the connection. Most frequently empty,
+            as the use of passwords is discouraged.
+        history: The executed commands during this session.
     """
 
     name: str
@@ -59,6 +84,16 @@ def __init__(
         session_name: str,
         logger: DTSLOG,
     ):
+        """Connect to the node during initialization.
+
+        Args:
+            node_config: The test run configuration of the node to connect to.
+            session_name: The name of the session.
+            logger: The logger instance this session will use.
+
+        Raises:
+            SSHConnectionError: If the connection to the node was not successful.
+        """
         self._node_config = node_config
 
         self.name = session_name
@@ -79,8 +114,13 @@ def __init__(
 
     @abstractmethod
     def _connect(self) -> None:
-        """
-        Create connection to assigned node.
+        """Create a connection to the node.
+
+        The implementation must assign the established session to self.session.
+
+        The implementation must catch all exceptions and convert them to an SSHConnectionError.
+
+        The implementation may optionally implement retry attempts.
         """
 
     def send_command(
@@ -90,11 +130,24 @@ def send_command(
         verify: bool = False,
         env: dict | None = None,
     ) -> CommandResult:
-        """
-        Send a command to the connected node using optional env vars
-        and return CommandResult.
-        If verify is True, check the return code of the executed command
-        and raise a RemoteCommandExecutionError if the command failed.
+        """Send `command` to the connected node.
+
+        The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT`
+        environment variable configure the timeout of command execution.
+
+        Args:
+            command: The command to execute.
+            timeout: Wait at most this long in seconds to execute `command`.
+            verify: If :data:`True`, will check the exit code of `command`.
+            env: A dictionary with environment variables to be used with `command` execution.
+
+        Raises:
+            SSHSessionDeadError: If the session isn't alive when sending `command`.
+            SSHTimeoutError: If `command` execution timed out.
+            RemoteCommandExecutionError: If verify is :data:`True` and `command` execution failed.
+
+        Returns:
+            The output of the command along with the return code.
         """
         self._logger.info(
             f"Sending: '{command}'" + (f" with env vars: '{env}'" if env else "")
@@ -115,29 +168,36 @@ def send_command(
     def _send_command(
         self, command: str, timeout: float, env: dict | None
     ) -> CommandResult:
-        """
-        Use the underlying protocol to execute the command using optional env vars
-        and return CommandResult.
+        """Send a command to the connected node.
+
+        The implementation must execute the command remotely with `env` environment variables
+        and return the result.
+
+        The implementation must catch all exceptions, raising an SSHSessionDeadError if
+        the session is not alive and an SSHTimeoutError if the command execution times out.
         """
 
     def close(self, force: bool = False) -> None:
-        """
-        Close the remote session and free all used resources.
+        """Close the remote session and free all used resources.
+
+        Args:
+            force: Force the closure of the connection. This may not clean up all resources.
         """
         self._logger.logger_exit()
         self._close(force)
 
     @abstractmethod
     def _close(self, force: bool = False) -> None:
-        """
-        Execute protocol specific steps needed to close the session properly.
+        """Protocol specific steps needed to close the session properly.
+
+        Args:
+            force: Force the closure of the connection. This may not clean up all resources.
+                This doesn't have to be implemented in the overriding method.
         """
 
     @abstractmethod
     def is_alive(self) -> bool:
-        """
-        Check whether the remote session is still responding.
-        """
+        """Check whether the remote session is still responding."""
 
     @abstractmethod
     def copy_from(
@@ -147,12 +207,12 @@ def copy_from(
     ) -> None:
         """Copy a file from the remote Node to the local filesystem.
 
-        Copy source_file from the remote Node associated with this remote
-        session to destination_file on the local filesystem.
+        Copy `source_file` from the remote Node associated with this remote session
+        to `destination_file` on the local filesystem.
 
         Args:
-            source_file: the file on the remote Node.
-            destination_file: a file or directory path on the local filesystem.
+            source_file: The file on the remote Node.
+            destination_file: A file or directory path on the local filesystem.
         """
 
     @abstractmethod
@@ -163,10 +223,10 @@ def copy_to(
     ) -> None:
         """Copy a file from local filesystem to the remote Node.
 
-        Copy source_file from local filesystem to destination_file
-        on the remote Node associated with this remote session.
+        Copy `source_file` from local filesystem to `destination_file` on the remote Node
+        associated with this remote session.
 
         Args:
-            source_file: the file on the local filesystem.
-            destination_file: a file or directory path on the remote Node.
+            source_file: The file on the local filesystem.
+            destination_file: A file or directory path on the remote Node.
         """
diff --git a/dts/framework/remote_session/ssh_session.py b/dts/framework/remote_session/ssh_session.py
index cee11d14d6..7186490a9a 100644
--- a/dts/framework/remote_session/ssh_session.py
+++ b/dts/framework/remote_session/ssh_session.py
@@ -1,6 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""SSH session remote session."""
+
 import socket
 import traceback
 from pathlib import PurePath
@@ -26,13 +28,8 @@
 class SSHSession(RemoteSession):
     """A persistent SSH connection to a remote Node.
 
-    The connection is implemented with the Fabric Python library.
-
-    Args:
-        node_config: The configuration of the Node to connect to.
-        session_name: The name of the session.
-        logger: The logger used for logging.
-            This should be passed from the parent OSSession.
+    The connection is implemented with
+    `the Fabric Python library <https://docs.fabfile.org/en/latest/>`_.
 
     Attributes:
         session: The underlying Fabric SSH connection.
@@ -80,6 +77,7 @@ def _connect(self) -> None:
             raise SSHConnectionError(self.hostname, errors)
 
     def is_alive(self) -> bool:
+        """Overrides :meth:`~.remote_session.RemoteSession.is_alive`."""
         return self.session.is_connected
 
     def _send_command(
@@ -89,7 +87,7 @@ def _send_command(
 
         Args:
             command: The command to execute.
-            timeout: Wait at most this many seconds for the execution to complete.
+            timeout: Wait at most this long in seconds to execute the command.
             env: Extra environment variables that will be used in command execution.
 
         Raises:
@@ -118,6 +116,7 @@ def copy_from(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
+        """Overrides :meth:`~.remote_session.RemoteSession.copy_from`."""
         self.session.get(str(source_file), str(destination_file))
 
     def copy_to(
@@ -125,6 +124,7 @@ def copy_to(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
+        """Overrides :meth:`~.remote_session.RemoteSession.copy_to`."""
         self.session.put(str(source_file), str(destination_file))
 
     def _close(self, force: bool = False) -> None:
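
To make the _connect contract concrete, a sketch of the catch-and-convert
pattern a hypothetical transport subclass would follow (the transport
call is a placeholder and the remaining abstract methods are omitted):

    from framework.exception import SSHConnectionError
    from framework.remote_session.remote_session import RemoteSession

    class SketchSession(RemoteSession):
        def _connect(self) -> None:
            try:
                # placeholder for a real transport library's connect call
                self.session = self._transport_connect(self.ip, self.port)
            except Exception as e:
                # catch everything and convert, per the documented contract
                raise SSHConnectionError(self.hostname, [str(e)])
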
-- 
2.34.1



* [PATCH v5 12/23] dts: interactive remote session docstring update
  2023-11-06 17:15       ` [PATCH v5 00/23] dts: add dts api docs Juraj Linkeš
                           ` (10 preceding siblings ...)
  2023-11-06 17:15         ` [PATCH v5 11/23] dts: remote session " Juraj Linkeš
@ 2023-11-06 17:15         ` Juraj Linkeš
  2023-11-06 17:15         ` [PATCH v5 13/23] dts: port and virtual device " Juraj Linkeš
                           ` (11 subsequent siblings)
  23 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-06 17:15 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 .../interactive_remote_session.py             | 36 +++----
 .../remote_session/interactive_shell.py       | 99 +++++++++++--------
 dts/framework/remote_session/python_shell.py  | 26 ++++-
 dts/framework/remote_session/testpmd_shell.py | 61 +++++++++---
 4 files changed, 150 insertions(+), 72 deletions(-)

diff --git a/dts/framework/remote_session/interactive_remote_session.py b/dts/framework/remote_session/interactive_remote_session.py
index 9085a668e8..c1bf30ac61 100644
--- a/dts/framework/remote_session/interactive_remote_session.py
+++ b/dts/framework/remote_session/interactive_remote_session.py
@@ -22,27 +22,23 @@
 class InteractiveRemoteSession:
     """SSH connection dedicated to interactive applications.
 
-    This connection is created using paramiko and is a persistent connection to the
-    host. This class defines methods for connecting to the node and configures this
-    connection to send "keep alive" packets every 30 seconds. Because paramiko attempts
-    to use SSH keys to establish a connection first, providing a password is optional.
-    This session is utilized by InteractiveShells and cannot be interacted with
-    directly.
-
-    Arguments:
-        node_config: Configuration class for the node you are connecting to.
-        _logger: Desired logger for this session to use.
+    The connection is created using `paramiko <https://docs.paramiko.org/en/latest/>`_
+    and is a persistent connection to the host. This class defines the methods for connecting
+    to the node and configures the connection to send "keep alive" packets every 30 seconds.
+    Because paramiko attempts to use SSH keys to establish a connection first, providing
+    a password is optional. This session is utilized by InteractiveShells
+    and cannot be interacted with directly.
 
     Attributes:
-        hostname: Hostname that will be used to initialize a connection to the node.
-        ip: A subsection of hostname that removes the port for the connection if there
+        hostname: The hostname that will be used to initialize a connection to the node.
+        ip: A subsection of `hostname` that removes the port for the connection if there
             is one. If there is no port, this will be the same as hostname.
-        port: Port to use for the ssh connection. This will be extracted from the
-            hostname if there is a port included, otherwise it will default to 22.
+        port: Port to use for the ssh connection. This will be extracted from `hostname`
+            if there is a port included, otherwise it will default to ``22``.
         username: User to connect to the node with.
         password: Password of the user connecting to the host. This will default to an
             empty string if a password is not provided.
-        session: Underlying paramiko connection.
+        session: The underlying paramiko connection.
 
     Raises:
         SSHConnectionError: There is an error creating the SSH connection.
@@ -58,9 +54,15 @@ class InteractiveRemoteSession:
     _node_config: NodeConfiguration
     _transport: Transport | None
 
-    def __init__(self, node_config: NodeConfiguration, _logger: DTSLOG) -> None:
+    def __init__(self, node_config: NodeConfiguration, logger: DTSLOG) -> None:
+        """Connect to the node during initialization.
+
+        Args:
+            node_config: The test run configuration of the node to connect to.
+            logger: The logger instance this session will use.
+        """
         self._node_config = node_config
-        self._logger = _logger
+        self._logger = logger
         self.hostname = node_config.hostname
         self.username = node_config.user
         self.password = node_config.password if node_config.password else ""
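
The keepalive and keys-first behavior described in the class docstring
maps onto paramiko roughly like this (a sketch, not the class's actual
body; the address is made up):

    from paramiko import AutoAddPolicy, SSHClient

    client = SSHClient()
    client.set_missing_host_key_policy(AutoAddPolicy())
    # Keys are tried first, so the password is just a fallback.
    client.connect("192.168.0.2", port=22, username="root", password="")
    transport = client.get_transport()
    if transport is not None:
        transport.set_keepalive(30)  # "keep alive" packets every 30 seconds
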
diff --git a/dts/framework/remote_session/interactive_shell.py b/dts/framework/remote_session/interactive_shell.py
index c24376b2a8..a98a822e91 100644
--- a/dts/framework/remote_session/interactive_shell.py
+++ b/dts/framework/remote_session/interactive_shell.py
@@ -3,18 +3,20 @@
 
 """Common functionality for interactive shell handling.
 
-This base class, InteractiveShell, is meant to be extended by other classes that
-contain functionality specific to that shell type. These derived classes will often
-modify things like the prompt to expect or the arguments to pass into the application,
-but still utilize the same method for sending a command and collecting output. How
-this output is handled however is often application specific. If an application needs
-elevated privileges to start it is expected that the method for gaining those
-privileges is provided when initializing the class.
+The base class, :class:`InteractiveShell`, is meant to be extended by subclasses that contain
+functionality specific to that shell type. These subclasses will often modify things like
+the prompt to expect or the arguments to pass into the application, but still utilize
+the same method for sending a command and collecting output. How this output is handled however
+is often application specific. If an application needs elevated privileges to start it is expected
+that the method for gaining those privileges is provided when initializing the class.
+
+The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT`
+environment variable configure the timeout of getting the output from command execution.
 """
 
 from abc import ABC
 from pathlib import PurePath
-from typing import Callable
+from typing import Callable, ClassVar
 
 from paramiko import Channel, SSHClient, channel  # type: ignore[import]
 
@@ -30,28 +32,6 @@ class InteractiveShell(ABC):
     and collecting input until reaching a certain prompt. All interactive applications
     will use the same SSH connection, but each will create their own channel on that
     session.
-
-    Arguments:
-        interactive_session: The SSH session dedicated to interactive shells.
-        logger: Logger used for displaying information in the console.
-        get_privileged_command: Method for modifying a command to allow it to use
-            elevated privileges. If this is None, the application will not be started
-            with elevated privileges.
-        app_args: Command line arguments to be passed to the application on startup.
-        timeout: Timeout used for the SSH channel that is dedicated to this interactive
-            shell. This timeout is for collecting output, so if reading from the buffer
-            and no output is gathered within the timeout, an exception is thrown.
-
-    Attributes
-        _default_prompt: Prompt to expect at the end of output when sending a command.
-            This is often overridden by derived classes.
-        _command_extra_chars: Extra characters to add to the end of every command
-            before sending them. This is often overridden by derived classes and is
-            most commonly an additional newline character.
-        path: Path to the executable to start the interactive application.
-        dpdk_app: Whether this application is a DPDK app. If it is, the build
-            directory for DPDK on the node will be prepended to the path to the
-            executable.
     """
 
     _interactive_session: SSHClient
@@ -61,10 +41,22 @@ class InteractiveShell(ABC):
     _logger: DTSLOG
     _timeout: float
     _app_args: str
-    _default_prompt: str = ""
-    _command_extra_chars: str = ""
-    path: PurePath
-    dpdk_app: bool = False
+
+    #: Prompt to expect at the end of output when sending a command.
+    #: This is often overridden by subclasses.
+    _default_prompt: ClassVar[str] = ""
+
+    #: Extra characters to add to the end of every command
+    #: before sending them. This is often overridden by subclasses and is
+    #: most commonly an additional newline character.
+    _command_extra_chars: ClassVar[str] = ""
+
+    #: Path to the executable to start the interactive application.
+    path: ClassVar[PurePath]
+
+    #: Whether this application is a DPDK app. If it is, the build directory
+    #: for DPDK on the node will be prepended to the path to the executable.
+    dpdk_app: ClassVar[bool] = False
 
     def __init__(
         self,
@@ -74,6 +66,19 @@ def __init__(
         app_args: str = "",
         timeout: float = SETTINGS.timeout,
     ) -> None:
+        """Create an SSH channel during initialization.
+
+        Args:
+            interactive_session: The SSH session dedicated to interactive shells.
+            logger: The logger instance this session will use.
+            get_privileged_command: A method for modifying a command to allow it to use
+                elevated privileges. If :data:`None`, the application will not be started
+                with elevated privileges.
+            app_args: The command line arguments to be passed to the application on startup.
+            timeout: The timeout used for the SSH channel that is dedicated to this interactive
+                shell. This timeout is for collecting output, so if reading from the buffer
+                and no output is gathered within the timeout, an exception is thrown.
+        """
         self._interactive_session = interactive_session
         self._ssh_channel = self._interactive_session.invoke_shell()
         self._stdin = self._ssh_channel.makefile_stdin("w")
@@ -92,6 +97,10 @@ def _start_application(
 
         This method is often overridden by subclasses as their process for
         starting may look different.
+
+        Args:
+            get_privileged_command: A function (but could be any callable) that produces
+                the version of the command with elevated privileges.
         """
         start_command = f"{self.path} {self._app_args}"
         if get_privileged_command is not None:
@@ -99,16 +108,24 @@ def _start_application(
         self.send_command(start_command)
 
     def send_command(self, command: str, prompt: str | None = None) -> str:
-        """Send a command and get all output before the expected ending string.
+        """Send `command` and get all output before the expected ending string.
 
         Lines that expect input are not included in the stdout buffer, so they cannot
-        be used for expect. For example, if you were prompted to log into something
-        with a username and password, you cannot expect "username:" because it won't
-        yet be in the stdout buffer. A workaround for this could be consuming an
-        extra newline character to force the current prompt into the stdout buffer.
+        be used for expect.
+
+        Example:
+            If you were prompted to log into something with a username and password,
+            you cannot expect ``username:`` because it won't yet be in the stdout buffer.
+            A workaround for this could be consuming an extra newline character to force
+            the current `prompt` into the stdout buffer.
+
+        Args:
+            command: The command to send.
+            prompt: After sending the command, `send_command` will be expecting this string.
+                If :data:`None`, will use the class's default prompt.
 
         Returns:
-            All output in the buffer before expected string
+            All output in the buffer before the expected string.
         """
         self._logger.info(f"Sending: '{command}'")
         if prompt is None:
@@ -126,8 +143,10 @@ def send_command(self, command: str, prompt: str | None = None) -> str:
         return out
 
     def close(self) -> None:
+        """Properly free all resources."""
         self._stdin.close()
         self._ssh_channel.close()
 
     def __del__(self) -> None:
+        """Make sure the session is properly closed before deleting the object."""
         self.close()
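
A sketch of the workaround described in send_command (the prompts and
credentials are made up; `shell` may be any InteractiveShell instance):

    from framework.remote_session.interactive_shell import InteractiveShell

    def answer_login(shell: InteractiveShell) -> None:
        # "login:" waits for input, so it isn't in the stdout buffer yet;
        # an empty command (just the appended newline) flushes it out so
        # it can be used as the expected prompt.
        shell.send_command("", prompt="login:")
        shell.send_command("admin", prompt="password:")
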
diff --git a/dts/framework/remote_session/python_shell.py b/dts/framework/remote_session/python_shell.py
index cc3ad48a68..c8e5957ef7 100644
--- a/dts/framework/remote_session/python_shell.py
+++ b/dts/framework/remote_session/python_shell.py
@@ -1,12 +1,32 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""Python interactive shell.
+
+Typical usage example in a TestSuite::
+
+    from framework.remote_session import PythonShell
+    python_shell = self.tg_node.create_interactive_shell(
+        PythonShell, timeout=5, privileged=True
+    )
+    python_shell.send_command("print('Hello World')")
+    python_shell.close()
+"""
+
 from pathlib import PurePath
+from typing import ClassVar
 
 from .interactive_shell import InteractiveShell
 
 
 class PythonShell(InteractiveShell):
-    _default_prompt: str = ">>>"
-    _command_extra_chars: str = "\n"
-    path: PurePath = PurePath("python3")
+    """Python interactive shell."""
+
+    #: Python's prompt.
+    _default_prompt: ClassVar[str] = ">>>"
+
+    #: This forces the prompt to appear after sending a command.
+    _command_extra_chars: ClassVar[str] = "\n"
+
+    #: The Python executable.
+    path: ClassVar[PurePath] = PurePath("python3")
diff --git a/dts/framework/remote_session/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py
index 1455b5a199..2632515d74 100644
--- a/dts/framework/remote_session/testpmd_shell.py
+++ b/dts/framework/remote_session/testpmd_shell.py
@@ -1,45 +1,82 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 University of New Hampshire
 
+"""Testpmd interactive shell.
+
+Typical usage example in a TestSuite::
+
+    testpmd_shell = self.sut_node.create_interactive_shell(
+        TestPmdShell, privileged=True
+    )
+    devices = testpmd_shell.get_devices()
+    for device in devices:
+        print(device)
+    testpmd_shell.close()
+"""
+
 from pathlib import PurePath
-from typing import Callable
+from typing import Callable, ClassVar
 
 from .interactive_shell import InteractiveShell
 
 
 class TestPmdDevice(object):
+    """The data of a device that testpmd can recognize.
+
+    Attributes:
+        pci_address: The PCI address of the device.
+    """
+
     pci_address: str
 
     def __init__(self, pci_address_line: str):
+        """Initialize the device from the testpmd output line string.
+
+        Args:
+            pci_address_line: A line of testpmd output that contains a device.
+        """
         self.pci_address = pci_address_line.strip().split(": ")[1].strip()
 
     def __str__(self) -> str:
+        """The PCI address captures what the device is."""
         return self.pci_address
 
 
 class TestPmdShell(InteractiveShell):
-    path: PurePath = PurePath("app", "dpdk-testpmd")
-    dpdk_app: bool = True
-    _default_prompt: str = "testpmd>"
-    _command_extra_chars: str = (
-        "\n"  # We want to append an extra newline to every command
-    )
+    """Testpmd interactive shell.
+
+    Users of the testpmd shell should never use
+    the :meth:`~framework.remote_session.interactive_shell.InteractiveShell.send_command` method
+    directly, but rather call specialized methods. If there isn't one that satisfies a need,
+    it should be added.
+    """
+
+    #: The path to the testpmd executable.
+    path: ClassVar[PurePath] = PurePath("app", "dpdk-testpmd")
+
+    #: Flag this as a DPDK app so that it's clear this is not a system app and
+    #: needs to be looked for in a specific path.
+    dpdk_app: ClassVar[bool] = True
+
+    #: The testpmd's prompt.
+    _default_prompt: ClassVar[str] = "testpmd>"
+
+    #: This forces the prompt to appear after sending a command.
+    _command_extra_chars: ClassVar[str] = "\n"
 
     def _start_application(
         self, get_privileged_command: Callable[[str], str] | None
     ) -> None:
-        """See "_start_application" in InteractiveShell."""
         self._app_args += " -- -i"
         super()._start_application(get_privileged_command)
 
     def get_devices(self) -> list[TestPmdDevice]:
-        """Get a list of device names that are known to testpmd
+        """Get a list of device names that are known to testpmd.
 
-        Uses the device info listed in testpmd and then parses the output to
-        return only the names of the devices.
+        Uses the device info listed in testpmd and then parses the output.
 
         Returns:
-            A list of strings representing device names (e.g. 0000:14:00.1)
+            A list of devices.
         """
         dev_info: str = self.send_command("show device info all")
         dev_list: list[TestPmdDevice] = []
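
The parsing done in TestPmdDevice.__init__ can be tried in isolation; the
sample line below imitates a line of `show device info all` output (the
exact formatting is an assumption here):

    line = "Device name: 0000:14:00.1"
    pci_address = line.strip().split(": ")[1].strip()
    assert pci_address == "0000:14:00.1"
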
-- 
2.34.1



* [PATCH v5 13/23] dts: port and virtual device docstring update
  2023-11-06 17:15       ` [PATCH v5 00/23] dts: add dts api docs Juraj Linkeš
                           ` (11 preceding siblings ...)
  2023-11-06 17:15         ` [PATCH v5 12/23] dts: interactive " Juraj Linkeš
@ 2023-11-06 17:15         ` Juraj Linkeš
  2023-11-06 17:15         ` [PATCH v5 14/23] dts: cpu " Juraj Linkeš
                           ` (10 subsequent siblings)
  23 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-06 17:15 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/__init__.py       | 16 ++++--
 dts/framework/testbed_model/port.py           | 53 +++++++++++++++----
 dts/framework/testbed_model/virtual_device.py | 17 +++++-
 3 files changed, 71 insertions(+), 15 deletions(-)

diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index 8ced05653b..a02be1f2d9 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -2,9 +2,19 @@
 # Copyright(c) 2022-2023 University of New Hampshire
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
-This package contains the classes used to model the physical traffic generator,
-system under test and any other components that need to be interacted with.
+"""Testbed modelling.
+
+This package defines the testbed elements DTS works with:
+
+    * A system under test node: :class:`SutNode`,
+    * A traffic generator node: :class:`TGNode`,
+    * The ports of network interface cards (NICs) present on nodes: :class:`Port`,
+    * The logical cores of CPUs present on nodes: :class:`LogicalCore`,
+    * The virtual devices that can be created on nodes: :class:`VirtualDevice`,
+    * The operating systems running on nodes: :class:`LinuxSession` and :class:`PosixSession`.
+
+DTS needs to be able to connect to nodes and understand some of the hardware present on these nodes
+to properly build and test DPDK.
 """
 
 # pylama:ignore=W0611
diff --git a/dts/framework/testbed_model/port.py b/dts/framework/testbed_model/port.py
index 680c29bfe3..817405bea4 100644
--- a/dts/framework/testbed_model/port.py
+++ b/dts/framework/testbed_model/port.py
@@ -2,6 +2,13 @@
 # Copyright(c) 2022 University of New Hampshire
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""NIC port model.
+
+Basic port information, such as location (ports are identified by their PCI address on a node),
+drivers and addresses.
+"""
+
+
 from dataclasses import dataclass
 
 from framework.config import PortConfig
@@ -9,24 +16,35 @@
 
 @dataclass(slots=True, frozen=True)
 class PortIdentifier:
+    """The port identifier.
+
+    Attributes:
+        node: The node where the port resides.
+        pci: The PCI address of the port on `node`.
+    """
+
     node: str
     pci: str
 
 
 @dataclass(slots=True)
 class Port:
-    """
-    identifier: The PCI address of the port on a node.
-
-    os_driver: The driver used by this port when the OS is controlling it.
-        Example: i40e
-    os_driver_for_dpdk: The driver the device must be bound to for DPDK to use it,
-        Example: vfio-pci.
+    """Physical port on a node.
 
-    Note: os_driver and os_driver_for_dpdk may be the same thing.
-        Example: mlx5_core
+    The ports are identified by the node they're on and their PCI addresses. The port on the other
+    side of the connection is also captured here.
+    Each port is serviced by a driver, which may be different for the operating system (`os_driver`)
+    and for DPDK (`os_driver_for_dpdk`). For some devices, they are the same, e.g.: ``mlx5_core``.
 
-    peer: The identifier of a port this port is connected with.
+    Attributes:
+        identifier: The PCI address of the port on a node.
+        os_driver: The operating system driver name when the operating system controls the port,
+            e.g.: ``i40e``.
+        os_driver_for_dpdk: The operating system driver name for use with DPDK, e.g.: ``vfio-pci``.
+        peer: The identifier of a port this port is connected with.
+            The `peer` is on a different node.
+        mac_address: The MAC address of the port.
+        logical_name: The logical name of the port. Must be discovered.
     """
 
     identifier: PortIdentifier
@@ -37,6 +55,12 @@ class Port:
     logical_name: str = ""
 
     def __init__(self, node_name: str, config: PortConfig):
+        """Initialize the port from `node_name` and `config`.
+
+        Args:
+            node_name: The name of the port's node.
+            config: The test run configuration of the port.
+        """
         self.identifier = PortIdentifier(
             node=node_name,
             pci=config.pci,
@@ -47,14 +71,23 @@ def __init__(self, node_name: str, config: PortConfig):
 
     @property
     def node(self) -> str:
+        """The node where the port resides."""
         return self.identifier.node
 
     @property
     def pci(self) -> str:
+        """The PCI address of the port."""
         return self.identifier.pci
 
 
 @dataclass(slots=True, frozen=True)
 class PortLink:
+    """The physical, cabled connection between the ports.
+
+    Attributes:
+        sut_port: The port on the SUT node connected to `tg_port`.
+        tg_port: The port on the TG node connected to `sut_port`.
+    """
+
     sut_port: Port
     tg_port: Port
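
A construction sketch for Port, assuming the PortConfig field names
mirror the config types from earlier in this series (all values are
illustrative):

    from framework.config import PortConfig
    from framework.testbed_model.port import Port

    config = PortConfig(
        node="SUT 1",
        pci="0000:00:08.0",
        os_driver_for_dpdk="vfio-pci",
        os_driver="i40e",
        peer_node="TG 1",
        peer_pci="0000:00:08.1",
    )
    port = Port("SUT 1", config)
    print(port.node, port.pci)  # identifier parts exposed as properties
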
diff --git a/dts/framework/testbed_model/virtual_device.py b/dts/framework/testbed_model/virtual_device.py
index eb664d9f17..e9b5e9c3be 100644
--- a/dts/framework/testbed_model/virtual_device.py
+++ b/dts/framework/testbed_model/virtual_device.py
@@ -1,16 +1,29 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""Virtual devices model.
+
+Alongside support for physical hardware, DPDK can create various virtual devices.
+"""
+
 
 class VirtualDevice(object):
-    """
-    Base class for virtual devices used by DPDK.
+    """Base class for virtual devices used by DPDK.
+
+    Attributes:
+        name: The name of the virtual device.
     """
 
     name: str
 
     def __init__(self, name: str):
+        """Initialize the virtual device.
+
+        Args:
+            name: The name of the virtual device.
+        """
         self.name = name
 
     def __str__(self) -> str:
+        """This corresponds to the name used for DPDK devices."""
         return self.name
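
A short usage note: the string form is what gets passed to DPDK, e.g.
when building EAL parameters (the vdev name is just an example):

    from framework.testbed_model.virtual_device import VirtualDevice

    vdev = VirtualDevice("crypto_openssl")
    eal_arg = f"--vdev {vdev}"  # str(vdev) yields the DPDK device name
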
-- 
2.34.1



* [PATCH v5 14/23] dts: cpu docstring update
  2023-11-06 17:15       ` [PATCH v5 00/23] dts: add dts api docs Juraj Linkeš
                           ` (12 preceding siblings ...)
  2023-11-06 17:15         ` [PATCH v5 13/23] dts: port and virtual device " Juraj Linkeš
@ 2023-11-06 17:15         ` Juraj Linkeš
  2023-11-06 17:15         ` [PATCH v5 15/23] dts: os session " Juraj Linkeš
                           ` (9 subsequent siblings)
  23 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-06 17:15 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/cpu.py | 196 +++++++++++++++++++++--------
 1 file changed, 144 insertions(+), 52 deletions(-)

diff --git a/dts/framework/testbed_model/cpu.py b/dts/framework/testbed_model/cpu.py
index 8fe785dfe4..4edeb4a7c2 100644
--- a/dts/framework/testbed_model/cpu.py
+++ b/dts/framework/testbed_model/cpu.py
@@ -1,6 +1,22 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""CPU core representation and filtering.
+
+This module provides a unified representation of logical CPU cores along
+with filtering capabilities.
+
+When simultaneous multithreading (SMT, also called hyperthreading) is enabled on a server,
+the physical CPU cores are split into logical CPU cores with different IDs.
+
+:class:`LogicalCoreCountFilter` filters by the number of logical cores. It's possible to specify
+the socket from which to filter the number of logical cores. It's also possible to not use all
+logical CPU cores from each physical core (e.g. only the first logical core of each physical core).
+
+:class:`LogicalCoreListFilter` filters by logical core IDs. This mostly checks that
+the logical cores are actually present on the server.
+"""
+
 import dataclasses
 from abc import ABC, abstractmethod
 from collections.abc import Iterable, ValuesView
@@ -11,9 +27,17 @@
 
 @dataclass(slots=True, frozen=True)
 class LogicalCore(object):
-    """
-    Representation of a CPU core. A physical core is represented in OS
-    by multiple logical cores (lcores) if CPU multithreading is enabled.
+    """Representation of a logical CPU core.
+
+    A physical core is represented in the OS by multiple logical cores (lcores)
+    if CPU multithreading is enabled. When multithreading is disabled, their IDs are the same.
+
+    Attributes:
+        lcore: The logical core ID of a CPU core. It's the same as `core` with
+            multithreading disabled.
+        core: The physical core ID of a CPU core.
+        socket: The physical socket ID where the CPU resides.
+        node: The NUMA node ID where the CPU resides.
     """
 
     lcore: int
@@ -22,27 +46,36 @@ class LogicalCore(object):
     node: int
 
     def __int__(self) -> int:
+        """The CPU is best represented by the logical core, as that's what we configure in EAL."""
         return self.lcore
 
 
 class LogicalCoreList(object):
-    """
-    Convert these options into a list of logical core ids.
-    lcore_list=[LogicalCore1, LogicalCore2] - a list of LogicalCores
-    lcore_list=[0,1,2,3] - a list of int indices
-    lcore_list=['0','1','2-3'] - a list of str indices; ranges are supported
-    lcore_list='0,1,2-3' - a comma delimited str of indices; ranges are supported
-
-    The class creates a unified format used across the framework and allows
-    the user to use either a str representation (using str(instance) or directly
-    in f-strings) or a list representation (by accessing instance.lcore_list).
-    Empty lcore_list is allowed.
+    r"""A unified way to store :class:`LogicalCore`\s.
+
+    Create a unified format used across the framework and allow the user to use
+    either a :class:`str` representation (using ``str(instance)`` or directly in f-strings)
+    or a :class:`list` representation (by accessing the `lcore_list` property,
+    which stores logical core IDs).
     """
 
     _lcore_list: list[int]
     _lcore_str: str
 
     def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str):
+        """Process `lcore_list`, then sort.
+
+        There are four supported logical core list formats::
+
+            lcore_list=[LogicalCore1, LogicalCore2]  # a list of LogicalCores
+            lcore_list=[0,1,2,3]        # a list of int indices
+            lcore_list=['0','1','2-3']  # a list of str indices; ranges are supported
+            lcore_list='0,1,2-3'        # a comma delimited str of indices; ranges are supported
+
+        Args:
+            lcore_list: Various ways to represent multiple logical cores.
+                Empty `lcore_list` is allowed.
+        """
         self._lcore_list = []
         if isinstance(lcore_list, str):
             lcore_list = lcore_list.split(",")
@@ -60,6 +93,7 @@ def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str):
 
     @property
     def lcore_list(self) -> list[int]:
+        """The logical core IDs."""
         return self._lcore_list
 
     def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]:
@@ -89,28 +123,30 @@ def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]:
         return formatted_core_list
 
     def __str__(self) -> str:
+        """The consecutive ranges of logical core IDs."""
         return self._lcore_str
 
 
 @dataclasses.dataclass(slots=True, frozen=True)
 class LogicalCoreCount(object):
-    """
-    Define the number of logical cores to use.
-    If sockets is not None, socket_count is ignored.
-    """
+    """Define the number of logical cores per physical cores per sockets."""
 
+    #: Use this many logical cores per each physical core.
     lcores_per_core: int = 1
+    #: Use this many physical cores per each socket.
     cores_per_socket: int = 2
+    #: Use this many sockets.
     socket_count: int = 1
+    #: Use exactly these sockets. This takes precedence over `socket_count`,
+    #: so when `sockets` is not :data:`None`, `socket_count` is ignored.
     sockets: list[int] | None = None
 
 
 class LogicalCoreFilter(ABC):
-    """
-    Filter according to the input filter specifier. Each filter needs to be
-    implemented in a derived class.
-    This class only implements operations common to all filters, such as sorting
-    the list to be filtered beforehand.
+    """Common filtering class.
+
+    Each filter needs to be implemented in a subclass. This base class sorts the list of cores
+    and defines the filtering method, which must be implemented by subclasses.
     """
 
     _filter_specifier: LogicalCoreCount | LogicalCoreList
@@ -122,6 +158,17 @@ def __init__(
         filter_specifier: LogicalCoreCount | LogicalCoreList,
         ascending: bool = True,
     ):
+        """Filter according to the input filter specifier.
+
+        The input `lcore_list` is copied and sorted by physical core before filtering.
+        The list is copied so that the original is left intact.
+
+        Args:
+            lcore_list: The logical CPU cores to filter.
+            filter_specifier: Filter cores from `lcore_list` according to this filter.
+            ascending: Sort cores in ascending order (lowest to highest IDs). If :data:`False`,
+                sort in descending order.
+        """
         self._filter_specifier = filter_specifier
 
         # sorting by core is needed in case hyperthreading is enabled
@@ -132,31 +179,45 @@ def __init__(
 
     @abstractmethod
     def filter(self) -> list[LogicalCore]:
-        """
-        Use self._filter_specifier to filter self._lcores_to_filter
-        and return the list of filtered LogicalCores.
-        self._lcores_to_filter is a sorted copy of the original list,
-        so it may be modified.
+        r"""Filter the cores.
+
+        Use `self._filter_specifier` to filter `self._lcores_to_filter` and return
+        the filtered :class:`LogicalCore`\s.
+        `self._lcores_to_filter` is a sorted copy of the original list, so it may be modified.
+
+        Returns:
+            The filtered cores.
         """
 
 
 class LogicalCoreCountFilter(LogicalCoreFilter):
-    """
+    """Filter cores by specified counts.
+
     Filter the input list of LogicalCores according to specified rules:
-    Use cores from the specified number of sockets or from the specified socket ids.
-    If sockets is specified, it takes precedence over socket_count.
-    From each of those sockets, use only cores_per_socket of cores.
-    And for each core, use lcores_per_core of logical cores. Hypertheading
-    must be enabled for this to take effect.
-    If ascending is True, use cores with the lowest numerical id first
-    and continue in ascending order. If False, start with the highest
-    id and continue in descending order. This ordering affects which
-    sockets to consider first as well.
+
+        * The input `filter_specifier` is :class:`LogicalCoreCount`,
+        * Use cores from the specified number of sockets or from the specified socket ids,
+        * If `sockets` is specified, it takes precedence over `socket_count`,
+        * From each of those sockets, use only `cores_per_socket` of cores,
+        * And for each core, use `lcores_per_core` of logical cores. Hyperthreading
+          must be enabled for this to take effect.
     """
 
     _filter_specifier: LogicalCoreCount
 
     def filter(self) -> list[LogicalCore]:
+        """Filter the cores according to :class:`LogicalCoreCount`.
+
+        Start by filtering the allowed sockets. The cores matching the allowed sockets are returned.
+        The cores of each socket are stored in separate lists.
+
+        Then filter the allowed physical cores from those lists of cores per socket. When filtering
+        physical cores, store the desired number of logical cores per physical core which then
+        together constitute the final filtered list.
+
+        Returns:
+            The filtered cores.
+        """
         sockets_to_filter = self._filter_sockets(self._lcores_to_filter)
         filtered_lcores = []
         for socket_to_filter in sockets_to_filter:
@@ -166,24 +227,37 @@ def filter(self) -> list[LogicalCore]:
     def _filter_sockets(
         self, lcores_to_filter: Iterable[LogicalCore]
     ) -> ValuesView[list[LogicalCore]]:
-        """
-        Remove all lcores that don't match the specified socket(s).
-        If self._filter_specifier.sockets is not None, keep lcores from those sockets,
-        otherwise keep lcores from the first
-        self._filter_specifier.socket_count sockets.
+        """Filter a list of cores per each allowed socket.
+
+        The sockets may be specified in two ways, either a number or a specific list of sockets.
+        In case of a specific list, we just need to return the cores from those sockets.
+        If filtering by a socket count, we need to go through all cores and note which sockets
+        appear and only filter from the first n that appear.
+
+        Args:
+            lcores_to_filter: The cores to filter. These must be sorted by the physical core.
+
+        Returns:
+            A list of lists of logical CPU cores. Each list contains cores from one socket.
         """
         allowed_sockets: set[int] = set()
         socket_count = self._filter_specifier.socket_count
         if self._filter_specifier.sockets:
+            # when sockets in filter is specified, the sockets are already set
             socket_count = len(self._filter_specifier.sockets)
             allowed_sockets = set(self._filter_specifier.sockets)
 
+        # filter socket_count sockets from all sockets by checking the socket of each CPU
         filtered_lcores: dict[int, list[LogicalCore]] = {}
         for lcore in lcores_to_filter:
             if not self._filter_specifier.sockets:
+                # this is when sockets is not set, so we do the actual filtering
+                # when it is set, allowed_sockets is already defined and can't be changed
                 if len(allowed_sockets) < socket_count:
+                    # allowed_sockets is a set, so adding an existing socket won't re-add it
                     allowed_sockets.add(lcore.socket)
             if lcore.socket in allowed_sockets:
+                # group the lcores by socket; this makes further processing easier
                 if lcore.socket in filtered_lcores:
                     filtered_lcores[lcore.socket].append(lcore)
                 else:
@@ -200,12 +274,13 @@ def _filter_sockets(
     def _filter_cores_from_socket(
         self, lcores_to_filter: Iterable[LogicalCore]
     ) -> list[LogicalCore]:
-        """
-        Keep only the first self._filter_specifier.cores_per_socket cores.
-        In multithreaded environments, keep only
-        the first self._filter_specifier.lcores_per_core lcores of those cores.
-        """
+        """Filter a list of cores from the given socket.
+
+        Go through the cores and note how many logical cores per physical core have been filtered.
 
+        Returns:
+            The filtered logical CPU cores.
+        """
         # no need to use an ordered dict; since Python 3.7 the dict
         # insertion order is preserved (FIFO).
         lcore_count_per_core_map: dict[int, int] = {}
@@ -248,15 +323,21 @@ def _filter_cores_from_socket(
 
 
 class LogicalCoreListFilter(LogicalCoreFilter):
-    """
-    Filter the input list of Logical Cores according to the input list of
-    lcore indices.
-    An empty LogicalCoreList won't filter anything.
+    """Filter the logical CPU cores by logical CPU core IDs.
+
+    This is a simple filter that looks at logical CPU IDs and only keeps those that match.
+
+    The input filter is :class:`LogicalCoreList`. An empty LogicalCoreList won't filter anything.
     """
 
     _filter_specifier: LogicalCoreList
 
     def filter(self) -> list[LogicalCore]:
+        """Filter based on logical CPU core ID.
+
+        Returns:
+            The filtered logical CPU cores.
+        """
         if not len(self._filter_specifier.lcore_list):
             return self._lcores_to_filter
 
@@ -279,6 +360,17 @@ def lcore_filter(
     filter_specifier: LogicalCoreCount | LogicalCoreList,
     ascending: bool,
 ) -> LogicalCoreFilter:
+    """Factory for using the right filter with `filter_specifier`.
+
+    Args:
+        core_list: The logical CPU cores to filter.
+        filter_specifier: The filter to use.
+        ascending: Sort cores in ascending order (lowest to highest IDs). If :data:`False`,
+            sort in descending order.
+
+    Returns:
+        The filter matching `filter_specifier`.
+    """
     if isinstance(filter_specifier, LogicalCoreList):
         return LogicalCoreListFilter(core_list, filter_specifier, ascending)
     elif isinstance(filter_specifier, LogicalCoreCount):
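
A sketch tying the filters together on a made-up topology (one socket,
two physical cores, two logical cores each):

    from framework.testbed_model.cpu import (
        LogicalCore,
        LogicalCoreCount,
        lcore_filter,
    )

    lcores = [
        LogicalCore(lcore=0, core=0, socket=0, node=0),
        LogicalCore(lcore=1, core=0, socket=0, node=0),
        LogicalCore(lcore=2, core=1, socket=0, node=0),
        LogicalCore(lcore=3, core=1, socket=0, node=0),
    ]
    # One socket, two physical cores from it, and only the first lcore
    # of each physical core (skipping hyperthread siblings).
    spec = LogicalCoreCount(lcores_per_core=1, cores_per_socket=2, socket_count=1)
    filtered = lcore_filter(lcores, spec, ascending=True).filter()
    assert [int(lc) for lc in filtered] == [0, 2]
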
-- 
2.34.1



* [PATCH v5 15/23] dts: os session docstring update
  2023-11-06 17:15       ` [PATCH v5 00/23] dts: add dts api docs Juraj Linkeš
                           ` (13 preceding siblings ...)
  2023-11-06 17:15         ` [PATCH v5 14/23] dts: cpu " Juraj Linkeš
@ 2023-11-06 17:15         ` Juraj Linkeš
  2023-11-06 17:15         ` [PATCH v5 16/23] dts: posix and linux sessions " Juraj Linkeš
                           ` (8 subsequent siblings)
  23 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-06 17:15 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/os_session.py | 275 ++++++++++++++++------
 1 file changed, 208 insertions(+), 67 deletions(-)

diff --git a/dts/framework/testbed_model/os_session.py b/dts/framework/testbed_model/os_session.py
index 76e595a518..bad75d52e7 100644
--- a/dts/framework/testbed_model/os_session.py
+++ b/dts/framework/testbed_model/os_session.py
@@ -2,6 +2,29 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+"""OS-aware remote session.
+
+DPDK supports multiple different operating systems, meaning it can run on these different operating
+systems. This module defines the common API that OS-unaware layers use and translates the API into
+OS-aware calls/utility usage.
+
+Note:
+    Running commands with administrative privileges requires OS awareness. This is the only layer
+    that's aware of OS differences, so this is where non-privileged commands get converted
+    to privileged commands.
+
+Example:
+    A user wishes to remove a directory on
+    a remote :class:`~framework.testbed_model.sut_node.SutNode`.
+    The :class:`~framework.testbed_model.sut_node.SutNode` object isn't aware what OS the node
+    is running - it delegates the OS translation logic
+    to :attr:`~framework.testbed_model.node.Node.main_session`. The SUT node calls
+    :meth:`~OSSession.remove_remote_dir` with a generic, OS-unaware path and
+    the :attr:`~framework.testbed_model.node.Node.main_session` translates that
+    to ``rm -rf`` if the node's OS is Linux and other commands for other OSs.
+    It also translates the path to match the underlying OS.
+"""
+
 from abc import ABC, abstractmethod
 from collections.abc import Iterable
 from ipaddress import IPv4Interface, IPv6Interface
@@ -28,10 +51,16 @@
 
 
 class OSSession(ABC):
-    """
-    The OS classes create a DTS node remote session and implement OS specific
+    """OS-unaware to OS-aware translation API definition.
+
+    The OSSession classes create a remote session to a DTS node and implement OS-specific
     behavior. There are a few control methods implemented by the base class; the rest need
-    to be implemented by derived classes.
+    to be implemented by subclasses.
+
+    Attributes:
+        name: The name of the session.
+        remote_session: The remote session maintaining the connection to the node.
+        interactive_session: The interactive remote session maintaining the connection to the node.
     """
 
     _config: NodeConfiguration
@@ -46,6 +75,15 @@ def __init__(
         name: str,
         logger: DTSLOG,
     ):
+        """Initialize the OS-aware session.
+
+        Connect to the node right away and also create an interactive remote session.
+
+        Args:
+            node_config: The test run configuration of the node to connect to.
+            name: The name of the session.
+            logger: The logger instance this session will use.
+        """
         self._config = node_config
         self.name = name
         self._logger = logger
@@ -53,15 +91,15 @@ def __init__(
         self.interactive_session = create_interactive_session(node_config, logger)
 
     def close(self, force: bool = False) -> None:
-        """
-        Close the remote session.
+        """Close the underlying remote session.
+
+        Args:
+            force: Force the closure of the connection.
         """
         self.remote_session.close(force)
 
     def is_alive(self) -> bool:
-        """
-        Check whether the remote session is still responding.
-        """
+        """Check whether the underlying remote session is still responding."""
         return self.remote_session.is_alive()
 
     def send_command(
@@ -72,10 +110,23 @@ def send_command(
         verify: bool = False,
         env: dict | None = None,
     ) -> CommandResult:
-        """
-        An all-purpose API in case the command to be executed is already
-        OS-agnostic, such as when the path to the executed command has been
-        constructed beforehand.
+        """An all-purpose API for OS-agnostic commands.
+
+        This can be used to execute a portable command that runs the same way
+        on all operating systems, such as a Python invocation.
+
+        The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT`
+        environment variable configure the timeout of command execution.
+
+        Args:
+            command: The command to execute.
+            timeout: Wait at most this long in seconds to execute the command.
+            privileged: Whether to run the command with administrative privileges.
+            verify: If :data:`True`, check the exit code of the command.
+            env: A dictionary with environment variables to be used with the command execution.
+
+        Raises:
+            RemoteCommandExecutionError: If `verify` is :data:`True` and the command failed.
         """
         if privileged:
             command = self._get_privileged_command(command)
@@ -89,8 +140,20 @@ def create_interactive_shell(
         privileged: bool,
         app_args: str,
     ) -> InteractiveShellType:
-        """
-        See "create_interactive_shell" in SutNode
+        """Factory for interactive session handlers.
+
+        Instantiate `shell_cls` according to the remote OS specifics.
+
+        Args:
+            shell_cls: The class of the shell.
+            timeout: Timeout for reading output from the SSH channel. If no data is
+                received from the buffer within the timeout, an error is raised.
+            privileged: Whether to run the shell with administrative privileges.
+            app_args: The arguments to be passed to the application.
+
+        Returns:
+            An instance of the desired interactive application shell.
         """
         return shell_cls(
             self.interactive_session.session,
@@ -114,27 +177,42 @@ def _get_privileged_command(command: str) -> str:
 
     @abstractmethod
     def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePath:
-        """
-        Try to find DPDK remote dir in remote_dir.
+        """Try to find DPDK directory in `remote_dir`.
+
+        The directory is the one which is created after the extraction of the tarball. The files
+        are usually extracted into a directory starting with ``dpdk-``.
+
+        Returns:
+            The absolute path of the DPDK remote directory, empty path if not found.
         """
 
     @abstractmethod
     def get_remote_tmp_dir(self) -> PurePath:
-        """
-        Get the path of the temporary directory of the remote OS.
+        """Get the path of the temporary directory of the remote OS.
+
+        Returns:
+            The absolute path of the temporary directory.
         """
 
     @abstractmethod
     def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
-        """
-        Create extra environment variables needed for the target architecture. Get
-        information from the node if needed.
+        """Create extra environment variables needed for the target architecture.
+
+        Different architectures may require different configuration, such as setting 32-bit CFLAGS.
+
+        Returns:
+            A dictionary mapping environment variable names to their values.
         """
 
     @abstractmethod
     def join_remote_path(self, *args: str | PurePath) -> PurePath:
-        """
-        Join path parts using the path separator that fits the remote OS.
+        """Join path parts using the path separator that fits the remote OS.
+
+        Args:
+            args: Any number of paths to join.
+
+        Returns:
+            The resulting joined path.
         """
 
     @abstractmethod
@@ -143,13 +221,13 @@ def copy_from(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
-        """Copy a file from the remote Node to the local filesystem.
+        """Copy a file from the remote node to the local filesystem.
 
-        Copy source_file from the remote Node associated with this remote
-        session to destination_file on the local filesystem.
+        Copy `source_file` from the remote node associated with this remote
+        session to `destination_file` on the local filesystem.
 
         Args:
-            source_file: the file on the remote Node.
+            source_file: the file on the remote node.
             destination_file: a file or directory path on the local filesystem.
         """
 
@@ -159,14 +237,14 @@ def copy_to(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
-        """Copy a file from local filesystem to the remote Node.
+        """Copy a file from local filesystem to the remote node.
 
-        Copy source_file from local filesystem to destination_file
-        on the remote Node associated with this remote session.
+        Copy `source_file` from local filesystem to `destination_file`
+        on the remote node associated with this remote session.
 
         Args:
             source_file: the file on the local filesystem.
-            destination_file: a file or directory path on the remote Node.
+            destination_file: a file or directory path on the remote node.
         """
 
     @abstractmethod
@@ -176,8 +254,12 @@ def remove_remote_dir(
         recursive: bool = True,
         force: bool = True,
     ) -> None:
-        """
-        Remove remote directory, by default remove recursively and forcefully.
+        """Remove remote directory, by default remove recursively and forcefully.
+
+        Args:
+            remote_dir_path: The path of the directory to remove.
+            recursive: If :data:`True`, also remove all contents inside the directory.
+            force: If :data:`True`, ignore all warnings and try to remove at all costs.
         """
 
     @abstractmethod
@@ -186,9 +268,12 @@ def extract_remote_tarball(
         remote_tarball_path: str | PurePath,
         expected_dir: str | PurePath | None = None,
     ) -> None:
-        """
-        Extract remote tarball in place. If expected_dir is a non-empty string, check
-        whether the dir exists after extracting the archive.
+        """Extract remote tarball in its remote directory.
+
+        Args:
+            remote_tarball_path: The path of the tarball on the remote node.
+            expected_dir: If non-empty, check whether `expected_dir` exists after extracting
+                the archive.
         """
 
     @abstractmethod
@@ -201,69 +286,119 @@ def build_dpdk(
         rebuild: bool = False,
         timeout: float = SETTINGS.compile_timeout,
     ) -> None:
-        """
-        Build DPDK in the input dir with specified environment variables and meson
-        arguments.
+        """Build DPDK on the remote node.
+
+        An extracted DPDK tarball must be present on the node. The build consists of two steps::
+
+            meson setup <meson args> remote_dpdk_dir remote_dpdk_build_dir
+            ninja -C remote_dpdk_build_dir
+
+        The :option:`--compile-timeout` command line argument and the :envvar:`DTS_COMPILE_TIMEOUT`
+        environment variable configure the timeout of DPDK build.
+
+        Args:
+            env_vars: Use these environment variables when building DPDK.
+            meson_args: Use these meson arguments when building DPDK.
+            remote_dpdk_dir: The directory on the remote node where DPDK will be built.
+            remote_dpdk_build_dir: The target build directory on the remote node.
+            rebuild: If :data:`True`, do a subsequent build with ``meson configure`` instead
+                of ``meson setup``.
+            timeout: Wait at most this long in seconds for the build to execute.
         """
 
     @abstractmethod
     def get_dpdk_version(self, version_path: str | PurePath) -> str:
-        """
-        Inspect DPDK version on the remote node from version_path.
+        """Inspect the DPDK version on the remote node.
+
+        Args:
+            version_path: The path to the VERSION file containing the DPDK version.
+
+        Returns:
+            The DPDK version.
         """
 
     @abstractmethod
     def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
-        """
-        Compose a list of LogicalCores present on the remote node.
-        If use_first_core is False, the first physical core won't be used.
+        r"""Get the list of :class:`~framework.testbed_model.cpu.LogicalCore`\s on the remote node.
+
+        Args:
+            use_first_core: If :data:`False`, the first physical core won't be used.
+
+        Returns:
+            The logical cores present on the node.
         """
 
     @abstractmethod
     def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
-        """
-        Kill and cleanup all DPDK apps identified by dpdk_prefix_list. If
-        dpdk_prefix_list is empty, attempt to find running DPDK apps to kill and clean.
+        """Kill and cleanup all DPDK apps.
+
+        Args:
+            dpdk_prefix_list: Kill all apps identified by `dpdk_prefix_list`.
+                If `dpdk_prefix_list` is empty, attempt to find running DPDK apps to kill and clean.
         """
 
     @abstractmethod
     def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
-        """
-        Get the DPDK file prefix that will be used when running DPDK apps.
+        """Make OS-specific modification to the DPDK file prefix.
+
+        Args:
+            dpdk_prefix: The OS-unaware file prefix.
+
+        Returns:
+            The OS-specific file prefix.
         """
 
     @abstractmethod
-    def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
-        """
-        Get the node's Hugepage Size, configure the specified amount of hugepages
+    def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
+        """Configure hugepages on the node.
+
+        Get the node's hugepage size, configure the specified count of hugepages
         if needed and mount the hugepages if needed.
-        If force_first_numa is True, configure hugepages just on the first socket.
+
+        Args:
+            hugepage_count: Configure this many hugepages.
+            force_first_numa: If :data:`True`, configure hugepages just on the first socket.
         """
 
     @abstractmethod
     def get_compiler_version(self, compiler_name: str) -> str:
-        """
-        Get installed version of compiler used for DPDK
+        """Get installed version of compiler used for DPDK.
+
+        Args:
+            compiler_name: The name of the compiler executable.
+
+        Returns:
+            The compiler's version.
         """
 
     @abstractmethod
     def get_node_info(self) -> NodeInfo:
-        """
-        Collect information about the node
+        """Collect additional information about the node.
+
+        Returns:
+            Node information.
         """
 
     @abstractmethod
     def update_ports(self, ports: list[Port]) -> None:
-        """
-        Get additional information about ports:
-            Logical name (e.g. enp7s0) if applicable
-            Mac address
+        """Get additional information about ports from the operating system and update them.
+
+        The additional information is:
+
+            * Logical name (e.g. ``enp7s0``) if applicable,
+            * MAC address.
+
+        Args:
+            ports: The ports to update.
         """
 
     @abstractmethod
     def configure_port_state(self, port: Port, enable: bool) -> None:
-        """
-        Enable/disable port.
+        """Enable/disable `port` in the operating system.
+
+        Args:
+            port: The port to configure.
+            enable: If :data:`True`, enable the port, otherwise shut it down.
         """
 
     @abstractmethod
@@ -273,12 +408,18 @@ def configure_port_ip_address(
         port: Port,
         delete: bool,
     ) -> None:
-        """
-        Configure (add or delete) an IP address of the input port.
+        """Configure an IP address on `port` in the operating system.
+
+        Args:
+            address: The address to configure.
+            port: The port to configure.
+            delete: If :data:`True`, remove the IP address, otherwise configure it.
         """
 
     @abstractmethod
     def configure_ipv4_forwarding(self, enable: bool) -> None:
-        """
-        Enable IPv4 forwarding in the underlying OS.
+        """Enable IPv4 forwarding in the operating system.
+
+        Args:
+            enable: If :data:`True`, enable the forwarding, otherwise disable it.
         """
-- 
2.34.1
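
The delegation described in the module docstring boils down to the pattern in this
self-contained sketch; the class names mirror the patch, but the bodies are simplified
stand-ins rather than the real implementations:

    from abc import ABC, abstractmethod
    from pathlib import PurePath, PurePosixPath

    class OSSession(ABC):
        @abstractmethod
        def remove_remote_dir(self, remote_dir_path: str | PurePath) -> None:
            """Remove a remote directory in an OS-specific way."""

    class LinuxSession(OSSession):
        def remove_remote_dir(self, remote_dir_path: str | PurePath) -> None:
            # On Linux, the OS-unaware call translates to rm -rf.
            print(f"rm -rf {PurePosixPath(remote_dir_path)}")

    # OS-unaware layers only ever interact with the abstract API:
    main_session: OSSession = LinuxSession()
    main_session.remove_remote_dir("/tmp/dpdk-test")  # -> rm -rf /tmp/dpdk-test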


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v5 16/23] dts: posix and linux sessions docstring update
  2023-11-06 17:15       ` [PATCH v5 00/23] dts: add dts api docs Juraj Linkeš
                           ` (14 preceding siblings ...)
  2023-11-06 17:15         ` [PATCH v5 15/23] dts: os session " Juraj Linkeš
@ 2023-11-06 17:15         ` Juraj Linkeš
  2023-11-06 17:15         ` [PATCH v5 17/23] dts: node " Juraj Linkeš
                           ` (7 subsequent siblings)
  23 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-06 17:15 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/linux_session.py | 63 ++++++++++-----
 dts/framework/testbed_model/posix_session.py | 81 +++++++++++++++++---
 2 files changed, 113 insertions(+), 31 deletions(-)

diff --git a/dts/framework/testbed_model/linux_session.py b/dts/framework/testbed_model/linux_session.py
index f472bb8f0f..279954ff63 100644
--- a/dts/framework/testbed_model/linux_session.py
+++ b/dts/framework/testbed_model/linux_session.py
@@ -2,6 +2,13 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+"""Linux OS translator.
+
+Translate OS-unaware calls into Linux calls/utilities. Most Linux distributions are mostly
+compliant with POSIX standards, so this module only implements the parts that aren't covered
+by the POSIX translation layer.
+"""
+
 import json
 from ipaddress import IPv4Interface, IPv6Interface
 from typing import TypedDict, Union
@@ -17,43 +24,51 @@
 
 
 class LshwConfigurationOutput(TypedDict):
+    """The relevant parts of ``lshw``'s ``configuration`` section."""
+
+    #:
     link: str
 
 
 class LshwOutput(TypedDict):
-    """
-    A model of the relevant information from json lshw output, e.g.:
-    {
-    ...
-    "businfo" : "pci@0000:08:00.0",
-    "logicalname" : "enp8s0",
-    "version" : "00",
-    "serial" : "52:54:00:59:e1:ac",
-    ...
-    "configuration" : {
-      ...
-      "link" : "yes",
-      ...
-    },
-    ...
+    """A model of the relevant information from ``lshw``'s json output.
+
+    For example::
+
+        {
+        ...
+        "businfo" : "pci@0000:08:00.0",
+        "logicalname" : "enp8s0",
+        "version" : "00",
+        "serial" : "52:54:00:59:e1:ac",
+        ...
+        "configuration" : {
+          ...
+          "link" : "yes",
+          ...
+        },
+        ...
+        }
     """
 
+    #:
     businfo: str
+    #:
     logicalname: NotRequired[str]
+    #:
     serial: NotRequired[str]
+    #:
     configuration: LshwConfigurationOutput
 
 
 class LinuxSession(PosixSession):
-    """
-    The implementation of non-Posix compliant parts of Linux remote sessions.
-    """
+    """The implementation of non-Posix compliant parts of Linux."""
 
     @staticmethod
     def _get_privileged_command(command: str) -> str:
         return f"sudo -- sh -c '{command}'"
 
     def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
+        """Overrides :meth:`~.os_session.OSSession.get_remote_cpus`."""
         cpu_info = self.send_command("lscpu -p=CPU,CORE,SOCKET,NODE|grep -v \\#").stdout
         lcores = []
         for cpu_line in cpu_info.splitlines():
@@ -65,18 +80,20 @@ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
         return lcores
 
     def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
+        """Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`."""
         return dpdk_prefix
 
-    def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
+    def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
+        """Overrides :meth:`~.os_session.OSSession.setup_hugepages`."""
         self._logger.info("Getting Hugepage information.")
         hugepage_size = self._get_hugepage_size()
         hugepages_total = self._get_hugepages_total()
         self._numa_nodes = self._get_numa_nodes()
 
-        if force_first_numa or hugepages_total != hugepage_amount:
+        if force_first_numa or hugepages_total != hugepage_count:
             # when forcing numa, we need to clear existing hugepages regardless
             # of size, so they can be moved to the first numa node
-            self._configure_huge_pages(hugepage_amount, hugepage_size, force_first_numa)
+            self._configure_huge_pages(hugepage_count, hugepage_size, force_first_numa)
         else:
             self._logger.info("Hugepages already configured.")
         self._mount_huge_pages()
@@ -140,6 +157,7 @@ def _configure_huge_pages(
         )
 
     def update_ports(self, ports: list[Port]) -> None:
+        """Overrides :meth:`~.os_session.OSSession.update_ports`."""
         self._logger.debug("Gathering port info.")
         for port in ports:
             assert (
@@ -178,6 +196,7 @@ def _update_port_attr(
             )
 
     def configure_port_state(self, port: Port, enable: bool) -> None:
+        """Overrides :meth:`~.os_session.OSSession.configure_port_state`."""
         state = "up" if enable else "down"
         self.send_command(
             f"ip link set dev {port.logical_name} {state}", privileged=True
@@ -189,6 +208,7 @@ def configure_port_ip_address(
         port: Port,
         delete: bool,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.configure_port_ip_address`."""
         command = "del" if delete else "add"
         self.send_command(
             f"ip address {command} {address} dev {port.logical_name}",
@@ -197,5 +217,6 @@ def configure_port_ip_address(
         )
 
     def configure_ipv4_forwarding(self, enable: bool) -> None:
+        """Overrides :meth:`~.os_session.OSSession.configure_ipv4_forwarding`."""
         state = 1 if enable else 0
         self.send_command(f"sysctl -w net.ipv4.ip_forward={state}", privileged=True)
diff --git a/dts/framework/testbed_model/posix_session.py b/dts/framework/testbed_model/posix_session.py
index 1d1d5b1b26..a4824aa274 100644
--- a/dts/framework/testbed_model/posix_session.py
+++ b/dts/framework/testbed_model/posix_session.py
@@ -2,6 +2,15 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+"""POSIX compliant OS translator.
+
+Translates OS-unaware calls into POSIX compliant calls/utilities. POSIX is a set of standards
+for portability between Unix operating systems which not all Linux distributions
+(or the tools most frequently bundled with them) adhere to, though most are mostly compliant.
+This intermediate module implements the parts common to such mostly POSIX compliant
+distributions.
+"""
+
 import re
 from collections.abc import Iterable
 from pathlib import PurePath, PurePosixPath
@@ -15,13 +24,21 @@
 
 
 class PosixSession(OSSession):
-    """
-    An intermediary class implementing the Posix compliant parts of
-    Linux and other OS remote sessions.
-    """
+    """An intermediary class implementing the POSIX standard."""
 
     @staticmethod
     def combine_short_options(**opts: bool) -> str:
+        """Combine shell options into one argument.
+
+        These are options such as ``-x``, ``-v``, ``-f`` which are combined into ``-xvf``.
+
+        Args:
+            opts: The keys are option names (usually one letter) and the bool values indicate
+                whether to include the option in the resulting argument.
+
+        Returns:
+            The options combined into one argument.
+        """
         ret_opts = ""
         for opt, include in opts.items():
             if include:
@@ -33,17 +50,19 @@ def combine_short_options(**opts: bool) -> str:
         return ret_opts
 
     def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePosixPath:
+        """Overrides :meth:`~.os_session.OSSession.guess_dpdk_remote_dir`."""
         remote_guess = self.join_remote_path(remote_dir, "dpdk-*")
         result = self.send_command(f"ls -d {remote_guess} | tail -1")
         return PurePosixPath(result.stdout)
 
     def get_remote_tmp_dir(self) -> PurePosixPath:
+        """Overrides :meth:`~.os_session.OSSession.get_remote_tmp_dir`."""
         return PurePosixPath("/tmp")
 
     def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
-        """
-        Create extra environment variables needed for i686 arch build. Get information
-        from the node if needed.
+        """Overrides :meth:`~.os_session.OSSession.get_dpdk_build_env_vars`.
+
+        Supported architecture: ``i686``.
         """
         env_vars = {}
         if arch == Architecture.i686:
@@ -63,6 +82,7 @@ def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
         return env_vars
 
     def join_remote_path(self, *args: str | PurePath) -> PurePosixPath:
+        """Overrides :meth:`~.os_session.OSSession.join_remote_path`."""
         return PurePosixPath(*args)
 
     def copy_from(
@@ -70,6 +90,7 @@ def copy_from(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.copy_from`."""
         self.remote_session.copy_from(source_file, destination_file)
 
     def copy_to(
@@ -77,6 +98,7 @@ def copy_to(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.copy_to`."""
         self.remote_session.copy_to(source_file, destination_file)
 
     def remove_remote_dir(
@@ -85,6 +107,7 @@ def remove_remote_dir(
         recursive: bool = True,
         force: bool = True,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.remove_remote_dir`."""
         opts = PosixSession.combine_short_options(r=recursive, f=force)
         self.send_command(f"rm{opts} {remote_dir_path}")
 
@@ -93,6 +116,7 @@ def extract_remote_tarball(
         remote_tarball_path: str | PurePath,
         expected_dir: str | PurePath | None = None,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.extract_remote_tarball`."""
         self.send_command(
             f"tar xfm {remote_tarball_path} "
             f"-C {PurePosixPath(remote_tarball_path).parent}",
@@ -110,6 +134,7 @@ def build_dpdk(
         rebuild: bool = False,
         timeout: float = SETTINGS.compile_timeout,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.build_dpdk`."""
         try:
             if rebuild:
                 # reconfigure, then build
@@ -140,12 +165,14 @@ def build_dpdk(
             raise DPDKBuildError(f"DPDK build failed when doing '{e.command}'.")
 
     def get_dpdk_version(self, build_dir: str | PurePath) -> str:
+        """Overrides :meth:`~.os_session.OSSession.get_dpdk_version`."""
         out = self.send_command(
             f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True
         )
         return out.stdout
 
     def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
+        """Overrides :meth:`~.os_session.OSSession.kill_cleanup_dpdk_apps`."""
         self._logger.info("Cleaning up DPDK apps.")
         dpdk_runtime_dirs = self._get_dpdk_runtime_dirs(dpdk_prefix_list)
         if dpdk_runtime_dirs:
@@ -159,6 +186,14 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
     def _get_dpdk_runtime_dirs(
         self, dpdk_prefix_list: Iterable[str]
     ) -> list[PurePosixPath]:
+        """Find runtime directories DPDK apps are currently using.
+
+        Args:
+            dpdk_prefix_list: The prefixes DPDK apps were started with.
+
+        Returns:
+            The paths of DPDK apps' runtime dirs.
+        """
         prefix = PurePosixPath("/var", "run", "dpdk")
         if not dpdk_prefix_list:
             remote_prefixes = self._list_remote_dirs(prefix)
@@ -170,9 +205,13 @@ def _get_dpdk_runtime_dirs(
         return [PurePosixPath(prefix, dpdk_prefix) for dpdk_prefix in dpdk_prefix_list]
 
     def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None:
-        """
-        Return a list of directories of the remote_dir.
-        If remote_path doesn't exist, return None.
+        """Contents of remote_path.
+
+        Args:
+            remote_path: List the contents of this path.
+
+        Returns:
+            The directories in `remote_path`. If `remote_path` doesn't exist, return :data:`None`.
         """
         out = self.send_command(
             f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'"
@@ -183,6 +222,17 @@ def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None:
             return out.splitlines()
 
     def _get_dpdk_pids(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> list[int]:
+        """Find PIDs of running DPDK apps.
+
+        Look at each "config" file found in dpdk_runtime_dirs and find the PIDs of processes
+        that opened those file.
+
+        Args:
+            dpdk_runtime_dirs: The paths of DPDK apps' runtime dirs.
+
+        Returns:
+            The PIDs of running DPDK apps.
+        """
         pids = []
         pid_regex = r"p(\d+)"
         for dpdk_runtime_dir in dpdk_runtime_dirs:
@@ -203,6 +253,14 @@ def _remote_files_exists(self, remote_path: PurePath) -> bool:
     def _check_dpdk_hugepages(
         self, dpdk_runtime_dirs: Iterable[str | PurePath]
     ) -> None:
+        """Check there aren't any leftover hugepages.
+
+        If any hugepages are found, emit a warning. The hugepages are investigated in the
+        ``hugepage_info`` file of each directory in `dpdk_runtime_dirs`.
+
+        Args:
+            dpdk_runtime_dirs: The paths of DPDK apps' runtime dirs.
+        """
         for dpdk_runtime_dir in dpdk_runtime_dirs:
             hugepage_info = PurePosixPath(dpdk_runtime_dir, "hugepage_info")
             if self._remote_files_exists(hugepage_info):
@@ -220,9 +278,11 @@ def _remove_dpdk_runtime_dirs(
             self.remove_remote_dir(dpdk_runtime_dir)
 
     def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
+        """Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`."""
         return ""
 
     def get_compiler_version(self, compiler_name: str) -> str:
+        """Overrides :meth:`~.os_session.OSSession.get_compiler_version`."""
         match compiler_name:
             case "gcc":
                 return self.send_command(
@@ -240,6 +300,7 @@ def get_compiler_version(self, compiler_name: str) -> str:
                 raise ValueError(f"Unknown compiler {compiler_name}")
 
     def get_node_info(self) -> NodeInfo:
+        """Overrides :meth:`~.os_session.OSSession.get_node_info`."""
         os_release_info = self.send_command(
             "awk -F= '$1 ~ /^NAME$|^VERSION$/ {print $2}' /etc/os-release",
             SETTINGS.timeout,
-- 
2.34.1
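
To make the short option handling concrete, here's a standalone sketch of
combine_short_options() as documented above; the leading " -" handling is inferred
from the ``rm{opts}`` usage in remove_remote_dir:

    def combine_short_options(**opts: bool) -> str:
        # Collect the names of all options passed as True...
        ret_opts = ""
        for opt, include in opts.items():
            if include:
                ret_opts = f"{ret_opts}{opt}"
        # ...and prepend " -" so the result drops straight into a command line.
        if ret_opts:
            ret_opts = f" -{ret_opts}"
        return ret_opts

    opts = combine_short_options(r=True, f=True)
    print(f"rm{opts} /tmp/dpdk-test")  # rm -rf /tmp/dpdk-test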


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v5 17/23] dts: node docstring update
  2023-11-06 17:15       ` [PATCH v5 00/23] dts: add dts api docs Juraj Linkeš
                           ` (15 preceding siblings ...)
  2023-11-06 17:15         ` [PATCH v5 16/23] dts: posix and linux sessions " Juraj Linkeš
@ 2023-11-06 17:15         ` Juraj Linkeš
  2023-11-06 17:15         ` [PATCH v5 18/23] dts: sut and tg nodes " Juraj Linkeš
                           ` (6 subsequent siblings)
  23 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-06 17:15 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/node.py | 191 +++++++++++++++++++---------
 1 file changed, 131 insertions(+), 60 deletions(-)

diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index 7571e7b98d..abf86793a7 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -3,8 +3,13 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
-"""
-A node is a generic host that DTS connects to and manages.
+"""Common functionality for node management.
+
+A node is any host/server DTS connects to.
+
+The base class, :class:`Node`, provides functionality common to all nodes and is supposed
+to be extended by subclasses with functionality specific to each node type.
+The decorator :func:`Node.skip_setup` can be used without subclassing.
 """
 
 from abc import ABC
@@ -35,10 +40,22 @@
 
 
 class Node(ABC):
-    """
-    Basic class for node management. This class implements methods that
-    manage a node, such as information gathering (of CPU/PCI/NIC) and
-    environment setup.
+    """The base class for node management.
+
+    It shouldn't be instantiated, but rather subclassed.
+    It implements common methods to manage any node:
+
+        * Connection to the node,
+        * Hugepages setup.
+
+    Attributes:
+        main_session: The primary OS-aware remote session used to communicate with the node.
+        config: The node configuration.
+        name: The name of the node.
+        lcores: The list of logical cores that DTS can use on the node.
+            It's derived from logical cores present on the node and the test run configuration.
+        ports: The ports of this node specified in the test run configuration.
+        virtual_devices: The virtual devices used on the node.
     """
 
     main_session: OSSession
@@ -52,6 +69,17 @@ class Node(ABC):
     virtual_devices: list[VirtualDevice]
 
     def __init__(self, node_config: NodeConfiguration):
+        """Connect to the node and gather info during initialization.
+
+        Extra gathered information:
+
+        * The list of available logical CPUs. This is then filtered by
+          the ``lcores`` configuration in the YAML test run configuration file,
+        * Information about ports from the YAML test run configuration file.
+
+        Args:
+            node_config: The node's test run configuration.
+        """
         self.config = node_config
         self.name = node_config.name
         self._logger = getLogger(self.name)
@@ -60,7 +88,7 @@ def __init__(self, node_config: NodeConfiguration):
         self._logger.info(f"Connected to node: {self.name}")
 
         self._get_remote_cpus()
-        # filter the node lcores according to user config
+        # filter the node lcores according to the test run configuration
         self.lcores = LogicalCoreListFilter(
             self.lcores, LogicalCoreList(self.config.lcores)
         ).filter()
@@ -77,9 +105,14 @@ def _init_ports(self) -> None:
             self.configure_port_state(port)
 
     def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
-        """
-        Perform the execution setup that will be done for each execution
-        this node is part of.
+        """Execution setup steps.
+
+        Configure hugepages and call :meth:`_set_up_execution` where
+        the rest of the configuration steps (if any) are implemented.
+
+        Args:
+            execution_config: The execution test run configuration according to which
+                the setup steps will be taken.
         """
         self._setup_hugepages()
         self._set_up_execution(execution_config)
@@ -88,58 +121,74 @@ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
             self.virtual_devices.append(VirtualDevice(vdev))
 
     def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional execution setup steps for subclasses.
+
+        Subclasses should override this if they need to add additional execution setup steps.
         """
 
     def tear_down_execution(self) -> None:
-        """
-        Perform the execution teardown that will be done after each execution
-        this node is part of concludes.
+        """Execution teardown steps.
+
+        There are currently no execution teardown steps common to all DTS node types.
         """
         self.virtual_devices = []
         self._tear_down_execution()
 
     def _tear_down_execution(self) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional execution teardown steps for subclasses.
+
+        Subclasses should override this if they need to add additional execution teardown steps.
         """
 
     def set_up_build_target(
         self, build_target_config: BuildTargetConfiguration
     ) -> None:
-        """
-        Perform the build target setup that will be done for each build target
-        tested on this node.
+        """Build target setup steps.
+
+        There are currently no build target setup steps common to all DTS node types.
+
+        Args:
+            build_target_config: The build target test run configuration according to which
+                the setup steps will be taken.
         """
         self._set_up_build_target(build_target_config)
 
     def _set_up_build_target(
         self, build_target_config: BuildTargetConfiguration
     ) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional build target setup steps for subclasses.
+
+        Subclasses should override this if they need to add additional build target setup steps.
         """
 
     def tear_down_build_target(self) -> None:
-        """
-        Perform the build target teardown that will be done after each build target
-        tested on this node.
+        """Build target teardown steps.
+
+        There are currently no build target teardown steps common to all DTS node types.
         """
         self._tear_down_build_target()
 
     def _tear_down_build_target(self) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional build target teardown steps for subclasses.
+
+        Subclasses should override this if they need to add additional build target teardown steps.
         """
 
     def create_session(self, name: str) -> OSSession:
-        """
-        Create and return a new OSSession tailored to the remote OS.
+        """Create and return a new OS-aware remote session.
+
+        The returned session won't be used by the node creating it. The session must be used by
+        the caller. The session will be maintained for the entire lifecycle of the node object,
+        at the end of which the session will be cleaned up automatically.
+
+        Note:
+            Any number of these supplementary sessions may be created.
+
+        Args:
+            name: The name of the session.
+
+        Returns:
+            A new OS-aware remote session.
         """
         session_name = f"{self.name} {name}"
         connection = create_session(
@@ -157,19 +206,19 @@ def create_interactive_shell(
         privileged: bool = False,
         app_args: str = "",
     ) -> InteractiveShellType:
-        """Create a handler for an interactive session.
+        """Factory for interactive session handlers.
 
-        Instantiate shell_cls according to the remote OS specifics.
+        Instantiate `shell_cls` according to the remote OS specifics.
 
         Args:
             shell_cls: The class of the shell.
-            timeout: Timeout for reading output from the SSH channel. If you are
-                reading from the buffer and don't receive any data within the timeout
-                it will throw an error.
+            timeout: Timeout for reading output from the SSH channel. If no data is received
+                from the buffer within the timeout, an error is raised.
             privileged: Whether to run the shell with administrative privileges.
             app_args: The arguments to be passed to the application.
+
         Returns:
-            Instance of the desired interactive application.
+            An instance of the desired interactive application shell.
         """
         if not shell_cls.dpdk_app:
             shell_cls.path = self.main_session.join_remote_path(shell_cls.path)
@@ -186,14 +235,22 @@ def filter_lcores(
         filter_specifier: LogicalCoreCount | LogicalCoreList,
         ascending: bool = True,
     ) -> list[LogicalCore]:
-        """
-        Filter the LogicalCores found on the Node according to
-        a LogicalCoreCount or a LogicalCoreList.
+        """Filter the node's logical cores that DTS can use.
+
+        Logical cores that DTS can use are the ones that are present on the node, but filtered
+        according to the test run configuration. The `filter_specifier` will filter cores from
+        those logical cores.
+
+        Args:
+            filter_specifier: Two different filters can be used, one that specifies the number
+                of logical cores per core, cores per socket and the number of sockets,
+                and another one that specifies a logical core list.
+            ascending: If :data:`True`, use cores with the lowest numerical id first and continue
+                in ascending order. If :data:`False`, start with the highest id and continue
+                in descending order. This ordering affects which sockets to consider first as well.
 
-        If ascending is True, use cores with the lowest numerical id first
-        and continue in ascending order. If False, start with the highest
-        id and continue in descending order. This ordering affects which
-        sockets to consider first as well.
+        Returns:
+            The filtered logical cores.
         """
         self._logger.debug(f"Filtering {filter_specifier} from {self.lcores}.")
         return lcore_filter(
@@ -203,17 +260,14 @@ def filter_lcores(
         ).filter()
 
     def _get_remote_cpus(self) -> None:
-        """
-        Scan CPUs in the remote OS and store a list of LogicalCores.
-        """
+        """Scan CPUs in the remote OS and store a list of LogicalCores."""
         self._logger.info("Getting CPU information.")
         self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
 
     def _setup_hugepages(self) -> None:
-        """
-        Setup hugepages on the Node. Different architectures can supply different
-        amounts of memory for hugepages and numa-based hugepage allocation may need
-        to be considered.
+        """Setup hugepages on the node.
+
+        Configure the hugepages only if they're specified in the node's test run configuration.
         """
         if self.config.hugepages:
             self.main_session.setup_hugepages(
@@ -221,8 +275,11 @@ def _setup_hugepages(self) -> None:
             )
 
     def configure_port_state(self, port: Port, enable: bool = True) -> None:
-        """
-        Enable/disable port.
+        """Enable/disable `port`.
+
+        Args:
+            port: The port to enable/disable.
+            enable: :data:`True` to enable, :data:`False` to disable.
         """
         self.main_session.configure_port_state(port, enable)
 
@@ -232,15 +289,17 @@ def configure_port_ip_address(
         port: Port,
         delete: bool = False,
     ) -> None:
-        """
-        Configure the IP address of a port on this node.
+        """Add an IP address to `port` on this node.
+
+        Args:
+            address: The IP address with mask in CIDR format. Can be either IPv4 or IPv6.
+            port: The port to which to add the address.
+            delete: If :data:`True`, will delete the address from the port instead of adding it.
         """
         self.main_session.configure_port_ip_address(address, port, delete)
 
     def close(self) -> None:
-        """
-        Close all connections and free other resources.
-        """
+        """Close all connections and free other resources."""
         if self.main_session:
             self.main_session.close()
         for session in self._other_sessions:
@@ -249,6 +308,11 @@ def close(self) -> None:
 
     @staticmethod
     def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
+        """Skip the decorated function.
+
+        The :option:`--skip-setup` command line argument and the :envvar:`DTS_SKIP_SETUP`
+        environment variable enable the decorator.
+        """
         if SETTINGS.skip_setup:
             return lambda *args: None
         else:
@@ -258,6 +322,13 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
 def create_session(
     node_config: NodeConfiguration, name: str, logger: DTSLOG
 ) -> OSSession:
+    """Factory for OS-aware sessions.
+
+    Args:
+        node_config: The test run configuration of the node to connect to.
+        name: The name of the session.
+        logger: The logger instance this session will use.
+
+    Returns:
+        The OS-aware session matching the node's OS, e.g. a :class:`LinuxSession` for Linux.
+    """
     match node_config.os:
         case OS.linux:
             return LinuxSession(node_config, name, logger)
-- 
2.34.1
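
The skip_setup decorator documented above follows a simple replace-with-a-no-op
pattern; a minimal standalone sketch, with a plain boolean standing in for
SETTINGS.skip_setup and a toy class standing in for a Node subclass:

    from typing import Any, Callable

    SKIP_SETUP = True  # stands in for SETTINGS.skip_setup

    def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
        # When setup is skipped, the decorated function becomes a no-op.
        if SKIP_SETUP:
            return lambda *args: None
        return func

    class FakeNode:
        @skip_setup
        def _copy_dpdk_tarball(self) -> None:
            print("copying tarball")

    FakeNode()._copy_dpdk_tarball()  # prints nothing when SKIP_SETUP is True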


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v5 18/23] dts: sut and tg nodes docstring update
  2023-11-06 17:15       ` [PATCH v5 00/23] dts: add dts api docs Juraj Linkeš
                           ` (16 preceding siblings ...)
  2023-11-06 17:15         ` [PATCH v5 17/23] dts: node " Juraj Linkeš
@ 2023-11-06 17:15         ` Juraj Linkeš
  2023-11-06 17:15         ` [PATCH v5 19/23] dts: base traffic generators " Juraj Linkeš
                           ` (5 subsequent siblings)
  23 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-06 17:15 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/sut_node.py | 219 ++++++++++++++++--------
 dts/framework/testbed_model/tg_node.py  |  42 +++--
 2 files changed, 170 insertions(+), 91 deletions(-)

diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 4e33cf02ea..b57d48fd31 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -3,6 +3,14 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+"""System under test (DPDK + hardware) node.
+
+A system under test (SUT) is the combination of DPDK
+and the hardware we're testing with DPDK (NICs, crypto and other devices).
+An SUT node is where this SUT runs.
+"""
+
+
 import os
 import tarfile
 import time
@@ -26,6 +34,11 @@
 
 
 class EalParameters(object):
+    """The environment abstraction layer parameters.
+
+    The EAL parameter string is obtained by converting the instance to a string.
+    """
+
     def __init__(
         self,
         lcore_list: LogicalCoreList,
@@ -35,21 +48,23 @@ def __init__(
         vdevs: list[VirtualDevice],
         other_eal_param: str,
     ):
-        """
-        Generate eal parameters character string;
-        :param lcore_list: the list of logical cores to use.
-        :param memory_channels: the number of memory channels to use.
-        :param prefix: set file prefix string, eg:
-                        prefix='vf'
-        :param no_pci: switch of disable PCI bus eg:
-                        no_pci=True
-        :param vdevs: virtual device list, eg:
-                        vdevs=[
-                            VirtualDevice('net_ring0'),
-                            VirtualDevice('net_ring1')
-                        ]
-        :param other_eal_param: user defined DPDK eal parameters, eg:
-                        other_eal_param='--single-file-segments'
+        """Initialize the parameters according to inputs.
+
+        Process the parameters into the format used on the command line.
+
+        Args:
+            lcore_list: The list of logical cores to use.
+            memory_channels: The number of memory channels to use.
+            prefix: Set the file prefix string with which to start DPDK, e.g.: ``prefix='vf'``.
+            no_pci: Switch to disable PCI bus e.g.: ``no_pci=True``.
+            vdevs: Virtual devices, e.g.::
+
+                vdevs=[
+                    VirtualDevice('net_ring0'),
+                    VirtualDevice('net_ring1')
+                ]
+            other_eal_param: User defined DPDK EAL parameters, e.g.:
+                ``other_eal_param='--single-file-segments'``
         """
         self._lcore_list = f"-l {lcore_list}"
         self._memory_channels = f"-n {memory_channels}"
@@ -61,6 +76,7 @@ def __init__(
         self._other_eal_param = other_eal_param
 
     def __str__(self) -> str:
+        """Create the EAL string."""
         return (
             f"{self._lcore_list} "
             f"{self._memory_channels} "
@@ -72,11 +88,21 @@ def __str__(self) -> str:
 
 
 class SutNode(Node):
-    """
-    A class for managing connections to the System under Test, providing
-    methods that retrieve the necessary information about the node (such as
-    CPU, memory and NIC details) and configuration capabilities.
-    Another key capability is building DPDK according to given build target.
+    """The system under test node.
+
+    The SUT node extends :class:`Node` with DPDK specific features:
+
+        * DPDK build,
+        * Gathering of DPDK build info,
+        * The running of DPDK apps, interactively or one-time execution,
+        * DPDK apps cleanup.
+
+    The :option:`--tarball` command line argument and the :envvar:`DTS_DPDK_TARBALL`
+    environment variable configure the path to the DPDK tarball
+    or the git commit ID, tag ID or tree ID to test.
+
+    Attributes:
+        config: The SUT node configuration.
     """
 
     config: SutNodeConfiguration
@@ -93,6 +119,11 @@ class SutNode(Node):
     _compiler_version: str | None
 
     def __init__(self, node_config: SutNodeConfiguration):
+        """Extend the constructor with SUT node specifics.
+
+        Args:
+            node_config: The SUT node's test run configuration.
+        """
         super(SutNode, self).__init__(node_config)
         self._dpdk_prefix_list = []
         self._build_target_config = None
@@ -111,6 +142,12 @@ def __init__(self, node_config: SutNodeConfiguration):
 
     @property
     def _remote_dpdk_dir(self) -> PurePath:
+        """The remote DPDK dir.
+
+        This internal property should be set after extracting the DPDK tarball. If it's not set,
+        that implies the DPDK setup step has been skipped, in which case we can guess where
+        a previous build was located.
+        """
         if self.__remote_dpdk_dir is None:
             self.__remote_dpdk_dir = self._guess_dpdk_remote_dir()
         return self.__remote_dpdk_dir
@@ -121,6 +158,11 @@ def _remote_dpdk_dir(self, value: PurePath) -> None:
 
     @property
     def remote_dpdk_build_dir(self) -> PurePath:
+        """The remote DPDK build directory.
+
+        This is the directory where DPDK was built.
+        We assume it was built in a subdirectory of the extracted tarball.
+        """
         if self._build_target_config:
             return self.main_session.join_remote_path(
                 self._remote_dpdk_dir, self._build_target_config.name
@@ -130,6 +172,7 @@ def remote_dpdk_build_dir(self) -> PurePath:
 
     @property
     def dpdk_version(self) -> str:
+        """Last built DPDK version."""
         if self._dpdk_version is None:
             self._dpdk_version = self.main_session.get_dpdk_version(
                 self._remote_dpdk_dir
@@ -138,12 +181,14 @@ def dpdk_version(self) -> str:
 
     @property
     def node_info(self) -> NodeInfo:
+        """Additional node information."""
         if self._node_info is None:
             self._node_info = self.main_session.get_node_info()
         return self._node_info
 
     @property
     def compiler_version(self) -> str:
+        """The node's compiler version."""
         if self._compiler_version is None:
             if self._build_target_config is not None:
                 self._compiler_version = self.main_session.get_compiler_version(
@@ -158,6 +203,11 @@ def compiler_version(self) -> str:
         return self._compiler_version
 
     def get_build_target_info(self) -> BuildTargetInfo:
+        """Get additional build target information.
+
+        Returns:
+            The build target information.
+        """
         return BuildTargetInfo(
             dpdk_version=self.dpdk_version, compiler_version=self.compiler_version
         )
@@ -168,8 +218,9 @@ def _guess_dpdk_remote_dir(self) -> PurePath:
     def _set_up_build_target(
         self, build_target_config: BuildTargetConfiguration
     ) -> None:
-        """
-        Setup DPDK on the SUT node.
+        """Setup DPDK on the SUT node.
+
+        Additional build target setup steps on top of those in :class:`Node`.
         """
         # we want to ensure that dpdk_version and compiler_version is reset for new
         # build targets
@@ -182,9 +233,7 @@ def _set_up_build_target(
     def _configure_build_target(
         self, build_target_config: BuildTargetConfiguration
     ) -> None:
-        """
-        Populate common environment variables and set build target config.
-        """
+        """Populate common environment variables and set build target config."""
         self._env_vars = {}
         self._build_target_config = build_target_config
         self._env_vars.update(
@@ -199,9 +248,7 @@ def _configure_build_target(
 
     @Node.skip_setup
     def _copy_dpdk_tarball(self) -> None:
-        """
-        Copy to and extract DPDK tarball on the SUT node.
-        """
+        """Copy to and extract DPDK tarball on the SUT node."""
         self._logger.info("Copying DPDK tarball to SUT.")
         self.main_session.copy_to(SETTINGS.dpdk_tarball_path, self._remote_tmp_dir)
 
@@ -232,8 +279,9 @@ def _copy_dpdk_tarball(self) -> None:
 
     @Node.skip_setup
     def _build_dpdk(self) -> None:
-        """
-        Build DPDK. Uses the already configured target. Assumes that the tarball has
+        """Build DPDK.
+
+        Uses the already configured target. Assumes that the tarball has
         already been copied to and extracted on the SUT node.
         """
         self.main_session.build_dpdk(
@@ -244,15 +292,19 @@ def _build_dpdk(self) -> None:
         )
 
     def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePath:
-        """
-        Build one or all DPDK apps. Requires DPDK to be already built on the SUT node.
-        When app_name is 'all', build all example apps.
-        When app_name is any other string, tries to build that example app.
-        Return the directory path of the built app. If building all apps, return
-        the path to the examples directory (where all apps reside).
-        The meson_dpdk_args are keyword arguments
-        found in meson_option.txt in root DPDK directory. Do not use -D with them,
-        for example: enable_kmods=True.
+        """Build one or all DPDK apps.
+
+        Requires DPDK to be already built on the SUT node.
+
+        Args:
+            app_name: The name of the DPDK app to build.
+                When `app_name` is ``all``, build all example apps.
+            meson_dpdk_args: The arguments found in ``meson_options.txt`` in the root DPDK directory.
+                Do not use ``-D`` with them.
+
+        Returns:
+            The directory path of the built app. If building all apps, return
+            the path to the examples directory (where all apps reside).
         """
         self.main_session.build_dpdk(
             self._env_vars,
@@ -273,9 +325,7 @@ def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePa
         )
 
     def kill_cleanup_dpdk_apps(self) -> None:
-        """
-        Kill all dpdk applications on the SUT. Cleanup hugepages.
-        """
+        """Kill all dpdk applications on the SUT, then clean up hugepages."""
         if self._dpdk_kill_session and self._dpdk_kill_session.is_alive():
             # we can use the session if it exists and responds
             self._dpdk_kill_session.kill_cleanup_dpdk_apps(self._dpdk_prefix_list)
@@ -294,33 +344,34 @@ def create_eal_parameters(
         vdevs: list[VirtualDevice] | None = None,
         other_eal_param: str = "",
     ) -> "EalParameters":
-        """
-        Generate eal parameters character string;
-        :param lcore_filter_specifier: a number of lcores/cores/sockets to use
-                        or a list of lcore ids to use.
-                        The default will select one lcore for each of two cores
-                        on one socket, in ascending order of core ids.
-        :param ascending_cores: True, use cores with the lowest numerical id first
-                        and continue in ascending order. If False, start with the
-                        highest id and continue in descending order. This ordering
-                        affects which sockets to consider first as well.
-        :param prefix: set file prefix string, eg:
-                        prefix='vf'
-        :param append_prefix_timestamp: if True, will append a timestamp to
-                        DPDK file prefix.
-        :param no_pci: switch of disable PCI bus eg:
-                        no_pci=True
-        :param vdevs: virtual device list, eg:
-                        vdevs=[
-                            VirtualDevice('net_ring0'),
-                            VirtualDevice('net_ring1')
-                        ]
-        :param other_eal_param: user defined DPDK eal parameters, eg:
-                        other_eal_param='--single-file-segments'
-        :return: eal param string, eg:
-                '-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420';
-        """
+        """Compose the EAL parameters.
+
+        Process the list of cores and the DPDK prefix and pass that along with
+        the rest of the arguments.
 
+        Args:
+            lcore_filter_specifier: A number of lcores/cores/sockets to use
+                or a list of lcore ids to use.
+                The default will select one lcore for each of two cores
+                on one socket, in ascending order of core ids.
+            ascending_cores: Sort cores in ascending order (lowest to highest IDs).
+                If :data:`False`, sort in descending order.
+            prefix: Set the file prefix string with which to start DPDK, e.g.: ``prefix='vf'``.
+            append_prefix_timestamp: If :data:`True`, append a timestamp to the DPDK file prefix.
+            no_pci: Switch to disable the PCI bus, e.g.: ``no_pci=True``.
+            vdevs: Virtual devices, e.g.::
+
+                vdevs=[
+                    VirtualDevice('net_ring0'),
+                    VirtualDevice('net_ring1')
+                ]
+            other_eal_param: User-defined DPDK EAL parameters, e.g.:
+                ``other_eal_param='--single-file-segments'``.
+
+        Returns:
+            An EAL param string, such as
+            ``-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420``.
+        """
         lcore_list = LogicalCoreList(
             self.filter_lcores(lcore_filter_specifier, ascending_cores)
         )
@@ -346,14 +397,29 @@ def create_eal_parameters(
     def run_dpdk_app(
         self, app_path: PurePath, eal_args: "EalParameters", timeout: float = 30
     ) -> CommandResult:
-        """
-        Run DPDK application on the remote node.
+        """Run DPDK application on the remote node.
+
+        The application is not run interactively; the command that starts the application
+        is executed and the call then waits for it to finish execution.
+
+        Args:
+            app_path: The remote path to the DPDK application.
+            eal_args: EAL parameters to run the DPDK application with.
+            timeout: Wait at most this long in seconds to execute the command.
+
+        Returns:
+            The result of the DPDK app execution.
         """
         return self.main_session.send_command(
             f"{app_path} {eal_args}", timeout, privileged=True, verify=True
         )
 
     def configure_ipv4_forwarding(self, enable: bool) -> None:
+        """Enable/disable IPv4 forwarding on the node.
+
+        Args:
+            enable: If :data:`True`, enable the forwarding, otherwise disable it.
+        """
         self.main_session.configure_ipv4_forwarding(enable)
 
     def create_interactive_shell(
@@ -363,9 +429,13 @@ def create_interactive_shell(
         privileged: bool = False,
         eal_parameters: EalParameters | str | None = None,
     ) -> InteractiveShellType:
-        """Factory method for creating a handler for an interactive session.
+        """Extend the factory for interactive session handlers.
+
+        The extensions are SUT node specific:
 
-        Instantiate shell_cls according to the remote OS specifics.
+            * The default for `eal_parameters`,
+            * The interactive shell path `shell_cls.path` is prepended with the path to the
+              remote DPDK build directory for DPDK apps.
 
         Args:
             shell_cls: The class of the shell.
@@ -375,9 +445,10 @@ def create_interactive_shell(
             privileged: Whether to run the shell with administrative privileges.
             eal_parameters: List of EAL parameters to use to launch the app. If this
                 isn't provided or an empty string is passed, it will default to calling
-                create_eal_parameters().
+                :meth:`create_eal_parameters`.
+
         Returns:
-            Instance of the desired interactive application.
+            An instance of the desired interactive application shell.
         """
         if not eal_parameters:
             eal_parameters = self.create_eal_parameters()
diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
index 166eb8430e..69eb33ccb1 100644
--- a/dts/framework/testbed_model/tg_node.py
+++ b/dts/framework/testbed_model/tg_node.py
@@ -5,13 +5,8 @@
 
 """Traffic generator node.
 
-This is the node where the traffic generator resides.
-The distinction between a node and a traffic generator is as follows:
-A node is a host that DTS connects to. It could be a baremetal server,
-a VM or a container.
-A traffic generator is software running on the node.
-A traffic generator node is a node running a traffic generator.
-A node can be a traffic generator node as well as system under test node.
+A traffic generator (TG) generates traffic that's sent towards the SUT node.
+A TG node is where the TG runs.
 """
 
 from scapy.packet import Packet  # type: ignore[import]
@@ -24,13 +19,16 @@
 
 
 class TGNode(Node):
-    """Manage connections to a node with a traffic generator.
+    """The traffic generator node.
 
-    Apart from basic node management capabilities, the Traffic Generator node has
-    specialized methods for handling the traffic generator running on it.
+    The TG node extends :class:`Node` with TG-specific features:
 
-    Arguments:
-        node_config: The user configuration of the traffic generator node.
+        * Traffic generator initialization,
+        * The sending of traffic and receiving of packets,
+        * The sending of traffic without receiving packets.
+
+    Not all traffic generators are capable of capturing traffic, which is why there
+    must be a way to send traffic without capturing it.
 
     Attributes:
         traffic_generator: The traffic generator running on the node.
@@ -39,6 +37,13 @@ class TGNode(Node):
     traffic_generator: CapturingTrafficGenerator
 
     def __init__(self, node_config: TGNodeConfiguration):
+        """Extend the constructor with TG node specifics.
+
+        Initialize the traffic generator on the TG node.
+
+        Args:
+            node_config: The TG node's test run configuration.
+        """
         super(TGNode, self).__init__(node_config)
         self.traffic_generator = create_traffic_generator(
             self, node_config.traffic_generator
@@ -52,17 +57,17 @@ def send_packet_and_capture(
         receive_port: Port,
         duration: float = 1,
     ) -> list[Packet]:
-        """Send a packet, return received traffic.
+        """Send `packet`, return received traffic.
 
-        Send a packet on the send_port and then return all traffic captured
-        on the receive_port for the given duration. Also record the captured traffic
+        Send `packet` on `send_port` and then return all traffic captured
+        on `receive_port` for the given duration. Also record the captured traffic
         in a pcap file.
 
         Args:
             packet: The packet to send.
             send_port: The egress port on the TG node.
             receive_port: The ingress port in the TG node.
-            duration: Capture traffic for this amount of time after sending the packet.
+            duration: Capture traffic for this amount of time after sending `packet`.
 
         Returns:
              A list of received packets. May be empty if no packets are captured.
@@ -72,6 +77,9 @@ def send_packet_and_capture(
         )
 
     def close(self) -> None:
-        """Free all resources used by the node"""
+        """Free all resources used by the node.
+
+        This extends the superclass method with TG cleanup.
+        """
         self.traffic_generator.close()
         super(TGNode, self).close()
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
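
To illustrate how the documented SUT node methods fit together, here's a
minimal usage sketch (assuming a configured `sut_node` instance and the
framework's `VirtualDevice` import; all names are taken from the docstrings
above):

    # Build the helloworld example app, compose EAL parameters, then run it.
    app_path = sut_node.build_dpdk_app("helloworld")
    eal_params = sut_node.create_eal_parameters(
        prefix="vf",
        vdevs=[VirtualDevice("net_ring0"), VirtualDevice("net_ring1")],
        other_eal_param="--single-file-segments",
    )
    result = sut_node.run_dpdk_app(app_path, eal_params, timeout=30)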

* [PATCH v5 19/23] dts: base traffic generators docstring update
  2023-11-06 17:15       ` [PATCH v5 00/23] dts: add dts api docs Juraj Linkeš
                           ` (17 preceding siblings ...)
  2023-11-06 17:15         ` [PATCH v5 18/23] dts: sut and tg nodes " Juraj Linkeš
@ 2023-11-06 17:15         ` Juraj Linkeš
  2023-11-06 17:15         ` [PATCH v5 20/23] dts: scapy tg " Juraj Linkeš
                           ` (4 subsequent siblings)
  23 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-06 17:15 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 .../traffic_generator/__init__.py             | 22 ++++++++-
 .../capturing_traffic_generator.py            | 46 +++++++++++--------
 .../traffic_generator/traffic_generator.py    | 33 +++++++------
 3 files changed, 68 insertions(+), 33 deletions(-)

diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
index 11bfa1ee0f..51cca77da4 100644
--- a/dts/framework/testbed_model/traffic_generator/__init__.py
+++ b/dts/framework/testbed_model/traffic_generator/__init__.py
@@ -1,6 +1,19 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""DTS traffic generators.
+
+A traffic generator is capable of generating traffic and then monitoring returning traffic.
+A traffic generator may just count the number of received packets,
+or it may additionally capture individual packets.
+
+A traffic generator may be software running on generic hardware or it could be specialized hardware.
+
+The traffic generators that only count the number of received packets are suitable only for
+performance testing. In functional testing, we need to be able to dissect each arriving packet,
+so a capturing traffic generator is required.
+"""
+
 from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType
 from framework.exception import ConfigurationError
 from framework.testbed_model.node import Node
@@ -12,8 +25,15 @@
 def create_traffic_generator(
     tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig
 ) -> CapturingTrafficGenerator:
-    """A factory function for creating traffic generator object from user config."""
+    """The factory function for creating traffic generator objects from the test run configuration.
+
+    Args:
+        tg_node: The traffic generator node where the created traffic generator will be running.
+        traffic_generator_config: The traffic generator config.
 
+    Returns:
+        A traffic generator capable of capturing received packets.
+    """
     match traffic_generator_config.traffic_generator_type:
         case TrafficGeneratorType.SCAPY:
             return ScapyTrafficGenerator(tg_node, traffic_generator_config)
diff --git a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
index e521211ef0..b0a43ad003 100644
--- a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
+++ b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
@@ -23,19 +23,22 @@
 
 
 def _get_default_capture_name() -> str:
-    """
-    This is the function used for the default implementation of capture names.
-    """
     return str(uuid.uuid4())
 
 
 class CapturingTrafficGenerator(TrafficGenerator):
     """Capture packets after sending traffic.
 
-    A mixin interface which enables a packet generator to declare that it can capture
+    The intermediary interface which enables a packet generator to declare that it can capture
     packets and return them to the user.
 
+    Similarly to
+    :class:`~framework.testbed_model.traffic_generator.traffic_generator.TrafficGenerator`,
+    this class exposes the public methods specific to capturing traffic generators and defines
+    a private method that subclasses must implement with the traffic generation and capturing logic.
+
     The methods of capturing traffic generators obey the following workflow:
+
         1. send packets
         2. capture packets
         3. write the capture to a .pcap file
@@ -44,6 +47,7 @@ class CapturingTrafficGenerator(TrafficGenerator):
 
     @property
     def is_capturing(self) -> bool:
+        """This traffic generator can capture traffic."""
         return True
 
     def send_packet_and_capture(
@@ -54,11 +58,12 @@ def send_packet_and_capture(
         duration: float,
         capture_name: str = _get_default_capture_name(),
     ) -> list[Packet]:
-        """Send a packet, return received traffic.
+        """Send `packet` and capture received traffic.
+
+        Send `packet` on `send_port` and then return all traffic captured
+        on `receive_port` for the given `duration`.
 
-        Send a packet on the send_port and then return all traffic captured
-        on the receive_port for the given duration. Also record the captured traffic
-        in a pcap file.
+        The captured traffic is recorded in the `capture_name`.pcap file.
 
         Args:
             packet: The packet to send.
@@ -68,7 +73,7 @@ def send_packet_and_capture(
             capture_name: The name of the .pcap file where to store the capture.
 
         Returns:
-             A list of received packets. May be empty if no packets are captured.
+             The received packets. May be empty if no packets are captured.
         """
         return self.send_packets_and_capture(
             [packet], send_port, receive_port, duration, capture_name
@@ -82,11 +87,14 @@ def send_packets_and_capture(
         duration: float,
         capture_name: str = _get_default_capture_name(),
     ) -> list[Packet]:
-        """Send packets, return received traffic.
+        """Send `packets` and capture received traffic.
 
-        Send packets on the send_port and then return all traffic captured
-        on the receive_port for the given duration. Also record the captured traffic
-        in a pcap file.
+        Send `packets` on `send_port` and then return all traffic captured
+        on `receive_port` for the given `duration`.
+
+        The captured traffic is recorded in the `capture_name`.pcap file. The target directory
+        can be configured with the :option:`--output-dir` command line argument or
+        the :envvar:`DTS_OUTPUT_DIR` environment variable.
 
         Args:
             packets: The packets to send.
@@ -96,7 +104,7 @@ def send_packets_and_capture(
             capture_name: The name of the .pcap file where to store the capture.
 
         Returns:
-             A list of received packets. May be empty if no packets are captured.
+             The received packets. May be empty if no packets are captured.
         """
         self._logger.debug(get_packet_summaries(packets))
         self._logger.debug(
@@ -124,10 +132,12 @@ def _send_packets_and_capture(
         receive_port: Port,
         duration: float,
     ) -> list[Packet]:
-        """
-        The extended classes must implement this method which
-        sends packets on send_port and receives packets on the receive_port
-        for the specified duration. It must be able to handle no received packets.
+        """The implementation of :method:`send_packets_and_capture`.
+
+        The subclasses must implement this method which sends `packets` on `send_port`
+        and receives packets on `receive_port` for the specified `duration`.
+
+        It must be able to handle no received packets.
         """
 
     def _write_capture_from_packets(
diff --git a/dts/framework/testbed_model/traffic_generator/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
index ea7c3963da..ed396c6a2f 100644
--- a/dts/framework/testbed_model/traffic_generator/traffic_generator.py
+++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
@@ -22,7 +22,8 @@
 class TrafficGenerator(ABC):
     """The base traffic generator.
 
-    Defines the few basic methods that each traffic generator must implement.
+    Exposes the common public methods of all traffic generators and defines private methods
+    that subclasses must implement with the traffic generation logic.
     """
 
     _config: TrafficGeneratorConfig
@@ -30,6 +31,12 @@ class TrafficGenerator(ABC):
     _logger: DTSLOG
 
     def __init__(self, tg_node: Node, config: TrafficGeneratorConfig):
+        """Initialize the traffic generator.
+
+        Args:
+            tg_node: The traffic generator node where the created traffic generator will be running.
+            config: The traffic generator's test run configuration.
+        """
         self._config = config
         self._tg_node = tg_node
         self._logger = getLogger(
@@ -37,9 +44,9 @@ def __init__(self, tg_node: Node, config: TrafficGeneratorConfig):
         )
 
     def send_packet(self, packet: Packet, port: Port) -> None:
-        """Send a packet and block until it is fully sent.
+        """Send `packet` and block until it is fully sent.
 
-        What fully sent means is defined by the traffic generator.
+        Send `packet` on `port`, then wait until `packet` is fully sent.
 
         Args:
             packet: The packet to send.
@@ -48,9 +55,9 @@ def send_packet(self, packet: Packet, port: Port) -> None:
         self.send_packets([packet], port)
 
     def send_packets(self, packets: list[Packet], port: Port) -> None:
-        """Send packets and block until they are fully sent.
+        """Send `packets` and block until they are fully sent.
 
-        What fully sent means is defined by the traffic generator.
+        Send `packets` on `port`, then wait until `packets` are fully sent.
 
         Args:
             packets: The packets to send.
@@ -62,19 +69,17 @@ def send_packets(self, packets: list[Packet], port: Port) -> None:
 
     @abstractmethod
     def _send_packets(self, packets: list[Packet], port: Port) -> None:
-        """
-        The extended classes must implement this method which
-        sends packets on send_port. The method should block until all packets
-        are fully sent.
+        """The implementation of :method:`send_packets`.
+
+        The subclasses must implement this method which sends `packets` on `port`.
+        The method should block until all `packets` are fully sent.
+
+        What full sent means is defined by the traffic generator.
         """
 
     @property
     def is_capturing(self) -> bool:
-        """Whether this traffic generator can capture traffic.
-
-        Returns:
-            True if the traffic generator can capture traffic, False otherwise.
-        """
+        """This traffic generator can't capture traffic."""
         return False
 
     @abstractmethod
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
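
The public/private split described above can be sketched with a hypothetical
subclass (`DummyTrafficGenerator` is illustrative and not part of the patch;
the import paths are assumed from the diffstat's module layout):

    from scapy.packet import Packet  # type: ignore[import]

    from framework.testbed_model.traffic_generator.traffic_generator import (
        TrafficGenerator,
    )

    class DummyTrafficGenerator(TrafficGenerator):
        """Log instead of sending; only the private method needs implementing."""

        def _send_packets(self, packets: list[Packet], port) -> None:
            # A real implementation would hand `packets` to the underlying
            # tool here and block until they're fully sent.
            self._logger.info(f"Would send {len(packets)} packets on {port}.")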

* [PATCH v5 20/23] dts: scapy tg docstring update
  2023-11-06 17:15       ` [PATCH v5 00/23] dts: add dts api docs Juraj Linkeš
                           ` (18 preceding siblings ...)
  2023-11-06 17:15         ` [PATCH v5 19/23] dts: base traffic generators " Juraj Linkeš
@ 2023-11-06 17:15         ` Juraj Linkeš
  2023-11-06 17:15         ` [PATCH v5 21/23] dts: test suites " Juraj Linkeš
                           ` (3 subsequent siblings)
  23 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-06 17:15 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 .../testbed_model/traffic_generator/scapy.py  | 91 +++++++++++--------
 1 file changed, 54 insertions(+), 37 deletions(-)

diff --git a/dts/framework/testbed_model/traffic_generator/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py
index 51864b6e6b..d0fe03055a 100644
--- a/dts/framework/testbed_model/traffic_generator/scapy.py
+++ b/dts/framework/testbed_model/traffic_generator/scapy.py
@@ -2,14 +2,15 @@
 # Copyright(c) 2022 University of New Hampshire
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""Scapy traffic generator.
+"""The Scapy traffic generator.
 
-Traffic generator used for functional testing, implemented using the Scapy library.
+A traffic generator used for functional testing, implemented with
+`the Scapy library <https://scapy.readthedocs.io/en/latest/>`_.
 The traffic generator uses an XML-RPC server to run Scapy on the remote TG node.
 
-The XML-RPC server runs in an interactive remote SSH session running Python console,
-where we start the server. The communication with the server is facilitated with
-a local server proxy.
+The traffic generator uses the :mod:`xmlrpc.server` module to run an XML-RPC server
+in an interactive remote Python SSH session. The communication with the server is facilitated
+with a local server proxy from the :mod:`xmlrpc.client` module.
 """
 
 import inspect
@@ -69,20 +70,20 @@ def scapy_send_packets_and_capture(
     recv_iface: str,
     duration: float,
 ) -> list[bytes]:
-    """RPC function to send and capture packets.
+    """The RPC function to send and capture packets.
 
-    The function is meant to be executed on the remote TG node.
+    The function is meant to be executed on the remote TG node via the server proxy.
 
     Args:
         xmlrpc_packets: The packets to send. These need to be converted to
-            xmlrpc.client.Binary before sending to the remote server.
+            :class:`~xmlrpc.client.Binary` objects before sending to the remote server.
         send_iface: The logical name of the egress interface.
         recv_iface: The logical name of the ingress interface.
         duration: Capture for this amount of time, in seconds.
 
     Returns:
         A list of bytes. Each item in the list represents one packet, which needs
-            to be converted back upon transfer from the remote node.
+        to be converted back upon transfer from the remote node.
     """
     scapy_packets = [scapy.all.Packet(packet.data) for packet in xmlrpc_packets]
     sniffer = scapy.all.AsyncSniffer(
@@ -98,19 +99,15 @@ def scapy_send_packets_and_capture(
 def scapy_send_packets(
     xmlrpc_packets: list[xmlrpc.client.Binary], send_iface: str
 ) -> None:
-    """RPC function to send packets.
+    """The RPC function to send packets.
 
-    The function is meant to be executed on the remote TG node.
-    It doesn't return anything, only sends packets.
+    The function is meant to be executed on the remote TG node via the server proxy.
+    It only sends `xmlrpc_packets`, without capturing them.
 
     Args:
         xmlrpc_packets: The packets to send. These need to be converted to
-            xmlrpc.client.Binary before sending to the remote server.
+            :class:`~xmlrpc.client.Binary` objects before sending to the remote server.
         send_iface: The logical name of the egress interface.
-
-    Returns:
-        A list of bytes. Each item in the list represents one packet, which needs
-            to be converted back upon transfer from the remote node.
     """
     scapy_packets = [scapy.all.Packet(packet.data) for packet in xmlrpc_packets]
     scapy.all.sendp(scapy_packets, iface=send_iface, realtime=True, verbose=True)
@@ -130,11 +127,19 @@ def scapy_send_packets(
 
 
 class QuittableXMLRPCServer(SimpleXMLRPCServer):
-    """Basic XML-RPC server that may be extended
-    by functions serializable by the marshal module.
+    r"""Basic XML-RPC server.
+
+    The server may be augmented by functions serializable by the :mod:`marshal` module.
     """
 
     def __init__(self, *args, **kwargs):
+        """Extend the XML-RPC server initialization.
+
+        Args:
+            args: The positional arguments that will be passed to the superclass's constructor.
+            kwargs: The keyword arguments that will be passed to the superclass's constructor.
+                The `allow_none` argument will be set to ``True``.
+        """
         kwargs["allow_none"] = True
         super().__init__(*args, **kwargs)
         self.register_introspection_functions()
@@ -142,13 +147,12 @@ def __init__(self, *args, **kwargs):
         self.register_function(self.add_rpc_function)
 
     def quit(self) -> None:
+        """Quit the server."""
         self._BaseServer__shutdown_request = True
         return None
 
     def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> None:
-        """Add a function to the server.
-
-        This is meant to be executed remotely.
+        """Add a function to the server from the local server proxy.
 
         Args:
               name: The name of the function.
@@ -159,6 +163,11 @@ def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> N
         self.register_function(function)
 
     def serve_forever(self, poll_interval: float = 0.5) -> None:
+        """Extend the superclass method with an additional print.
+
+        When starting the server, the print gives us a clear string to expect in the
+        remote session's output, confirming that the method was executed on the XML-RPC server.
+        """
         print("XMLRPC OK")
         super().serve_forever(poll_interval)
 
@@ -166,19 +175,12 @@ def serve_forever(self, poll_interval: float = 0.5) -> None:
 class ScapyTrafficGenerator(CapturingTrafficGenerator):
     """Provides access to scapy functions via an RPC interface.
 
-    The traffic generator first starts an XML-RPC on the remote TG node.
-    Then it populates the server with functions which use the Scapy library
-    to send/receive traffic.
-
-    Any packets sent to the remote server are first converted to bytes.
-    They are received as xmlrpc.client.Binary objects on the server side.
-    When the server sends the packets back, they are also received as
-    xmlrpc.client.Binary object on the client side, are converted back to Scapy
-    packets and only then returned from the methods.
+    The class extends the base class with remote execution of Scapy functions.
 
-    Arguments:
-        tg_node: The node where the traffic generator resides.
-        config: The user configuration of the traffic generator.
+    Any packets sent to the remote server are first converted to bytes. They are received as
+    :class:`~xmlrpc.client.Binary` objects on the server side. When the server sends the packets
+    back, they are also received as :class:`~xmlrpc.client.Binary` objects on the client side, are
+    converted back to :class:`scapy.packet.Packet` objects and only then returned from the methods.
 
     Attributes:
         session: The exclusive interactive remote session created by the Scapy
@@ -192,6 +194,22 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator):
     _config: ScapyTrafficGeneratorConfig
 
     def __init__(self, tg_node: Node, config: ScapyTrafficGeneratorConfig):
+        """Extend the constructor with Scapy TG specifics.
+
+        The traffic generator first starts an XML-RPC server on the remote `tg_node`.
+        Then it populates the server with functions which use the Scapy library
+        to send/receive traffic:
+
+            * :func:`scapy_send_packets_and_capture`
+            * :func:`scapy_send_packets`
+
+        To enable verbose logging from the xmlrpc client, use the :option:`--verbose`
+        command line argument or the :envvar:`DTS_VERBOSE` environment variable.
+
+        Args:
+            tg_node: The node where the traffic generator resides.
+            config: The traffic generator's test run configuration.
+        """
         super().__init__(tg_node, config)
 
         assert (
@@ -237,10 +255,8 @@ def _start_xmlrpc_server_in_remote_python(self, listen_port: int) -> None:
             [line for line in src.splitlines() if not line.isspace() and line != ""]
         )
 
-        spacing = "\n" * 4
-
         # execute it in the python terminal
-        self.session.send_command(spacing + src + spacing)
+        self.session.send_command(src + "\n")
         self.session.send_command(
             f"server = QuittableXMLRPCServer(('0.0.0.0', {listen_port}));"
             f"server.serve_forever()",
@@ -274,6 +290,7 @@ def _send_packets_and_capture(
         return scapy_packets
 
     def close(self) -> None:
+        """Close the traffic generator."""
         try:
             self.rpc_server_proxy.quit()
         except ConnectionRefusedError:
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
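
The marshal-based function registration described above can be sketched from
the client side (the server address and port are illustrative;
`scapy_send_packets` is the module-level RPC function from the diff):

    import marshal
    import xmlrpc.client

    proxy = xmlrpc.client.ServerProxy("http://192.0.2.1:8000")
    # Serialize the function's code object; the server's add_rpc_function()
    # reconstructs it with the marshal module and registers it by name.
    function_bytes = xmlrpc.client.Binary(marshal.dumps(scapy_send_packets.__code__))
    proxy.add_rpc_function("scapy_send_packets", function_bytes)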

* [PATCH v5 21/23] dts: test suites docstring update
  2023-11-06 17:15       ` [PATCH v5 00/23] dts: add dts api docs Juraj Linkeš
                           ` (19 preceding siblings ...)
  2023-11-06 17:15         ` [PATCH v5 20/23] dts: scapy tg " Juraj Linkeš
@ 2023-11-06 17:15         ` Juraj Linkeš
  2023-11-06 17:16         ` [PATCH v5 22/23] dts: add doc generation dependencies Juraj Linkeš
                           ` (2 subsequent siblings)
  23 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-06 17:15 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/tests/TestSuite_hello_world.py | 16 +++++----
 dts/tests/TestSuite_os_udp.py      | 16 +++++----
 dts/tests/TestSuite_smoke_tests.py | 53 +++++++++++++++++++++++++++---
 3 files changed, 68 insertions(+), 17 deletions(-)

diff --git a/dts/tests/TestSuite_hello_world.py b/dts/tests/TestSuite_hello_world.py
index 7e3d95c0cf..662a8f8726 100644
--- a/dts/tests/TestSuite_hello_world.py
+++ b/dts/tests/TestSuite_hello_world.py
@@ -1,7 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2010-2014 Intel Corporation
 
-"""
+"""The DPDK hello world app test suite.
+
 Run the helloworld example app and verify it prints a message for each used core.
 No other EAL parameters apart from cores are used.
 """
@@ -15,22 +16,25 @@
 
 
 class TestHelloWorld(TestSuite):
+    """DPDK hello world app test suite."""
+
     def set_up_suite(self) -> None:
-        """
+        """Set up the test suite.
+
         Setup:
             Build the app we're about to test - helloworld.
         """
         self.app_helloworld_path = self.sut_node.build_dpdk_app("helloworld")
 
     def test_hello_world_single_core(self) -> None:
-        """
+        """Single core test case.
+
         Steps:
             Run the helloworld app on the first usable logical core.
         Verify:
             The app prints a message from the used core:
             "hello from core <core_id>"
         """
-
         # get the first usable core
         lcore_amount = LogicalCoreCount(1, 1, 1)
         lcores = LogicalCoreCountFilter(self.sut_node.lcores, lcore_amount).filter()
@@ -44,14 +48,14 @@ def test_hello_world_single_core(self) -> None:
         )
 
     def test_hello_world_all_cores(self) -> None:
-        """
+        """All cores test case.
+
         Steps:
             Run the helloworld app on all usable logical cores.
         Verify:
             The app prints a message from all used cores:
             "hello from core <core_id>"
         """
-
         # get the maximum logical core number
         eal_para = self.sut_node.create_eal_parameters(
             lcore_filter_specifier=LogicalCoreList(self.sut_node.lcores)
diff --git a/dts/tests/TestSuite_os_udp.py b/dts/tests/TestSuite_os_udp.py
index 9b5f39711d..f99c4d76e3 100644
--- a/dts/tests/TestSuite_os_udp.py
+++ b/dts/tests/TestSuite_os_udp.py
@@ -1,7 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
+"""Basic IPv4 OS routing test suite.
+
 Configure SUT node to route traffic from if1 to if2.
 Send a packet to the SUT node, verify it comes back on the second port on the TG node.
 """
@@ -13,22 +14,24 @@
 
 
 class TestOSUdp(TestSuite):
+    """IPv4 UDP OS routing test suite."""
+
     def set_up_suite(self) -> None:
-        """
+        """Set up the test suite.
+
         Setup:
             Configure SUT ports and SUT to route traffic from if1 to if2.
         """
-
         self.configure_testbed_ipv4()
 
     def test_os_udp(self) -> None:
-        """
+        """Basic UDP IPv4 traffic test case.
+
         Steps:
             Send a UDP packet.
         Verify:
             The packet with proper addresses arrives at the other TG port.
         """
-
         packet = Ether() / IP() / UDP()
 
         received_packets = self.send_packet_and_capture(packet)
@@ -38,7 +41,8 @@ def test_os_udp(self) -> None:
         self.verify_packets(expected_packet, received_packets)
 
     def tear_down_suite(self) -> None:
-        """
+        """Tear down the test suite.
+
         Teardown:
             Remove the SUT port configuration configured in setup.
         """
diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py
index 4a269df75b..36ff10a862 100644
--- a/dts/tests/TestSuite_smoke_tests.py
+++ b/dts/tests/TestSuite_smoke_tests.py
@@ -1,6 +1,17 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 University of New Hampshire
 
+"""Smoke test suite.
+
+Smoke tests are a class of tests used to validate a minimal set of important features.
+These are the most important features, without which (or when they're faulty) the software doesn't
+work properly. Thus, if any failure occurs while testing these features,
+there isn't much of a reason to continue testing, as the software is fundamentally broken.
+
+These tests don't have to include only DPDK tests, as the reason for failures could be
+in the infrastructure (a faulty link between NICs or a misconfiguration).
+"""
+
 import re
 
 from framework.config import PortConfig
@@ -11,13 +22,25 @@
 
 
 class SmokeTests(TestSuite):
+    """DPDK and infrastructure smoke test suite.
+
+    The test cases validate the most basic DPDK functionality needed for all other test suites.
+    The infrastructure also needs to be tested, as it is used by all other test suites.
+
+    Attributes:
+        is_blocking: This test suite will block the execution of all other test suites
+            in the build target after it.
+        nics_in_node: The NICs present on the SUT node.
+    """
+
     is_blocking = True
     # dicts in this list are expected to have two keys:
     # "pci_address" and "current_driver"
     nics_in_node: list[PortConfig] = []
 
     def set_up_suite(self) -> None:
-        """
+        """Set up the test suite.
+
         Setup:
             Set the build directory path and generate a list of NICs in the SUT node.
         """
@@ -25,7 +48,13 @@ def set_up_suite(self) -> None:
         self.nics_in_node = self.sut_node.config.ports
 
     def test_unit_tests(self) -> None:
-        """
+        """DPDK meson fast-tests unit tests.
+
+        The DPDK unit tests are basic tests that indicate regressions and other critical failures.
+        These need to be addressed before other testing.
+
+        The fast-tests unit tests are a subset with only the most basic tests.
+
         Test:
             Run the fast-test unit-test suite through meson.
         """
@@ -37,7 +66,14 @@ def test_unit_tests(self) -> None:
         )
 
     def test_driver_tests(self) -> None:
-        """
+        """DPDK meson driver-tests unit tests.
+
+        The DPDK unit tests are basic tests that indicate regressions and other critical failures.
+        These need to be addressed before other testing.
+
+        The driver-tests unit tests are a subset that test only drivers. These may be run
+        with virtual devices as well.
+
         Test:
             Run the driver-test unit-test suite through meson.
         """
@@ -63,7 +99,10 @@ def test_driver_tests(self) -> None:
         )
 
     def test_devices_listed_in_testpmd(self) -> None:
-        """
+        """Testpmd device discovery.
+
+        If the configured devices can't be found in testpmd, they can't be tested.
+
         Test:
             Uses testpmd driver to verify that devices have been found by testpmd.
         """
@@ -79,7 +118,11 @@ def test_devices_listed_in_testpmd(self) -> None:
             )
 
     def test_device_bound_to_driver(self) -> None:
-        """
+        """Device driver in OS.
+
+        The devices must be bound to the proper driver, otherwise they can't be used by DPDK
+        or the traffic generators.
+
         Test:
             Ensure that all drivers listed in the config are bound to the correct
             driver.
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
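
Putting the conventions above together, a new suite would follow this skeleton
(a sketch only; the `TestSuite` import path is assumed from the framework):

    """Example test suite.

    Describe what the suite covers here.
    """

    from framework.test_suite import TestSuite

    class TestExample(TestSuite):
        """Example test suite."""

        def set_up_suite(self) -> None:
            """Set up the test suite.

            Setup:
                Prepare anything the test cases need.
            """

        def test_example(self) -> None:
            """Example test case.

            Steps:
                Run the scenario under test.
            Verify:
                The expected outcome occurred.
            """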

* [PATCH v5 22/23] dts: add doc generation dependencies
  2023-11-06 17:15       ` [PATCH v5 00/23] dts: add dts api docs Juraj Linkeš
                           ` (20 preceding siblings ...)
  2023-11-06 17:15         ` [PATCH v5 21/23] dts: test suites " Juraj Linkeš
@ 2023-11-06 17:16         ` Juraj Linkeš
  2023-11-06 17:16         ` [PATCH v5 23/23] dts: add doc generation Juraj Linkeš
  2023-11-08 12:53         ` [PATCH v6 01/23] dts: code adjustments for " Juraj Linkeš
  23 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-06 17:16 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Sphinx imports every Python module when generating documentation from
docstrings, meaning all DTS dependencies, including the Python version,
must be satisfied.
By adding Sphinx to the DTS dependencies, we make sure that the proper
Python version and dependencies are used when Sphinx is executed.
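
For reference, the dependencies are added as an optional Poetry group, roughly
of this shape (a sketch; the exact packages and version constraints are in the
pyproject.toml diff below):

    [tool.poetry.group.docs]
    optional = true

    [tool.poetry.group.docs.dependencies]
    sphinx = "*"
    sphinx-rtd-theme = "*"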

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/poetry.lock    | 499 ++++++++++++++++++++++++++++++++++++++++++++-
 dts/pyproject.toml |   7 +
 2 files changed, 505 insertions(+), 1 deletion(-)

diff --git a/dts/poetry.lock b/dts/poetry.lock
index a734fa71f0..dea98f6913 100644
--- a/dts/poetry.lock
+++ b/dts/poetry.lock
@@ -1,5 +1,16 @@
 # This file is automatically @generated by Poetry 1.5.1 and should not be changed by hand.
 
+[[package]]
+name = "alabaster"
+version = "0.7.13"
+description = "A configurable sidebar-enabled Sphinx theme"
+optional = false
+python-versions = ">=3.6"
+files = [
+    {file = "alabaster-0.7.13-py3-none-any.whl", hash = "sha256:1ee19aca801bbabb5ba3f5f258e4422dfa86f82f3e9cefb0859b283cdd7f62a3"},
+    {file = "alabaster-0.7.13.tar.gz", hash = "sha256:a27a4a084d5e690e16e01e03ad2b2e552c61a65469419b907243193de1a84ae2"},
+]
+
 [[package]]
 name = "attrs"
 version = "23.1.0"
@@ -18,6 +29,23 @@ docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-
 tests = ["attrs[tests-no-zope]", "zope-interface"]
 tests-no-zope = ["cloudpickle", "hypothesis", "mypy (>=1.1.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
 
+[[package]]
+name = "babel"
+version = "2.13.1"
+description = "Internationalization utilities"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "Babel-2.13.1-py3-none-any.whl", hash = "sha256:7077a4984b02b6727ac10f1f7294484f737443d7e2e66c5e4380e41a3ae0b4ed"},
+    {file = "Babel-2.13.1.tar.gz", hash = "sha256:33e0952d7dd6374af8dbf6768cc4ddf3ccfefc244f9986d4074704f2fbd18900"},
+]
+
+[package.dependencies]
+setuptools = {version = "*", markers = "python_version >= \"3.12\""}
+
+[package.extras]
+dev = ["freezegun (>=1.0,<2.0)", "pytest (>=6.0)", "pytest-cov"]
+
 [[package]]
 name = "bcrypt"
 version = "4.0.1"
@@ -86,6 +114,17 @@ d = ["aiohttp (>=3.7.4)"]
 jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
 uvloop = ["uvloop (>=0.15.2)"]
 
+[[package]]
+name = "certifi"
+version = "2023.7.22"
+description = "Python package for providing Mozilla's CA Bundle."
+optional = false
+python-versions = ">=3.6"
+files = [
+    {file = "certifi-2023.7.22-py3-none-any.whl", hash = "sha256:92d6037539857d8206b8f6ae472e8b77db8058fec5937a1ef3f54304089edbb9"},
+    {file = "certifi-2023.7.22.tar.gz", hash = "sha256:539cc1d13202e33ca466e88b2807e29f4c13049d6d87031a3c110744495cb082"},
+]
+
 [[package]]
 name = "cffi"
 version = "1.15.1"
@@ -162,6 +201,105 @@ files = [
 [package.dependencies]
 pycparser = "*"
 
+[[package]]
+name = "charset-normalizer"
+version = "3.3.2"
+description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
+optional = false
+python-versions = ">=3.7.0"
+files = [
+    {file = "charset-normalizer-3.3.2.tar.gz", hash = "sha256:f30c3cb33b24454a82faecaf01b19c18562b1e89558fb6c56de4d9118a032fd5"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:25baf083bf6f6b341f4121c2f3c548875ee6f5339300e08be3f2b2ba1721cdd3"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:06435b539f889b1f6f4ac1758871aae42dc3a8c0e24ac9e60c2384973ad73027"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9063e24fdb1e498ab71cb7419e24622516c4a04476b17a2dab57e8baa30d6e03"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6897af51655e3691ff853668779c7bad41579facacf5fd7253b0133308cf000d"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1d3193f4a680c64b4b6a9115943538edb896edc190f0b222e73761716519268e"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cd70574b12bb8a4d2aaa0094515df2463cb429d8536cfb6c7ce983246983e5a6"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8465322196c8b4d7ab6d1e049e4c5cb460d0394da4a27d23cc242fbf0034b6b5"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a9a8e9031d613fd2009c182b69c7b2c1ef8239a0efb1df3f7c8da66d5dd3d537"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:beb58fe5cdb101e3a055192ac291b7a21e3b7ef4f67fa1d74e331a7f2124341c"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:e06ed3eb3218bc64786f7db41917d4e686cc4856944f53d5bdf83a6884432e12"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:2e81c7b9c8979ce92ed306c249d46894776a909505d8f5a4ba55b14206e3222f"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:572c3763a264ba47b3cf708a44ce965d98555f618ca42c926a9c1616d8f34269"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:fd1abc0d89e30cc4e02e4064dc67fcc51bd941eb395c502aac3ec19fab46b519"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-win32.whl", hash = "sha256:3d47fa203a7bd9c5b6cee4736ee84ca03b8ef23193c0d1ca99b5089f72645c73"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-win_amd64.whl", hash = "sha256:10955842570876604d404661fbccbc9c7e684caf432c09c715ec38fbae45ae09"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:802fe99cca7457642125a8a88a084cef28ff0cf9407060f7b93dca5aa25480db"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:573f6eac48f4769d667c4442081b1794f52919e7edada77495aaed9236d13a96"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:549a3a73da901d5bc3ce8d24e0600d1fa85524c10287f6004fbab87672bf3e1e"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f27273b60488abe721a075bcca6d7f3964f9f6f067c8c4c605743023d7d3944f"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1ceae2f17a9c33cb48e3263960dc5fc8005351ee19db217e9b1bb15d28c02574"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:65f6f63034100ead094b8744b3b97965785388f308a64cf8d7c34f2f2e5be0c4"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:753f10e867343b4511128c6ed8c82f7bec3bd026875576dfd88483c5c73b2fd8"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4a78b2b446bd7c934f5dcedc588903fb2f5eec172f3d29e52a9096a43722adfc"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e537484df0d8f426ce2afb2d0f8e1c3d0b114b83f8850e5f2fbea0e797bd82ae"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:eb6904c354526e758fda7167b33005998fb68c46fbc10e013ca97f21ca5c8887"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:deb6be0ac38ece9ba87dea880e438f25ca3eddfac8b002a2ec3d9183a454e8ae"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:4ab2fe47fae9e0f9dee8c04187ce5d09f48eabe611be8259444906793ab7cbce"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:80402cd6ee291dcb72644d6eac93785fe2c8b9cb30893c1af5b8fdd753b9d40f"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-win32.whl", hash = "sha256:7cd13a2e3ddeed6913a65e66e94b51d80a041145a026c27e6bb76c31a853c6ab"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-win_amd64.whl", hash = "sha256:663946639d296df6a2bb2aa51b60a2454ca1cb29835324c640dafb5ff2131a77"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:0b2b64d2bb6d3fb9112bafa732def486049e63de9618b5843bcdd081d8144cd8"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:ddbb2551d7e0102e7252db79ba445cdab71b26640817ab1e3e3648dad515003b"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:55086ee1064215781fff39a1af09518bc9255b50d6333f2e4c74ca09fac6a8f6"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8f4a014bc36d3c57402e2977dada34f9c12300af536839dc38c0beab8878f38a"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a10af20b82360ab00827f916a6058451b723b4e65030c5a18577c8b2de5b3389"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8d756e44e94489e49571086ef83b2bb8ce311e730092d2c34ca8f7d925cb20aa"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:90d558489962fd4918143277a773316e56c72da56ec7aa3dc3dbbe20fdfed15b"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6ac7ffc7ad6d040517be39eb591cac5ff87416c2537df6ba3cba3bae290c0fed"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:7ed9e526742851e8d5cc9e6cf41427dfc6068d4f5a3bb03659444b4cabf6bc26"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:8bdb58ff7ba23002a4c5808d608e4e6c687175724f54a5dade5fa8c67b604e4d"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_ppc64le.whl", hash = "sha256:6b3251890fff30ee142c44144871185dbe13b11bab478a88887a639655be1068"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_s390x.whl", hash = "sha256:b4a23f61ce87adf89be746c8a8974fe1c823c891d8f86eb218bb957c924bb143"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:efcb3f6676480691518c177e3b465bcddf57cea040302f9f4e6e191af91174d4"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-win32.whl", hash = "sha256:d965bba47ddeec8cd560687584e88cf699fd28f192ceb452d1d7ee807c5597b7"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-win_amd64.whl", hash = "sha256:96b02a3dc4381e5494fad39be677abcb5e6634bf7b4fa83a6dd3112607547001"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:95f2a5796329323b8f0512e09dbb7a1860c46a39da62ecb2324f116fa8fdc85c"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c002b4ffc0be611f0d9da932eb0f704fe2602a9a949d1f738e4c34c75b0863d5"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a981a536974bbc7a512cf44ed14938cf01030a99e9b3a06dd59578882f06f985"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3287761bc4ee9e33561a7e058c72ac0938c4f57fe49a09eae428fd88aafe7bb6"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:42cb296636fcc8b0644486d15c12376cb9fa75443e00fb25de0b8602e64c1714"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0a55554a2fa0d408816b3b5cedf0045f4b8e1a6065aec45849de2d6f3f8e9786"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:c083af607d2515612056a31f0a8d9e0fcb5876b7bfc0abad3ecd275bc4ebc2d5"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:87d1351268731db79e0f8e745d92493ee2841c974128ef629dc518b937d9194c"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:bd8f7df7d12c2db9fab40bdd87a7c09b1530128315d047a086fa3ae3435cb3a8"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:c180f51afb394e165eafe4ac2936a14bee3eb10debc9d9e4db8958fe36afe711"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:8c622a5fe39a48f78944a87d4fb8a53ee07344641b0562c540d840748571b811"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-win32.whl", hash = "sha256:db364eca23f876da6f9e16c9da0df51aa4f104a972735574842618b8c6d999d4"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-win_amd64.whl", hash = "sha256:86216b5cee4b06df986d214f664305142d9c76df9b6512be2738aa72a2048f99"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:6463effa3186ea09411d50efc7d85360b38d5f09b870c48e4600f63af490e56a"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:6c4caeef8fa63d06bd437cd4bdcf3ffefe6738fb1b25951440d80dc7df8c03ac"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:37e55c8e51c236f95b033f6fb391d7d7970ba5fe7ff453dad675e88cf303377a"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fb69256e180cb6c8a894fee62b3afebae785babc1ee98b81cdf68bbca1987f33"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ae5f4161f18c61806f411a13b0310bea87f987c7d2ecdbdaad0e94eb2e404238"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b2b0a0c0517616b6869869f8c581d4eb2dd83a4d79e0ebcb7d373ef9956aeb0a"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:45485e01ff4d3630ec0d9617310448a8702f70e9c01906b0d0118bdf9d124cf2"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:eb00ed941194665c332bf8e078baf037d6c35d7c4f3102ea2d4f16ca94a26dc8"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:2127566c664442652f024c837091890cb1942c30937add288223dc895793f898"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:a50aebfa173e157099939b17f18600f72f84eed3049e743b68ad15bd69b6bf99"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:4d0d1650369165a14e14e1e47b372cfcb31d6ab44e6e33cb2d4e57265290044d"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:923c0c831b7cfcb071580d3f46c4baf50f174be571576556269530f4bbd79d04"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:06a81e93cd441c56a9b65d8e1d043daeb97a3d0856d177d5c90ba85acb3db087"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-win32.whl", hash = "sha256:6ef1d82a3af9d3eecdba2321dc1b3c238245d890843e040e41e470ffa64c3e25"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-win_amd64.whl", hash = "sha256:eb8821e09e916165e160797a6c17edda0679379a4be5c716c260e836e122f54b"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:c235ebd9baae02f1b77bcea61bce332cb4331dc3617d254df3323aa01ab47bd4"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:5b4c145409bef602a690e7cfad0a15a55c13320ff7a3ad7ca59c13bb8ba4d45d"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:68d1f8a9e9e37c1223b656399be5d6b448dea850bed7d0f87a8311f1ff3dabb0"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:22afcb9f253dac0696b5a4be4a1c0f8762f8239e21b99680099abd9b2b1b2269"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e27ad930a842b4c5eb8ac0016b0a54f5aebbe679340c26101df33424142c143c"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1f79682fbe303db92bc2b1136016a38a42e835d932bab5b3b1bfcfbf0640e519"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b261ccdec7821281dade748d088bb6e9b69e6d15b30652b74cbbac25e280b796"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:122c7fa62b130ed55f8f285bfd56d5f4b4a5b503609d181f9ad85e55c89f4185"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:d0eccceffcb53201b5bfebb52600a5fb483a20b61da9dbc885f8b103cbe7598c"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:9f96df6923e21816da7e0ad3fd47dd8f94b2a5ce594e00677c0013018b813458"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:7f04c839ed0b6b98b1a7501a002144b76c18fb1c1850c8b98d458ac269e26ed2"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:34d1c8da1e78d2e001f363791c98a272bb734000fcef47a491c1e3b0505657a8"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:ff8fa367d09b717b2a17a052544193ad76cd49979c805768879cb63d9ca50561"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-win32.whl", hash = "sha256:aed38f6e4fb3f5d6bf81bfa990a07806be9d83cf7bacef998ab1a9bd660a581f"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-win_amd64.whl", hash = "sha256:b01b88d45a6fcb69667cd6d2f7a9aeb4bf53760d7fc536bf679ec94fe9f3ff3d"},
+    {file = "charset_normalizer-3.3.2-py3-none-any.whl", hash = "sha256:3e4d1f6587322d2788836a99c69062fbb091331ec940e02d12d179c1d53e25fc"},
+]
+
 [[package]]
 name = "click"
 version = "8.1.6"
@@ -232,6 +370,17 @@ ssh = ["bcrypt (>=3.1.5)"]
 test = ["pretend", "pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-xdist"]
 test-randomorder = ["pytest-randomly"]
 
+[[package]]
+name = "docutils"
+version = "0.18.1"
+description = "Docutils -- Python Documentation Utilities"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
+files = [
+    {file = "docutils-0.18.1-py2.py3-none-any.whl", hash = "sha256:23010f129180089fbcd3bc08cfefccb3b890b0050e1ca00c867036e9d161b98c"},
+    {file = "docutils-0.18.1.tar.gz", hash = "sha256:679987caf361a7539d76e584cbeddc311e3aee937877c87346f31debc63e9d06"},
+]
+
 [[package]]
 name = "fabric"
 version = "2.7.1"
@@ -252,6 +401,28 @@ pathlib2 = "*"
 pytest = ["mock (>=2.0.0,<3.0)", "pytest (>=3.2.5,<4.0)"]
 testing = ["mock (>=2.0.0,<3.0)"]
 
+[[package]]
+name = "idna"
+version = "3.4"
+description = "Internationalized Domain Names in Applications (IDNA)"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "idna-3.4-py3-none-any.whl", hash = "sha256:90b77e79eaa3eba6de819a0c442c0b4ceefc341a7a2ab77d7562bf49f425c5c2"},
+    {file = "idna-3.4.tar.gz", hash = "sha256:814f528e8dead7d329833b91c5faa87d60bf71824cd12a7530b5526063d02cb4"},
+]
+
+[[package]]
+name = "imagesize"
+version = "1.4.1"
+description = "Getting image size from png/jpeg/jpeg2000/gif file"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+files = [
+    {file = "imagesize-1.4.1-py2.py3-none-any.whl", hash = "sha256:0d8d18d08f840c19d0ee7ca1fd82490fdc3729b7ac93f49870406ddde8ef8d8b"},
+    {file = "imagesize-1.4.1.tar.gz", hash = "sha256:69150444affb9cb0d5cc5a92b3676f0b2fb7cd9ae39e947a5e11a36b4497cd4a"},
+]
+
 [[package]]
 name = "invoke"
 version = "1.7.3"
@@ -280,6 +451,23 @@ pipfile-deprecated-finder = ["pip-shims (>=0.5.2)", "pipreqs", "requirementslib"
 plugins = ["setuptools"]
 requirements-deprecated-finder = ["pip-api", "pipreqs"]
 
+[[package]]
+name = "jinja2"
+version = "3.1.2"
+description = "A very fast and expressive template engine."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"},
+    {file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"},
+]
+
+[package.dependencies]
+MarkupSafe = ">=2.0"
+
+[package.extras]
+i18n = ["Babel (>=2.7)"]
+
 [[package]]
 name = "jsonpatch"
 version = "1.33"
@@ -340,6 +528,65 @@ files = [
 [package.dependencies]
 referencing = ">=0.28.0"
 
+[[package]]
+name = "markupsafe"
+version = "2.1.3"
+description = "Safely add untrusted strings to HTML/XML markup."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "MarkupSafe-2.1.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:cd0f502fe016460680cd20aaa5a76d241d6f35a1c3350c474bac1273803893fa"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e09031c87a1e51556fdcb46e5bd4f59dfb743061cf93c4d6831bf894f125eb57"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:68e78619a61ecf91e76aa3e6e8e33fc4894a2bebe93410754bd28fce0a8a4f9f"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:65c1a9bcdadc6c28eecee2c119465aebff8f7a584dd719facdd9e825ec61ab52"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:525808b8019e36eb524b8c68acdd63a37e75714eac50e988180b169d64480a00"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:962f82a3086483f5e5f64dbad880d31038b698494799b097bc59c2edf392fce6"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:aa7bd130efab1c280bed0f45501b7c8795f9fdbeb02e965371bbef3523627779"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:c9c804664ebe8f83a211cace637506669e7890fec1b4195b505c214e50dd4eb7"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-win32.whl", hash = "sha256:10bbfe99883db80bdbaff2dcf681dfc6533a614f700da1287707e8a5d78a8431"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-win_amd64.whl", hash = "sha256:1577735524cdad32f9f694208aa75e422adba74f1baee7551620e43a3141f559"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:ad9e82fb8f09ade1c3e1b996a6337afac2b8b9e365f926f5a61aacc71adc5b3c"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3c0fae6c3be832a0a0473ac912810b2877c8cb9d76ca48de1ed31e1c68386575"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b076b6226fb84157e3f7c971a47ff3a679d837cf338547532ab866c57930dbee"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bfce63a9e7834b12b87c64d6b155fdd9b3b96191b6bd334bf37db7ff1fe457f2"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:338ae27d6b8745585f87218a3f23f1512dbf52c26c28e322dbe54bcede54ccb9"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e4dd52d80b8c83fdce44e12478ad2e85c64ea965e75d66dbeafb0a3e77308fcc"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:df0be2b576a7abbf737b1575f048c23fb1d769f267ec4358296f31c2479db8f9"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:5bbe06f8eeafd38e5d0a4894ffec89378b6c6a625ff57e3028921f8ff59318ac"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-win32.whl", hash = "sha256:dd15ff04ffd7e05ffcb7fe79f1b98041b8ea30ae9234aed2a9168b5797c3effb"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl", hash = "sha256:134da1eca9ec0ae528110ccc9e48041e0828d79f24121a1a146161103c76e686"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:8e254ae696c88d98da6555f5ace2279cf7cd5b3f52be2b5cf97feafe883b58d2"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cb0932dc158471523c9637e807d9bfb93e06a95cbf010f1a38b98623b929ef2b"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9402b03f1a1b4dc4c19845e5c749e3ab82d5078d16a2a4c2cd2df62d57bb0707"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ca379055a47383d02a5400cb0d110cef0a776fc644cda797db0c5696cfd7e18e"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:b7ff0f54cb4ff66dd38bebd335a38e2c22c41a8ee45aa608efc890ac3e3931bc"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:c011a4149cfbcf9f03994ec2edffcb8b1dc2d2aede7ca243746df97a5d41ce48"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:56d9f2ecac662ca1611d183feb03a3fa4406469dafe241673d521dd5ae92a155"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-win32.whl", hash = "sha256:8758846a7e80910096950b67071243da3e5a20ed2546e6392603c096778d48e0"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-win_amd64.whl", hash = "sha256:787003c0ddb00500e49a10f2844fac87aa6ce977b90b0feaaf9de23c22508b24"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:2ef12179d3a291be237280175b542c07a36e7f60718296278d8593d21ca937d4"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:2c1b19b3aaacc6e57b7e25710ff571c24d6c3613a45e905b1fde04d691b98ee0"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8afafd99945ead6e075b973fefa56379c5b5c53fd8937dad92c662da5d8fd5ee"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8c41976a29d078bb235fea9b2ecd3da465df42a562910f9022f1a03107bd02be"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d080e0a5eb2529460b30190fcfcc4199bd7f827663f858a226a81bc27beaa97e"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:69c0f17e9f5a7afdf2cc9fb2d1ce6aabdb3bafb7f38017c0b77862bcec2bbad8"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:504b320cd4b7eff6f968eddf81127112db685e81f7e36e75f9f84f0df46041c3"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:42de32b22b6b804f42c5d98be4f7e5e977ecdd9ee9b660fda1a3edf03b11792d"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-win32.whl", hash = "sha256:ceb01949af7121f9fc39f7d27f91be8546f3fb112c608bc4029aef0bab86a2a5"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-win_amd64.whl", hash = "sha256:1b40069d487e7edb2676d3fbdb2b0829ffa2cd63a2ec26c4938b2d34391b4ecc"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8023faf4e01efadfa183e863fefde0046de576c6f14659e8782065bcece22198"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6b2b56950d93e41f33b4223ead100ea0fe11f8e6ee5f641eb753ce4b77a7042b"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9dcdfd0eaf283af041973bff14a2e143b8bd64e069f4c383416ecd79a81aab58"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:05fb21170423db021895e1ea1e1f3ab3adb85d1c2333cbc2310f2a26bc77272e"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:282c2cb35b5b673bbcadb33a585408104df04f14b2d9b01d4c345a3b92861c2c"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:ab4a0df41e7c16a1392727727e7998a467472d0ad65f3ad5e6e765015df08636"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:7ef3cb2ebbf91e330e3bb937efada0edd9003683db6b57bb108c4001f37a02ea"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:0a4e4a1aff6c7ac4cd55792abf96c915634c2b97e3cc1c7129578aa68ebd754e"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-win32.whl", hash = "sha256:fec21693218efe39aa7f8599346e90c705afa52c5b31ae019b2e57e8f6542bb2"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-win_amd64.whl", hash = "sha256:3fd4abcb888d15a94f32b75d8fd18ee162ca0c064f35b11134be77050296d6ba"},
+    {file = "MarkupSafe-2.1.3.tar.gz", hash = "sha256:af598ed32d6ae86f1b747b82783958b1a4ab8f617b06fe68795c7f026abbdcad"},
+]
+
 [[package]]
 name = "mccabe"
 version = "0.7.0"
@@ -404,6 +651,17 @@ files = [
     {file = "mypy_extensions-1.0.0.tar.gz", hash = "sha256:75dbf8955dc00442a438fc4d0666508a9a97b6bd41aa2f0ffe9d2f2725af0782"},
 ]
 
+[[package]]
+name = "packaging"
+version = "23.2"
+description = "Core utilities for Python packages"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "packaging-23.2-py3-none-any.whl", hash = "sha256:8c491190033a9af7e1d931d0b5dacc2ef47509b34dd0de67ed209b5203fc88c7"},
+    {file = "packaging-23.2.tar.gz", hash = "sha256:048fb0e9405036518eaaf48a55953c750c11e1a1b68e0dd1a9d62ed0c092cfc5"},
+]
+
 [[package]]
 name = "paramiko"
 version = "3.2.0"
@@ -515,6 +773,20 @@ files = [
     {file = "pyflakes-2.5.0.tar.gz", hash = "sha256:491feb020dca48ccc562a8c0cbe8df07ee13078df59813b83959cbdada312ea3"},
 ]
 
+[[package]]
+name = "pygments"
+version = "2.16.1"
+description = "Pygments is a syntax highlighting package written in Python."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "Pygments-2.16.1-py3-none-any.whl", hash = "sha256:13fc09fa63bc8d8671a6d247e1eb303c4b343eaee81d861f3404db2935653692"},
+    {file = "Pygments-2.16.1.tar.gz", hash = "sha256:1daff0494820c69bc8941e407aa20f577374ee88364ee10a98fdbe0aece96e29"},
+]
+
+[package.extras]
+plugins = ["importlib-metadata"]
+
 [[package]]
 name = "pylama"
 version = "8.4.1"
@@ -632,6 +904,27 @@ files = [
 attrs = ">=22.2.0"
 rpds-py = ">=0.7.0"
 
+[[package]]
+name = "requests"
+version = "2.31.0"
+description = "Python HTTP for Humans."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "requests-2.31.0-py3-none-any.whl", hash = "sha256:58cd2187c01e70e6e26505bca751777aa9f2ee0b7f4300988b709f44e013003f"},
+    {file = "requests-2.31.0.tar.gz", hash = "sha256:942c5a758f98d790eaed1a29cb6eefc7ffb0d1cf7af05c3d2791656dbd6ad1e1"},
+]
+
+[package.dependencies]
+certifi = ">=2017.4.17"
+charset-normalizer = ">=2,<4"
+idna = ">=2.5,<4"
+urllib3 = ">=1.21.1,<3"
+
+[package.extras]
+socks = ["PySocks (>=1.5.6,!=1.5.7)"]
+use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"]
+
 [[package]]
 name = "rpds-py"
 version = "0.9.2"
@@ -753,6 +1046,22 @@ basic = ["ipython"]
 complete = ["cryptography (>=2.0)", "ipython", "matplotlib", "pyx"]
 docs = ["sphinx (>=3.0.0)", "sphinx_rtd_theme (>=0.4.3)", "tox (>=3.0.0)"]
 
+[[package]]
+name = "setuptools"
+version = "68.2.2"
+description = "Easily download, build, install, upgrade, and uninstall Python packages"
+optional = false
+python-versions = ">=3.8"
+files = [
+    {file = "setuptools-68.2.2-py3-none-any.whl", hash = "sha256:b454a35605876da60632df1a60f736524eb73cc47bbc9f3f1ef1b644de74fd2a"},
+    {file = "setuptools-68.2.2.tar.gz", hash = "sha256:4ac1475276d2f1c48684874089fefcd83bd7162ddaafb81fac866ba0db282a87"},
+]
+
+[package.extras]
+docs = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "pygments-github-lexers (==0.0.5)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-favicon", "sphinx-hoverxref (<2)", "sphinx-inline-tabs", "sphinx-lint", "sphinx-notfound-page (>=1,<2)", "sphinx-reredirects", "sphinxcontrib-towncrier"]
+testing = ["build[virtualenv]", "filelock (>=3.4.0)", "flake8-2020", "ini2toml[lite] (>=0.9)", "jaraco.develop (>=7.21)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "pip (>=19.1)", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-mypy (>=0.9.1)", "pytest-perf", "pytest-ruff", "pytest-timeout", "pytest-xdist", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"]
+testing-integration = ["build[virtualenv] (>=1.0.3)", "filelock (>=3.4.0)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "packaging (>=23.1)", "pytest", "pytest-enabler", "pytest-xdist", "tomli", "virtualenv (>=13.0.0)", "wheel"]
+
 [[package]]
 name = "six"
 version = "1.16.0"
@@ -775,6 +1084,177 @@ files = [
     {file = "snowballstemmer-2.2.0.tar.gz", hash = "sha256:09b16deb8547d3412ad7b590689584cd0fe25ec8db3be37788be3810cbf19cb1"},
 ]
 
+[[package]]
+name = "sphinx"
+version = "6.2.1"
+description = "Python documentation generator"
+optional = false
+python-versions = ">=3.8"
+files = [
+    {file = "Sphinx-6.2.1.tar.gz", hash = "sha256:6d56a34697bb749ffa0152feafc4b19836c755d90a7c59b72bc7dfd371b9cc6b"},
+    {file = "sphinx-6.2.1-py3-none-any.whl", hash = "sha256:97787ff1fa3256a3eef9eda523a63dbf299f7b47e053cfcf684a1c2a8380c912"},
+]
+
+[package.dependencies]
+alabaster = ">=0.7,<0.8"
+babel = ">=2.9"
+colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""}
+docutils = ">=0.18.1,<0.20"
+imagesize = ">=1.3"
+Jinja2 = ">=3.0"
+packaging = ">=21.0"
+Pygments = ">=2.13"
+requests = ">=2.25.0"
+snowballstemmer = ">=2.0"
+sphinxcontrib-applehelp = "*"
+sphinxcontrib-devhelp = "*"
+sphinxcontrib-htmlhelp = ">=2.0.0"
+sphinxcontrib-jsmath = "*"
+sphinxcontrib-qthelp = "*"
+sphinxcontrib-serializinghtml = ">=1.1.5"
+
+[package.extras]
+docs = ["sphinxcontrib-websupport"]
+lint = ["docutils-stubs", "flake8 (>=3.5.0)", "flake8-simplify", "isort", "mypy (>=0.990)", "ruff", "sphinx-lint", "types-requests"]
+test = ["cython", "filelock", "html5lib", "pytest (>=4.6)"]
+
+[[package]]
+name = "sphinx-rtd-theme"
+version = "1.2.2"
+description = "Read the Docs theme for Sphinx"
+optional = false
+python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7"
+files = [
+    {file = "sphinx_rtd_theme-1.2.2-py2.py3-none-any.whl", hash = "sha256:6a7e7d8af34eb8fc57d52a09c6b6b9c46ff44aea5951bc831eeb9245378f3689"},
+    {file = "sphinx_rtd_theme-1.2.2.tar.gz", hash = "sha256:01c5c5a72e2d025bd23d1f06c59a4831b06e6ce6c01fdd5ebfe9986c0a880fc7"},
+]
+
+[package.dependencies]
+docutils = "<0.19"
+sphinx = ">=1.6,<7"
+sphinxcontrib-jquery = ">=4,<5"
+
+[package.extras]
+dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client", "wheel"]
+
+[[package]]
+name = "sphinxcontrib-applehelp"
+version = "1.0.7"
+description = "sphinxcontrib-applehelp is a Sphinx extension which outputs Apple help books"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_applehelp-1.0.7-py3-none-any.whl", hash = "sha256:094c4d56209d1734e7d252f6e0b3ccc090bd52ee56807a5d9315b19c122ab15d"},
+    {file = "sphinxcontrib_applehelp-1.0.7.tar.gz", hash = "sha256:39fdc8d762d33b01a7d8f026a3b7d71563ea3b72787d5f00ad8465bd9d6dfbfa"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-devhelp"
+version = "1.0.5"
+description = "sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp documents"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_devhelp-1.0.5-py3-none-any.whl", hash = "sha256:fe8009aed765188f08fcaadbb3ea0d90ce8ae2d76710b7e29ea7d047177dae2f"},
+    {file = "sphinxcontrib_devhelp-1.0.5.tar.gz", hash = "sha256:63b41e0d38207ca40ebbeabcf4d8e51f76c03e78cd61abe118cf4435c73d4212"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-htmlhelp"
+version = "2.0.4"
+description = "sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_htmlhelp-2.0.4-py3-none-any.whl", hash = "sha256:8001661c077a73c29beaf4a79968d0726103c5605e27db92b9ebed8bab1359e9"},
+    {file = "sphinxcontrib_htmlhelp-2.0.4.tar.gz", hash = "sha256:6c26a118a05b76000738429b724a0568dbde5b72391a688577da08f11891092a"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["html5lib", "pytest"]
+
+[[package]]
+name = "sphinxcontrib-jquery"
+version = "4.1"
+description = "Extension to include jQuery on newer Sphinx releases"
+optional = false
+python-versions = ">=2.7"
+files = [
+    {file = "sphinxcontrib-jquery-4.1.tar.gz", hash = "sha256:1620739f04e36a2c779f1a131a2dfd49b2fd07351bf1968ced074365933abc7a"},
+    {file = "sphinxcontrib_jquery-4.1-py2.py3-none-any.whl", hash = "sha256:f936030d7d0147dd026a4f2b5a57343d233f1fc7b363f68b3d4f1cb0993878ae"},
+]
+
+[package.dependencies]
+Sphinx = ">=1.8"
+
+[[package]]
+name = "sphinxcontrib-jsmath"
+version = "1.0.1"
+description = "A sphinx extension which renders display math in HTML via JavaScript"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "sphinxcontrib-jsmath-1.0.1.tar.gz", hash = "sha256:a9925e4a4587247ed2191a22df5f6970656cb8ca2bd6284309578f2153e0c4b8"},
+    {file = "sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl", hash = "sha256:2ec2eaebfb78f3f2078e73666b1415417a116cc848b72e5172e596c871103178"},
+]
+
+[package.extras]
+test = ["flake8", "mypy", "pytest"]
+
+[[package]]
+name = "sphinxcontrib-qthelp"
+version = "1.0.6"
+description = "sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp documents"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_qthelp-1.0.6-py3-none-any.whl", hash = "sha256:bf76886ee7470b934e363da7a954ea2825650013d367728588732c7350f49ea4"},
+    {file = "sphinxcontrib_qthelp-1.0.6.tar.gz", hash = "sha256:62b9d1a186ab7f5ee3356d906f648cacb7a6bdb94d201ee7adf26db55092982d"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-serializinghtml"
+version = "1.1.9"
+description = "sphinxcontrib-serializinghtml is a sphinx extension which outputs \"serialized\" HTML files (json and pickle)"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_serializinghtml-1.1.9-py3-none-any.whl", hash = "sha256:9b36e503703ff04f20e9675771df105e58aa029cfcbc23b8ed716019b7416ae1"},
+    {file = "sphinxcontrib_serializinghtml-1.1.9.tar.gz", hash = "sha256:0c64ff898339e1fac29abd2bf5f11078f3ec413cfe9c046d3120d7ca65530b54"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
 [[package]]
 name = "toml"
 version = "0.10.2"
@@ -819,6 +1299,23 @@ files = [
     {file = "typing_extensions-4.7.1.tar.gz", hash = "sha256:b75ddc264f0ba5615db7ba217daeb99701ad295353c45f9e95963337ceeeffb2"},
 ]
 
+[[package]]
+name = "urllib3"
+version = "2.0.7"
+description = "HTTP library with thread-safe connection pooling, file post, and more."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "urllib3-2.0.7-py3-none-any.whl", hash = "sha256:fdb6d215c776278489906c2f8916e6e7d4f5a9b602ccbcfdf7f016fc8da0596e"},
+    {file = "urllib3-2.0.7.tar.gz", hash = "sha256:c97dfde1f7bd43a71c8d2a58e369e9b2bf692d1334ea9f9cae55add7d0dd0f84"},
+]
+
+[package.extras]
+brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)"]
+secure = ["certifi", "cryptography (>=1.9)", "idna (>=2.0.0)", "pyopenssl (>=17.1.0)", "urllib3-secure-extra"]
+socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"]
+zstd = ["zstandard (>=0.18.0)"]
+
 [[package]]
 name = "warlock"
 version = "2.0.1"
@@ -837,4 +1334,4 @@ jsonschema = ">=4,<5"
 [metadata]
 lock-version = "2.0"
 python-versions = "^3.10"
-content-hash = "3501e97b3dadc19fe8ae179fe21b1edd2488001da9a8e86ff2bca0b86b99b89b"
+content-hash = "5faad2e53833e9b8a353ad3554c58de991801a9ebe8f9712fc9c839b35e7a789"
diff --git a/dts/pyproject.toml b/dts/pyproject.toml
index 3943c87c87..98df431b3b 100644
--- a/dts/pyproject.toml
+++ b/dts/pyproject.toml
@@ -35,6 +35,13 @@ pylama = "^8.4.1"
 pyflakes = "^2.5.0"
 toml = "^0.10.2"
 
+[tool.poetry.group.docs]
+optional = true
+
+[tool.poetry.group.docs.dependencies]
+sphinx = "<7"
+sphinx-rtd-theme = "^1.2.2"
+
 [build-system]
 requires = ["poetry-core>=1.0.0"]
 build-backend = "poetry.core.masonry.api"
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v5 23/23] dts: add doc generation
  2023-11-06 17:15       ` [PATCH v5 00/23] dts: add dts api docs Juraj Linkeš
                           ` (21 preceding siblings ...)
  2023-11-06 17:16         ` [PATCH v5 22/23] dts: add doc generation dependencies Juraj Linkeš
@ 2023-11-06 17:16         ` Juraj Linkeš
  2023-11-08 12:53         ` [PATCH v6 01/23] dts: code adjustments for " Juraj Linkeš
  23 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-06 17:16 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

The tool used to generate developer docs is Sphinx, which is already
used in DPDK. The same configuration is used to preserve style, but it's
been augmented with doc-generating configuration and a change to how the
sidebar displays the content hierarchy.

Sphinx generates the documentation from Python docstrings. The docstring
format is the Google format [0], which requires the sphinx.ext.napoleon
extension. The other extension, sphinx.ext.intersphinx, enables linking
to objects in external documentation, such as the Python documentation.
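
As an illustration (a minimal sketch; the function, its arguments and
its body are made up, not taken from this series), a docstring in the
Google format looks like this. napoleon parses the Args/Returns/Raises
sections, and intersphinx resolves the ValueError reference against the
Python documentation:

    def read_output(timeout: float) -> str:
        """Read output from a remote session.

        Args:
            timeout: How long to wait for output, in seconds.

        Returns:
            The captured output, or an empty string if nothing arrived.

        Raises:
            ValueError: If `timeout` is negative.
        """
        if timeout < 0:
            raise ValueError("timeout must be non-negative")
        # Placeholder body; a real implementation would poll the session.
        return ""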

There are two requirements for building DTS docs:
* The same Python version as DTS or higher, because Sphinx imports the
  code.
* Also the same Python packages as DTS, for the same reason.
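
Concretely, both requirements are satisfied by installing the optional
Poetry docs group and entering its environment before running the new
ninja target (the same steps this patch adds to the guide below):

    poetry install --with docs
    poetry shell
    ninja -C build dts-doc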

[0] https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 buildtools/call-sphinx-build.py | 29 ++++++++++++-------
 doc/api/meson.build             |  1 +
 doc/guides/conf.py              | 34 +++++++++++++++++++----
 doc/guides/meson.build          |  1 +
 doc/guides/tools/dts.rst        | 32 ++++++++++++++++++++-
 dts/doc/conf_yaml_schema.json   |  1 +
 dts/doc/index.rst               | 17 ++++++++++++
 dts/doc/meson.build             | 49 +++++++++++++++++++++++++++++++++
 dts/meson.build                 | 16 +++++++++++
 meson.build                     |  1 +
 10 files changed, 165 insertions(+), 16 deletions(-)
 create mode 120000 dts/doc/conf_yaml_schema.json
 create mode 100644 dts/doc/index.rst
 create mode 100644 dts/doc/meson.build
 create mode 100644 dts/meson.build

diff --git a/buildtools/call-sphinx-build.py b/buildtools/call-sphinx-build.py
index 39a60d09fa..c2f3acfb1d 100755
--- a/buildtools/call-sphinx-build.py
+++ b/buildtools/call-sphinx-build.py
@@ -3,37 +3,46 @@
 # Copyright(c) 2019 Intel Corporation
 #
 
+import argparse
 import sys
 import os
 from os.path import join
 from subprocess import run, PIPE, STDOUT
 from packaging.version import Version
 
-# assign parameters to variables
-(sphinx, version, src, dst, *extra_args) = sys.argv[1:]
+parser = argparse.ArgumentParser()
+parser.add_argument('sphinx')
+parser.add_argument('version')
+parser.add_argument('src')
+parser.add_argument('dst')
+parser.add_argument('--dts-root', default='.')
+args, extra_args = parser.parse_known_args()
 
 # set the version in environment for sphinx to pick up
-os.environ['DPDK_VERSION'] = version
+os.environ['DPDK_VERSION'] = args.version
+os.environ['DTS_ROOT'] = args.dts_root
 
 # for sphinx version >= 1.7 add parallelism using "-j auto"
-ver = run([sphinx, '--version'], stdout=PIPE,
+ver = run([args.sphinx, '--version'], stdout=PIPE,
           stderr=STDOUT).stdout.decode().split()[-1]
-sphinx_cmd = [sphinx] + extra_args
+sphinx_cmd = [args.sphinx] + extra_args
 if Version(ver) >= Version('1.7'):
     sphinx_cmd += ['-j', 'auto']
 
 # find all the files sphinx will process so we can write them as dependencies
 srcfiles = []
-for root, dirs, files in os.walk(src):
+for root, dirs, files in os.walk(args.src):
     srcfiles.extend([join(root, f) for f in files])
 
 # run sphinx, putting the html output in a "html" directory
-with open(join(dst, 'sphinx_html.out'), 'w') as out:
-    process = run(sphinx_cmd + ['-b', 'html', src, join(dst, 'html')],
-                  stdout=out)
+with open(join(args.dst, 'sphinx_html.out'), 'w') as out:
+    process = run(
+        sphinx_cmd + ['-b', 'html', args.src, join(args.dst, 'html')],
+        stdout=out
+    )
 
 # create a gcc format .d file giving all the dependencies of this doc build
-with open(join(dst, '.html.d'), 'w') as d:
+with open(join(args.dst, '.html.d'), 'w') as d:
     d.write('html: ' + ' '.join(srcfiles) + '\n')
 
 sys.exit(process.returncode)
diff --git a/doc/api/meson.build b/doc/api/meson.build
index 5b50692df9..92fe10d9e7 100644
--- a/doc/api/meson.build
+++ b/doc/api/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Luca Boccassi <bluca@debian.org>
 
+doc_api_build_dir = meson.current_build_dir()
 doxygen = find_program('doxygen', required: get_option('enable_docs'))
 
 if not doxygen.found()
diff --git a/doc/guides/conf.py b/doc/guides/conf.py
index 0f7ff5282d..169b1d24bc 100644
--- a/doc/guides/conf.py
+++ b/doc/guides/conf.py
@@ -7,10 +7,9 @@
 from sphinx import __version__ as sphinx_version
 from os import listdir
 from os import environ
-from os.path import basename
-from os.path import dirname
+from os.path import basename, dirname
 from os.path import join as path_join
-from sys import argv, stderr
+from sys import argv, stderr, path
 
 import configparser
 
@@ -24,6 +23,31 @@
           file=stderr)
     pass
 
+extensions = ['sphinx.ext.napoleon', 'sphinx.ext.intersphinx']
+
+# Python docstring options
+autodoc_default_options = {
+    'members': True,
+    'member-order': 'bysource',
+    'show-inheritance': True,
+}
+autodoc_class_signature = 'separated'
+autodoc_typehints = 'both'
+autodoc_typehints_format = 'short'
+autodoc_typehints_description_target = 'documented'
+napoleon_numpy_docstring = False
+napoleon_attr_annotations = True
+napoleon_preprocess_types = True
+add_module_names = False
+toc_object_entries = False
+intersphinx_mapping = {'python': ('https://docs.python.org/3', None)}
+
+# Sidebar config
+html_theme_options = {
+    'collapse_navigation': False,
+    'navigation_depth': -1,
+}
+
 stop_on_error = ('-W' in argv)
 
 project = 'Data Plane Development Kit'
@@ -35,8 +59,8 @@
 html_show_copyright = False
 highlight_language = 'none'
 
-release = environ.setdefault('DPDK_VERSION', "None")
-version = release
+path.append(environ.get('DTS_ROOT'))
+version = environ.setdefault('DPDK_VERSION', "None")
 
 master_doc = 'index'
 
diff --git a/doc/guides/meson.build b/doc/guides/meson.build
index 51f81da2e3..8933d75f6b 100644
--- a/doc/guides/meson.build
+++ b/doc/guides/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Intel Corporation
 
+doc_guides_source_dir = meson.current_source_dir()
 sphinx = find_program('sphinx-build', required: get_option('enable_docs'))
 
 if not sphinx.found()
diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
index b1e99107c3..2b96bb11f1 100644
--- a/doc/guides/tools/dts.rst
+++ b/doc/guides/tools/dts.rst
@@ -283,7 +283,10 @@ When adding code to the DTS framework, pay attention to the rest of the code
 and try not to divert much from it. The :ref:`DTS developer tools <dts_dev_tools>` will issue
 warnings when some of the basics are not met.
 
-The code must be properly documented with docstrings. The style must conform to
+The API documentation, which is a helpful reference when developing, may be accessed
+in the code directly or generated with the `API docs build steps <building_api_docs>`_.
+
+Speaking of which, the code must be properly documented with docstrings. The style must conform to
 the `Google style <https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings>`_.
 See an example of the style
 `here <https://www.sphinx-doc.org/en/master/usage/extensions/example_google.html>`_.
@@ -408,3 +411,30 @@ There are three tools used in DTS to help with code checking, style and formatti
 These three tools are all used in ``devtools/dts-check-format.sh``,
 the DTS code check and format script.
 Refer to the script for usage: ``devtools/dts-check-format.sh -h``.
+
+
+.. _building_api_docs:
+
+Building DTS API docs
+---------------------
+
+To build DTS API docs, install the dependencies with Poetry, then enter its shell:
+
+.. code-block:: console
+
+   poetry install --with docs
+   poetry shell
+
+The documentation is built using the standard DPDK build system. After executing the meson command
+and entering Poetry's shell, build the documentation with:
+
+.. code-block:: console
+
+   ninja -C build dts-doc
+
+The output is generated in ``build/doc/api/dts/html``.
+
+.. Note::
+
+   Make sure to fix any Sphinx warnings when adding or updating docstrings. Also make sure to run
+   the ``devtools/dts-check-format.sh`` script and address any issues it finds.
diff --git a/dts/doc/conf_yaml_schema.json b/dts/doc/conf_yaml_schema.json
new file mode 120000
index 0000000000..d89eb81b72
--- /dev/null
+++ b/dts/doc/conf_yaml_schema.json
@@ -0,0 +1 @@
+../framework/config/conf_yaml_schema.json
\ No newline at end of file
diff --git a/dts/doc/index.rst b/dts/doc/index.rst
new file mode 100644
index 0000000000..f5dcd553f2
--- /dev/null
+++ b/dts/doc/index.rst
@@ -0,0 +1,17 @@
+.. DPDK Test Suite documentation.
+
+Welcome to DPDK Test Suite's documentation!
+===========================================
+
+.. toctree::
+   :titlesonly:
+   :caption: Contents:
+
+   framework
+
+Indices and tables
+==================
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`
diff --git a/dts/doc/meson.build b/dts/doc/meson.build
new file mode 100644
index 0000000000..d2e8c19789
--- /dev/null
+++ b/dts/doc/meson.build
@@ -0,0 +1,49 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+sphinx = find_program('sphinx-build')
+sphinx_apidoc = find_program('sphinx-apidoc')
+
+if not sphinx.found() or not sphinx_apidoc.found()
+    subdir_done()
+endif
+
+dts_api_framework_dir = join_paths(dts_dir, 'framework')
+dts_api_build_dir = join_paths(doc_api_build_dir, 'dts')
+dts_api_src = custom_target('dts_api_src',
+        output: 'modules.rst',
+        command: ['SPHINX_APIDOC_OPTIONS=members,show-inheritance',
+            sphinx_apidoc, '--append-syspath', '--force',
+            '--module-first', '--separate', '-V', meson.project_version(),
+            '--output-dir', dts_api_build_dir, '--no-toc', '--implicit-namespaces',
+            dts_api_framework_dir],
+        build_by_default: false)
+doc_targets += dts_api_src
+doc_target_names += 'DTS_API_sphinx_sources'
+
+cp = find_program('cp')
+cp_index = custom_target('cp_index',
+        input: ['index.rst', 'conf_yaml_schema.json'],
+        output: 'index.rst',
+        depends: dts_api_src,
+        command: [cp, '--dereference', '@INPUT@', dts_api_build_dir],
+        build_by_default: false)
+doc_targets += cp_index
+doc_target_names += 'DTS_API_sphinx_index'
+
+extra_sphinx_args = ['-E', '-c', doc_guides_source_dir, '--dts-root', dts_dir]
+if get_option('werror')
+    extra_sphinx_args += '-W'
+endif
+
+htmldir = join_paths(get_option('datadir'), 'doc', 'dpdk')
+dts_api_html = custom_target('dts_api_html',
+        output: 'html',
+        depends: cp_index,
+        command: [sphinx_wrapper, sphinx, meson.project_version(),
+            dts_api_build_dir, dts_api_build_dir, extra_sphinx_args],
+        build_by_default: false,
+        install: false,
+        install_dir: htmldir)
+doc_targets += dts_api_html
+doc_target_names += 'DTS_API_HTML'
diff --git a/dts/meson.build b/dts/meson.build
new file mode 100644
index 0000000000..e8ce0f06ac
--- /dev/null
+++ b/dts/meson.build
@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+doc_targets = []
+doc_target_names = []
+dts_dir = meson.current_source_dir()
+
+subdir('doc')
+
+if doc_targets.length() == 0
+    message = 'No docs targets found'
+else
+    message = 'Built docs:'
+endif
+run_target('dts-doc', command: [echo, message, doc_target_names],
+    depends: doc_targets)
diff --git a/meson.build b/meson.build
index 2e6e546d20..c391bf8c71 100644
--- a/meson.build
+++ b/meson.build
@@ -87,6 +87,7 @@ subdir('app')
 
 # build docs
 subdir('doc')
+subdir('dts')
 
 # build any examples explicitly requested - useful for developers - and
 # install any example code into the appropriate install path
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v5 03/23] dts: add basic developer docs
  2023-11-06 17:15         ` [PATCH v5 03/23] dts: add basic developer docs Juraj Linkeš
@ 2023-11-07 14:39           ` Yoan Picchi
  2023-11-08  9:01             ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Yoan Picchi @ 2023-11-07 14:39 UTC (permalink / raw)
  To: Juraj Linkeš,
	thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek
  Cc: dev

On 11/6/23 17:15, Juraj Linkeš wrote:
> Expand the framework contribution guidelines and add how to document the
> code with Python docstrings.
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>   doc/guides/tools/dts.rst | 73 ++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 73 insertions(+)
> 
> diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
> index 32c18ee472..b1e99107c3 100644
> --- a/doc/guides/tools/dts.rst
> +++ b/doc/guides/tools/dts.rst
> @@ -264,6 +264,65 @@ which be changed with the ``--output-dir`` command line argument.
>   The results contain basic statistics of passed/failed test cases and DPDK version.
>   
>   
> +Contributing to DTS
> +-------------------
> +
> +There are two areas of contribution: The DTS framework and DTS test suites.
> +
> +The framework contains the logic needed to run test cases, such as connecting to nodes,
> +running DPDK apps and collecting results.
> +
> +The test cases call APIs from the framework to test their scenarios. Adding test cases may
> +require adding code to the framework as well.
> +
> +
> +Framework Coding Guidelines
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +When adding code to the DTS framework, pay attention to the rest of the code
> +and try not to divert much from it. The :ref:`DTS developer tools <dts_dev_tools>` will issue
> +warnings when some of the basics are not met.
> +
> +The code must be properly documented with docstrings. The style must conform to
> +the `Google style <https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings>`_.
> +See an example of the style
> +`here <https://www.sphinx-doc.org/en/master/usage/extensions/example_google.html>`_.
> +For cases which are not covered by the Google style, refer
> +to `PEP 257 <https://peps.python.org/pep-0257/>`_. There are some cases which are not covered by
> +the two style guides, where we deviate or where some additional clarification is helpful:
> +
> +   * The __init__() methods of classes are documented separately from the docstring of the class
> +     itself.
> +   * The docstrings of implemented abstract methods should refer to the superclass's definition
> +     if there's no deviation.
> +   * Instance variables/attributes should be documented in the docstring of the class
> +     in the ``Attributes:`` section.
> +   * The dataclass.dataclass decorator changes how the attributes are processed. The dataclass
> +     attributes which result in instance variables/attributes should also be recorded
> +     in the ``Attributes:`` section.
> +   * Class variables/attributes, on the other hand, should be documented with ``#:`` above
> +     the type annotated line. The description may be omitted if the meaning is obvious.
> +   * The Enum and TypedDict also process the attributes in particular ways and should be documented
> +     with ``#:`` as well. This is mainly so that the autogenerated docs contain the assigned value.
> +   * When referencing a parameter of a function or a method in their docstring, don't use
> +     any articles and put the parameter into single backticks. This mimics the style of
> +     `Python's documentation <https://docs.python.org/3/index.html>`_.
> +   * When specifying a value, use double backticks::
> +
> +        def foo(greet: bool) -> None:
> +            """Demonstration of single and double backticks.
> +
> +            `greet` controls whether ``Hello World`` is printed.
> +
> +            Args:
> +               greet: Whether to print the ``Hello World`` message.
> +            """
> +            if greet:
> +               print(f"Hello World")
> +
> +   * The docstring maximum line length is the same as the code maximum line length.
> +
> +
>   How To Write a Test Suite
>   -------------------------
>   
> @@ -293,6 +352,18 @@ There are four types of methods that comprise a test suite:
>      | These methods don't need to be implemented if there's no need for them in a test suite.
>        In that case, nothing will happen when they're is executed.

Not your change, but it does highlight a previous mistake: "they're is"

>   
> +#. **Configuration, traffic and other logic**
> +
> +   The ``TestSuite`` class contains a variety of methods for anything that
> +   a test suite setup or teardown or a test case may need to do.

Three-way "or". There's a need for an Oxford comma: setup, teardown, or a
test case.

> +
> +   The test suites also frequently use a DPDK app, such as testpmd, in interactive mode
> +   and use the interactive shell instances directly.
> +
> +   These are the two main ways to call the framework logic in test suites. If there's any
> +   functionality or logic missing from the framework, it should be implemented so that
> +   the test suites can use one of these two ways.
> +
>   #. **Test case verification**
>   
>      Test case verification should be done with the ``verify`` method, which records the result.
> @@ -308,6 +379,8 @@ There are four types of methods that comprise a test suite:
>      and used by the test suite via the ``sut_node`` field.
>   
>   
> +.. _dts_dev_tools:
> +
>   DTS Developer Tools
>   -------------------
>   


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v5 02/23] dts: add docstring checker
  2023-11-06 17:15         ` [PATCH v5 02/23] dts: add docstring checker Juraj Linkeš
@ 2023-11-07 17:38           ` Yoan Picchi
  0 siblings, 0 replies; 255+ messages in thread
From: Yoan Picchi @ 2023-11-07 17:38 UTC (permalink / raw)
  To: Juraj Linkeš,
	thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek
  Cc: dev

On 11/6/23 17:15, Juraj Linkeš wrote:
> Python docstrings are the in-code way to document the code. The
> docstring checker of choice is pydocstyle which we're executing from
> Pylama, but the current latest versions are not compatible due to [0],
> so pin the pydocstyle version to the latest working version.
> 
> [0] https://github.com/klen/pylama/issues/232
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>   dts/poetry.lock    | 12 ++++++------
>   dts/pyproject.toml |  6 +++++-
>   2 files changed, 11 insertions(+), 7 deletions(-)
> 
> diff --git a/dts/poetry.lock b/dts/poetry.lock
> index f7b3b6d602..a734fa71f0 100644
> --- a/dts/poetry.lock
> +++ b/dts/poetry.lock
> @@ -489,20 +489,20 @@ files = [
>   
>   [[package]]
>   name = "pydocstyle"
> -version = "6.3.0"
> +version = "6.1.1"
>   description = "Python docstring style checker"
>   optional = false
>   python-versions = ">=3.6"
>   files = [
> -    {file = "pydocstyle-6.3.0-py3-none-any.whl", hash = "sha256:118762d452a49d6b05e194ef344a55822987a462831ade91ec5c06fd2169d019"},
> -    {file = "pydocstyle-6.3.0.tar.gz", hash = "sha256:7ce43f0c0ac87b07494eb9c0b462c0b73e6ff276807f204d6b53edc72b7e44e1"},
> +    {file = "pydocstyle-6.1.1-py3-none-any.whl", hash = "sha256:6987826d6775056839940041beef5c08cc7e3d71d63149b48e36727f70144dc4"},
> +    {file = "pydocstyle-6.1.1.tar.gz", hash = "sha256:1d41b7c459ba0ee6c345f2eb9ae827cab14a7533a88c5c6f7e94923f72df92dc"},
>   ]
>   
>   [package.dependencies]
> -snowballstemmer = ">=2.2.0"
> +snowballstemmer = "*"
>   
>   [package.extras]
> -toml = ["tomli (>=1.2.3)"]
> +toml = ["toml"]
>   
>   [[package]]
>   name = "pyflakes"
> @@ -837,4 +837,4 @@ jsonschema = ">=4,<5"
>   [metadata]
>   lock-version = "2.0"
>   python-versions = "^3.10"
> -content-hash = "0b1e4a1cb8323e17e5ee5951c97e74bde6e60d0413d7b25b1803d5b2bab39639"
> +content-hash = "3501e97b3dadc19fe8ae179fe21b1edd2488001da9a8e86ff2bca0b86b99b89b"
> diff --git a/dts/pyproject.toml b/dts/pyproject.toml
> index 6762edfa6b..3943c87c87 100644
> --- a/dts/pyproject.toml
> +++ b/dts/pyproject.toml
> @@ -25,6 +25,7 @@ PyYAML = "^6.0"
>   types-PyYAML = "^6.0.8"
>   fabric = "^2.7.1"
>   scapy = "^2.5.0"
> +pydocstyle = "6.1.1"
>   
>   [tool.poetry.group.dev.dependencies]
>   mypy = "^0.961"
> @@ -39,10 +40,13 @@ requires = ["poetry-core>=1.0.0"]
>   build-backend = "poetry.core.masonry.api"
>   
>   [tool.pylama]
> -linters = "mccabe,pycodestyle,pyflakes"
> +linters = "mccabe,pycodestyle,pydocstyle,pyflakes"
>   format = "pylint"
>   max_line_length = 88 # https://black.readthedocs.io/en/stable/the_black_code_style/current_style.html#line-length
>   
> +[tool.pylama.linter.pydocstyle]
> +convention = "google"
> +
>   [tool.mypy]
>   python_version = "3.10"
>   enable_error_code = ["ignore-without-code"]

Reviewed-by: Yoan Picchi <yoan.picchi@arm.com>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v5 03/23] dts: add basic developer docs
  2023-11-07 14:39           ` Yoan Picchi
@ 2023-11-08  9:01             ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-08  9:01 UTC (permalink / raw)
  To: Yoan Picchi
  Cc: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, dev

On Tue, Nov 7, 2023 at 3:40 PM Yoan Picchi <yoan.picchi@foss.arm.com> wrote:
>
> On 11/6/23 17:15, Juraj Linkeš wrote:
> > Expand the framework contribution guidelines and add how to document the
> > code with Python docstrings.
> >
> > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > ---
> >   doc/guides/tools/dts.rst | 73 ++++++++++++++++++++++++++++++++++++++++
> >   1 file changed, 73 insertions(+)
> >
> > diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
> > index 32c18ee472..b1e99107c3 100644
> > --- a/doc/guides/tools/dts.rst
> > +++ b/doc/guides/tools/dts.rst
> > @@ -264,6 +264,65 @@ which be changed with the ``--output-dir`` command line argument.
> >   The results contain basic statistics of passed/failed test cases and DPDK version.
> >
> >
> > +Contributing to DTS
> > +-------------------
> > +
> > +There are two areas of contribution: The DTS framework and DTS test suites.
> > +
> > +The framework contains the logic needed to run test cases, such as connecting to nodes,
> > +running DPDK apps and collecting results.
> > +
> > +The test cases call APIs from the framework to test their scenarios. Adding test cases may
> > +require adding code to the framework as well.
> > +
> > +
> > +Framework Coding Guidelines
> > +~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +When adding code to the DTS framework, pay attention to the rest of the code
> > +and try not to divert much from it. The :ref:`DTS developer tools <dts_dev_tools>` will issue
> > +warnings when some of the basics are not met.
> > +
> > +The code must be properly documented with docstrings. The style must conform to
> > +the `Google style <https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings>`_.
> > +See an example of the style
> > +`here <https://www.sphinx-doc.org/en/master/usage/extensions/example_google.html>`_.
> > +For cases which are not covered by the Google style, refer
> > +to `PEP 257 <https://peps.python.org/pep-0257/>`_. There are some cases which are not covered by
> > +the two style guides, where we deviate or where some additional clarification is helpful:
> > +
> > +   * The __init__() methods of classes are documented separately from the docstring of the class
> > +     itself.
> > +   * The docstrings of implemented abstract methods should refer to the superclass's definition
> > +     if there's no deviation.
> > +   * Instance variables/attributes should be documented in the docstring of the class
> > +     in the ``Attributes:`` section.
> > +   * The dataclass.dataclass decorator changes how the attributes are processed. The dataclass
> > +     attributes which result in instance variables/attributes should also be recorded
> > +     in the ``Attributes:`` section.
> > +   * Class variables/attributes, on the other hand, should be documented with ``#:`` above
> > +     the type annotated line. The description may be omitted if the meaning is obvious.
> > +   * The Enum and TypedDict also process the attributes in particular ways and should be documented
> > +     with ``#:`` as well. This is mainly so that the autogenerated docs contain the assigned value.
> > +   * When referencing a parameter of a function or a method in their docstring, don't use
> > +     any articles and put the parameter into single backticks. This mimics the style of
> > +     `Python's documentation <https://docs.python.org/3/index.html>`_.
> > +   * When specifying a value, use double backticks::
> > +
> > +        def foo(greet: bool) -> None:
> > +            """Demonstration of single and double backticks.
> > +
> > +            `greet` controls whether ``Hello World`` is printed.
> > +
> > +            Args:
> > +               greet: Whether to print the ``Hello World`` message.
> > +            """
> > +            if greet:
> > +               print(f"Hello World")
> > +
> > +   * The docstring maximum line length is the same as the code maximum line length.
> > +
> > +
> >   How To Write a Test Suite
> >   -------------------------
> >
> > @@ -293,6 +352,18 @@ There are four types of methods that comprise a test suite:
> >      | These methods don't need to be implemented if there's no need for them in a test suite.
> >        In that case, nothing will happen when they're is executed.
>
> Not your change, but it does highlight a previous mistake: "they're is"
>

Good catch - we'll be adding to this guide in the future so we can fix it then.

> >
> > +#. **Configuration, traffic and other logic**
> > +
> > +   The ``TestSuite`` class contains a variety of methods for anything that
> > +   a test suite setup or teardown or a test case may need to do.
>
> Three-way "or". There's a need for an Oxford comma: setup, teardown, or a
> test case.
>

Thanks, I'll change this.

> > +
> > +   The test suites also frequently use a DPDK app, such as testpmd, in interactive mode
> > +   and use the interactive shell instances directly.
> > +
> > +   These are the two main ways to call the framework logic in test suites. If there's any
> > +   functionality or logic missing from the framework, it should be implemented so that
> > +   the test suites can use one of these two ways.
> > +
> >   #. **Test case verification**
> >
> >      Test case verification should be done with the ``verify`` method, which records the result.
> > @@ -308,6 +379,8 @@ There are four types of methods that comprise a test suite:
> >      and used by the test suite via the ``sut_node`` field.
> >
> >
> > +.. _dts_dev_tools:
> > +
> >   DTS Developer Tools
> >   -------------------
> >
>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v6 01/23] dts: code adjustments for doc generation
  2023-11-06 17:15       ` [PATCH v5 00/23] dts: add dts api docs Juraj Linkeš
                           ` (22 preceding siblings ...)
  2023-11-06 17:16         ` [PATCH v5 23/23] dts: add doc generation Juraj Linkeš
@ 2023-11-08 12:53         ` Juraj Linkeš
  2023-11-08 12:53           ` [PATCH v6 02/23] dts: add docstring checker Juraj Linkeš
                             ` (21 more replies)
  23 siblings, 22 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-08 12:53 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

The standard Python tool for generating API documentation, Sphinx,
imports modules one-by-one when generating the documentation. This
requires code changes:
* properly guarding argument parsing in the if __name__ == '__main__'
  block (see the sketch after this list),
* the logger used by the DTS runner underwent the same treatment so
  that it doesn't create log files outside of a DTS run,
* however, DTS uses the arguments to construct an object holding global
  variables. The defaults for the global variables needed to be moved
  out of argument parsing and defined elsewhere,
* importing the remote_session module from framework resulted in
  circular imports between its submodules. This is fixed by
  reorganizing the code,
* some code reorganization was done because the resulting structure
  makes more sense, improving documentation clarity.
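
To make the first point concrete, here is a generic sketch of the guard
pattern (illustrative only, not the actual DTS code; the option name is
made up):

    import argparse


    def _parse_args() -> argparse.Namespace:
        parser = argparse.ArgumentParser()
        parser.add_argument("--config-file", default="conf.yaml")
        return parser.parse_args()


    if __name__ == "__main__":
        # Executed only when run as a script; when Sphinx imports this
        # module to generate docs, argument parsing is skipped entirely.
        args = _parse_args()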

There are some other changes which are documentation-related:
* added missing type annotations so they appear in the generated docs,
* reordered arguments in some methods,
* removed superfluous arguments and attributes,
* changed some public functions/methods/attributes to private and
  vice-versa.

All of the above appear in the generated documentation and, with them,
the documentation is improved.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/config/__init__.py              | 10 ++-
 dts/framework/dts.py                          | 33 +++++--
 dts/framework/exception.py                    | 54 +++++-------
 dts/framework/remote_session/__init__.py      | 41 ++++-----
 .../interactive_remote_session.py             |  0
 .../{remote => }/interactive_shell.py         |  0
 .../{remote => }/python_shell.py              |  0
 .../remote_session/remote/__init__.py         | 27 ------
 .../{remote => }/remote_session.py            |  0
 .../{remote => }/ssh_session.py               | 12 +--
 .../{remote => }/testpmd_shell.py             |  0
 dts/framework/settings.py                     | 87 +++++++++++--------
 dts/framework/test_result.py                  |  4 +-
 dts/framework/test_suite.py                   |  7 +-
 dts/framework/testbed_model/__init__.py       | 12 +--
 dts/framework/testbed_model/{hw => }/cpu.py   | 13 +++
 dts/framework/testbed_model/hw/__init__.py    | 27 ------
 .../linux_session.py                          |  6 +-
 dts/framework/testbed_model/node.py           | 26 ++++--
 .../os_session.py                             | 22 ++---
 dts/framework/testbed_model/{hw => }/port.py  |  0
 .../posix_session.py                          |  4 +-
 dts/framework/testbed_model/sut_node.py       |  8 +-
 dts/framework/testbed_model/tg_node.py        | 30 +------
 .../traffic_generator/__init__.py             | 24 +++++
 .../capturing_traffic_generator.py            |  6 +-
 .../{ => traffic_generator}/scapy.py          | 23 ++---
 .../traffic_generator.py                      | 16 +++-
 .../testbed_model/{hw => }/virtual_device.py  |  0
 dts/framework/utils.py                        | 46 +++-------
 dts/main.py                                   |  9 +-
 31 files changed, 259 insertions(+), 288 deletions(-)
 rename dts/framework/remote_session/{remote => }/interactive_remote_session.py (100%)
 rename dts/framework/remote_session/{remote => }/interactive_shell.py (100%)
 rename dts/framework/remote_session/{remote => }/python_shell.py (100%)
 delete mode 100644 dts/framework/remote_session/remote/__init__.py
 rename dts/framework/remote_session/{remote => }/remote_session.py (100%)
 rename dts/framework/remote_session/{remote => }/ssh_session.py (91%)
 rename dts/framework/remote_session/{remote => }/testpmd_shell.py (100%)
 rename dts/framework/testbed_model/{hw => }/cpu.py (95%)
 delete mode 100644 dts/framework/testbed_model/hw/__init__.py
 rename dts/framework/{remote_session => testbed_model}/linux_session.py (97%)
 rename dts/framework/{remote_session => testbed_model}/os_session.py (95%)
 rename dts/framework/testbed_model/{hw => }/port.py (100%)
 rename dts/framework/{remote_session => testbed_model}/posix_session.py (98%)
 create mode 100644 dts/framework/testbed_model/traffic_generator/__init__.py
 rename dts/framework/testbed_model/{ => traffic_generator}/capturing_traffic_generator.py (96%)
 rename dts/framework/testbed_model/{ => traffic_generator}/scapy.py (95%)
 rename dts/framework/testbed_model/{ => traffic_generator}/traffic_generator.py (80%)
 rename dts/framework/testbed_model/{hw => }/virtual_device.py (100%)

diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index cb7e00ba34..2044c82611 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -17,6 +17,7 @@
 import warlock  # type: ignore[import]
 import yaml
 
+from framework.exception import ConfigurationError
 from framework.settings import SETTINGS
 from framework.utils import StrEnum
 
@@ -89,7 +90,7 @@ class TrafficGeneratorConfig:
     traffic_generator_type: TrafficGeneratorType
 
     @staticmethod
-    def from_dict(d: dict):
+    def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig":
         # This looks useless now, but is designed to allow expansion to traffic
         # generators that require more configuration later.
         match TrafficGeneratorType(d["type"]):
@@ -97,6 +98,10 @@ def from_dict(d: dict):
                 return ScapyTrafficGeneratorConfig(
                     traffic_generator_type=TrafficGeneratorType.SCAPY
                 )
+            case _:
+                raise ConfigurationError(
+                    f'Unknown traffic generator type "{d["type"]}".'
+                )
 
 
 @dataclass(slots=True, frozen=True)
@@ -324,6 +329,3 @@ def load_config() -> Configuration:
     config: dict[str, Any] = warlock.model_factory(schema, name="_Config")(config_data)
     config_obj: Configuration = Configuration.from_dict(dict(config))
     return config_obj
-
-
-CONFIGURATION = load_config()
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index f773f0c38d..4c7fb0c40a 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -6,19 +6,19 @@
 import sys
 
 from .config import (
-    CONFIGURATION,
     BuildTargetConfiguration,
     ExecutionConfiguration,
     TestSuiteConfig,
+    load_config,
 )
 from .exception import BlockingTestSuiteError
 from .logger import DTSLOG, getLogger
 from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result
 from .test_suite import get_test_suites
 from .testbed_model import SutNode, TGNode
-from .utils import check_dts_python_version
 
-dts_logger: DTSLOG = getLogger("DTSRunner")
+# dummy defaults to satisfy linters
+dts_logger: DTSLOG = None  # type: ignore[assignment]
 result: DTSResult = DTSResult(dts_logger)
 
 
@@ -30,14 +30,18 @@ def run_all() -> None:
     global dts_logger
     global result
 
+    # create a regular DTS logger and create a new result with it
+    dts_logger = getLogger("DTSRunner")
+    result = DTSResult(dts_logger)
+
     # check the python version of the server that runs DTS
-    check_dts_python_version()
+    _check_dts_python_version()
 
     sut_nodes: dict[str, SutNode] = {}
     tg_nodes: dict[str, TGNode] = {}
     try:
         # for all Execution sections
-        for execution in CONFIGURATION.executions:
+        for execution in load_config().executions:
             sut_node = sut_nodes.get(execution.system_under_test_node.name)
             tg_node = tg_nodes.get(execution.traffic_generator_node.name)
 
@@ -82,6 +86,25 @@ def run_all() -> None:
     _exit_dts()
 
 
+def _check_dts_python_version() -> None:
+    def RED(text: str) -> str:
+        return f"\u001B[31;1m{str(text)}\u001B[0m"
+
+    if sys.version_info.major < 3 or (
+        sys.version_info.major == 3 and sys.version_info.minor < 10
+    ):
+        print(
+            RED(
+                (
+                    "WARNING: DTS execution node's python version is lower "
+                    "than python 3.10, which is deprecated and will not work "
+                    "in future releases."
+                )
+            ),
+            file=sys.stderr,
+        )
+        print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
+
+
 def _run_execution(
     sut_node: SutNode,
     tg_node: TGNode,
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index 001a5a5496..7489c03570 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -42,19 +42,14 @@ class SSHTimeoutError(DTSError):
     Command execution timeout.
     """
 
-    command: str
-    output: str
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
+    _command: str
 
-    def __init__(self, command: str, output: str):
-        self.command = command
-        self.output = output
+    def __init__(self, command: str):
+        self._command = command
 
     def __str__(self) -> str:
-        return f"TIMEOUT on {self.command}"
-
-    def get_output(self) -> str:
-        return self.output
+        return f"TIMEOUT on {self._command}"
 
 
 class SSHConnectionError(DTSError):
@@ -62,18 +57,18 @@ class SSHConnectionError(DTSError):
     SSH connection error.
     """
 
-    host: str
-    errors: list[str]
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
+    _host: str
+    _errors: list[str]
 
     def __init__(self, host: str, errors: list[str] | None = None):
-        self.host = host
-        self.errors = [] if errors is None else errors
+        self._host = host
+        self._errors = [] if errors is None else errors
 
     def __str__(self) -> str:
-        message = f"Error trying to connect with {self.host}."
-        if self.errors:
-            message += f" Errors encountered while retrying: {', '.join(self.errors)}"
+        message = f"Error trying to connect with {self._host}."
+        if self._errors:
+            message += f" Errors encountered while retrying: {', '.join(self._errors)}"
 
         return message
 
@@ -84,14 +79,14 @@ class SSHSessionDeadError(DTSError):
     It can no longer be used.
     """
 
-    host: str
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
+    _host: str
 
     def __init__(self, host: str):
-        self.host = host
+        self._host = host
 
     def __str__(self) -> str:
-        return f"SSH session with {self.host} has died"
+        return f"SSH session with {self._host} has died"
 
 
 class ConfigurationError(DTSError):
@@ -107,18 +102,18 @@ class RemoteCommandExecutionError(DTSError):
     Raised when a command executed on a Node returns a non-zero exit status.
     """
 
-    command: str
-    command_return_code: int
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
+    command: str
+    _command_return_code: int
 
     def __init__(self, command: str, command_return_code: int):
         self.command = command
-        self.command_return_code = command_return_code
+        self._command_return_code = command_return_code
 
     def __str__(self) -> str:
         return (
             f"Command {self.command} returned a non-zero exit code: "
-            f"{self.command_return_code}"
+            f"{self._command_return_code}"
         )
 
 
@@ -143,22 +138,15 @@ class TestCaseVerifyError(DTSError):
     Used in test cases to verify the expected behavior.
     """
 
-    value: str
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR
 
-    def __init__(self, value: str):
-        self.value = value
-
-    def __str__(self) -> str:
-        return repr(self.value)
-
 
 class BlockingTestSuiteError(DTSError):
-    suite_name: str
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR
+    _suite_name: str
 
     def __init__(self, suite_name: str) -> None:
-        self.suite_name = suite_name
+        self._suite_name = suite_name
 
     def __str__(self) -> str:
-        return f"Blocking suite {self.suite_name} failed."
+        return f"Blocking suite {self._suite_name} failed."
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index 00b6d1f03a..5e7ddb2b05 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -12,29 +12,24 @@
 
 # pylama:ignore=W0611
 
-from framework.config import OS, NodeConfiguration
-from framework.exception import ConfigurationError
+from framework.config import NodeConfiguration
 from framework.logger import DTSLOG
 
-from .linux_session import LinuxSession
-from .os_session import InteractiveShellType, OSSession
-from .remote import (
-    CommandResult,
-    InteractiveRemoteSession,
-    InteractiveShell,
-    PythonShell,
-    RemoteSession,
-    SSHSession,
-    TestPmdDevice,
-    TestPmdShell,
-)
-
-
-def create_session(
+from .interactive_remote_session import InteractiveRemoteSession
+from .interactive_shell import InteractiveShell
+from .python_shell import PythonShell
+from .remote_session import CommandResult, RemoteSession
+from .ssh_session import SSHSession
+from .testpmd_shell import TestPmdShell
+
+
+def create_remote_session(
     node_config: NodeConfiguration, name: str, logger: DTSLOG
-) -> OSSession:
-    match node_config.os:
-        case OS.linux:
-            return LinuxSession(node_config, name, logger)
-        case _:
-            raise ConfigurationError(f"Unsupported OS {node_config.os}")
+) -> RemoteSession:
+    return SSHSession(node_config, name, logger)
+
+
+def create_interactive_session(
+    node_config: NodeConfiguration, logger: DTSLOG
+) -> InteractiveRemoteSession:
+    return InteractiveRemoteSession(node_config, logger)
diff --git a/dts/framework/remote_session/remote/interactive_remote_session.py b/dts/framework/remote_session/interactive_remote_session.py
similarity index 100%
rename from dts/framework/remote_session/remote/interactive_remote_session.py
rename to dts/framework/remote_session/interactive_remote_session.py
diff --git a/dts/framework/remote_session/remote/interactive_shell.py b/dts/framework/remote_session/interactive_shell.py
similarity index 100%
rename from dts/framework/remote_session/remote/interactive_shell.py
rename to dts/framework/remote_session/interactive_shell.py
diff --git a/dts/framework/remote_session/remote/python_shell.py b/dts/framework/remote_session/python_shell.py
similarity index 100%
rename from dts/framework/remote_session/remote/python_shell.py
rename to dts/framework/remote_session/python_shell.py
diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
deleted file mode 100644
index 06403691a5..0000000000
--- a/dts/framework/remote_session/remote/__init__.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2023 PANTHEON.tech s.r.o.
-# Copyright(c) 2023 University of New Hampshire
-
-# pylama:ignore=W0611
-
-from framework.config import NodeConfiguration
-from framework.logger import DTSLOG
-
-from .interactive_remote_session import InteractiveRemoteSession
-from .interactive_shell import InteractiveShell
-from .python_shell import PythonShell
-from .remote_session import CommandResult, RemoteSession
-from .ssh_session import SSHSession
-from .testpmd_shell import TestPmdDevice, TestPmdShell
-
-
-def create_remote_session(
-    node_config: NodeConfiguration, name: str, logger: DTSLOG
-) -> RemoteSession:
-    return SSHSession(node_config, name, logger)
-
-
-def create_interactive_session(
-    node_config: NodeConfiguration, logger: DTSLOG
-) -> InteractiveRemoteSession:
-    return InteractiveRemoteSession(node_config, logger)
diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote_session.py
similarity index 100%
rename from dts/framework/remote_session/remote/remote_session.py
rename to dts/framework/remote_session/remote_session.py
diff --git a/dts/framework/remote_session/remote/ssh_session.py b/dts/framework/remote_session/ssh_session.py
similarity index 91%
rename from dts/framework/remote_session/remote/ssh_session.py
rename to dts/framework/remote_session/ssh_session.py
index 8d127f1601..cee11d14d6 100644
--- a/dts/framework/remote_session/remote/ssh_session.py
+++ b/dts/framework/remote_session/ssh_session.py
@@ -18,9 +18,7 @@
     SSHException,
 )
 
-from framework.config import NodeConfiguration
 from framework.exception import SSHConnectionError, SSHSessionDeadError, SSHTimeoutError
-from framework.logger import DTSLOG
 
 from .remote_session import CommandResult, RemoteSession
 
@@ -45,14 +43,6 @@ class SSHSession(RemoteSession):
 
     session: Connection
 
-    def __init__(
-        self,
-        node_config: NodeConfiguration,
-        session_name: str,
-        logger: DTSLOG,
-    ):
-        super(SSHSession, self).__init__(node_config, session_name, logger)
-
     def _connect(self) -> None:
         errors = []
         retry_attempts = 10
@@ -117,7 +107,7 @@ def _send_command(
 
         except CommandTimedOut as e:
             self._logger.exception(e)
-            raise SSHTimeoutError(command, e.result.stderr) from e
+            raise SSHTimeoutError(command) from e
 
         return CommandResult(
             self.name, command, output.stdout, output.stderr, output.return_code
diff --git a/dts/framework/remote_session/remote/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py
similarity index 100%
rename from dts/framework/remote_session/remote/testpmd_shell.py
rename to dts/framework/remote_session/testpmd_shell.py
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index cfa39d011b..7f5841d073 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -6,7 +6,7 @@
 import argparse
 import os
 from collections.abc import Callable, Iterable, Sequence
-from dataclasses import dataclass
+from dataclasses import dataclass, field
 from pathlib import Path
 from typing import Any, TypeVar
 
@@ -22,8 +22,8 @@ def __init__(
             option_strings: Sequence[str],
             dest: str,
             nargs: str | int | None = None,
-            const: str | None = None,
-            default: str = None,
+            const: bool | None = None,
+            default: Any = None,
             type: Callable[[str], _T | argparse.FileType | None] = None,
             choices: Iterable[_T] | None = None,
             required: bool = False,
@@ -32,6 +32,12 @@ def __init__(
         ) -> None:
             env_var_value = os.environ.get(env_var)
             default = env_var_value or default
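+            # When const is set, the argument acts as a flag taking no value:
+            # nargs is forced to 0 and defining the environment variable
+            # (with any value) selects the const as the default.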
+            if const is not None:
+                nargs = 0
+                default = const if env_var_value else default
+                type = None
+                choices = None
+                metavar = None
             super(_EnvironmentArgument, self).__init__(
                 option_strings,
                 dest,
@@ -52,22 +58,28 @@ def __call__(
             values: Any,
             option_string: str = None,
         ) -> None:
-            setattr(namespace, self.dest, values)
+            if self.const is not None:
+                setattr(namespace, self.dest, self.const)
+            else:
+                setattr(namespace, self.dest, values)
 
     return _EnvironmentArgument
 
 
-@dataclass(slots=True, frozen=True)
-class _Settings:
-    config_file_path: str
-    output_dir: str
-    timeout: float
-    verbose: bool
-    skip_setup: bool
-    dpdk_tarball_path: Path
-    compile_timeout: float
-    test_cases: list
-    re_run: int
+@dataclass(slots=True)
+class Settings:
+    config_file_path: Path = Path(__file__).parent.parent.joinpath("conf.yaml")
+    output_dir: str = "output"
+    timeout: float = 15
+    verbose: bool = False
+    skip_setup: bool = False
+    dpdk_tarball_path: Path | str = "dpdk.tar.xz"
+    compile_timeout: float = 1200
+    test_cases: list[str] = field(default_factory=list)
+    re_run: int = 0
+
+
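+# A module-level singleton with default values, replaced in main() with
+# settings parsed from the command line and environment variables
+# (see get_settings()).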
+SETTINGS: Settings = Settings()
 
 
 def _get_parser() -> argparse.ArgumentParser:
@@ -81,7 +93,8 @@ def _get_parser() -> argparse.ArgumentParser:
     parser.add_argument(
         "--config-file",
         action=_env_arg("DTS_CFG_FILE"),
-        default="conf.yaml",
+        default=SETTINGS.config_file_path,
+        type=Path,
         help="[DTS_CFG_FILE] configuration file that describes the test cases, SUTs "
         "and targets.",
     )
@@ -90,7 +103,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--output-dir",
         "--output",
         action=_env_arg("DTS_OUTPUT_DIR"),
-        default="output",
+        default=SETTINGS.output_dir,
         help="[DTS_OUTPUT_DIR] Output directory where dts logs and results are saved.",
     )
 
@@ -98,7 +111,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "-t",
         "--timeout",
         action=_env_arg("DTS_TIMEOUT"),
-        default=15,
+        default=SETTINGS.timeout,
         type=float,
         help="[DTS_TIMEOUT] The default timeout for all DTS operations except for "
         "compiling DPDK.",
@@ -108,8 +121,9 @@ def _get_parser() -> argparse.ArgumentParser:
         "-v",
         "--verbose",
         action=_env_arg("DTS_VERBOSE"),
-        default="N",
-        help="[DTS_VERBOSE] Set to 'Y' to enable verbose output, logging all messages "
+        default=SETTINGS.verbose,
+        const=True,
+        help="[DTS_VERBOSE] Specify to enable verbose output, logging all messages "
         "to the console.",
     )
 
@@ -117,8 +131,8 @@ def _get_parser() -> argparse.ArgumentParser:
         "-s",
         "--skip-setup",
         action=_env_arg("DTS_SKIP_SETUP"),
-        default="N",
-        help="[DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT and TG nodes.",
+        const=True,
+        help="[DTS_SKIP_SETUP] Specify to skip all setup steps on SUT and TG nodes.",
     )
 
     parser.add_argument(
@@ -126,7 +140,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--snapshot",
         "--git-ref",
         action=_env_arg("DTS_DPDK_TARBALL"),
-        default="dpdk.tar.xz",
+        default=SETTINGS.dpdk_tarball_path,
         type=Path,
         help="[DTS_DPDK_TARBALL] Path to DPDK source code tarball or a git commit ID, "
         "tag ID or tree ID to test. To test local changes, first commit them, "
@@ -136,7 +150,7 @@ def _get_parser() -> argparse.ArgumentParser:
     parser.add_argument(
         "--compile-timeout",
         action=_env_arg("DTS_COMPILE_TIMEOUT"),
-        default=1200,
+        default=SETTINGS.compile_timeout,
         type=float,
         help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
     )
@@ -153,7 +167,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--re-run",
         "--re_run",
         action=_env_arg("DTS_RERUN"),
-        default=0,
+        default=SETTINGS.re_run,
         type=int,
         help="[DTS_RERUN] Re-run each test case the specified amount of times "
         "if a test failure occurs",
@@ -162,23 +176,22 @@ def _get_parser() -> argparse.ArgumentParser:
     return parser
 
 
-def _get_settings() -> _Settings:
+def get_settings() -> Settings:
     parsed_args = _get_parser().parse_args()
-    return _Settings(
+    return Settings(
         config_file_path=parsed_args.config_file,
         output_dir=parsed_args.output_dir,
         timeout=parsed_args.timeout,
-        verbose=(parsed_args.verbose == "Y"),
-        skip_setup=(parsed_args.skip_setup == "Y"),
+        verbose=parsed_args.verbose,
+        skip_setup=parsed_args.skip_setup,
         dpdk_tarball_path=Path(
-            DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir)
-        )
-        if not os.path.exists(parsed_args.tarball)
-        else Path(parsed_args.tarball),
+            Path(DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir))
+            if not os.path.exists(parsed_args.tarball)
+            else Path(parsed_args.tarball)
+        ),
         compile_timeout=parsed_args.compile_timeout,
-        test_cases=parsed_args.test_cases.split(",") if parsed_args.test_cases else [],
+        test_cases=(
+            parsed_args.test_cases.split(",") if parsed_args.test_cases else []
+        ),
         re_run=parsed_args.re_run,
     )
-
-
-SETTINGS: _Settings = _get_settings()
diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
index f0fbe80f6f..603e18872c 100644
--- a/dts/framework/test_result.py
+++ b/dts/framework/test_result.py
@@ -254,7 +254,7 @@ def add_build_target(
         self._inner_results.append(build_target_result)
         return build_target_result
 
-    def add_sut_info(self, sut_info: NodeInfo):
+    def add_sut_info(self, sut_info: NodeInfo) -> None:
         self.sut_os_name = sut_info.os_name
         self.sut_os_version = sut_info.os_version
         self.sut_kernel_version = sut_info.kernel_version
@@ -297,7 +297,7 @@ def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult:
         self._inner_results.append(execution_result)
         return execution_result
 
-    def add_error(self, error) -> None:
+    def add_error(self, error: Exception) -> None:
         self._errors.append(error)
 
     def process(self) -> None:
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index 3b890c0451..d53553bf34 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -11,7 +11,7 @@
 import re
 from ipaddress import IPv4Interface, IPv6Interface, ip_interface
 from types import MethodType
-from typing import Union
+from typing import Any, Union
 
 from scapy.layers.inet import IP  # type: ignore[import]
 from scapy.layers.l2 import Ether  # type: ignore[import]
@@ -26,8 +26,7 @@
 from .logger import DTSLOG, getLogger
 from .settings import SETTINGS
 from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult
-from .testbed_model import SutNode, TGNode
-from .testbed_model.hw.port import Port, PortLink
+from .testbed_model import Port, PortLink, SutNode, TGNode
 from .utils import get_packet_summaries
 
 
@@ -453,7 +452,7 @@ def _execute_test_case(
 
 
 def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
-    def is_test_suite(object) -> bool:
+    def is_test_suite(object: Any) -> bool:
         try:
             if issubclass(object, TestSuite) and object is not TestSuite:
                 return True
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index 5cbb859e47..8ced05653b 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -9,15 +9,9 @@
 
 # pylama:ignore=W0611
 
-from .hw import (
-    LogicalCore,
-    LogicalCoreCount,
-    LogicalCoreCountFilter,
-    LogicalCoreList,
-    LogicalCoreListFilter,
-    VirtualDevice,
-    lcore_filter,
-)
+from .cpu import LogicalCoreCount, LogicalCoreCountFilter, LogicalCoreList
 from .node import Node
+from .port import Port, PortLink
 from .sut_node import SutNode
 from .tg_node import TGNode
+from .virtual_device import VirtualDevice
diff --git a/dts/framework/testbed_model/hw/cpu.py b/dts/framework/testbed_model/cpu.py
similarity index 95%
rename from dts/framework/testbed_model/hw/cpu.py
rename to dts/framework/testbed_model/cpu.py
index d1918a12dc..8fe785dfe4 100644
--- a/dts/framework/testbed_model/hw/cpu.py
+++ b/dts/framework/testbed_model/cpu.py
@@ -272,3 +272,16 @@ def filter(self) -> list[LogicalCore]:
             )
 
         return filtered_lcores
+
+
+def lcore_filter(
+    core_list: list[LogicalCore],
+    filter_specifier: LogicalCoreCount | LogicalCoreList,
+    ascending: bool,
+) -> LogicalCoreFilter:
+    if isinstance(filter_specifier, LogicalCoreList):
+        return LogicalCoreListFilter(core_list, filter_specifier, ascending)
+    elif isinstance(filter_specifier, LogicalCoreCount):
+        return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
+    else:
+        raise ValueError(f"Unsupported filter {filter_specifier}")
diff --git a/dts/framework/testbed_model/hw/__init__.py b/dts/framework/testbed_model/hw/__init__.py
deleted file mode 100644
index 88ccac0b0e..0000000000
--- a/dts/framework/testbed_model/hw/__init__.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2023 PANTHEON.tech s.r.o.
-
-# pylama:ignore=W0611
-
-from .cpu import (
-    LogicalCore,
-    LogicalCoreCount,
-    LogicalCoreCountFilter,
-    LogicalCoreFilter,
-    LogicalCoreList,
-    LogicalCoreListFilter,
-)
-from .virtual_device import VirtualDevice
-
-
-def lcore_filter(
-    core_list: list[LogicalCore],
-    filter_specifier: LogicalCoreCount | LogicalCoreList,
-    ascending: bool,
-) -> LogicalCoreFilter:
-    if isinstance(filter_specifier, LogicalCoreList):
-        return LogicalCoreListFilter(core_list, filter_specifier, ascending)
-    elif isinstance(filter_specifier, LogicalCoreCount):
-        return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
-    else:
-        raise ValueError(f"Unsupported filter r{filter_specifier}")
diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/testbed_model/linux_session.py
similarity index 97%
rename from dts/framework/remote_session/linux_session.py
rename to dts/framework/testbed_model/linux_session.py
index a3f1a6bf3b..f472bb8f0f 100644
--- a/dts/framework/remote_session/linux_session.py
+++ b/dts/framework/testbed_model/linux_session.py
@@ -9,10 +9,10 @@
 from typing_extensions import NotRequired
 
 from framework.exception import RemoteCommandExecutionError
-from framework.testbed_model import LogicalCore
-from framework.testbed_model.hw.port import Port
 from framework.utils import expand_range
 
+from .cpu import LogicalCore
+from .port import Port
 from .posix_session import PosixSession
 
 
@@ -64,7 +64,7 @@ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
             lcores.append(LogicalCore(lcore, core, socket, node))
         return lcores
 
-    def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
+    def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         return dpdk_prefix
 
     def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index fc01e0bf8e..7571e7b98d 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -12,23 +12,26 @@
 from typing import Any, Callable, Type, Union
 
 from framework.config import (
+    OS,
     BuildTargetConfiguration,
     ExecutionConfiguration,
     NodeConfiguration,
 )
+from framework.exception import ConfigurationError
 from framework.logger import DTSLOG, getLogger
-from framework.remote_session import InteractiveShellType, OSSession, create_session
 from framework.settings import SETTINGS
 
-from .hw import (
+from .cpu import (
     LogicalCore,
     LogicalCoreCount,
     LogicalCoreList,
     LogicalCoreListFilter,
-    VirtualDevice,
     lcore_filter,
 )
-from .hw.port import Port
+from .linux_session import LinuxSession
+from .os_session import InteractiveShellType, OSSession
+from .port import Port
+from .virtual_device import VirtualDevice
 
 
 class Node(ABC):
@@ -69,6 +72,7 @@ def __init__(self, node_config: NodeConfiguration):
     def _init_ports(self) -> None:
         self.ports = [Port(self.name, port_config) for port_config in self.config.ports]
         self.main_session.update_ports(self.ports)
+
         for port in self.ports:
             self.configure_port_state(port)
 
@@ -172,9 +176,9 @@ def create_interactive_shell(
 
         return self.main_session.create_interactive_shell(
             shell_cls,
-            app_args,
             timeout,
             privileged,
+            app_args,
         )
 
     def filter_lcores(
@@ -205,7 +209,7 @@ def _get_remote_cpus(self) -> None:
         self._logger.info("Getting CPU information.")
         self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
 
-    def _setup_hugepages(self):
+    def _setup_hugepages(self) -> None:
         """
         Setup hugepages on the Node. Different architectures can supply different
         amounts of memory for hugepages and numa-based hugepage allocation may need
@@ -249,3 +253,13 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
             return lambda *args: None
         else:
             return func
+
+
+def create_session(
+    node_config: NodeConfiguration, name: str, logger: DTSLOG
+) -> OSSession:
+    match node_config.os:
+        case OS.linux:
+            return LinuxSession(node_config, name, logger)
+        case _:
+            raise ConfigurationError(f"Unsupported OS {node_config.os}")
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/testbed_model/os_session.py
similarity index 95%
rename from dts/framework/remote_session/os_session.py
rename to dts/framework/testbed_model/os_session.py
index 8a709eac1c..76e595a518 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/testbed_model/os_session.py
@@ -10,19 +10,19 @@
 
 from framework.config import Architecture, NodeConfiguration, NodeInfo
 from framework.logger import DTSLOG
-from framework.remote_session.remote import InteractiveShell
-from framework.settings import SETTINGS
-from framework.testbed_model import LogicalCore
-from framework.testbed_model.hw.port import Port
-from framework.utils import MesonArgs
-
-from .remote import (
+from framework.remote_session import (
     CommandResult,
     InteractiveRemoteSession,
+    InteractiveShell,
     RemoteSession,
     create_interactive_session,
     create_remote_session,
 )
+from framework.settings import SETTINGS
+from framework.utils import MesonArgs
+
+from .cpu import LogicalCore
+from .port import Port
 
 InteractiveShellType = TypeVar("InteractiveShellType", bound=InteractiveShell)
 
@@ -85,9 +85,9 @@ def send_command(
     def create_interactive_shell(
         self,
         shell_cls: Type[InteractiveShellType],
-        eal_parameters: str,
         timeout: float,
         privileged: bool,
+        app_args: str,
     ) -> InteractiveShellType:
         """
         See "create_interactive_shell" in SutNode
@@ -96,7 +96,7 @@ def create_interactive_shell(
             self.interactive_session.session,
             self._logger,
             self._get_privileged_command if privileged else None,
-            eal_parameters,
+            app_args,
             timeout,
         )
 
@@ -113,7 +113,7 @@ def _get_privileged_command(command: str) -> str:
         """
 
     @abstractmethod
-    def guess_dpdk_remote_dir(self, remote_dir) -> PurePath:
+    def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePath:
         """
         Try to find DPDK remote dir in remote_dir.
         """
@@ -227,7 +227,7 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
         """
 
     @abstractmethod
-    def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
+    def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         """
         Get the DPDK file prefix that will be used when running DPDK apps.
         """
diff --git a/dts/framework/testbed_model/hw/port.py b/dts/framework/testbed_model/port.py
similarity index 100%
rename from dts/framework/testbed_model/hw/port.py
rename to dts/framework/testbed_model/port.py
diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/testbed_model/posix_session.py
similarity index 98%
rename from dts/framework/remote_session/posix_session.py
rename to dts/framework/testbed_model/posix_session.py
index 5da0516e05..1d1d5b1b26 100644
--- a/dts/framework/remote_session/posix_session.py
+++ b/dts/framework/testbed_model/posix_session.py
@@ -32,7 +32,7 @@ def combine_short_options(**opts: bool) -> str:
 
         return ret_opts
 
-    def guess_dpdk_remote_dir(self, remote_dir) -> PurePosixPath:
+    def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePosixPath:
         remote_guess = self.join_remote_path(remote_dir, "dpdk-*")
         result = self.send_command(f"ls -d {remote_guess} | tail -1")
         return PurePosixPath(result.stdout)
@@ -219,7 +219,7 @@ def _remove_dpdk_runtime_dirs(
         for dpdk_runtime_dir in dpdk_runtime_dirs:
             self.remove_remote_dir(dpdk_runtime_dir)
 
-    def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
+    def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         return ""
 
     def get_compiler_version(self, compiler_name: str) -> str:
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 202aebfd06..4e33cf02ea 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -15,12 +15,14 @@
     NodeInfo,
     SutNodeConfiguration,
 )
-from framework.remote_session import CommandResult, InteractiveShellType, OSSession
+from framework.remote_session import CommandResult
 from framework.settings import SETTINGS
 from framework.utils import MesonArgs
 
-from .hw import LogicalCoreCount, LogicalCoreList, VirtualDevice
+from .cpu import LogicalCoreCount, LogicalCoreList
 from .node import Node
+from .os_session import InteractiveShellType, OSSession
+from .virtual_device import VirtualDevice
 
 
 class EalParameters(object):
@@ -289,7 +291,7 @@ def create_eal_parameters(
         prefix: str = "dpdk",
         append_prefix_timestamp: bool = True,
         no_pci: bool = False,
-        vdevs: list[VirtualDevice] = None,
+        vdevs: list[VirtualDevice] | None = None,
         other_eal_param: str = "",
     ) -> "EalParameters":
         """
diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
index 27025cfa31..166eb8430e 100644
--- a/dts/framework/testbed_model/tg_node.py
+++ b/dts/framework/testbed_model/tg_node.py
@@ -16,16 +16,11 @@
 
 from scapy.packet import Packet  # type: ignore[import]
 
-from framework.config import (
-    ScapyTrafficGeneratorConfig,
-    TGNodeConfiguration,
-    TrafficGeneratorType,
-)
-from framework.exception import ConfigurationError
-
-from .capturing_traffic_generator import CapturingTrafficGenerator
-from .hw.port import Port
+from framework.config import TGNodeConfiguration
+
 from .node import Node
+from .port import Port
+from .traffic_generator import CapturingTrafficGenerator, create_traffic_generator
 
 
 class TGNode(Node):
@@ -80,20 +75,3 @@ def close(self) -> None:
         """Free all resources used by the node"""
         self.traffic_generator.close()
         super(TGNode, self).close()
-
-
-def create_traffic_generator(
-    tg_node: TGNode, traffic_generator_config: ScapyTrafficGeneratorConfig
-) -> CapturingTrafficGenerator:
-    """A factory function for creating traffic generator object from user config."""
-
-    from .scapy import ScapyTrafficGenerator
-
-    match traffic_generator_config.traffic_generator_type:
-        case TrafficGeneratorType.SCAPY:
-            return ScapyTrafficGenerator(tg_node, traffic_generator_config)
-        case _:
-            raise ConfigurationError(
-                "Unknown traffic generator: "
-                f"{traffic_generator_config.traffic_generator_type}"
-            )
diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
new file mode 100644
index 0000000000..11bfa1ee0f
--- /dev/null
+++ b/dts/framework/testbed_model/traffic_generator/__init__.py
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType
+from framework.exception import ConfigurationError
+from framework.testbed_model.node import Node
+
+from .capturing_traffic_generator import CapturingTrafficGenerator
+from .scapy import ScapyTrafficGenerator
+
+
+def create_traffic_generator(
+    tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig
+) -> CapturingTrafficGenerator:
+    """A factory function for creating traffic generator object from user config."""
+
+    match traffic_generator_config.traffic_generator_type:
+        case TrafficGeneratorType.SCAPY:
+            return ScapyTrafficGenerator(tg_node, traffic_generator_config)
+        case _:
+            raise ConfigurationError(
+                "Unknown traffic generator: "
+                f"{traffic_generator_config.traffic_generator_type}"
+            )
diff --git a/dts/framework/testbed_model/capturing_traffic_generator.py b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
similarity index 96%
rename from dts/framework/testbed_model/capturing_traffic_generator.py
rename to dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
index ab98987f8e..e521211ef0 100644
--- a/dts/framework/testbed_model/capturing_traffic_generator.py
+++ b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
@@ -16,9 +16,9 @@
 from scapy.packet import Packet  # type: ignore[import]
 
 from framework.settings import SETTINGS
+from framework.testbed_model.port import Port
 from framework.utils import get_packet_summaries
 
-from .hw.port import Port
 from .traffic_generator import TrafficGenerator
 
 
@@ -130,7 +130,9 @@ def _send_packets_and_capture(
         for the specified duration. It must be able to handle no received packets.
         """
 
-    def _write_capture_from_packets(self, capture_name: str, packets: list[Packet]):
+    def _write_capture_from_packets(
+        self, capture_name: str, packets: list[Packet]
+    ) -> None:
         file_name = f"{SETTINGS.output_dir}/{capture_name}.pcap"
         self._logger.debug(f"Writing packets to {file_name}.")
         scapy.utils.wrpcap(file_name, packets)
diff --git a/dts/framework/testbed_model/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py
similarity index 95%
rename from dts/framework/testbed_model/scapy.py
rename to dts/framework/testbed_model/traffic_generator/scapy.py
index af0d4dbb25..51864b6e6b 100644
--- a/dts/framework/testbed_model/scapy.py
+++ b/dts/framework/testbed_model/traffic_generator/scapy.py
@@ -24,16 +24,15 @@
 from scapy.packet import Packet  # type: ignore[import]
 
 from framework.config import OS, ScapyTrafficGeneratorConfig
-from framework.logger import DTSLOG, getLogger
 from framework.remote_session import PythonShell
 from framework.settings import SETTINGS
+from framework.testbed_model.node import Node
+from framework.testbed_model.port import Port
 
 from .capturing_traffic_generator import (
     CapturingTrafficGenerator,
     _get_default_capture_name,
 )
-from .hw.port import Port
-from .tg_node import TGNode
 
 """
 ========= BEGIN RPC FUNCTIONS =========
@@ -146,7 +145,7 @@ def quit(self) -> None:
         self._BaseServer__shutdown_request = True
         return None
 
-    def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary):
+    def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> None:
         """Add a function to the server.
 
         This is meant to be executed remotely.
@@ -191,15 +190,9 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator):
     session: PythonShell
     rpc_server_proxy: xmlrpc.client.ServerProxy
     _config: ScapyTrafficGeneratorConfig
-    _tg_node: TGNode
-    _logger: DTSLOG
-
-    def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig):
-        self._config = config
-        self._tg_node = tg_node
-        self._logger = getLogger(
-            f"{self._tg_node.name} {self._config.traffic_generator_type}"
-        )
+
+    def __init__(self, tg_node: Node, config: ScapyTrafficGeneratorConfig):
+        super().__init__(tg_node, config)
 
         assert (
             self._tg_node.config.os == OS.linux
@@ -235,7 +228,7 @@ def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig):
             function_bytes = marshal.dumps(function.__code__)
             self.rpc_server_proxy.add_rpc_function(function.__name__, function_bytes)
 
-    def _start_xmlrpc_server_in_remote_python(self, listen_port: int):
+    def _start_xmlrpc_server_in_remote_python(self, listen_port: int) -> None:
         # load the source of the function
         src = inspect.getsource(QuittableXMLRPCServer)
         # Lines with only whitespace break the repl if in the middle of a function
@@ -280,7 +273,7 @@ def _send_packets_and_capture(
         scapy_packets = [Ether(packet.data) for packet in xmlrpc_packets]
         return scapy_packets
 
-    def close(self):
+    def close(self) -> None:
         try:
             self.rpc_server_proxy.quit()
         except ConnectionRefusedError:
diff --git a/dts/framework/testbed_model/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
similarity index 80%
rename from dts/framework/testbed_model/traffic_generator.py
rename to dts/framework/testbed_model/traffic_generator/traffic_generator.py
index 28c35d3ce4..ea7c3963da 100644
--- a/dts/framework/testbed_model/traffic_generator.py
+++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
@@ -12,11 +12,12 @@
 
 from scapy.packet import Packet  # type: ignore[import]
 
-from framework.logger import DTSLOG
+from framework.config import TrafficGeneratorConfig
+from framework.logger import DTSLOG, getLogger
+from framework.testbed_model.node import Node
+from framework.testbed_model.port import Port
 from framework.utils import get_packet_summaries
 
-from .hw.port import Port
-
 
 class TrafficGenerator(ABC):
     """The base traffic generator.
@@ -24,8 +25,17 @@ class TrafficGenerator(ABC):
     Defines the few basic methods that each traffic generator must implement.
     """
 
+    _config: TrafficGeneratorConfig
+    _tg_node: Node
     _logger: DTSLOG
 
+    def __init__(self, tg_node: Node, config: TrafficGeneratorConfig):
+        self._config = config
+        self._tg_node = tg_node
+        self._logger = getLogger(
+            f"{self._tg_node.name} {self._config.traffic_generator_type}"
+        )
+
     def send_packet(self, packet: Packet, port: Port) -> None:
         """Send a packet and block until it is fully sent.
 
diff --git a/dts/framework/testbed_model/hw/virtual_device.py b/dts/framework/testbed_model/virtual_device.py
similarity index 100%
rename from dts/framework/testbed_model/hw/virtual_device.py
rename to dts/framework/testbed_model/virtual_device.py
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index d27c2c5b5f..f0c916471c 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -7,7 +7,6 @@
 import json
 import os
 import subprocess
-import sys
 from enum import Enum
 from pathlib import Path
 from subprocess import SubprocessError
@@ -16,35 +15,7 @@
 
 from .exception import ConfigurationError
 
-
-class StrEnum(Enum):
-    @staticmethod
-    def _generate_next_value_(
-        name: str, start: int, count: int, last_values: object
-    ) -> str:
-        return name
-
-    def __str__(self) -> str:
-        return self.name
-
-
-REGEX_FOR_PCI_ADDRESS = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/"
-
-
-def check_dts_python_version() -> None:
-    if sys.version_info.major < 3 or (
-        sys.version_info.major == 3 and sys.version_info.minor < 10
-    ):
-        print(
-            RED(
-                (
-                    "WARNING: DTS execution node's python version is lower than"
-                    "python 3.10, is deprecated and will not work in future releases."
-                )
-            ),
-            file=sys.stderr,
-        )
-        print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
+REGEX_FOR_PCI_ADDRESS: str = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/"
 
 
 def expand_range(range_str: str) -> list[int]:
@@ -67,7 +38,7 @@ def expand_range(range_str: str) -> list[int]:
     return expanded_range
 
 
-def get_packet_summaries(packets: list[Packet]):
+def get_packet_summaries(packets: list[Packet]) -> str:
     if len(packets) == 1:
         packet_summaries = packets[0].summary()
     else:
@@ -77,8 +48,15 @@ def get_packet_summaries(packets: list[Packet]):
     return f"Packet contents: \n{packet_summaries}"
 
 
-def RED(text: str) -> str:
-    return f"\u001B[31;1m{str(text)}\u001B[0m"
+class StrEnum(Enum):
+    @staticmethod
+    def _generate_next_value_(
+        name: str, start: int, count: int, last_values: object
+    ) -> str:
+        return name
+
+    def __str__(self) -> str:
+        return self.name
 
 
 class MesonArgs(object):
@@ -225,5 +203,5 @@ def _delete_tarball(self) -> None:
         if self._tarball_path and os.path.exists(self._tarball_path):
             os.remove(self._tarball_path)
 
-    def __fspath__(self):
+    def __fspath__(self) -> str:
         return str(self._tarball_path)
diff --git a/dts/main.py b/dts/main.py
index 43311fa847..5d4714b0c3 100755
--- a/dts/main.py
+++ b/dts/main.py
@@ -10,10 +10,17 @@
 
 import logging
 
-from framework import dts
+from framework import settings
 
 
 def main() -> None:
+    """Set DTS settings, then run DTS.
+
+    The DTS settings are taken from the command line arguments and the environment variables.
+    """
+    settings.SETTINGS = settings.get_settings()
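+    # The import of framework.dts is deferred until after SETTINGS is set so
+    # that modules reading framework.settings.SETTINGS at import time pick up
+    # the parsed values rather than the defaults.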
+    from framework import dts
+
     dts.run_all()
 
 
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v6 02/23] dts: add docstring checker
  2023-11-08 12:53         ` [PATCH v6 01/23] dts: code adjustments for " Juraj Linkeš
@ 2023-11-08 12:53           ` Juraj Linkeš
  2023-11-08 12:53           ` [PATCH v6 03/23] dts: add basic developer docs Juraj Linkeš
                             ` (20 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-08 12:53 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Python docstrings are the in-code way to document the code. The
docstring checker of choice is pydocstyle, which we're executing from
Pylama, but their current latest versions are not compatible due to [0],
so pin the pydocstyle version to the latest working version.

[0] https://github.com/klen/pylama/issues/232
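
With the version pinned, the checker runs as part of the Pylama suite. As a
sketch (assuming the Poetry environment is installed and active; the exact
invocation may differ), it can also be run manually from the dts directory:

poetry run pylama framework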

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/poetry.lock    | 12 ++++++------
 dts/pyproject.toml |  6 +++++-
 2 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/dts/poetry.lock b/dts/poetry.lock
index f7b3b6d602..a734fa71f0 100644
--- a/dts/poetry.lock
+++ b/dts/poetry.lock
@@ -489,20 +489,20 @@ files = [
 
 [[package]]
 name = "pydocstyle"
-version = "6.3.0"
+version = "6.1.1"
 description = "Python docstring style checker"
 optional = false
 python-versions = ">=3.6"
 files = [
-    {file = "pydocstyle-6.3.0-py3-none-any.whl", hash = "sha256:118762d452a49d6b05e194ef344a55822987a462831ade91ec5c06fd2169d019"},
-    {file = "pydocstyle-6.3.0.tar.gz", hash = "sha256:7ce43f0c0ac87b07494eb9c0b462c0b73e6ff276807f204d6b53edc72b7e44e1"},
+    {file = "pydocstyle-6.1.1-py3-none-any.whl", hash = "sha256:6987826d6775056839940041beef5c08cc7e3d71d63149b48e36727f70144dc4"},
+    {file = "pydocstyle-6.1.1.tar.gz", hash = "sha256:1d41b7c459ba0ee6c345f2eb9ae827cab14a7533a88c5c6f7e94923f72df92dc"},
 ]
 
 [package.dependencies]
-snowballstemmer = ">=2.2.0"
+snowballstemmer = "*"
 
 [package.extras]
-toml = ["tomli (>=1.2.3)"]
+toml = ["toml"]
 
 [[package]]
 name = "pyflakes"
@@ -837,4 +837,4 @@ jsonschema = ">=4,<5"
 [metadata]
 lock-version = "2.0"
 python-versions = "^3.10"
-content-hash = "0b1e4a1cb8323e17e5ee5951c97e74bde6e60d0413d7b25b1803d5b2bab39639"
+content-hash = "3501e97b3dadc19fe8ae179fe21b1edd2488001da9a8e86ff2bca0b86b99b89b"
diff --git a/dts/pyproject.toml b/dts/pyproject.toml
index 6762edfa6b..3943c87c87 100644
--- a/dts/pyproject.toml
+++ b/dts/pyproject.toml
@@ -25,6 +25,7 @@ PyYAML = "^6.0"
 types-PyYAML = "^6.0.8"
 fabric = "^2.7.1"
 scapy = "^2.5.0"
+pydocstyle = "6.1.1"
 
 [tool.poetry.group.dev.dependencies]
 mypy = "^0.961"
@@ -39,10 +40,13 @@ requires = ["poetry-core>=1.0.0"]
 build-backend = "poetry.core.masonry.api"
 
 [tool.pylama]
-linters = "mccabe,pycodestyle,pyflakes"
+linters = "mccabe,pycodestyle,pydocstyle,pyflakes"
 format = "pylint"
 max_line_length = 88 # https://black.readthedocs.io/en/stable/the_black_code_style/current_style.html#line-length
 
+[tool.pylama.linter.pydocstyle]
+convention = "google"
+
 [tool.mypy]
 python_version = "3.10"
 enable_error_code = ["ignore-without-code"]
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v6 03/23] dts: add basic developer docs
  2023-11-08 12:53         ` [PATCH v6 01/23] dts: code adjustments for " Juraj Linkeš
  2023-11-08 12:53           ` [PATCH v6 02/23] dts: add docstring checker Juraj Linkeš
@ 2023-11-08 12:53           ` Juraj Linkeš
  2023-11-08 12:53           ` [PATCH v6 04/23] dts: exceptions docstring update Juraj Linkeš
                             ` (19 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-08 12:53 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Expand the framework contribution guidelines and add how to document the
code with Python docstrings.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 doc/guides/tools/dts.rst | 73 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 73 insertions(+)

diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
index 32c18ee472..cd771a428c 100644
--- a/doc/guides/tools/dts.rst
+++ b/doc/guides/tools/dts.rst
@@ -264,6 +264,65 @@ which be changed with the ``--output-dir`` command line argument.
 The results contain basic statistics of passed/failed test cases and DPDK version.
 
 
+Contributing to DTS
+-------------------
+
+There are two areas of contribution: The DTS framework and DTS test suites.
+
+The framework contains the logic needed to run test cases, such as connecting to nodes,
+running DPDK apps and collecting results.
+
+The test cases call APIs from the framework to test their scenarios. Adding test cases may
+require adding code to the framework as well.
+
+
+Framework Coding Guidelines
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+When adding code to the DTS framework, pay attention to the rest of the code
+and try not to divert much from it. The :ref:`DTS developer tools <dts_dev_tools>` will issue
+warnings when some of the basics are not met.
+
+The code must be properly documented with docstrings. The style must conform to
+the `Google style <https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings>`_.
+See an example of the style
+`here <https://www.sphinx-doc.org/en/master/usage/extensions/example_google.html>`_.
+For cases which are not covered by the Google style, refer
+to `PEP 257 <https://peps.python.org/pep-0257/>`_. There are some cases which are not covered by
+the two style guides, where we deviate or where some additional clarification is helpful:
+
+   * The __init__() methods of classes are documented separately from the docstring of the class
+     itself.
+   * The docstrings of implemented abstract methods should refer to the superclass's definition
+     if there's no deviation.
+   * Instance variables/attributes should be documented in the docstring of the class
+     in the ``Attributes:`` section.
+   * The dataclass.dataclass decorator changes how the attributes are processed. The dataclass
+     attributes which result in instance variables/attributes should also be recorded
+     in the ``Attributes:`` section.
+   * Class variables/attributes, on the other hand, should be documented with ``#:`` above
+     the type annotated line, as shown in the sketch after this list. The description may be
+     omitted if the meaning is obvious.
+   * The Enum and TypedDict also process the attributes in particular ways and should be documented
+     with ``#:`` as well. This is mainly so that the autogenerated docs contain the assigned value.
+   * When referencing a parameter of a function or a method in their docstring, don't use
+     any articles and put the parameter into single backticks. This mimics the style of
+     `Python's documentation <https://docs.python.org/3/index.html>`_.
+   * When specifying a value, use double backticks::
+
+        def foo(greet: bool) -> None:
+            """Demonstration of single and double backticks.
+
+            `greet` controls whether ``Hello World`` is printed.
+
+            Args:
+               greet: Whether to print the ``Hello World`` message.
+            """
+            if greet:
+               print("Hello World")
+
+   * The docstring maximum line length is the same as the code maximum line length.
+
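+As an illustration of the ``#:`` convention above, here is a minimal sketch
+(not a class taken from the framework)::
+
+   class ExampleDevice:
+       """An example device."""
+
+       #: The device type, shared by all instances of the class.
+       device_type: str = "example"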
+
 How To Write a Test Suite
 -------------------------
 
@@ -293,6 +352,18 @@ There are four types of methods that comprise a test suite:
    | These methods don't need to be implemented if there's no need for them in a test suite.
     In that case, nothing will happen when they're executed.
 
+#. **Configuration, traffic and other logic**
+
+   The ``TestSuite`` class contains a variety of methods for anything that
+   a test suite setup, a teardown, or a test case may need to do.
+
+   The test suites also frequently use a DPDK app, such as testpmd, in interactive mode
+   and use the interactive shell instances directly.
+
+   These are the two main ways to call the framework logic in test suites. If there's any
+   functionality or logic missing from the framework, it should be implemented so that
+   the test suites can use one of these two ways.
+
 #. **Test case verification**
 
    Test case verification should be done with the ``verify`` method, which records the result.
@@ -308,6 +379,8 @@ There are four types of methods that comprise a test suite:
    and used by the test suite via the ``sut_node`` field.
 
 
+.. _dts_dev_tools:
+
 DTS Developer Tools
 -------------------
 
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v6 04/23] dts: exceptions docstring update
  2023-11-08 12:53         ` [PATCH v6 01/23] dts: code adjustments for " Juraj Linkeš
  2023-11-08 12:53           ` [PATCH v6 02/23] dts: add docstring checker Juraj Linkeš
  2023-11-08 12:53           ` [PATCH v6 03/23] dts: add basic developer docs Juraj Linkeš
@ 2023-11-08 12:53           ` Juraj Linkeš
  2023-11-08 12:53           ` [PATCH v6 05/23] dts: settings " Juraj Linkeš
                             ` (18 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-08 12:53 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/__init__.py  |  12 ++++-
 dts/framework/exception.py | 106 +++++++++++++++++++++++++------------
 2 files changed, 83 insertions(+), 35 deletions(-)

diff --git a/dts/framework/__init__.py b/dts/framework/__init__.py
index d551ad4bf0..662e6ccad2 100644
--- a/dts/framework/__init__.py
+++ b/dts/framework/__init__.py
@@ -1,3 +1,13 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022 University of New Hampshire
+
+"""Libraries and utilities for running DPDK Test Suite (DTS).
+
+The various modules in the DTS framework offer:
+
+* Connections to nodes, both interactive and non-interactive,
+* A straightforward way to add support for different operating systems of remote nodes,
+* Test suite setup, execution and teardown, along with test case setup, execution and teardown,
+* Pre-test suite setup and post-test suite teardown.
+"""
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index 7489c03570..ee1562c672 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -3,8 +3,10 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
-"""
-User-defined exceptions used across the framework.
+"""DTS exceptions.
+
+The exceptions all have different severities expressed as an integer.
+The highest severity of all raised exceptions is used as the exit code of DTS.
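+
+For example, if a run raises both a ``TestCaseVerifyError`` (severity 20)
+and an ``SSHTimeoutError`` (severity 4), the exit code of DTS will be 20.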
 """
 
 from enum import IntEnum, unique
@@ -13,59 +15,79 @@
 
 @unique
 class ErrorSeverity(IntEnum):
-    """
-    The severity of errors that occur during DTS execution.
+    """The severity of errors that occur during DTS execution.
+
     All exceptions are caught and the most severe error is used as return code.
     """
 
+    #:
     NO_ERR = 0
+    #:
     GENERIC_ERR = 1
+    #:
     CONFIG_ERR = 2
+    #:
     REMOTE_CMD_EXEC_ERR = 3
+    #:
     SSH_ERR = 4
+    #:
     DPDK_BUILD_ERR = 10
+    #:
     TESTCASE_VERIFY_ERR = 20
+    #:
     BLOCKING_TESTSUITE_ERR = 25
 
 
 class DTSError(Exception):
-    """
-    The base exception from which all DTS exceptions are derived.
-    Stores error severity.
+    """The base exception from which all DTS exceptions are subclassed.
+
+    Do not use this exception, only use subclassed exceptions.
     """
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.GENERIC_ERR
 
 
 class SSHTimeoutError(DTSError):
-    """
-    Command execution timeout.
-    """
+    """The SSH execution of a command timed out."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
     _command: str
 
     def __init__(self, command: str):
+        """Define the meaning of the first argument.
+
+        Args:
+            command: The executed command.
+        """
         self._command = command
 
     def __str__(self) -> str:
-        return f"TIMEOUT on {self._command}"
+        """Add some context to the string representation."""
+        return f"{self._command} execution timed out."
 
 
 class SSHConnectionError(DTSError):
-    """
-    SSH connection error.
-    """
+    """An unsuccessful SSH connection."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
     _host: str
     _errors: list[str]
 
     def __init__(self, host: str, errors: list[str] | None = None):
+        """Define the meaning of the first two arguments.
+
+        Args:
+            host: The hostname to which we're trying to connect.
+            errors: Any errors that occurred during the connection attempt.
+        """
         self._host = host
         self._errors = [] if errors is None else errors
 
     def __str__(self) -> str:
+        """Include the errors in the string representation."""
         message = f"Error trying to connect with {self._host}."
         if self._errors:
             message += f" Errors encountered while retrying: {', '.join(self._errors)}"
@@ -74,43 +96,53 @@ def __str__(self) -> str:
 
 
 class SSHSessionDeadError(DTSError):
-    """
-    SSH session is not alive.
-    It can no longer be used.
-    """
+    """The SSH session is no longer alive."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
     _host: str
 
     def __init__(self, host: str):
+        """Define the meaning of the first argument.
+
+        Args:
+            host: The hostname of the disconnected node.
+        """
         self._host = host
 
     def __str__(self) -> str:
-        return f"SSH session with {self._host} has died"
+        """Add some context to the string representation."""
+        return f"SSH session with {self._host} has died."
 
 
 class ConfigurationError(DTSError):
-    """
-    Raised when an invalid configuration is encountered.
-    """
+    """An invalid configuration."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.CONFIG_ERR
 
 
 class RemoteCommandExecutionError(DTSError):
-    """
-    Raised when a command executed on a Node returns a non-zero exit status.
-    """
+    """An unsuccessful execution of a remote command."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
+    #: The executed command.
     command: str
     _command_return_code: int
 
     def __init__(self, command: str, command_return_code: int):
+        """Define the meaning of the first two arguments.
+
+        Args:
+            command: The executed command.
+            command_return_code: The return code of the executed command.
+        """
         self.command = command
         self._command_return_code = command_return_code
 
     def __str__(self) -> str:
+        """Include both the command and return code in the string representation."""
         return (
             f"Command {self.command} returned a non-zero exit code: "
             f"{self._command_return_code}"
@@ -118,35 +150,41 @@ def __str__(self) -> str:
 
 
 class RemoteDirectoryExistsError(DTSError):
-    """
-    Raised when a remote directory to be created already exists.
-    """
+    """A directory that exists on a remote node."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
 
 
 class DPDKBuildError(DTSError):
-    """
-    Raised when DPDK build fails for any reason.
-    """
+    """A DPDK build failure."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.DPDK_BUILD_ERR
 
 
 class TestCaseVerifyError(DTSError):
-    """
-    Used in test cases to verify the expected behavior.
-    """
+    """A test case failure."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR
 
 
 class BlockingTestSuiteError(DTSError):
+    """A failure in a blocking test suite."""
+
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR
     _suite_name: str
 
     def __init__(self, suite_name: str) -> None:
+        """Define the meaning of the first argument.
+
+        Args:
+            suite_name: The blocking test suite.
+        """
         self._suite_name = suite_name
 
     def __str__(self) -> str:
+        """Add some context to the string representation."""
         return f"Blocking suite {self._suite_name} failed."
-- 
2.34.1
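
As an aside, the severity mechanism described in the module docstring fits in a
few lines. Below is a minimal, self-contained sketch (the trimmed-down class set
and the exit_code() helper are hypothetical illustrations, not DTS API)::

    from enum import IntEnum, unique
    from typing import ClassVar

    @unique
    class ErrorSeverity(IntEnum):
        NO_ERR = 0
        GENERIC_ERR = 1
        SSH_ERR = 4

    class DTSError(Exception):
        severity: ClassVar[ErrorSeverity] = ErrorSeverity.GENERIC_ERR

    class SSHTimeoutError(DTSError):
        severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR

    def exit_code(errors: list[DTSError]) -> int:
        # The exit code is the highest severity among all recorded errors.
        return int(max((e.severity for e in errors), default=ErrorSeverity.NO_ERR))

    assert exit_code([]) == 0
    assert exit_code([DTSError(), SSHTimeoutError()]) == 4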


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v6 05/23] dts: settings docstring update
  2023-11-08 12:53         ` [PATCH v6 01/23] dts: code adjustments for " Juraj Linkeš
                             ` (2 preceding siblings ...)
  2023-11-08 12:53           ` [PATCH v6 04/23] dts: exceptions docstring update Juraj Linkeš
@ 2023-11-08 12:53           ` Juraj Linkeš
  2023-11-08 16:17             ` Yoan Picchi
  2023-11-08 12:53           ` [PATCH v6 06/23] dts: logger and " Juraj Linkeš
                             ` (17 subsequent siblings)
  21 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-08 12:53 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/settings.py | 101 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 100 insertions(+), 1 deletion(-)

diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 7f5841d073..787db7c198 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -3,6 +3,70 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022 University of New Hampshire
 
+"""Environment variables and command line arguments parsing.
+
+This is a simple module utilizing the built-in argparse module to parse command line arguments,
+augment them with values from environment variables and make them available across the framework.
+
+The command line value takes precedence, followed by the environment variable value,
+followed by the default value defined in this module.
+
+The command line arguments along with the supported environment variables are:
+
+.. option:: --config-file
+.. envvar:: DTS_CFG_FILE
+
+    The path to the YAML test run configuration file.
+
+.. option:: --output-dir, --output
+.. envvar:: DTS_OUTPUT_DIR
+
+    The directory where DTS logs and results are saved.
+
+.. option:: --compile-timeout
+.. envvar:: DTS_COMPILE_TIMEOUT
+
+    The timeout for compiling DPDK.
+
+.. option:: -t, --timeout
+.. envvar:: DTS_TIMEOUT
+
+    The timeout for all DTS operations except for compiling DPDK.
+
+.. option:: -v, --verbose
+.. envvar:: DTS_VERBOSE
+
+    Set to any value to enable logging everything to the console.
+
+.. option:: -s, --skip-setup
+.. envvar:: DTS_SKIP_SETUP
+
+    Set to any value to skip building DPDK.
+
+.. option:: --tarball, --snapshot, --git-ref
+.. envvar:: DTS_DPDK_TARBALL
+
+    The path to a DPDK tarball, git commit ID, tag ID or tree ID to test.
+
+.. option:: --test-cases
+.. envvar:: DTS_TESTCASES
+
+    A comma-separated list of test cases to execute. Unknown test cases will be silently ignored.
+
+.. option:: --re-run, --re_run
+.. envvar:: DTS_RERUN
+
+    Re-run each test case this many times in case of a failure.
+
+Attributes:
+    SETTINGS: The module-level variable storing framework-wide DTS settings.
+
+Typical usage example::
+
+  from framework.settings import SETTINGS
+  foo = SETTINGS.foo
+"""
+
 import argparse
 import os
 from collections.abc import Callable, Iterable, Sequence
@@ -16,6 +80,23 @@
 
 
 def _env_arg(env_var: str) -> Any:
+    """A helper method augmenting the argparse Action with environment variables.
+
+    If the supplied environment variable is defined, then the default value
+    of the argument is modified. This satisfies the priority order of
+    command line argument > environment variable > default value.
+
+    Arguments with no values (flags) should be defined using the const keyword argument
+    (True or False). When the argument is specified, it will be set to const; if not specified,
+    the default will be stored (possibly modified by the corresponding environment variable).
+
+    Other arguments work the same as default argparse arguments, that is using
+    the default 'store' action.
+
+    Returns:
+          The modified argparse.Action.
+    """
+
     class _EnvironmentArgument(argparse.Action):
         def __init__(
             self,
@@ -68,14 +149,28 @@ def __call__(
 
 @dataclass(slots=True)
 class Settings:
+    """Default framework-wide user settings.
+
+    The defaults may be modified at the start of the run.
+    """
+
+    #:
     config_file_path: Path = Path(__file__).parent.parent.joinpath("conf.yaml")
+    #:
     output_dir: str = "output"
+    #:
     timeout: float = 15
+    #:
     verbose: bool = False
+    #:
     skip_setup: bool = False
+    #:
     dpdk_tarball_path: Path | str = "dpdk.tar.xz"
+    #:
     compile_timeout: float = 1200
+    #:
     test_cases: list[str] = field(default_factory=list)
+    #:
     re_run: int = 0
 
 
@@ -169,7 +264,7 @@ def _get_parser() -> argparse.ArgumentParser:
         action=_env_arg("DTS_RERUN"),
         default=SETTINGS.re_run,
         type=int,
-        help="[DTS_RERUN] Re-run each test case the specified amount of times "
+        help="[DTS_RERUN] Re-run each test case the specified number of times "
         "if a test failure occurs",
     )
 
@@ -177,6 +272,10 @@ def _get_parser() -> argparse.ArgumentParser:
 
 
 def get_settings() -> Settings:
+    """Create new settings with inputs from the user.
+
+    The inputs are taken from the command line and from environment variables.
+    """
     parsed_args = _get_parser().parse_args()
     return Settings(
         config_file_path=parsed_args.config_file,
-- 
2.34.1
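
To make the documented precedence (command line > environment variable > default)
concrete, here is a standalone sketch using plain argparse. It only mimics the
behavior; the patch itself implements it with a custom argparse.Action in
_env_arg, and the "/srv/dts-logs" value is a hypothetical example::

    import argparse
    import os

    os.environ["DTS_OUTPUT_DIR"] = "/srv/dts-logs"

    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--output-dir",
        # The environment variable overrides the hard-coded default ...
        default=os.environ.get("DTS_OUTPUT_DIR", "output"),
    )

    # ... and a command line value overrides both.
    assert parser.parse_args([]).output_dir == "/srv/dts-logs"
    assert parser.parse_args(["--output-dir", "/tmp/dts"]).output_dir == "/tmp/dts"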


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v6 06/23] dts: logger and settings docstring update
  2023-11-08 12:53         ` [PATCH v6 01/23] dts: code adjustments for " Juraj Linkeš
                             ` (3 preceding siblings ...)
  2023-11-08 12:53           ` [PATCH v6 05/23] dts: settings " Juraj Linkeš
@ 2023-11-08 12:53           ` Juraj Linkeš
  2023-11-08 17:14             ` Yoan Picchi
  2023-11-08 12:53           ` [PATCH v6 07/23] dts: dts runner and main " Juraj Linkeš
                             ` (16 subsequent siblings)
  21 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-08 12:53 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/logger.py | 72 +++++++++++++++++++++----------
 dts/framework/utils.py  | 96 ++++++++++++++++++++++++++++++-----------
 2 files changed, 121 insertions(+), 47 deletions(-)

diff --git a/dts/framework/logger.py b/dts/framework/logger.py
index bb2991e994..d3eb75a4e4 100644
--- a/dts/framework/logger.py
+++ b/dts/framework/logger.py
@@ -3,9 +3,9 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
-"""
-DTS logger module with several log level. DTS framework and TestSuite logs
-are saved in different log files.
+"""DTS logger module.
+
+DTS framework and TestSuite logs are saved in different log files.
 """
 
 import logging
@@ -18,19 +18,21 @@
 stream_fmt = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
 
 
-class LoggerDictType(TypedDict):
-    logger: "DTSLOG"
-    name: str
-    node: str
-
+class DTSLOG(logging.LoggerAdapter):
+    """DTS logger adapter class for framework and testsuites.
 
-# List for saving all using loggers
-Loggers: list[LoggerDictType] = []
+    The :option:`--verbose` command line argument and the :envvar:`DTS_VERBOSE` environment
+    variable control the verbosity of output. If enabled, all messages will be emitted to the
+    console.
 
+    The :option:`--output` command line argument and the :envvar:`DTS_OUTPUT_DIR` environment
+    variable modify the directory where the logs will be stored.
 
-class DTSLOG(logging.LoggerAdapter):
-    """
-    DTS log class for framework and testsuite.
+    Attributes:
+        node: The additional identifier. Currently unused.
+        sh: The handler which emits logs to console.
+        fh: The handler which emits logs to a file.
+        verbose_fh: Just like fh, but logs with a different, more verbose format.
     """
 
     _logger: logging.Logger
@@ -40,6 +42,15 @@ class DTSLOG(logging.LoggerAdapter):
     verbose_fh: logging.FileHandler
 
     def __init__(self, logger: logging.Logger, node: str = "suite"):
+        """Extend the constructor with additional handlers.
+
+        One handler logs to the console, the other one to a file, with either a regular or verbose
+        format.
+
+        Args:
+            logger: The logger from which to create the logger adapter.
+            node: An additional identifier. Currently unused.
+        """
         self._logger = logger
         # 1 means log everything, this will be used by file handlers if their level
         # is not set
@@ -92,26 +103,43 @@ def __init__(self, logger: logging.Logger, node: str = "suite"):
         super(DTSLOG, self).__init__(self._logger, dict(node=self.node))
 
     def logger_exit(self) -> None:
-        """
-        Remove stream handler and logfile handler.
-        """
+        """Remove the stream handler and the logfile handler."""
         for handler in (self.sh, self.fh, self.verbose_fh):
             handler.flush()
             self._logger.removeHandler(handler)
 
 
+class _LoggerDictType(TypedDict):
+    logger: DTSLOG
+    name: str
+    node: str
+
+
+# List for saving all loggers in use
+_Loggers: list[_LoggerDictType] = []
+
+
 def getLogger(name: str, node: str = "suite") -> DTSLOG:
+    """Get DTS logger adapter identified by name and node.
+
+    An existing logger will be returned if one with the exact name and node already exists.
+    A new one will be created and stored otherwise.
+
+    Args:
+        name: The name of the logger.
+        node: An additional identifier for the logger.
+
+    Returns:
+        A logger uniquely identified by both name and node.
     """
-    Get logger handler and if there's no handler for specified Node will create one.
-    """
-    global Loggers
+    global _Loggers
     # return saved logger
-    logger: LoggerDictType
-    for logger in Loggers:
+    logger: _LoggerDictType
+    for logger in _Loggers:
         if logger["name"] == name and logger["node"] == node:
             return logger["logger"]
 
     # return new logger
     dts_logger: DTSLOG = DTSLOG(logging.getLogger(name), node)
-    Loggers.append({"logger": dts_logger, "name": name, "node": node})
+    _Loggers.append({"logger": dts_logger, "name": name, "node": node})
     return dts_logger
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index f0c916471c..0613adf7ad 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -3,6 +3,16 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
+"""Various utility classes and functions.
+
+These are used in multiple modules across the framework. They're here because
+they provide some non-specific functionality, greatly simplify imports or just don't
+fit elsewhere.
+
+Attributes:
+    REGEX_FOR_PCI_ADDRESS: The regex representing a PCI address, e.g. ``0000:00:08.0``.
+"""
+
 import atexit
 import json
 import os
@@ -19,12 +29,20 @@
 
 
 def expand_range(range_str: str) -> list[int]:
-    """
-    Process range string into a list of integers. There are two possible formats:
-    n - a single integer
-    n-m - a range of integers
+    """Process `range_str` into a list of integers.
+
+    There are two possible formats of `range_str`:
+
+        * ``n`` - a single integer,
+        * ``n-m`` - a range of integers.
 
-    The returned range includes both n and m. Empty string returns an empty list.
+    The returned range includes both ``n`` and ``m``. Empty string returns an empty list.
+
+    Args:
+        range_str: The range to expand.
+
+    Returns:
+        All the numbers from the range.
     """
     expanded_range: list[int] = []
     if range_str:
@@ -39,6 +57,14 @@ def expand_range(range_str: str) -> list[int]:
 
 
 def get_packet_summaries(packets: list[Packet]) -> str:
+    """Format a string summary from `packets`.
+
+    Args:
+        packets: The packets to format.
+
+    Returns:
+        The summary of `packets`.
+    """
     if len(packets) == 1:
         packet_summaries = packets[0].summary()
     else:
@@ -49,6 +75,8 @@ def get_packet_summaries(packets: list[Packet]) -> str:
 
 
 class StrEnum(Enum):
+    """Enum with members stored as strings."""
+
     @staticmethod
     def _generate_next_value_(
         name: str, start: int, count: int, last_values: object
@@ -56,22 +84,29 @@ def _generate_next_value_(
         return name
 
     def __str__(self) -> str:
+        """The string representation is the name of the member."""
         return self.name
 
 
 class MesonArgs(object):
-    """
-    Aggregate the arguments needed to build DPDK:
-    default_library: Default library type, Meson allows "shared", "static" and "both".
-               Defaults to None, in which case the argument won't be used.
-    Keyword arguments: The arguments found in meson_options.txt in root DPDK directory.
-               Do not use -D with them, for example:
-               meson_args = MesonArgs(enable_kmods=True).
-    """
+    """Aggregate the arguments needed to build DPDK."""
 
     _default_library: str
 
     def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
+        """Initialize the meson arguments.
+
+        Args:
+            default_library: The default library type, Meson supports ``shared``, ``static`` and
+                ``both``. Defaults to :data:`None`, in which case the argument won't be used.
+            dpdk_args: The arguments found in ``meson_options.txt`` in the root DPDK directory.
+                Do not use ``-D`` with them.
+
+        Example:
+            ::
+
+                meson_args = MesonArgs(enable_kmods=True)
+        """
         self._default_library = (
             f"--default-library={default_library}" if default_library else ""
         )
@@ -83,6 +118,7 @@ def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
         )
 
     def __str__(self) -> str:
+        """The actual args."""
         return " ".join(f"{self._default_library} {self._dpdk_args}".split())
 
 
@@ -93,35 +129,33 @@ class _TarCompressionFormat(StrEnum):
     and Enum values are the associated file extensions.
     """
 
+    #:
     gzip = "gz"
+    #:
     compress = "Z"
+    #:
     bzip2 = "bz2"
+    #:
     lzip = "lz"
+    #:
     lzma = "lzma"
+    #:
     lzop = "lzo"
+    #:
     xz = "xz"
+    #:
     zstd = "zst"
 
 
 class DPDKGitTarball(object):
-    """Create a compressed tarball of DPDK from the repository.
-
-    The DPDK version is specified with git object git_ref.
-    The tarball will be compressed with _TarCompressionFormat,
-    which must be supported by the DTS execution environment.
-    The resulting tarball will be put into output_dir.
+    """Compressed tarball of DPDK from the repository.
 
-    The class supports the os.PathLike protocol,
+    The class supports the :class:`os.PathLike` protocol,
     which is used to get the Path of the tarball::
 
         from pathlib import Path
         tarball = DPDKGitTarball("HEAD", "output")
         tarball_path = Path(tarball)
-
-    Arguments:
-        git_ref: A git commit ID, tag ID or tree ID.
-        output_dir: The directory where to put the resulting tarball.
-        tar_compression_format: The compression format to use.
     """
 
     _git_ref: str
@@ -136,6 +170,17 @@ def __init__(
         output_dir: str,
         tar_compression_format: _TarCompressionFormat = _TarCompressionFormat.xz,
     ):
+        """Create the tarball during initialization.
+
+        The DPDK version is specified with `git_ref`. The tarball will be compressed with
+        `tar_compression_format`, which must be supported by the DTS execution environment.
+        The resulting tarball will be put into `output_dir`.
+
+        Args:
+            git_ref: A git commit ID, tag ID or tree ID.
+            output_dir: The directory where to put the resulting tarball.
+            tar_compression_format: The compression format to use.
+        """
         self._git_ref = git_ref
         self._tar_compression_format = tar_compression_format
 
@@ -204,4 +249,5 @@ def _delete_tarball(self) -> None:
             os.remove(self._tarball_path)
 
     def __fspath__(self) -> str:
+        """The os.PathLike protocol implementation."""
         return str(self._tarball_path)
-- 
2.34.1
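
The inclusive ``n`` / ``n-m`` contract documented for expand_range() can be
checked with a tiny re-implementation. This is an illustration of the documented
behavior only, not the code from utils.py::

    def expand_range(range_str: str) -> list[int]:
        # "n" expands to [n]; "n-m" expands to [n, ..., m] inclusive;
        # the empty string expands to an empty list.
        expanded: list[int] = []
        if range_str:
            bounds = range_str.split("-")
            expanded.extend(range(int(bounds[0]), int(bounds[-1]) + 1))
        return expanded

    assert expand_range("3") == [3]
    assert expand_range("1-4") == [1, 2, 3, 4]
    assert expand_range("") == []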


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v6 07/23] dts: dts runner and main docstring update
  2023-11-08 12:53         ` [PATCH v6 01/23] dts: code adjustments for " Juraj Linkeš
                             ` (4 preceding siblings ...)
  2023-11-08 12:53           ` [PATCH v6 06/23] dts: logger and " Juraj Linkeš
@ 2023-11-08 12:53           ` Juraj Linkeš
  2023-11-08 12:53           ` [PATCH v6 08/23] dts: test suite " Juraj Linkeš
                             ` (15 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-08 12:53 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/dts.py | 128 ++++++++++++++++++++++++++++++++++++-------
 dts/main.py          |   8 ++-
 2 files changed, 112 insertions(+), 24 deletions(-)

diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index 4c7fb0c40a..331fed7dc4 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -3,6 +3,33 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
+r"""Test suite runner module.
+
+A DTS run is split into stages:
+
+    #. Execution stage,
+    #. Build target stage,
+    #. Test suite stage,
+    #. Test case stage.
+
+The module is responsible for running tests on testbeds defined in the test run configuration.
+Each setup or teardown of each stage is recorded in a :class:`~framework.test_result.DTSResult` or
+one of its subclasses. The test case results are also recorded.
+
+If an error occurs, the current stage is aborted, the error is recorded and the run continues in
+the next iteration of the same stage. The return code is the highest `severity` of all
+:class:`~framework.exception.DTSError`\s.
+
+Example:
+    An error occurs in a build target setup. The current build target is aborted and the run
+    continues with the next build target. If the errored build target was the last one in the given
+    execution, the next execution begins.
+
+Attributes:
+    dts_logger: The logger instance used in this module.
+    result: The top level result used in the module.
+"""
+
 import sys
 
 from .config import (
@@ -23,9 +50,38 @@
 
 
 def run_all() -> None:
-    """
-    The main process of DTS. Runs all build targets in all executions from the main
-    config file.
+    """Run all build targets in all executions from the test run configuration.
+
+    Before running test suites, executions and build targets are first set up.
+    The executions and build targets defined in the test run configuration are iterated over.
+    The executions define which tests to run and where to run them, and build targets define
+    the DPDK build setup.
+
+    The test suites are set up for each execution/build target tuple and each scheduled
+    test case within the test suite is set up, executed and torn down. After all test cases
+    have been executed, the test suite is torn down and the next build target will be tested.
+
+    All the nested steps look like this:
+
+        #. Execution setup
+
+            #. Build target setup
+
+                #. Test suite setup
+
+                    #. Test case setup
+                    #. Test case logic
+                    #. Test case teardown
+
+                #. Test suite teardown
+
+            #. Build target teardown
+
+        #. Execution teardown
+
+    The test cases are filtered according to the specification in the test run configuration and
+    the :option:`--test-cases` command line argument or
+    the :envvar:`DTS_TESTCASES` environment variable.
     """
     global dts_logger
     global result
@@ -87,6 +143,8 @@ def run_all() -> None:
 
 
 def _check_dts_python_version() -> None:
+    """Check the required Python version - v3.10."""
+
     def RED(text: str) -> str:
         return f"\u001B[31;1m{str(text)}\u001B[0m"
 
@@ -111,9 +169,16 @@ def _run_execution(
     execution: ExecutionConfiguration,
     result: DTSResult,
 ) -> None:
-    """
-    Run the given execution. This involves running the execution setup as well as
-    running all build targets in the given execution.
+    """Run the given execution.
+
+    This involves running the execution setup as well as running all build targets
+    in the given execution. After that, execution teardown is run.
+
+    Args:
+        sut_node: The execution's SUT node.
+        tg_node: The execution's TG node.
+        execution: An execution's test run configuration.
+        result: The top level result object.
     """
     dts_logger.info(
         f"Running execution with SUT '{execution.system_under_test_node.name}'."
@@ -150,8 +215,18 @@ def _run_build_target(
     execution: ExecutionConfiguration,
     execution_result: ExecutionResult,
 ) -> None:
-    """
-    Run the given build target.
+    """Run the given build target.
+
+    This involves running the build target setup as well as running all test suites
+    in the execution in which the build target is defined.
+    After that, build target teardown is run.
+
+    Args:
+        sut_node: The execution's SUT node.
+        tg_node: The execution's TG node.
+        build_target: A build target's test run configuration.
+        execution: The build target's execution's test run configuration.
+        execution_result: The execution level result object associated with the execution.
     """
     dts_logger.info(f"Running build target '{build_target.name}'.")
     build_target_result = execution_result.add_build_target(build_target)
@@ -183,10 +258,17 @@ def _run_all_suites(
     execution: ExecutionConfiguration,
     build_target_result: BuildTargetResult,
 ) -> None:
-    """
-    Use the given build_target to run execution's test suites
-    with possibly only a subset of test cases.
-    If no subset is specified, run all test cases.
+    """Run the execution's (possibly a subset) test suites using the current build_target.
+
+    The function assumes the build target we're testing has already been built on the SUT node.
+    The current build target thus corresponds to the current DPDK build present on the SUT node.
+
+    Args:
+        sut_node: The execution's SUT node.
+        tg_node: The execution's TG node.
+        execution: The execution's test run configuration associated with the current build target.
+        build_target_result: The build target level result object associated
+            with the current build target.
     """
     end_build_target = False
     if not execution.skip_smoke_tests:
@@ -215,16 +297,22 @@ def _run_single_suite(
     build_target_result: BuildTargetResult,
     test_suite_config: TestSuiteConfig,
 ) -> None:
-    """Runs a single test suite.
+    """Run all test suite in a single test suite module.
+
+    The function assumes the build target we're testing has already been built on the SUT node.
+    The current build target thus corresponds to the current DPDK build present on the SUT node.
 
     Args:
-        sut_node: Node to run tests on.
-        execution: Execution the test case belongs to.
-        build_target_result: Build target configuration test case is run on
-        test_suite_config: Test suite configuration
+        sut_node: The execution's SUT node.
+        tg_node: The execution's TG node.
+        execution: The execution's test run configuration associated with the current build target.
+        build_target_result: The build target level result object associated
+            with the current build target.
+        test_suite_config: Test suite test run configuration specifying the test suite module
+            and possibly a subset of test cases of test suites in that module.
 
     Raises:
-        BlockingTestSuiteError: If a test suite that was marked as blocking fails.
+        BlockingTestSuiteError: If a blocking test suite fails.
     """
     try:
         full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}"
@@ -248,9 +336,7 @@ def _run_single_suite(
 
 
 def _exit_dts() -> None:
-    """
-    Process all errors and exit with the proper exit code.
-    """
+    """Process all errors and exit with the proper exit code."""
     result.process()
 
     if dts_logger:
diff --git a/dts/main.py b/dts/main.py
index 5d4714b0c3..f703615d11 100755
--- a/dts/main.py
+++ b/dts/main.py
@@ -4,9 +4,7 @@
 # Copyright(c) 2022 PANTHEON.tech s.r.o.
 # Copyright(c) 2022 University of New Hampshire
 
-"""
-A test framework for testing DPDK.
-"""
+"""The DTS executable."""
 
 import logging
 
@@ -17,6 +15,10 @@ def main() -> None:
     """Set DTS settings, then run DTS.
 
     The DTS settings are taken from the command line arguments and the environment variables.
+    The settings object is stored in the module-level variable settings.SETTINGS, which the entire
+    framework uses. After importing the module (or the variable), any changes to the variable are
+    not going to be reflected without a re-import. This means that the SETTINGS variable must
+    be modified before the settings module is imported anywhere else in the framework.
     """
     settings.SETTINGS = settings.get_settings()
     from framework import dts
-- 
2.34.1
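
The abort-and-continue flow described in the runner docstring boils down to a
try/except around each stage iteration. A minimal sketch, assuming stand-in
stubs (run_build_target and the build target names here are hypothetical, not
the DTS API)::

    class DTSError(Exception):
        severity = 1

    errors: list[Exception] = []

    def run_build_target(name: str) -> None:
        if name == "broken":
            raise DTSError(f"build target {name} failed to set up")

    for build_target in ("gcc-static", "broken", "clang-shared"):
        try:
            run_build_target(build_target)
        except DTSError as e:
            # Record the error and continue with the next build target.
            errors.append(e)

    # The return code is the highest severity of all recorded errors.
    assert max((e.severity for e in errors), default=0) == 1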


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v6 08/23] dts: test suite docstring update
  2023-11-08 12:53         ` [PATCH v6 01/23] dts: code adjustments for " Juraj Linkeš
                             ` (5 preceding siblings ...)
  2023-11-08 12:53           ` [PATCH v6 07/23] dts: dts runner and main " Juraj Linkeš
@ 2023-11-08 12:53           ` Juraj Linkeš
  2023-11-08 12:53           ` [PATCH v6 09/23] dts: test result " Juraj Linkeš
                             ` (14 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-08 12:53 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/test_suite.py | 223 +++++++++++++++++++++++++++---------
 1 file changed, 168 insertions(+), 55 deletions(-)

diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index d53553bf34..8daac35818 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -2,8 +2,19 @@
 # Copyright(c) 2010-2014 Intel Corporation
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
-Base class for creating DTS test cases.
+"""Features common to all test suites.
+
+The module defines the :class:`TestSuite` class which doesn't contain any test cases, and as such
+must be extended by subclasses which add test cases. The :class:`TestSuite` contains the basics
+needed by subclasses:
+
+    * Test suite and test case execution flow,
+    * Testbed (SUT, TG) configuration,
+    * Packet sending and verification,
+    * Test case verification.
+
+The module also defines a function, :func:`get_test_suites`,
+for gathering test suites from a Python module.
 """
 
 import importlib
@@ -31,25 +42,44 @@
 
 
 class TestSuite(object):
-    """
-    The base TestSuite class provides methods for handling basic flow of a test suite:
-    * test case filtering and collection
-    * test suite setup/cleanup
-    * test setup/cleanup
-    * test case execution
-    * error handling and results storage
-    Test cases are implemented by derived classes. Test cases are all methods
-    starting with test_, further divided into performance test cases
-    (starting with test_perf_) and functional test cases (all other test cases).
-    By default, all test cases will be executed. A list of testcase str names
-    may be specified in conf.yaml or on the command line
-    to filter which test cases to run.
-    The methods named [set_up|tear_down]_[suite|test_case] should be overridden
-    in derived classes if the appropriate suite/test case fixtures are needed.
+    """The base class with methods for handling the basic flow of a test suite.
+
+        * Test case filtering and collection,
+        * Test suite setup/cleanup,
+        * Test setup/cleanup,
+        * Test case execution,
+        * Error handling and results storage.
+
+    Test cases are implemented by subclasses. Test cases are all methods starting with ``test_``,
+    further divided into performance test cases (starting with ``test_perf_``)
+    and functional test cases (all other test cases).
+
+    By default, all test cases will be executed. A list of test case names may be specified
+    in the YAML test run configuration file and in the :option:`--test-cases` command line argument
+    or in the :envvar:`DTS_TESTCASES` environment variable to filter which test cases to run.
+    The union of both lists will be used. Any unknown test cases from the command line
+    or environment variable list will be silently ignored.
+
+    If the :option:`--re-run` command line argument or the :envvar:`DTS_RERUN` environment variable
+    is set, in case of a test case failure, the test case will be executed again until it passes
+    or it fails that many times in addition to the first failure.
+
+    The methods named ``[set_up|tear_down]_[suite|test_case]`` should be overridden in subclasses
+    if the appropriate test suite/test case fixtures are needed.
+
+    The test suite is aware of the testbed (the SUT and TG) it's running on. From this, it can
+    properly choose the IP addresses and other configuration that must be tailored to the testbed.
+
+    Attributes:
+        sut_node: The SUT node where the test suite is running.
+        tg_node: The TG node where the test suite is running.
+        is_blocking: Whether the test suite is blocking. A failure of a blocking test suite
+            will block the execution of all subsequent test suites in the current build target.
     """
 
     sut_node: SutNode
-    is_blocking = False
+    tg_node: TGNode
+    is_blocking: bool = False
     _logger: DTSLOG
     _test_cases_to_run: list[str]
     _func: bool
@@ -72,6 +102,19 @@ def __init__(
         func: bool,
         build_target_result: BuildTargetResult,
     ):
+        """Initialize the test suite testbed information and basic configuration.
+
+        Process what test cases to run, create the associated :class:`TestSuiteResult`,
+        find links between ports and set up default IP addresses to be used when configuring them.
+
+        Args:
+            sut_node: The SUT node where the test suite will run.
+            tg_node: The TG node where the test suite will run.
+            test_cases: The list of test cases to execute.
+                If empty, all test cases will be executed.
+            func: Whether to run functional tests.
+            build_target_result: The build target result this test suite is run in.
+        """
         self.sut_node = sut_node
         self.tg_node = tg_node
         self._logger = getLogger(self.__class__.__name__)
@@ -95,6 +138,7 @@ def __init__(
         self._tg_ip_address_ingress = ip_interface("192.168.101.3/24")
 
     def _process_links(self) -> None:
+        """Construct links between SUT and TG ports."""
         for sut_port in self.sut_node.ports:
             for tg_port in self.tg_node.ports:
                 if (sut_port.identifier, sut_port.peer) == (
@@ -106,27 +150,42 @@ def _process_links(self) -> None:
                     )
 
     def set_up_suite(self) -> None:
-        """
-        Set up test fixtures common to all test cases; this is done before
-        any test case is run.
+        """Set up test fixtures common to all test cases.
+
+        This is done before any test case has been run.
         """
 
     def tear_down_suite(self) -> None:
-        """
-        Tear down the previously created test fixtures common to all test cases.
+        """Tear down the previously created test fixtures common to all test cases.
+
+        This is done after all tests have been run.
         """
 
     def set_up_test_case(self) -> None:
-        """
-        Set up test fixtures before each test case.
+        """Set up test fixtures before each test case.
+
+        This is done before *each* test case.
         """
 
     def tear_down_test_case(self) -> None:
-        """
-        Tear down the previously created test fixtures after each test case.
+        """Tear down the previously created test fixtures after each test case.
+
+        This is done after *each* test case.
         """
 
     def configure_testbed_ipv4(self, restore: bool = False) -> None:
+        """Configure IPv4 addresses on all testbed ports.
+
+        The configured ports are:
+
+        * SUT ingress port,
+        * SUT egress port,
+        * TG ingress port,
+        * TG egress port.
+
+        Args:
+            restore: If :data:`True`, will remove the configuration instead.
+        """
         delete = True if restore else False
         enable = False if restore else True
         self._configure_ipv4_forwarding(enable)
@@ -153,11 +212,13 @@ def _configure_ipv4_forwarding(self, enable: bool) -> None:
     def send_packet_and_capture(
         self, packet: Packet, duration: float = 1
     ) -> list[Packet]:
-        """
-        Send a packet through the appropriate interface and
-        receive on the appropriate interface.
-        Modify the packet with l3/l2 addresses corresponding
-        to the testbed and desired traffic.
+        """Send and receive `packet` using the associated TG.
+
+        Send `packet` through the appropriate interface and receive on the appropriate interface.
+        Modify the packet with L2/L3 addresses corresponding to the testbed and desired traffic.
+
+        Returns:
+            A list of received packets.
         """
         packet = self._adjust_addresses(packet)
         return self.tg_node.send_packet_and_capture(
@@ -165,13 +226,25 @@ def send_packet_and_capture(
         )
 
     def get_expected_packet(self, packet: Packet) -> Packet:
+        """Inject the proper L2/L3 addresses into `packet`.
+
+        Args:
+            packet: The packet to modify.
+
+        Returns:
+            `packet` with injected L2/L3 addresses.
+        """
         return self._adjust_addresses(packet, expected=True)
 
     def _adjust_addresses(self, packet: Packet, expected: bool = False) -> Packet:
-        """
+        """L2 and L3 address additions in both directions.
+
         Assumptions:
-            Two links between SUT and TG, one link is TG -> SUT,
-            the other SUT -> TG.
+            Two links between SUT and TG, one link is TG -> SUT, the other SUT -> TG.
+
+        Args:
+            packet: The packet to modify.
+            expected: If True, the direction is SUT -> TG, otherwise the direction is TG -> SUT.
         """
         if expected:
             # The packet enters the TG from SUT
@@ -197,6 +270,19 @@ def _adjust_addresses(self, packet: Packet, expected: bool = False) -> Packet:
         return Ether(packet.build())
 
     def verify(self, condition: bool, failure_description: str) -> None:
+        """Verify `condition` and handle failures.
+
+        When `condition` is :data:`False`, raise an exception and log the last 10 commands
+        executed on both the SUT and TG.
+
+        Args:
+            condition: The condition to check.
+            failure_description: A short description of the failure
+                that will be stored in the raised exception.
+
+        Raises:
+            TestCaseVerifyError: `condition` is :data:`False`.
+        """
         if not condition:
             self._fail_test_case_verify(failure_description)
 
@@ -216,6 +302,19 @@ def _fail_test_case_verify(self, failure_description: str) -> None:
     def verify_packets(
         self, expected_packet: Packet, received_packets: list[Packet]
     ) -> None:
+        """Verify that `expected_packet` has been received.
+
+        Go through `received_packets` and check that `expected_packet` is among them.
+        If not, raise an exception and log the last 10 commands
+        executed on both the SUT and TG.
+
+        Args:
+            expected_packet: The packet we're expecting to receive.
+            received_packets: The packets where we're looking for `expected_packet`.
+
+        Raises:
+            TestCaseVerifyError: `expected_packet` is not among `received_packets`.
+        """
         for received_packet in received_packets:
             if self._compare_packets(expected_packet, received_packet):
                 break
@@ -303,10 +402,14 @@ def _verify_l3_packet(self, received_packet: IP, expected_packet: IP) -> bool:
         return True
 
     def run(self) -> None:
-        """
-        Setup, execute and teardown the whole suite.
-        Suite execution consists of running all test cases scheduled to be executed.
-        A test cast run consists of setup, execution and teardown of said test case.
+        """Set up, execute and tear down the whole suite.
+
+        Test suite execution consists of running all test cases scheduled to be executed.
+        A test case run consists of setup, execution and teardown of said test case.
+
+        Record the setup and the teardown and handle failures.
+
+        The list of scheduled test cases is constructed when creating the :class:`TestSuite` object.
         """
         test_suite_name = self.__class__.__name__
 
@@ -338,9 +441,7 @@ def run(self) -> None:
                 raise BlockingTestSuiteError(test_suite_name)
 
     def _execute_test_suite(self) -> None:
-        """
-        Execute all test cases scheduled to be executed in this suite.
-        """
+        """Execute all test cases scheduled to be executed in this suite."""
         if self._func:
             for test_case_method in self._get_functional_test_cases():
                 test_case_name = test_case_method.__name__
@@ -357,14 +458,18 @@ def _execute_test_suite(self) -> None:
                     self._run_test_case(test_case_method, test_case_result)
 
     def _get_functional_test_cases(self) -> list[MethodType]:
-        """
-        Get all functional test cases.
+        """Get all functional test cases defined in this TestSuite.
+
+        Returns:
+            The list of functional test cases of this TestSuite.
         """
         return self._get_test_cases(r"test_(?!perf_)")
 
     def _get_test_cases(self, test_case_regex: str) -> list[MethodType]:
-        """
-        Return a list of test cases matching test_case_regex.
+        """Return a list of test cases matching test_case_regex.
+
+        Returns:
+            The list of test cases matching test_case_regex of this TestSuite.
         """
         self._logger.debug(f"Searching for test cases in {self.__class__.__name__}.")
         filtered_test_cases = []
@@ -378,9 +483,7 @@ def _get_test_cases(self, test_case_regex: str) -> list[MethodType]:
         return filtered_test_cases
 
     def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool:
-        """
-        Check whether the test case should be executed.
-        """
+        """Check whether the test case should be scheduled to be executed."""
         match = bool(re.match(test_case_regex, test_case_name))
         if self._test_cases_to_run:
             return match and test_case_name in self._test_cases_to_run
@@ -390,9 +493,9 @@ def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool
     def _run_test_case(
         self, test_case_method: MethodType, test_case_result: TestCaseResult
     ) -> None:
-        """
-        Setup, execute and teardown a test case in this suite.
-        Exceptions are caught and recorded in logs and results.
+        """Setup, execute and teardown a test case in this suite.
+
+        Record the result of the setup and the teardown and handle failures.
         """
         test_case_name = test_case_method.__name__
 
@@ -427,9 +530,7 @@ def _run_test_case(
     def _execute_test_case(
         self, test_case_method: MethodType, test_case_result: TestCaseResult
     ) -> None:
-        """
-        Execute one test case and handle failures.
-        """
+        """Execute one test case, record the result and handle failures."""
         test_case_name = test_case_method.__name__
         try:
             self._logger.info(f"Starting test case execution: {test_case_name}")
@@ -452,6 +553,18 @@ def _execute_test_case(
 
 
 def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
+    r"""Find all :class:`TestSuite`\s in a Python module.
+
+    Args:
+        testsuite_module_path: The path to the Python module.
+
+    Returns:
+        The list of :class:`TestSuite`\s found within the Python module.
+
+    Raises:
+        ConfigurationError: The test suite module was not found.
+    """
+
     def is_test_suite(object: Any) -> bool:
         try:
             if issubclass(object, TestSuite) and object is not TestSuite:
-- 
2.34.1
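
The test case collection described above hinges on the ``r"test_(?!perf_)"``
regex visible in _get_functional_test_cases. A standalone check, with
hypothetical method names, shows how the negative lookahead splits functional
from performance cases::

    import re

    methods = ["test_hello", "test_perf_throughput", "tear_down_suite", "test_world"]

    # Functional test cases: start with "test_" but not with "test_perf_".
    functional = [m for m in methods if re.match(r"test_(?!perf_)", m)]
    assert functional == ["test_hello", "test_world"]

    # Performance test cases: start with "test_perf_".
    performance = [m for m in methods if re.match(r"test_perf_", m)]
    assert performance == ["test_perf_throughput"]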


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v6 09/23] dts: test result docstring update
  2023-11-08 12:53         ` [PATCH v6 01/23] dts: code adjustments for " Juraj Linkeš
                             ` (6 preceding siblings ...)
  2023-11-08 12:53           ` [PATCH v6 08/23] dts: test suite " Juraj Linkeš
@ 2023-11-08 12:53           ` Juraj Linkeš
  2023-11-08 12:53           ` [PATCH v6 10/23] dts: config " Juraj Linkeš
                             ` (13 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-08 12:53 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/test_result.py | 292 ++++++++++++++++++++++++++++-------
 1 file changed, 234 insertions(+), 58 deletions(-)

diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
index 603e18872c..f553948454 100644
--- a/dts/framework/test_result.py
+++ b/dts/framework/test_result.py
@@ -2,8 +2,25 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
-"""
-Generic result container and reporters
+r"""Record and process DTS results.
+
+The results are recorded in a hierarchical manner:
+
+    * :class:`DTSResult` contains
+    * :class:`ExecutionResult` contains
+    * :class:`BuildTargetResult` contains
+    * :class:`TestSuiteResult` contains
+    * :class:`TestCaseResult`
+
+Each result may contain multiple lower level results, e.g. there are multiple
+:class:`TestSuiteResult`\s in a :class:`BuildTargetResult`.
+The results have common parts, such as setup and teardown results, captured in :class:`BaseResult`,
+which also defines some common behaviors in its methods.
+
+Each result class has its own idiosyncrasies which they implement in overridden methods.
+
+The :option:`--output` command line argument and the :envvar:`DTS_OUTPUT_DIR` environment
+variable modify the directory where the files with results will be stored.
 """
 
 import os.path
@@ -26,26 +43,34 @@
 
 
 class Result(Enum):
-    """
-    An Enum defining the possible states that
-    a setup, a teardown or a test case may end up in.
-    """
+    """The possible states that a setup, a teardown or a test case may end up in."""
 
+    #:
     PASS = auto()
+    #:
     FAIL = auto()
+    #:
     ERROR = auto()
+    #:
     SKIP = auto()
 
     def __bool__(self) -> bool:
+        """Only PASS is True."""
         return self is self.PASS
 
 
 class FixtureResult(object):
-    """
-    A record that stored the result of a setup or a teardown.
-    The default is FAIL because immediately after creating the object
-    the setup of the corresponding stage will be executed, which also guarantees
-    the execution of teardown.
+    """A record that stores the result of a setup or a teardown.
+
+    FAIL is a sensible default since it prevents false positives
+    (which could happen if the default was PASS).
+
+    Preventing false positives or other false results is preferable since a failure
+    is most likely to be investigated (the other false results may not be investigated at all).
+
+    Attributes:
+        result: The associated result.
+        error: The error in case of a failure.
     """
 
     result: Result
@@ -56,21 +81,32 @@ def __init__(
         result: Result = Result.FAIL,
         error: Exception | None = None,
     ):
+        """Initialize the constructor with the fixture result and store a possible error.
+
+        Args:
+            result: The result to store.
+            error: The error which happened when a failure occurred.
+        """
         self.result = result
         self.error = error
 
     def __bool__(self) -> bool:
+        """A wrapper around the stored :class:`Result`."""
         return bool(self.result)
 
 
 class Statistics(dict):
-    """
-    A helper class used to store the number of test cases by its result
-    along a few other basic information.
-    Using a dict provides a convenient way to format the data.
+    """How many test cases ended in which result state along some other basic information.
+
+    Subclassing :class:`dict` provides a convenient way to format the data.
     """
 
     def __init__(self, dpdk_version: str | None):
+        """Extend the constructor with relevant keys.
+
+        Args:
+            dpdk_version: The version of tested DPDK.
+        """
         super(Statistics, self).__init__()
         for result in Result:
             self[result.name] = 0
@@ -78,8 +114,17 @@ def __init__(self, dpdk_version: str | None):
         self["DPDK VERSION"] = dpdk_version
 
     def __iadd__(self, other: Result) -> "Statistics":
-        """
-        Add a Result to the final count.
+        """Add a Result to the final count.
+
+        Example:
+            stats: Statistics = Statistics(None)  # empty Statistics
+            stats += Result.PASS  # add a Result to `stats`
+
+        Args:
+            other: The Result to add to this statistics object.
+
+        Returns:
+            The modified statistics object.
         """
         self[other.name] += 1
         self["PASS RATE"] = (
@@ -90,9 +135,7 @@ def __iadd__(self, other: Result) -> "Statistics":
         return self
 
     def __str__(self) -> str:
-        """
-        Provide a string representation of the data.
-        """
+        """Each line contains the formatted key = value pair."""
         stats_str = ""
         for key, value in self.items():
             stats_str += f"{key:<12} = {value}\n"
@@ -102,10 +145,16 @@ def __str__(self) -> str:
 
 
 class BaseResult(object):
-    """
-    The Base class for all results. Stores the results of
-    the setup and teardown portions of the corresponding stage
-    and a list of results from each inner stage in _inner_results.
+    """Common data and behavior of DTS results.
+
+    Stores the results of the setup and teardown portions of the corresponding stage.
+    The hierarchical nature of DTS results is captured recursively in an internal list.
+    A stage is each level in this particular hierarchy (pre-execution or the top-most level,
+    execution, build target, test suite and test case.)
+
+    Attributes:
+        setup_result: The result of the setup of the particular stage.
+        teardown_result: The results of the teardown of the particular stage.
     """
 
     setup_result: FixtureResult
@@ -113,15 +162,28 @@ class BaseResult(object):
     _inner_results: MutableSequence["BaseResult"]
 
     def __init__(self):
+        """Initialize the constructor."""
         self.setup_result = FixtureResult()
         self.teardown_result = FixtureResult()
         self._inner_results = []
 
     def update_setup(self, result: Result, error: Exception | None = None) -> None:
+        """Store the setup result.
+
+        Args:
+            result: The result of the setup.
+            error: The error that occurred in case of a failure.
+        """
         self.setup_result.result = result
         self.setup_result.error = error
 
     def update_teardown(self, result: Result, error: Exception | None = None) -> None:
+        """Store the teardown result.
+
+        Args:
+            result: The result of the teardown.
+            error: The error that occurred in case of a failure.
+        """
         self.teardown_result.result = result
         self.teardown_result.error = error
 
@@ -141,27 +203,55 @@ def _get_inner_errors(self) -> list[Exception]:
         ]
 
     def get_errors(self) -> list[Exception]:
+        """Compile errors from the whole result hierarchy.
+
+        Returns:
+            The errors from setup, teardown and all errors found in the whole result hierarchy.
+        """
         return self._get_setup_teardown_errors() + self._get_inner_errors()
 
     def add_stats(self, statistics: Statistics) -> None:
+        """Collate stats from the whole result hierarchy.
+
+        Args:
+            statistics: The :class:`Statistics` object where the stats will be collated.
+        """
         for inner_result in self._inner_results:
             inner_result.add_stats(statistics)
 
 
 class TestCaseResult(BaseResult, FixtureResult):
-    """
-    The test case specific result.
-    Stores the result of the actual test case.
-    Also stores the test case name.
+    r"""The test case specific result.
+
+    Stores the result of the actual test case. This is done by adding an extra superclass
+    in :class:`FixtureResult`. The setup and teardown results are :class:`FixtureResult`\s and
+    the class is itself a record of the test case.
+
+    Attributes:
+        test_case_name: The test case name.
     """
 
     test_case_name: str
 
     def __init__(self, test_case_name: str):
+        """Extend the constructor with `test_case_name`.
+
+        Args:
+            test_case_name: The test case's name.
+        """
         super(TestCaseResult, self).__init__()
         self.test_case_name = test_case_name
 
     def update(self, result: Result, error: Exception | None = None) -> None:
+        """Update the test case result.
+
+        This updates the result of the test case itself and doesn't affect
+        the results of the setup and teardown steps in any way.
+
+        Args:
+            result: The result of the test case.
+            error: The error that occurred in case of a failure.
+        """
         self.result = result
         self.error = error
 
@@ -171,38 +261,66 @@ def _get_inner_errors(self) -> list[Exception]:
         return []
 
     def add_stats(self, statistics: Statistics) -> None:
+        r"""Add the test case result to statistics.
+
+        The base method goes through the hierarchy recursively and this method is here to stop
+        the recursion, as the :class:`TestCaseResult`\s are the leaves of the hierarchy tree.
+
+        Args:
+            statistics: The :class:`Statistics` object where the stats will be added.
+        """
         statistics += self.result
 
     def __bool__(self) -> bool:
+        """The test case passed only if setup, teardown and the test case itself passed."""
         return (
             bool(self.setup_result) and bool(self.teardown_result) and bool(self.result)
         )
 
 
 class TestSuiteResult(BaseResult):
-    """
-    The test suite specific result.
-    The _inner_results list stores results of test cases in a given test suite.
-    Also stores the test suite name.
+    """The test suite specific result.
+
+    The internal list stores the results of all test cases in a given test suite.
+
+    Attributes:
+        suite_name: The test suite name.
     """
 
     suite_name: str
 
     def __init__(self, suite_name: str):
+        """Extend the constructor with `suite_name`.
+
+        Args:
+            suite_name: The test suite's name.
+        """
         super(TestSuiteResult, self).__init__()
         self.suite_name = suite_name
 
     def add_test_case(self, test_case_name: str) -> TestCaseResult:
+        """Add and return the inner result (test case).
+
+        Returns:
+            The test case's result.
+        """
         test_case_result = TestCaseResult(test_case_name)
         self._inner_results.append(test_case_result)
         return test_case_result
 
 
 class BuildTargetResult(BaseResult):
-    """
-    The build target specific result.
-    The _inner_results list stores results of test suites in a given build target.
-    Also stores build target specifics, such as compiler used to build DPDK.
+    """The build target specific result.
+
+    The internal list stores the results of all test suites in a given build target.
+
+    Attributes:
+        arch: The DPDK build target architecture.
+        os: The DPDK build target operating system.
+        cpu: The DPDK build target CPU.
+        compiler: The DPDK build target compiler.
+        compiler_version: The DPDK build target compiler version.
+        dpdk_version: The built DPDK version.
     """
 
     arch: Architecture
@@ -213,6 +331,11 @@ class BuildTargetResult(BaseResult):
     dpdk_version: str | None
 
     def __init__(self, build_target: BuildTargetConfiguration):
+        """Extend the constructor with the `build_target`'s build target config.
+
+        Args:
+            build_target: The build target's test run configuration.
+        """
         super(BuildTargetResult, self).__init__()
         self.arch = build_target.arch
         self.os = build_target.os
@@ -222,20 +345,35 @@ def __init__(self, build_target: BuildTargetConfiguration):
         self.dpdk_version = None
 
     def add_build_target_info(self, versions: BuildTargetInfo) -> None:
+        """Add information about the build target gathered at runtime.
+
+        Args:
+            versions: The additional information.
+        """
         self.compiler_version = versions.compiler_version
         self.dpdk_version = versions.dpdk_version
 
     def add_test_suite(self, test_suite_name: str) -> TestSuiteResult:
+        """Add and return the inner result (test suite).
+
+        Returns:
+            The test suite's result.
+        """
         test_suite_result = TestSuiteResult(test_suite_name)
         self._inner_results.append(test_suite_result)
         return test_suite_result
 
 
 class ExecutionResult(BaseResult):
-    """
-    The execution specific result.
-    The _inner_results list stores results of build targets in a given execution.
-    Also stores the SUT node configuration.
+    """The execution specific result.
+
+    The internal list stores the results of all build targets in a given execution.
+
+    Attributes:
+        sut_node: The SUT node used in the execution.
+        sut_os_name: The operating system of the SUT node.
+        sut_os_version: The operating system version of the SUT node.
+        sut_kernel_version: The operating system kernel version of the SUT node.
     """
 
     sut_node: NodeConfiguration
@@ -244,36 +382,55 @@ class ExecutionResult(BaseResult):
     sut_kernel_version: str
 
     def __init__(self, sut_node: NodeConfiguration):
+        """Extend the constructor with the `sut_node`'s config.
+
+        Args:
+            sut_node: The SUT node's test run configuration used in the execution.
+        """
         super(ExecutionResult, self).__init__()
         self.sut_node = sut_node
 
     def add_build_target(
         self, build_target: BuildTargetConfiguration
     ) -> BuildTargetResult:
+        """Add and return the inner result (build target).
+
+        Args:
+            build_target: The build target's test run configuration.
+
+        Returns:
+            The build target's result.
+        """
         build_target_result = BuildTargetResult(build_target)
         self._inner_results.append(build_target_result)
         return build_target_result
 
     def add_sut_info(self, sut_info: NodeInfo) -> None:
+        """Add SUT information gathered at runtime.
+
+        Args:
+            sut_info: The additional SUT node information.
+        """
         self.sut_os_name = sut_info.os_name
         self.sut_os_version = sut_info.os_version
         self.sut_kernel_version = sut_info.kernel_version
 
 
 class DTSResult(BaseResult):
-    """
-    Stores environment information and test results from a DTS run, which are:
-    * Execution level information, such as SUT and TG hardware.
-    * Build target level information, such as compiler, target OS and cpu.
-    * Test suite results.
-    * All errors that are caught and recorded during DTS execution.
+    """Stores environment information and test results from a DTS run.
 
-    The information is stored in nested objects.
+        * Execution level information, such as the testbed and the test suite list,
+        * Build target level information, such as compiler, target OS and cpu,
+        * Test suite and test case results,
+        * All errors that are caught and recorded during DTS execution.
 
-    The class is capable of computing the return code used to exit DTS with
-    from the stored error.
+    The information is stored hierarchically. This is the first level of the hierarchy
+    and as such is where the data from the whole hierarchy is collated or processed.
 
-    It also provides a brief statistical summary of passed/failed test cases.
+    The internal list stores the results of all executions.
+
+    Attributes:
+        dpdk_version: The DPDK version to record.
     """
 
     dpdk_version: str | None
@@ -284,6 +441,11 @@ class DTSResult(BaseResult):
     _stats_filename: str
 
     def __init__(self, logger: DTSLOG):
+        """Extend the constructor with top-level specifics.
+
+        Args:
+            logger: The logger instance the whole result will use.
+        """
         super(DTSResult, self).__init__()
         self.dpdk_version = None
         self._logger = logger
@@ -293,21 +455,33 @@ def __init__(self, logger: DTSLOG):
         self._stats_filename = os.path.join(SETTINGS.output_dir, "statistics.txt")
 
     def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult:
+        """Add and return the inner result (execution).
+
+        Args:
+            sut_node: The SUT node's test run configuration.
+
+        Returns:
+            The execution's result.
+        """
         execution_result = ExecutionResult(sut_node)
         self._inner_results.append(execution_result)
         return execution_result
 
     def add_error(self, error: Exception) -> None:
+        """Record an error that occurred outside any execution.
+
+        Args:
+            error: The exception to record.
+        """
         self._errors.append(error)
 
     def process(self) -> None:
-        """
-        Process the data after a DTS run.
-        The data is added to nested objects during runtime and this parent object
-        is not updated at that time. This requires us to process the nested data
-        after it's all been gathered.
+        """Process the data after a whole DTS run.
+
+        The data is added to inner objects during runtime and this object is not updated
+        at that time. This requires us to process the inner data after it's all been gathered.
 
-        The processing gathers all errors and the result statistics of test cases.
+        The processing gathers all errors and the statistics of test case results.
         """
         self._errors += self.get_errors()
         if self._errors and self._logger:
@@ -321,8 +495,10 @@ def process(self) -> None:
             stats_file.write(str(self._stats_result))
 
     def get_return_code(self) -> int:
-        """
-        Go through all stored Exceptions and return the highest error code found.
+        """Go through all stored Exceptions and return the final DTS error code.
+
+        Returns:
+            The highest error code found.
         """
         for error in self._errors:
             error_return_code = ErrorSeverity.GENERIC_ERR
-- 
2.34.1
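
For illustration, a sketch of the "highest severity wins" aggregation that
get_return_code() describes. Only ErrorSeverity.GENERIC_ERR appears in the patch;
the enum being an IntEnum and the other members are assumptions made here:

    from enum import IntEnum

    class ErrorSeverity(IntEnum):
        NO_ERR = 0
        GENERIC_ERR = 1
        SSH_ERR = 2  # hypothetical member for illustration

    def get_return_code(errors: list[Exception]) -> int:
        return_code = ErrorSeverity.NO_ERR
        for error in errors:
            # An exception may carry its own severity; default to GENERIC_ERR.
            severity = getattr(error, "severity", ErrorSeverity.GENERIC_ERR)
            return_code = max(return_code, severity)
        return int(return_code)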


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v6 10/23] dts: config docstring update
  2023-11-08 12:53         ` [PATCH v6 01/23] dts: code adjustments for " Juraj Linkeš
                             ` (7 preceding siblings ...)
  2023-11-08 12:53           ` [PATCH v6 09/23] dts: test result " Juraj Linkeš
@ 2023-11-08 12:53           ` Juraj Linkeš
  2023-11-08 12:53           ` [PATCH v6 11/23] dts: remote session " Juraj Linkeš
                             ` (12 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-08 12:53 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/config/__init__.py | 371 ++++++++++++++++++++++++++-----
 dts/framework/config/types.py    | 132 +++++++++++
 2 files changed, 446 insertions(+), 57 deletions(-)
 create mode 100644 dts/framework/config/types.py

diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 2044c82611..0aa149a53d 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -3,8 +3,34 @@
 # Copyright(c) 2022-2023 University of New Hampshire
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
-Yaml config parsing methods
+"""Testbed configuration and test suite specification.
+
+This package offers classes that hold real-time information about the testbed, hold test run
+configuration describing the tested testbed and a loader function, :func:`load_config`, which loads
+the YAML test run configuration file
+and validates it according to :download:`the schema <conf_yaml_schema.json>`.
+
+The YAML test run configuration file is parsed into a dictionary, parts of which are used throughout
+this package. The allowed keys and types inside this dictionary are defined in
+the :doc:`types <framework.config.types>` module.
+
+The test run configuration has two main sections:
+
+    * The :class:`ExecutionConfiguration` which defines what tests are going to be run
+      and how DPDK will be built. It also references the testbed where these tests and DPDK
+      are going to be run,
+    * The nodes of the testbed are defined in the other section,
+      a :class:`list` of :class:`NodeConfiguration` objects.
+
+The real-time information about the testbed is supposed to be gathered at runtime.
+
+The classes defined in this package make heavy use of :mod:`dataclasses`.
+All of them use slots and are frozen:
+
+    * Slots enable some optimizations by pre-allocating space for the defined
+      attributes in the underlying data structure,
+    * Frozen makes the object immutable. This enables further optimizations,
+      and makes it thread safe should we ever want to move in that direction.
 """
 
 import json
@@ -12,11 +38,20 @@
 import pathlib
 from dataclasses import dataclass
 from enum import auto, unique
-from typing import Any, TypedDict, Union
+from typing import Union
 
 import warlock  # type: ignore[import]
 import yaml
 
+from framework.config.types import (
+    BuildTargetConfigDict,
+    ConfigurationDict,
+    ExecutionConfigDict,
+    NodeConfigDict,
+    PortConfigDict,
+    TestSuiteConfigDict,
+    TrafficGeneratorConfigDict,
+)
 from framework.exception import ConfigurationError
 from framework.settings import SETTINGS
 from framework.utils import StrEnum
@@ -24,55 +59,97 @@
 
 @unique
 class Architecture(StrEnum):
+    r"""The supported architectures of :class:`~framework.testbed_model.node.Node`\s."""
+
+    #:
     i686 = auto()
+    #:
     x86_64 = auto()
+    #:
     x86_32 = auto()
+    #:
     arm64 = auto()
+    #:
     ppc64le = auto()
 
 
 @unique
 class OS(StrEnum):
+    r"""The supported operating systems of :class:`~framework.testbed_model.node.Node`\s."""
+
+    #:
     linux = auto()
+    #:
     freebsd = auto()
+    #:
     windows = auto()
 
 
 @unique
 class CPUType(StrEnum):
+    r"""The supported CPUs of :class:`~framework.testbed_model.node.Node`\s."""
+
+    #:
     native = auto()
+    #:
     armv8a = auto()
+    #:
     dpaa2 = auto()
+    #:
     thunderx = auto()
+    #:
     xgene1 = auto()
 
 
 @unique
 class Compiler(StrEnum):
+    r"""The supported compilers of :class:`~framework.testbed_model.node.Node`\s."""
+
+    #:
     gcc = auto()
+    #:
     clang = auto()
+    #:
     icc = auto()
+    #:
     msvc = auto()
 
 
 @unique
 class TrafficGeneratorType(StrEnum):
+    """The supported traffic generators."""
+
+    #:
     SCAPY = auto()
 
 
-# Slots enables some optimizations, by pre-allocating space for the defined
-# attributes in the underlying data structure.
-#
-# Frozen makes the object immutable. This enables further optimizations,
-# and makes it thread safe should we every want to move in that direction.
 @dataclass(slots=True, frozen=True)
 class HugepageConfiguration:
+    r"""The hugepage configuration of :class:`~framework.testbed_model.node.Node`\s.
+
+    Attributes:
+        amount: The number of hugepages.
+        force_first_numa: If :data:`True`, the hugepages will be configured on the first NUMA node.
+    """
+
     amount: int
     force_first_numa: bool
 
 
 @dataclass(slots=True, frozen=True)
 class PortConfig:
+    r"""The port configuration of :class:`~framework.testbed_model.node.Node`\s.
+
+    Attributes:
+        node: The :class:`~framework.testbed_model.node.Node` where this port exists.
+        pci: The PCI address of the port.
+        os_driver_for_dpdk: The operating system driver name for use with DPDK.
+        os_driver: The operating system driver name when the operating system controls the port.
+        peer_node: The :class:`~framework.testbed_model.node.Node` of the port
+            connected to this port.
+        peer_pci: The PCI address of the port connected to this port.
+    """
+
     node: str
     pci: str
     os_driver_for_dpdk: str
@@ -81,18 +158,44 @@ class PortConfig:
     peer_pci: str
 
     @staticmethod
-    def from_dict(node: str, d: dict) -> "PortConfig":
+    def from_dict(node: str, d: PortConfigDict) -> "PortConfig":
+        """A convenience method that creates the object from fewer inputs.
+
+        Args:
+            node: The node where this port exists.
+            d: The configuration dictionary.
+
+        Returns:
+            The port configuration instance.
+        """
         return PortConfig(node=node, **d)
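
A usage sketch with hypothetical values; the keys mirror PortConfigDict:

    port = PortConfig.from_dict(
        "sut1",
        {
            "pci": "0000:00:08.0",
            "os_driver_for_dpdk": "vfio-pci",
            "os_driver": "i40e",
            "peer_node": "tg1",
            "peer_pci": "0000:00:08.1",
        },
    )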
 
 
 @dataclass(slots=True, frozen=True)
 class TrafficGeneratorConfig:
+    """The configuration of traffic generators.
+
+    The class will be expanded when more configuration is needed.
+
+    Attributes:
+        traffic_generator_type: The type of the traffic generator.
+    """
+
     traffic_generator_type: TrafficGeneratorType
 
     @staticmethod
-    def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig":
-        # This looks useless now, but is designed to allow expansion to traffic
-        # generators that require more configuration later.
+    def from_dict(d: TrafficGeneratorConfigDict) -> "ScapyTrafficGeneratorConfig":
+        """A convenience method that produces traffic generator config of the proper type.
+
+        Args:
+            d: The configuration dictionary.
+
+        Returns:
+            The traffic generator configuration instance.
+
+        Raises:
+            ConfigurationError: An unknown traffic generator type was encountered.
+        """
         match TrafficGeneratorType(d["type"]):
             case TrafficGeneratorType.SCAPY:
                 return ScapyTrafficGeneratorConfig(
@@ -106,11 +209,31 @@ def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig":
 
 @dataclass(slots=True, frozen=True)
 class ScapyTrafficGeneratorConfig(TrafficGeneratorConfig):
+    """Scapy traffic generator specific configuration."""
+
     pass
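
A usage sketch of the factory dispatch; SCAPY is the only supported type at this
point and the dictionary value is assumed to match the schema:

    config = TrafficGeneratorConfig.from_dict({"type": "SCAPY"})
    assert isinstance(config, ScapyTrafficGeneratorConfig)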
 
 
 @dataclass(slots=True, frozen=True)
 class NodeConfiguration:
+    r"""The configuration of :class:`~framework.testbed_model.node.Node`\s.
+
+    Attributes:
+        name: The name of the :class:`~framework.testbed_model.node.Node`.
+        hostname: The hostname of the :class:`~framework.testbed_model.node.Node`.
+            Can be an IP or a domain name.
+        user: The name of the user used to connect to
+            the :class:`~framework.testbed_model.node.Node`.
+        password: The password of the user. The use of passwords is heavily discouraged.
+            Please use keys instead.
+        arch: The architecture of the :class:`~framework.testbed_model.node.Node`.
+        os: The operating system of the :class:`~framework.testbed_model.node.Node`.
+        lcores: A comma delimited list of logical cores to use when running DPDK.
+        use_first_core: If :data:`True`, the first logical core won't be used.
+        hugepages: An optional hugepage configuration.
+        ports: The ports that can be used in testing.
+    """
+
     name: str
     hostname: str
     user: str
@@ -123,57 +246,91 @@ class NodeConfiguration:
     ports: list[PortConfig]
 
     @staticmethod
-    def from_dict(d: dict) -> Union["SutNodeConfiguration", "TGNodeConfiguration"]:
-        hugepage_config = d.get("hugepages")
-        if hugepage_config:
-            if "force_first_numa" not in hugepage_config:
-                hugepage_config["force_first_numa"] = False
-            hugepage_config = HugepageConfiguration(**hugepage_config)
-
-        common_config = {
-            "name": d["name"],
-            "hostname": d["hostname"],
-            "user": d["user"],
-            "password": d.get("password"),
-            "arch": Architecture(d["arch"]),
-            "os": OS(d["os"]),
-            "lcores": d.get("lcores", "1"),
-            "use_first_core": d.get("use_first_core", False),
-            "hugepages": hugepage_config,
-            "ports": [PortConfig.from_dict(d["name"], port) for port in d["ports"]],
-        }
-
+    def from_dict(
+        d: NodeConfigDict,
+    ) -> Union["SutNodeConfiguration", "TGNodeConfiguration"]:
+        """A convenience method that processes the inputs before creating a specialized instance.
+
+        Args:
+            d: The configuration dictionary.
+
+        Returns:
+            Either an SUT or TG configuration instance.
+        """
+        hugepage_config = None
+        if "hugepages" in d:
+            hugepage_config_dict = d["hugepages"]
+            if "force_first_numa" not in hugepage_config_dict:
+                hugepage_config_dict["force_first_numa"] = False
+            hugepage_config = HugepageConfiguration(**hugepage_config_dict)
+
+        # The calls below contain duplicated code, which is needed because Mypy
+        # doesn't properly support dictionary unpacking with TypedDicts.
         if "traffic_generator" in d:
             return TGNodeConfiguration(
+                name=d["name"],
+                hostname=d["hostname"],
+                user=d["user"],
+                password=d.get("password"),
+                arch=Architecture(d["arch"]),
+                os=OS(d["os"]),
+                lcores=d.get("lcores", "1"),
+                use_first_core=d.get("use_first_core", False),
+                hugepages=hugepage_config,
+                ports=[PortConfig.from_dict(d["name"], port) for port in d["ports"]],
                 traffic_generator=TrafficGeneratorConfig.from_dict(
                     d["traffic_generator"]
                 ),
-                **common_config,
             )
         else:
             return SutNodeConfiguration(
-                memory_channels=d.get("memory_channels", 1), **common_config
+                name=d["name"],
+                hostname=d["hostname"],
+                user=d["user"],
+                password=d.get("password"),
+                arch=Architecture(d["arch"]),
+                os=OS(d["os"]),
+                lcores=d.get("lcores", "1"),
+                use_first_core=d.get("use_first_core", False),
+                hugepages=hugepage_config,
+                ports=[PortConfig.from_dict(d["name"], port) for port in d["ports"]],
+                memory_channels=d.get("memory_channels", 1),
             )
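
A sketch of the dispatch on the "traffic_generator" key, using a hypothetical
minimal node entry (real configurations come from the YAML file):

    node_dict = {
        "name": "sut1",
        "hostname": "sut1.example.com",
        "user": "root",
        "arch": "x86_64",
        "os": "linux",
        "ports": [],
    }
    # No "traffic_generator" key, so an SutNodeConfiguration is created,
    # with memory_channels defaulting to 1.
    node = NodeConfiguration.from_dict(node_dict)
    assert isinstance(node, SutNodeConfiguration)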
 
 
 @dataclass(slots=True, frozen=True)
 class SutNodeConfiguration(NodeConfiguration):
+    """:class:`~framework.testbed_model.sut_node.SutNode` specific configuration.
+
+    Attributes:
+        memory_channels: The number of memory channels to use when running DPDK.
+    """
+
     memory_channels: int
 
 
 @dataclass(slots=True, frozen=True)
 class TGNodeConfiguration(NodeConfiguration):
+    """:class:`~framework.testbed_model.tg_node.TGNode` specific configuration.
+
+    Attributes:
+        traffic_generator: The configuration of the traffic generator present on the TG node.
+    """
+
     traffic_generator: ScapyTrafficGeneratorConfig
 
 
 @dataclass(slots=True, frozen=True)
 class NodeInfo:
-    """Class to hold important versions within the node.
-
-    This class, unlike the NodeConfiguration class, cannot be generated at the start.
-    This is because we need to initialize a connection with the node before we can
-    collect the information needed in this class. Therefore, it cannot be a part of
-    the configuration class above.
+    """Supplemental node information.
+
+    Attributes:
+        os_name: The name of the running operating system of
+            the :class:`~framework.testbed_model.node.Node`.
+        os_version: The version of the running operating system of
+            the :class:`~framework.testbed_model.node.Node`.
+        kernel_version: The kernel version of the running operating system of
+            the :class:`~framework.testbed_model.node.Node`.
     """
 
     os_name: str
@@ -183,6 +340,20 @@ class NodeInfo:
 
 @dataclass(slots=True, frozen=True)
 class BuildTargetConfiguration:
+    """DPDK build configuration.
+
+    The configuration used for building DPDK.
+
+    Attributes:
+        arch: The target architecture to build for.
+        os: The target os to build for.
+        cpu: The target CPU to build for.
+        compiler: The compiler executable to use.
+        compiler_wrapper: This string will be put in front of the compiler when
+            executing the build. Useful for adding wrapper commands, such as ``ccache``.
+        name: The name of the build target, constructed from the other attributes.
+    """
+
     arch: Architecture
     os: OS
     cpu: CPUType
@@ -191,7 +362,18 @@ class BuildTargetConfiguration:
     name: str
 
     @staticmethod
-    def from_dict(d: dict) -> "BuildTargetConfiguration":
+    def from_dict(d: BuildTargetConfigDict) -> "BuildTargetConfiguration":
+        r"""A convenience method that processes the inputs before creating an instance.
+
+        `arch`, `os`, `cpu` and `compiler` are converted to :class:`Enum`\s and
+        `name` is constructed from `arch`, `os`, `cpu` and `compiler`.
+
+        Args:
+            d: The configuration dictionary.
+
+        Returns:
+            The build target configuration instance.
+        """
         return BuildTargetConfiguration(
             arch=Architecture(d["arch"]),
             os=OS(d["os"]),
@@ -204,23 +386,29 @@ def from_dict(d: dict) -> "BuildTargetConfiguration":
 
 @dataclass(slots=True, frozen=True)
 class BuildTargetInfo:
-    """Class to hold important versions within the build target.
+    """Various versions and other information about a build target.
 
-    This is very similar to the NodeInfo class, it just instead holds information
-    for the build target.
+    Attributes:
+        dpdk_version: The DPDK version that was built.
+        compiler_version: The version of the compiler used to build DPDK.
     """
 
     dpdk_version: str
     compiler_version: str
 
 
-class TestSuiteConfigDict(TypedDict):
-    suite: str
-    cases: list[str]
-
-
 @dataclass(slots=True, frozen=True)
 class TestSuiteConfig:
+    """Test suite configuration.
+
+    Information about a single test suite to be executed.
+
+    Attributes:
+        test_suite: The name of the test suite module without the starting ``TestSuite_``.
+        test_cases: The names of test cases from this test suite to execute.
+            If empty, all test cases will be executed.
+    """
+
     test_suite: str
     test_cases: list[str]
 
@@ -228,6 +416,14 @@ class TestSuiteConfig:
     def from_dict(
         entry: str | TestSuiteConfigDict,
     ) -> "TestSuiteConfig":
+        """Create an instance from two different types.
+
+        Args:
+            entry: Either a suite name or a dictionary containing the config.
+
+        Returns:
+            The test suite configuration instance.
+        """
         if isinstance(entry, str):
             return TestSuiteConfig(test_suite=entry, test_cases=[])
         elif isinstance(entry, dict):
@@ -238,19 +434,49 @@ def from_dict(
 
 @dataclass(slots=True, frozen=True)
 class ExecutionConfiguration:
+    """The configuration of an execution.
+
+    The configuration contains testbed information, what tests to execute
+    and with what DPDK build.
+
+    Attributes:
+        build_targets: A list of DPDK builds to test.
+        perf: Whether to run performance tests.
+        func: Whether to run functional tests.
+        skip_smoke_tests: Whether to skip smoke tests.
+        test_suites: The names of test suites and/or test cases to execute.
+        system_under_test_node: The SUT node to use in this execution.
+        traffic_generator_node: The TG node to use in this execution.
+        vdevs: The names of virtual devices to test.
+    """
+
     build_targets: list[BuildTargetConfiguration]
     perf: bool
     func: bool
+    skip_smoke_tests: bool
     test_suites: list[TestSuiteConfig]
     system_under_test_node: SutNodeConfiguration
     traffic_generator_node: TGNodeConfiguration
     vdevs: list[str]
-    skip_smoke_tests: bool
 
     @staticmethod
     def from_dict(
-        d: dict, node_map: dict[str, Union[SutNodeConfiguration | TGNodeConfiguration]]
+        d: ExecutionConfigDict,
+        node_map: dict[str, Union[SutNodeConfiguration | TGNodeConfiguration]],
     ) -> "ExecutionConfiguration":
+        """A convenience method that processes the inputs before creating an instance.
+
+        The build target and test suite configs are transformed into their respective objects.
+        SUT and TG configurations are taken from `node_map`. The other (:class:`bool`) attributes are
+        just stored.
+
+        Args:
+            d: The configuration dictionary.
+            node_map: A dictionary mapping node names to their config objects.
+
+        Returns:
+            The execution configuration instance.
+        """
         build_targets: list[BuildTargetConfiguration] = list(
             map(BuildTargetConfiguration.from_dict, d["build_targets"])
         )
@@ -291,10 +517,31 @@ def from_dict(
 
 @dataclass(slots=True, frozen=True)
 class Configuration:
+    """DTS testbed and test configuration.
+
+    The node configuration is not stored in this object. Rather, all used node configurations
+    are stored inside the execution configuration where the nodes are actually used.
+
+    Attributes:
+        executions: Execution configurations.
+    """
+
     executions: list[ExecutionConfiguration]
 
     @staticmethod
-    def from_dict(d: dict) -> "Configuration":
+    def from_dict(d: ConfigurationDict) -> "Configuration":
+        """A convenience method that processes the inputs before creating an instance.
+
+        The node configurations are parsed first and mapped by name. The execution
+        configurations are then created, with their SUT and TG configurations
+        taken from this node map.
+
+        Args:
+            d: The configuration dictionary.
+
+        Returns:
+            The whole configuration instance.
+        """
         nodes: list[Union[SutNodeConfiguration | TGNodeConfiguration]] = list(
             map(NodeConfiguration.from_dict, d["nodes"])
         )
@@ -313,9 +560,17 @@ def from_dict(d: dict) -> "Configuration":
 
 
 def load_config() -> Configuration:
-    """
-    Loads the configuration file and the configuration file schema,
-    validates the configuration file, and creates a configuration object.
+    """Load DTS test run configuration from a file.
+
+    Load the YAML test run configuration file
+    and :download:`the configuration file schema <conf_yaml_schema.json>`,
+    validate the test run configuration file, and create a test run configuration object.
+
+    The YAML test run configuration file is specified in the :option:`--config-file` command line
+    argument or the :envvar:`DTS_CFG_FILE` environment variable.
+
+    Returns:
+        The parsed test run configuration.
     """
     with open(SETTINGS.config_file_path, "r") as f:
         config_data = yaml.safe_load(f)
@@ -326,6 +581,8 @@ def load_config() -> Configuration:
 
     with open(schema_path, "r") as f:
         schema = json.load(f)
-    config: dict[str, Any] = warlock.model_factory(schema, name="_Config")(config_data)
-    config_obj: Configuration = Configuration.from_dict(dict(config))
+    config = warlock.model_factory(schema, name="_Config")(config_data)
+    config_obj: Configuration = Configuration.from_dict(
+        dict(config)  # type: ignore[arg-type]
+    )
     return config_obj
diff --git a/dts/framework/config/types.py b/dts/framework/config/types.py
new file mode 100644
index 0000000000..1927910d88
--- /dev/null
+++ b/dts/framework/config/types.py
@@ -0,0 +1,132 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+"""Configuration dictionary contents specification.
+
+These type definitions serve as documentation of the configuration dictionary contents.
+
+The definitions use the built-in :class:`~typing.TypedDict` construct.
+"""
+
+from typing import TypedDict
+
+
+class PortConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    pci: str
+    #:
+    os_driver_for_dpdk: str
+    #:
+    os_driver: str
+    #:
+    peer_node: str
+    #:
+    peer_pci: str
+
+
+class TrafficGeneratorConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    type: str
+
+
+class HugepageConfigurationDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    amount: int
+    #:
+    force_first_numa: bool
+
+
+class NodeConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    hugepages: HugepageConfigurationDict
+    #:
+    name: str
+    #:
+    hostname: str
+    #:
+    user: str
+    #:
+    password: str
+    #:
+    arch: str
+    #:
+    os: str
+    #:
+    lcores: str
+    #:
+    use_first_core: bool
+    #:
+    ports: list[PortConfigDict]
+    #:
+    memory_channels: int
+    #:
+    traffic_generator: TrafficGeneratorConfigDict
+
+
+class BuildTargetConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    arch: str
+    #:
+    os: str
+    #:
+    cpu: str
+    #:
+    compiler: str
+    #:
+    compiler_wrapper: str
+
+
+class TestSuiteConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    suite: str
+    #:
+    cases: list[str]
+
+
+class ExecutionSUTConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    node_name: str
+    #:
+    vdevs: list[str]
+
+
+class ExecutionConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    build_targets: list[BuildTargetConfigDict]
+    #:
+    perf: bool
+    #:
+    func: bool
+    #:
+    skip_smoke_tests: bool
+    #:
+    test_suites: list[str | TestSuiteConfigDict]
+    #:
+    system_under_test_node: ExecutionSUTConfigDict
+    #:
+    traffic_generator_node: str
+
+
+class ConfigurationDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    nodes: list[NodeConfigDict]
+    #:
+    executions: list[ExecutionConfigDict]
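
These definitions only constrain static type checking; at runtime the parsed YAML
remains a plain dict. A sketch with hypothetical values that a checker such as mypy
would validate against PortConfigDict:

    port: PortConfigDict = {
        "pci": "0000:00:08.0",
        "os_driver_for_dpdk": "vfio-pci",
        "os_driver": "i40e",
        "peer_node": "tg1",
        "peer_pci": "0000:00:08.1",
    }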
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v6 11/23] dts: remote session docstring update
  2023-11-08 12:53         ` [PATCH v6 01/23] dts: code adjustments for " Juraj Linkeš
                             ` (8 preceding siblings ...)
  2023-11-08 12:53           ` [PATCH v6 10/23] dts: config " Juraj Linkeš
@ 2023-11-08 12:53           ` Juraj Linkeš
  2023-11-08 12:53           ` [PATCH v6 12/23] dts: interactive " Juraj Linkeš
                             ` (11 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-08 12:53 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/remote_session/__init__.py      |  39 +++++-
 .../remote_session/remote_session.py          | 128 +++++++++++++-----
 dts/framework/remote_session/ssh_session.py   |  16 +--
 3 files changed, 135 insertions(+), 48 deletions(-)

diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index 5e7ddb2b05..51a01d6b5e 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -2,12 +2,14 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
-"""
-The package provides modules for managing remote connections to a remote host (node),
-differentiated by OS.
-The package provides a factory function, create_session, that returns the appropriate
-remote connection based on the passed configuration. The differences are in the
-underlying transport protocol (e.g. SSH) and remote OS (e.g. Linux).
+"""Remote interactive and non-interactive sessions.
+
+This package provides modules for managing remote connections to a remote host (node).
+
+The non-interactive sessions send commands and return their output and exit code.
+
+The interactive sessions open an interactive shell which is continuously open,
+allowing it to send and receive data within that particular shell.
 """
 
 # pylama:ignore=W0611
@@ -26,10 +28,35 @@
 def create_remote_session(
     node_config: NodeConfiguration, name: str, logger: DTSLOG
 ) -> RemoteSession:
+    """Factory for non-interactive remote sessions.
+
+    The function returns an SSH session, but will be extended if support
+    for other protocols is added.
+
+    Args:
+        node_config: The test run configuration of the node to connect to.
+        name: The name of the session.
+        logger: The logger instance this session will use.
+
+    Returns:
+        The SSH remote session.
+    """
     return SSHSession(node_config, name, logger)
 
 
 def create_interactive_session(
     node_config: NodeConfiguration, logger: DTSLOG
 ) -> InteractiveRemoteSession:
+    """Factory for interactive remote sessions.
+
+    The function returns an interactive SSH session, but will be extended if support
+    for other protocols is added.
+
+    Args:
+        node_config: The test run configuration of the node to connect to.
+        logger: The logger instance this session will use.
+
+    Returns:
+        The interactive SSH remote session.
+    """
     return InteractiveRemoteSession(node_config, logger)
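
A usage sketch of both factories, assuming a NodeConfiguration and a DTSLOG
instance are already at hand (the session name is arbitrary):

    session = create_remote_session(node_config, "main_session", logger)
    result = session.send_command("echo hello", verify=True)
    interactive_session = create_interactive_session(node_config, logger)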
diff --git a/dts/framework/remote_session/remote_session.py b/dts/framework/remote_session/remote_session.py
index 0647d93de4..629c2d7b9c 100644
--- a/dts/framework/remote_session/remote_session.py
+++ b/dts/framework/remote_session/remote_session.py
@@ -3,6 +3,13 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
+"""Base remote session.
+
+This module contains the abstract base class for remote sessions and defines
+the structure of the result of a command execution.
+"""
+
+
 import dataclasses
 from abc import ABC, abstractmethod
 from pathlib import PurePath
@@ -15,8 +22,14 @@
 
 @dataclasses.dataclass(slots=True, frozen=True)
 class CommandResult:
-    """
-    The result of remote execution of a command.
+    """The result of remote execution of a command.
+
+    Attributes:
+        name: The name of the session that executed the command.
+        command: The executed command.
+        stdout: The standard output the command produced.
+        stderr: The standard error output the command produced.
+        return_code: The return code the command exited with.
     """
 
     name: str
@@ -26,6 +39,7 @@ class CommandResult:
     return_code: int
 
     def __str__(self) -> str:
+        """Format the command outputs."""
         return (
             f"stdout: '{self.stdout}'\n"
             f"stderr: '{self.stderr}'\n"
@@ -34,13 +48,24 @@ def __str__(self) -> str:
 
 
 class RemoteSession(ABC):
-    """
-    The base class for defining which methods must be implemented in order to connect
-    to a remote host (node) and maintain a remote session. The derived classes are
-    supposed to implement/use some underlying transport protocol (e.g. SSH) to
-    implement the methods. On top of that, it provides some basic services common to
-    all derived classes, such as keeping history and logging what's being executed
-    on the remote node.
+    """Non-interactive remote session.
+
+    The abstract methods must be implemented in order to connect to a remote host (node)
+    and maintain a remote session.
+    The subclasses must use (or implement) some underlying transport protocol (e.g. SSH)
+    to implement the methods. On top of that, it provides some basic services common to all
+    subclasses, such as keeping history and logging what's being executed on the remote node.
+
+    Attributes:
+        name: The name of the session.
+        hostname: The node's hostname. Could be an IP (possibly with port, separated by a colon)
+            or a domain name.
+        ip: The IP address of the node or a domain name, whichever was used in `hostname`.
+        port: The port of the node, if given in `hostname`.
+        username: The username used in the connection.
+        password: The password used in the connection. Most frequently empty,
+            as the use of passwords is discouraged.
+        history: The executed commands during this session.
     """
 
     name: str
@@ -59,6 +84,16 @@ def __init__(
         session_name: str,
         logger: DTSLOG,
     ):
+        """Connect to the node during initialization.
+
+        Args:
+            node_config: The test run configuration of the node to connect to.
+            session_name: The name of the session.
+            logger: The logger instance this session will use.
+
+        Raises:
+            SSHConnectionError: If the connection to the node was not successful.
+        """
         self._node_config = node_config
 
         self.name = session_name
@@ -79,8 +114,13 @@ def __init__(
 
     @abstractmethod
     def _connect(self) -> None:
-        """
-        Create connection to assigned node.
+        """Create a connection to the node.
+
+        The implementation must assign the established session to self.session.
+
+        The implementation must catch all exceptions and convert them to an SSHConnectionError.
+
+        The implementation may optionally implement retry attempts.
         """
 
     def send_command(
@@ -90,11 +130,24 @@ def send_command(
         verify: bool = False,
         env: dict | None = None,
     ) -> CommandResult:
-        """
-        Send a command to the connected node using optional env vars
-        and return CommandResult.
-        If verify is True, check the return code of the executed command
-        and raise a RemoteCommandExecutionError if the command failed.
+        """Send `command` to the connected node.
+
+        The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT`
+        environment variable configure the timeout of command execution.
+
+        Args:
+            command: The command to execute.
+            timeout: Wait at most this long in seconds to execute `command`.
+            verify: If :data:`True`, will check the exit code of `command`.
+            env: A dictionary with environment variables to be used with `command` execution.
+
+        Raises:
+            SSHSessionDeadError: If the session isn't alive when sending `command`.
+            SSHTimeoutError: If `command` execution timed out.
+            RemoteCommandExecutionError: If verify is :data:`True` and `command` execution failed.
+
+        Returns:
+            The output of the command along with the return code.
         """
         self._logger.info(
             f"Sending: '{command}'" + (f" with env vars: '{env}'" if env else "")
@@ -115,29 +168,36 @@ def send_command(
     def _send_command(
         self, command: str, timeout: float, env: dict | None
     ) -> CommandResult:
-        """
-        Use the underlying protocol to execute the command using optional env vars
-        and return CommandResult.
+        """Send a command to the connected node.
+
+        The implementation must execute the command remotely with `env` environment variables
+        and return the result.
+
+        The implementation must catch all exceptions and raise an SSHSessionDeadError if
+        the session is not alive and an SSHTimeoutError if the command execution times out.
         """
 
     def close(self, force: bool = False) -> None:
-        """
-        Close the remote session and free all used resources.
+        """Close the remote session and free all used resources.
+
+        Args:
+            force: Force the closure of the connection. This may not clean up all resources.
         """
         self._logger.logger_exit()
         self._close(force)
 
     @abstractmethod
     def _close(self, force: bool = False) -> None:
-        """
-        Execute protocol specific steps needed to close the session properly.
+        """Protocol specific steps needed to close the session properly.
+
+        Args:
+            force: Force the closure of the connection. This may not clean up all resources.
+                This doesn't have to be implemented in the overloaded method.
         """
 
     @abstractmethod
     def is_alive(self) -> bool:
-        """
-        Check whether the remote session is still responding.
-        """
+        """Check whether the remote session is still responding."""
 
     @abstractmethod
     def copy_from(
@@ -147,12 +207,12 @@ def copy_from(
     ) -> None:
         """Copy a file from the remote Node to the local filesystem.
 
-        Copy source_file from the remote Node associated with this remote
-        session to destination_file on the local filesystem.
+        Copy `source_file` from the remote Node associated with this remote session
+        to `destination_file` on the local filesystem.
 
         Args:
-            source_file: the file on the remote Node.
-            destination_file: a file or directory path on the local filesystem.
+            source_file: The file on the remote Node.
+            destination_file: A file or directory path on the local filesystem.
         """
 
     @abstractmethod
@@ -163,10 +223,10 @@ def copy_to(
     ) -> None:
         """Copy a file from local filesystem to the remote Node.
 
-        Copy source_file from local filesystem to destination_file
-        on the remote Node associated with this remote session.
+        Copy `source_file` from local filesystem to `destination_file` on the remote Node
+        associated with this remote session.
 
         Args:
-            source_file: the file on the local filesystem.
-            destination_file: a file or directory path on the remote Node.
+            source_file: The file on the local filesystem.
+            destination_file: A file or directory path on the remote Node.
         """
diff --git a/dts/framework/remote_session/ssh_session.py b/dts/framework/remote_session/ssh_session.py
index cee11d14d6..7186490a9a 100644
--- a/dts/framework/remote_session/ssh_session.py
+++ b/dts/framework/remote_session/ssh_session.py
@@ -1,6 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""SSH session remote session."""
+
 import socket
 import traceback
 from pathlib import PurePath
@@ -26,13 +28,8 @@
 class SSHSession(RemoteSession):
     """A persistent SSH connection to a remote Node.
 
-    The connection is implemented with the Fabric Python library.
-
-    Args:
-        node_config: The configuration of the Node to connect to.
-        session_name: The name of the session.
-        logger: The logger used for logging.
-            This should be passed from the parent OSSession.
+    The connection is implemented with
+    `the Fabric Python library <https://docs.fabfile.org/en/latest/>`_.
 
     Attributes:
         session: The underlying Fabric SSH connection.
@@ -80,6 +77,7 @@ def _connect(self) -> None:
             raise SSHConnectionError(self.hostname, errors)
 
     def is_alive(self) -> bool:
+        """Overrides :meth:`~.remote_session.RemoteSession.is_alive`."""
         return self.session.is_connected
 
     def _send_command(
@@ -89,7 +87,7 @@ def _send_command(
 
         Args:
             command: The command to execute.
-            timeout: Wait at most this many seconds for the execution to complete.
+            timeout: Wait at most this long in seconds to execute the command.
             env: Extra environment variables that will be used in command execution.
 
         Raises:
@@ -118,6 +116,7 @@ def copy_from(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
+        """Overrides :meth:`~.remote_session.RemoteSession.copy_from`."""
         self.session.get(str(destination_file), str(source_file))
 
     def copy_to(
@@ -125,6 +124,7 @@ def copy_to(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
+        """Overrides :meth:`~.remote_session.RemoteSession.copy_to`."""
         self.session.put(str(source_file), str(destination_file))
 
     def _close(self, force: bool = False) -> None:
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v6 12/23] dts: interactive remote session docstring update
  2023-11-08 12:53         ` [PATCH v6 01/23] dts: code adjustments for " Juraj Linkeš
                             ` (9 preceding siblings ...)
  2023-11-08 12:53           ` [PATCH v6 11/23] dts: remote session " Juraj Linkeš
@ 2023-11-08 12:53           ` Juraj Linkeš
  2023-11-08 12:53           ` [PATCH v6 13/23] dts: port and virtual device " Juraj Linkeš
                             ` (10 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-08 12:53 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 .../interactive_remote_session.py             | 36 +++----
 .../remote_session/interactive_shell.py       | 99 +++++++++++--------
 dts/framework/remote_session/python_shell.py  | 26 ++++-
 dts/framework/remote_session/testpmd_shell.py | 61 +++++++++---
 4 files changed, 150 insertions(+), 72 deletions(-)

diff --git a/dts/framework/remote_session/interactive_remote_session.py b/dts/framework/remote_session/interactive_remote_session.py
index 9085a668e8..c1bf30ac61 100644
--- a/dts/framework/remote_session/interactive_remote_session.py
+++ b/dts/framework/remote_session/interactive_remote_session.py
@@ -22,27 +22,23 @@
 class InteractiveRemoteSession:
     """SSH connection dedicated to interactive applications.
 
-    This connection is created using paramiko and is a persistent connection to the
-    host. This class defines methods for connecting to the node and configures this
-    connection to send "keep alive" packets every 30 seconds. Because paramiko attempts
-    to use SSH keys to establish a connection first, providing a password is optional.
-    This session is utilized by InteractiveShells and cannot be interacted with
-    directly.
-
-    Arguments:
-        node_config: Configuration class for the node you are connecting to.
-        _logger: Desired logger for this session to use.
+    The connection is created using `paramiko <https://docs.paramiko.org/en/latest/>`_
+    and is a persistent connection to the host. This class defines the methods for connecting
+    to the node and configures the connection to send "keep alive" packets every 30 seconds.
+    Because paramiko attempts to use SSH keys to establish a connection first, providing
+    a password is optional. This session is utilized by InteractiveShells
+    and cannot be interacted with directly.
 
     Attributes:
-        hostname: Hostname that will be used to initialize a connection to the node.
-        ip: A subsection of hostname that removes the port for the connection if there
+        hostname: The hostname that will be used to initialize a connection to the node.
+        ip: A subsection of `hostname` that removes the port for the connection if there
             is one. If there is no port, this will be the same as hostname.
-        port: Port to use for the ssh connection. This will be extracted from the
-            hostname if there is a port included, otherwise it will default to 22.
+        port: Port to use for the ssh connection. This will be extracted from `hostname`
+            if there is a port included, otherwise it will default to ``22``.
         username: User to connect to the node with.
         password: Password of the user connecting to the host. This will default to an
             empty string if a password is not provided.
-        session: Underlying paramiko connection.
+        session: The underlying paramiko connection.
 
     Raises:
         SSHConnectionError: There is an error creating the SSH connection.
@@ -58,9 +54,15 @@ class InteractiveRemoteSession:
     _node_config: NodeConfiguration
     _transport: Transport | None
 
-    def __init__(self, node_config: NodeConfiguration, _logger: DTSLOG) -> None:
+    def __init__(self, node_config: NodeConfiguration, logger: DTSLOG) -> None:
+        """Connect to the node during initialization.
+
+        Args:
+            node_config: The test run configuration of the node to connect to.
+            logger: The logger instance this session will use.
+        """
         self._node_config = node_config
-        self._logger = _logger
+        self._logger = logger
         self.hostname = node_config.hostname
         self.username = node_config.user
         self.password = node_config.password if node_config.password else ""
diff --git a/dts/framework/remote_session/interactive_shell.py b/dts/framework/remote_session/interactive_shell.py
index c24376b2a8..a98a822e91 100644
--- a/dts/framework/remote_session/interactive_shell.py
+++ b/dts/framework/remote_session/interactive_shell.py
@@ -3,18 +3,20 @@
 
 """Common functionality for interactive shell handling.
 
-This base class, InteractiveShell, is meant to be extended by other classes that
-contain functionality specific to that shell type. These derived classes will often
-modify things like the prompt to expect or the arguments to pass into the application,
-but still utilize the same method for sending a command and collecting output. How
-this output is handled however is often application specific. If an application needs
-elevated privileges to start it is expected that the method for gaining those
-privileges is provided when initializing the class.
+The base class, :class:`InteractiveShell`, is meant to be extended by subclasses that contain
+functionality specific to that shell type. These subclasses will often modify things like
+the prompt to expect or the arguments to pass into the application, but still utilize
+the same method for sending a command and collecting output. How this output is handled however
+is often application specific. If an application needs elevated privileges to start, it is
+expected that the method for gaining those privileges is provided when initializing the class.
+
+The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT`
+environment variable configure the timeout of getting the output from command execution.
 """
 
 from abc import ABC
 from pathlib import PurePath
-from typing import Callable
+from typing import Callable, ClassVar
 
 from paramiko import Channel, SSHClient, channel  # type: ignore[import]
 
@@ -30,28 +32,6 @@ class InteractiveShell(ABC):
     and collecting input until reaching a certain prompt. All interactive applications
     will use the same SSH connection, but each will create their own channel on that
     session.
-
-    Arguments:
-        interactive_session: The SSH session dedicated to interactive shells.
-        logger: Logger used for displaying information in the console.
-        get_privileged_command: Method for modifying a command to allow it to use
-            elevated privileges. If this is None, the application will not be started
-            with elevated privileges.
-        app_args: Command line arguments to be passed to the application on startup.
-        timeout: Timeout used for the SSH channel that is dedicated to this interactive
-            shell. This timeout is for collecting output, so if reading from the buffer
-            and no output is gathered within the timeout, an exception is thrown.
-
-    Attributes
-        _default_prompt: Prompt to expect at the end of output when sending a command.
-            This is often overridden by derived classes.
-        _command_extra_chars: Extra characters to add to the end of every command
-            before sending them. This is often overridden by derived classes and is
-            most commonly an additional newline character.
-        path: Path to the executable to start the interactive application.
-        dpdk_app: Whether this application is a DPDK app. If it is, the build
-            directory for DPDK on the node will be prepended to the path to the
-            executable.
     """
 
     _interactive_session: SSHClient
@@ -61,10 +41,22 @@ class InteractiveShell(ABC):
     _logger: DTSLOG
     _timeout: float
     _app_args: str
-    _default_prompt: str = ""
-    _command_extra_chars: str = ""
-    path: PurePath
-    dpdk_app: bool = False
+
+    #: Prompt to expect at the end of output when sending a command.
+    #: This is often overridden by subclasses.
+    _default_prompt: ClassVar[str] = ""
+
+    #: Extra characters to add to the end of every command
+    #: before sending them. This is often overridden by subclasses and is
+    #: most commonly an additional newline character.
+    _command_extra_chars: ClassVar[str] = ""
+
+    #: Path to the executable to start the interactive application.
+    path: ClassVar[PurePath]
+
+    #: Whether this application is a DPDK app. If it is, the build directory
+    #: for DPDK on the node will be prepended to the path to the executable.
+    dpdk_app: ClassVar[bool] = False
 
     def __init__(
         self,
@@ -74,6 +66,19 @@ def __init__(
         app_args: str = "",
         timeout: float = SETTINGS.timeout,
     ) -> None:
+        """Create an SSH channel during initialization.
+
+        Args:
+            interactive_session: The SSH session dedicated to interactive shells.
+            logger: The logger instance this session will use.
+            get_privileged_command: A method for modifying a command to allow it to use
+                elevated privileges. If :data:`None`, the application will not be started
+                with elevated privileges.
+            app_args: The command line arguments to be passed to the application on startup.
+            timeout: The timeout used for the SSH channel that is dedicated to this interactive
+                shell. This timeout is for collecting output, so if reading from the buffer
+                and no output is gathered within the timeout, an exception is thrown.
+        """
         self._interactive_session = interactive_session
         self._ssh_channel = self._interactive_session.invoke_shell()
         self._stdin = self._ssh_channel.makefile_stdin("w")
@@ -92,6 +97,10 @@ def _start_application(
 
         This method is often overridden by subclasses as their process for
         starting may look different.
+
+        Args:
+            get_privileged_command: A function (but could be any callable) that produces
+                the version of the command with elevated privileges.
         """
         start_command = f"{self.path} {self._app_args}"
         if get_privileged_command is not None:
@@ -99,16 +108,24 @@ def _start_application(
         self.send_command(start_command)
 
     def send_command(self, command: str, prompt: str | None = None) -> str:
-        """Send a command and get all output before the expected ending string.
+        """Send `command` and get all output before the expected ending string.
 
         Lines that expect input are not included in the stdout buffer, so they cannot
-        be used for expect. For example, if you were prompted to log into something
-        with a username and password, you cannot expect "username:" because it won't
-        yet be in the stdout buffer. A workaround for this could be consuming an
-        extra newline character to force the current prompt into the stdout buffer.
+        be used for expect.
+
+        Example:
+            If you were prompted to log into something with a username and password,
+            you cannot expect ``username:`` because it won't yet be in the stdout buffer.
+            A workaround for this could be consuming an extra newline character to force
+            the current `prompt` into the stdout buffer.
+
+        Args:
+            command: The command to send.
+            prompt: After sending the command, `send_command` will be expecting this string.
+                If :data:`None`, will use the class's default prompt.
 
         Returns:
-            All output in the buffer before expected string
+            All output in the buffer before expected string.
         """
         self._logger.info(f"Sending: '{command}'")
         if prompt is None:
@@ -126,8 +143,10 @@ def send_command(self, command: str, prompt: str | None = None) -> str:
         return out
 
     def close(self) -> None:
+        """Properly free all resources."""
         self._stdin.close()
         self._ssh_channel.close()
 
     def __del__(self) -> None:
+        """Make sure the session is properly closed before deleting the object."""
         self.close()
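
As a sketch of how the class variables above are meant to be overridden, a minimal
hypothetical subclass (``GdbShell`` is illustrative only, not part of this patch)
could look like this::

    from pathlib import PurePath
    from typing import ClassVar

    class GdbShell(InteractiveShell):
        """Hypothetical GDB interactive shell, shown only to illustrate the API."""

        #: GDB prints this prompt after every command.
        _default_prompt: ClassVar[str] = "(gdb)"

        #: The extra newline forces the prompt into the stdout buffer,
        #: mirroring the workaround described in send_command above.
        _command_extra_chars: ClassVar[str] = "\n"

        #: A system app, so a bare executable name is enough; dpdk_app stays False.
        path: ClassVar[PurePath] = PurePath("gdb")
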
diff --git a/dts/framework/remote_session/python_shell.py b/dts/framework/remote_session/python_shell.py
index cc3ad48a68..c8e5957ef7 100644
--- a/dts/framework/remote_session/python_shell.py
+++ b/dts/framework/remote_session/python_shell.py
@@ -1,12 +1,32 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""Python interactive shell.
+
+Typical usage example in a TestSuite::
+
+    from framework.remote_session import PythonShell
+    python_shell = self.tg_node.create_interactive_shell(
+        PythonShell, timeout=5, privileged=True
+    )
+    python_shell.send_command("print('Hello World')")
+    python_shell.close()
+"""
+
 from pathlib import PurePath
+from typing import ClassVar
 
 from .interactive_shell import InteractiveShell
 
 
 class PythonShell(InteractiveShell):
-    _default_prompt: str = ">>>"
-    _command_extra_chars: str = "\n"
-    path: PurePath = PurePath("python3")
+    """Python interactive shell."""
+
+    #: Python's prompt.
+    _default_prompt: ClassVar[str] = ">>>"
+
+    #: This forces the prompt to appear after sending a command.
+    _command_extra_chars: ClassVar[str] = "\n"
+
+    #: The Python executable.
+    path: ClassVar[PurePath] = PurePath("python3")
diff --git a/dts/framework/remote_session/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py
index 1455b5a199..2632515d74 100644
--- a/dts/framework/remote_session/testpmd_shell.py
+++ b/dts/framework/remote_session/testpmd_shell.py
@@ -1,45 +1,82 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 University of New Hampshire
 
+"""Testpmd interactive shell.
+
+Typical usage example in a TestSuite::
+
+    testpmd_shell = self.sut_node.create_interactive_shell(
+        TestPmdShell, privileged=True
+    )
+    devices = testpmd_shell.get_devices()
+    for device in devices:
+        print(device)
+    testpmd_shell.close()
+"""
+
 from pathlib import PurePath
-from typing import Callable
+from typing import Callable, ClassVar
 
 from .interactive_shell import InteractiveShell
 
 
 class TestPmdDevice(object):
+    """The data of a device that testpmd can recognize.
+
+    Attributes:
+        pci_address: The PCI address of the device.
+    """
+
     pci_address: str
 
     def __init__(self, pci_address_line: str):
+        """Initialize the device from the testpmd output line string.
+
+        Args:
+            pci_address_line: A line of testpmd output that contains a device.
+        """
         self.pci_address = pci_address_line.strip().split(": ")[1].strip()
 
     def __str__(self) -> str:
+        """The PCI address captures what the device is."""
         return self.pci_address
 
 
 class TestPmdShell(InteractiveShell):
-    path: PurePath = PurePath("app", "dpdk-testpmd")
-    dpdk_app: bool = True
-    _default_prompt: str = "testpmd>"
-    _command_extra_chars: str = (
-        "\n"  # We want to append an extra newline to every command
-    )
+    """Testpmd interactive shell.
+
+    Users of the testpmd shell should never use
+    the :meth:`~framework.remote_session.interactive_shell.InteractiveShell.send_command` method
+    directly, but rather call specialized methods. If there isn't one that satisfies a need,
+    it should be added.
+    """
+
+    #: The path to the testpmd executable.
+    path: ClassVar[PurePath] = PurePath("app", "dpdk-testpmd")
+
+    #: Flag this as a DPDK app so that it's clear this is not a system app and
+    #: needs to be looked up in a specific path.
+    dpdk_app: ClassVar[bool] = True
+
+    #: The testpmd's prompt.
+    _default_prompt: ClassVar[str] = "testpmd>"
+
+    #: This forces the prompt to appear after sending a command.
+    _command_extra_chars: ClassVar[str] = "\n"
 
     def _start_application(
         self, get_privileged_command: Callable[[str], str] | None
     ) -> None:
-        """See "_start_application" in InteractiveShell."""
         self._app_args += " -- -i"
         super()._start_application(get_privileged_command)
 
     def get_devices(self) -> list[TestPmdDevice]:
-        """Get a list of device names that are known to testpmd
+        """Get a list of device names that are known to testpmd.
 
-        Uses the device info listed in testpmd and then parses the output to
-        return only the names of the devices.
+        Uses the device info listed in testpmd and then parses the output.
 
         Returns:
-            A list of strings representing device names (e.g. 0000:14:00.1)
+            A list of devices.
         """
         dev_info: str = self.send_command("show device info all")
         dev_list: list[TestPmdDevice] = []
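
As a sketch of the parsing above (the input line format is an assumption based on
testpmd's ``show device info all`` output)::

    line = "Device name: 0000:17:00.0"
    device = TestPmdDevice(line)  # strip().split(": ")[1] -> "0000:17:00.0"
    assert str(device) == "0000:17:00.0"
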
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v6 13/23] dts: port and virtual device docstring update
  2023-11-08 12:53         ` [PATCH v6 01/23] dts: code adjustments for " Juraj Linkeš
                             ` (10 preceding siblings ...)
  2023-11-08 12:53           ` [PATCH v6 12/23] dts: interactive " Juraj Linkeš
@ 2023-11-08 12:53           ` Juraj Linkeš
  2023-11-08 12:53           ` [PATCH v6 14/23] dts: cpu " Juraj Linkeš
                             ` (9 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-08 12:53 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/__init__.py       | 16 ++++--
 dts/framework/testbed_model/port.py           | 53 +++++++++++++++----
 dts/framework/testbed_model/virtual_device.py | 17 +++++-
 3 files changed, 71 insertions(+), 15 deletions(-)

diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index 8ced05653b..a02be1f2d9 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -2,9 +2,19 @@
 # Copyright(c) 2022-2023 University of New Hampshire
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
-This package contains the classes used to model the physical traffic generator,
-system under test and any other components that need to be interacted with.
+"""Testbed modelling.
+
+This package defines the testbed elements DTS works with:
+
+    * A system under test node: :class:`SutNode`,
+    * A traffic generator node: :class:`TGNode`,
+    * The ports of network interface cards (NICs) present on nodes: :class:`Port`,
+    * The logical cores of CPUs present on nodes: :class:`LogicalCore`,
+    * The virtual devices that can be created on nodes: :class:`VirtualDevice`,
+    * The operating systems running on nodes: :class:`LinuxSession` and :class:`PosixSession`.
+
+DTS needs to be able to connect to nodes and understand some of the hardware present on these nodes
+to properly build and test DPDK.
 """
 
 # pylama:ignore=W0611
diff --git a/dts/framework/testbed_model/port.py b/dts/framework/testbed_model/port.py
index 680c29bfe3..817405bea4 100644
--- a/dts/framework/testbed_model/port.py
+++ b/dts/framework/testbed_model/port.py
@@ -2,6 +2,13 @@
 # Copyright(c) 2022 University of New Hampshire
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""NIC port model.
+
+Basic port information, such as location (the ports are identified by their PCI addresses
+on a node), drivers and addresses.
+"""
+
+
 from dataclasses import dataclass
 
 from framework.config import PortConfig
@@ -9,24 +16,35 @@
 
 @dataclass(slots=True, frozen=True)
 class PortIdentifier:
+    """The port identifier.
+
+    Attributes:
+        node: The node where the port resides.
+        pci: The PCI address of the port on `node`.
+    """
+
     node: str
     pci: str
 
 
 @dataclass(slots=True)
 class Port:
-    """
-    identifier: The PCI address of the port on a node.
-
-    os_driver: The driver used by this port when the OS is controlling it.
-        Example: i40e
-    os_driver_for_dpdk: The driver the device must be bound to for DPDK to use it,
-        Example: vfio-pci.
+    """Physical port on a node.
 
-    Note: os_driver and os_driver_for_dpdk may be the same thing.
-        Example: mlx5_core
+    The ports are identified by the node they're on and their PCI addresses. The port on the other
+    side of the connection is also captured here.
+    Each port is serviced by a driver, which may be different for the operating system (`os_driver`)
+    and for DPDK (`os_driver_for_dpdk`). For some devices, they are the same, e.g.: ``mlx5_core``.
 
-    peer: The identifier of a port this port is connected with.
+    Attributes:
+        identifier: The node and PCI address identifying the port.
+        os_driver: The operating system driver name when the operating system controls the port,
+            e.g.: ``i40e``.
+        os_driver_for_dpdk: The operating system driver name for use with DPDK, e.g.: ``vfio-pci``.
+        peer: The identifier of a port this port is connected with.
+            The `peer` is on a different node.
+        mac_address: The MAC address of the port.
+        logical_name: The logical name of the port. Must be discovered.
     """
 
     identifier: PortIdentifier
@@ -37,6 +55,12 @@ class Port:
     logical_name: str = ""
 
     def __init__(self, node_name: str, config: PortConfig):
+        """Initialize the port from `node_name` and `config`.
+
+        Args:
+            node_name: The name of the port's node.
+            config: The test run configuration of the port.
+        """
         self.identifier = PortIdentifier(
             node=node_name,
             pci=config.pci,
@@ -47,14 +71,23 @@ def __init__(self, node_name: str, config: PortConfig):
 
     @property
     def node(self) -> str:
+        """The node where the port resides."""
         return self.identifier.node
 
     @property
     def pci(self) -> str:
+        """The PCI address of the port."""
         return self.identifier.pci
 
 
 @dataclass(slots=True, frozen=True)
 class PortLink:
+    """The physical, cabled connection between the ports.
+
+    Attributes:
+        sut_port: The port on the SUT node connected to `tg_port`.
+        tg_port: The port on the TG node connected to `sut_port`.
+    """
+
     sut_port: Port
     tg_port: Port
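
As a sketch of how the two sides of a connection are tied together (the
``sut_config`` and ``tg_config`` objects are assumed to be ``PortConfig``
instances whose peer fields reference each other)::

    sut_port = Port("sut-node", sut_config)
    tg_port = Port("tg-node", tg_config)
    link = PortLink(sut_port=sut_port, tg_port=tg_port)
    # With consistent configs, each port's peer is the other port's identifier.
    assert link.sut_port.peer == link.tg_port.identifier
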
diff --git a/dts/framework/testbed_model/virtual_device.py b/dts/framework/testbed_model/virtual_device.py
index eb664d9f17..e9b5e9c3be 100644
--- a/dts/framework/testbed_model/virtual_device.py
+++ b/dts/framework/testbed_model/virtual_device.py
@@ -1,16 +1,29 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""Virtual devices model.
+
+Alongside support for physical hardware, DPDK can create various virtual devices.
+"""
+
 
 class VirtualDevice(object):
-    """
-    Base class for virtual devices used by DPDK.
+    """Base class for virtual devices used by DPDK.
+
+    Attributes:
+        name: The name of the virtual device.
     """
 
     name: str
 
     def __init__(self, name: str):
+        """Initialize the virtual device.
+
+        Args:
+            name: The name of the virtual device.
+        """
         self.name = name
 
     def __str__(self) -> str:
+        """This corresponds to the name used for DPDK devices."""
         return self.name
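
A minimal usage sketch, assuming ``net_ring0`` as an example DPDK vdev name::

    vdev = VirtualDevice("net_ring0")
    assert str(vdev) == "net_ring0"
    eal_args = f"--vdev {vdev}"  # __str__ makes the device usable in EAL arguments
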
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v6 14/23] dts: cpu docstring update
  2023-11-08 12:53         ` [PATCH v6 01/23] dts: code adjustments for " Juraj Linkeš
                             ` (11 preceding siblings ...)
  2023-11-08 12:53           ` [PATCH v6 13/23] dts: port and virtual device " Juraj Linkeš
@ 2023-11-08 12:53           ` Juraj Linkeš
  2023-11-08 12:53           ` [PATCH v6 15/23] dts: os session " Juraj Linkeš
                             ` (8 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-08 12:53 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/cpu.py | 196 +++++++++++++++++++++--------
 1 file changed, 144 insertions(+), 52 deletions(-)

diff --git a/dts/framework/testbed_model/cpu.py b/dts/framework/testbed_model/cpu.py
index 8fe785dfe4..4edeb4a7c2 100644
--- a/dts/framework/testbed_model/cpu.py
+++ b/dts/framework/testbed_model/cpu.py
@@ -1,6 +1,22 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""CPU core representation and filtering.
+
+This module provides a unified representation of logical CPU cores along
+with filtering capabilities.
+
+When symmetric multiprocessing (SMP or multithreading) is enabled on a server,
+the physical CPU cores are split into logical CPU cores with different IDs.
+
+:class:`LogicalCoreCountFilter` filters by the number of logical cores. It's possible to specify
+the socket from which to filter the number of logical cores. It's also possible to not use all
+logical CPU cores from each physical core (e.g. only the first logical core of each physical core).
+
+:class:`LogicalCoreListFilter` filters by logical core IDs. This mostly checks that
+the logical cores are actually present on the server.
+"""
+
 import dataclasses
 from abc import ABC, abstractmethod
 from collections.abc import Iterable, ValuesView
@@ -11,9 +27,17 @@
 
 @dataclass(slots=True, frozen=True)
 class LogicalCore(object):
-    """
-    Representation of a CPU core. A physical core is represented in OS
-    by multiple logical cores (lcores) if CPU multithreading is enabled.
+    """Representation of a logical CPU core.
+
+    A physical core is represented in OS by multiple logical cores (lcores)
+    if CPU multithreading is enabled. When multithreading is disabled, their IDs are the same.
+
+    Attributes:
+        lcore: The logical core ID of a CPU core. It's the same as `core` with
+            disabled multithreading.
+        core: The physical core ID of a CPU core.
+        socket: The physical socket ID where the CPU resides.
+        node: The NUMA node ID where the CPU resides.
     """
 
     lcore: int
@@ -22,27 +46,36 @@ class LogicalCore(object):
     node: int
 
     def __int__(self) -> int:
+        """The CPU is best represented by the logical core, as that's what we configure in EAL."""
         return self.lcore
 
 
 class LogicalCoreList(object):
-    """
-    Convert these options into a list of logical core ids.
-    lcore_list=[LogicalCore1, LogicalCore2] - a list of LogicalCores
-    lcore_list=[0,1,2,3] - a list of int indices
-    lcore_list=['0','1','2-3'] - a list of str indices; ranges are supported
-    lcore_list='0,1,2-3' - a comma delimited str of indices; ranges are supported
-
-    The class creates a unified format used across the framework and allows
-    the user to use either a str representation (using str(instance) or directly
-    in f-strings) or a list representation (by accessing instance.lcore_list).
-    Empty lcore_list is allowed.
+    r"""A unified way to store :class:`LogicalCore`\s.
+
+    Create a unified format used across the framework and allow the user to use
+    either a :class:`str` representation (using ``str(instance)`` or directly in f-strings)
+    or a :class:`list` representation (by accessing the `lcore_list` property,
+    which stores logical core IDs).
     """
 
     _lcore_list: list[int]
     _lcore_str: str
 
     def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str):
+        """Process `lcore_list`, then sort.
+
+        There are four supported logical core list formats::
+
+            lcore_list=[LogicalCore1, LogicalCore2]  # a list of LogicalCores
+            lcore_list=[0,1,2,3]        # a list of int indices
+            lcore_list=['0','1','2-3']  # a list of str indices; ranges are supported
+            lcore_list='0,1,2-3'        # a comma delimited str of indices; ranges are supported
+
+        Args:
+            lcore_list: Various ways to represent multiple logical cores.
+                Empty `lcore_list` is allowed.
+        """
         self._lcore_list = []
         if isinstance(lcore_list, str):
             lcore_list = lcore_list.split(",")
@@ -60,6 +93,7 @@ def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str):
 
     @property
     def lcore_list(self) -> list[int]:
+        """The logical core IDs."""
         return self._lcore_list
 
     def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]:
@@ -89,28 +123,30 @@ def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]:
         return formatted_core_list
 
     def __str__(self) -> str:
+        """The consecutive ranges of logical core IDs."""
         return self._lcore_str
 
 
 @dataclasses.dataclass(slots=True, frozen=True)
 class LogicalCoreCount(object):
-    """
-    Define the number of logical cores to use.
-    If sockets is not None, socket_count is ignored.
-    """
+    """Define the number of logical cores per physical cores per sockets."""
 
+    #: Use this many logical cores per each physical core.
     lcores_per_core: int = 1
+    #: Use this many physical cores per each socket.
     cores_per_socket: int = 2
+    #: Use this many sockets.
     socket_count: int = 1
+    #: Use exactly these sockets. This takes precedence over `socket_count`,
+    #: so when `sockets` is not :data:`None`, `socket_count` is ignored.
     sockets: list[int] | None = None
 
 
 class LogicalCoreFilter(ABC):
-    """
-    Filter according to the input filter specifier. Each filter needs to be
-    implemented in a derived class.
-    This class only implements operations common to all filters, such as sorting
-    the list to be filtered beforehand.
+    """Common filtering class.
+
+    Each filter needs to be implemented in a subclass. This base class sorts the list of cores
+    and defines the filtering method, which must be implemented by subclasses.
     """
 
     _filter_specifier: LogicalCoreCount | LogicalCoreList
@@ -122,6 +158,17 @@ def __init__(
         filter_specifier: LogicalCoreCount | LogicalCoreList,
         ascending: bool = True,
     ):
+        """Filter according to the input filter specifier.
+
+        The input `lcore_list` is copied and sorted by physical core before filtering.
+        The list is copied so that the original is left intact.
+
+        Args:
+            lcore_list: The logical CPU cores to filter.
+            filter_specifier: Filter cores from `lcore_list` according to this filter.
+            ascending: Sort cores in ascending order (lowest to highest IDs). If :data:`False`,
+                sort in descending order.
+        """
         self._filter_specifier = filter_specifier
 
         # sorting by core is needed in case hyperthreading is enabled
@@ -132,31 +179,45 @@ def __init__(
 
     @abstractmethod
     def filter(self) -> list[LogicalCore]:
-        """
-        Use self._filter_specifier to filter self._lcores_to_filter
-        and return the list of filtered LogicalCores.
-        self._lcores_to_filter is a sorted copy of the original list,
-        so it may be modified.
+        r"""Filter the cores.
+
+        Use `self._filter_specifier` to filter `self._lcores_to_filter` and return
+        the filtered :class:`LogicalCore`\s.
+        `self._lcores_to_filter` is a sorted copy of the original list, so it may be modified.
+
+        Returns:
+            The filtered cores.
         """
 
 
 class LogicalCoreCountFilter(LogicalCoreFilter):
-    """
+    """Filter cores by specified counts.
+
     Filter the input list of LogicalCores according to specified rules:
-    Use cores from the specified number of sockets or from the specified socket ids.
-    If sockets is specified, it takes precedence over socket_count.
-    From each of those sockets, use only cores_per_socket of cores.
-    And for each core, use lcores_per_core of logical cores. Hypertheading
-    must be enabled for this to take effect.
-    If ascending is True, use cores with the lowest numerical id first
-    and continue in ascending order. If False, start with the highest
-    id and continue in descending order. This ordering affects which
-    sockets to consider first as well.
+
+        * The input `filter_specifier` is :class:`LogicalCoreCount`,
+        * Use cores from the specified number of sockets or from the specified socket ids,
+        * If `sockets` is specified, it takes precedence over `socket_count`,
+        * From each of those sockets, use only `cores_per_socket` of cores,
+        * And for each core, use `lcores_per_core` of logical cores. Hyperthreading
+          must be enabled for this to take effect.
     """
 
     _filter_specifier: LogicalCoreCount
 
     def filter(self) -> list[LogicalCore]:
+        """Filter the cores according to :class:`LogicalCoreCount`.
+
+        Start by filtering the allowed sockets. The cores matching the allowed sockets are returned.
+        The cores of each socket are stored in separate lists.
+
+        Then filter the allowed physical cores from those lists of cores per socket. When filtering
+        physical cores, store the desired number of logical cores per physical core which then
+        together constitute the final filtered list.
+
+        Returns:
+            The filtered cores.
+        """
         sockets_to_filter = self._filter_sockets(self._lcores_to_filter)
         filtered_lcores = []
         for socket_to_filter in sockets_to_filter:
@@ -166,24 +227,37 @@ def filter(self) -> list[LogicalCore]:
     def _filter_sockets(
         self, lcores_to_filter: Iterable[LogicalCore]
     ) -> ValuesView[list[LogicalCore]]:
-        """
-        Remove all lcores that don't match the specified socket(s).
-        If self._filter_specifier.sockets is not None, keep lcores from those sockets,
-        otherwise keep lcores from the first
-        self._filter_specifier.socket_count sockets.
+        """Filter a list of cores per each allowed socket.
+
+        The sockets may be specified in two ways, either a number or a specific list of sockets.
+        In case of a specific list, we just need to return the cores from those sockets.
+        When filtering by a number of sockets, we need to go through all cores, note which
+        sockets appear and only filter from the first n sockets that appear.
+
+        Args:
+            lcores_to_filter: The cores to filter. These must be sorted by the physical core.
+
+        Returns:
+            A list of lists of logical CPU cores. Each list contains cores from one socket.
         """
         allowed_sockets: set[int] = set()
         socket_count = self._filter_specifier.socket_count
         if self._filter_specifier.sockets:
+            # when sockets in filter is specified, the sockets are already set
             socket_count = len(self._filter_specifier.sockets)
             allowed_sockets = set(self._filter_specifier.sockets)
 
+        # filter socket_count sockets from all sockets by checking the socket of each CPU
         filtered_lcores: dict[int, list[LogicalCore]] = {}
         for lcore in lcores_to_filter:
             if not self._filter_specifier.sockets:
+                # this is when sockets is not set, so we do the actual filtering
+                # when it is set, allowed_sockets is already defined and can't be changed
                 if len(allowed_sockets) < socket_count:
+                    # allowed_sockets is a set, so adding an existing socket won't re-add it
                     allowed_sockets.add(lcore.socket)
             if lcore.socket in allowed_sockets:
+                # separate the cores by socket; this makes further processing easier
                 if lcore.socket in filtered_lcores:
                     filtered_lcores[lcore.socket].append(lcore)
                 else:
@@ -200,12 +274,13 @@ def _filter_sockets(
     def _filter_cores_from_socket(
         self, lcores_to_filter: Iterable[LogicalCore]
     ) -> list[LogicalCore]:
-        """
-        Keep only the first self._filter_specifier.cores_per_socket cores.
-        In multithreaded environments, keep only
-        the first self._filter_specifier.lcores_per_core lcores of those cores.
-        """
+        """Filter a list of cores from the given socket.
+
+        Go through the cores and note how many logical cores per physical core have been filtered.
 
+        Returns:
+            The filtered logical CPU cores.
+        """
        # no need to use an ordered dict; from Python 3.7 the dict
        # insertion order is preserved (FIFO).
         lcore_count_per_core_map: dict[int, int] = {}
@@ -248,15 +323,21 @@ def _filter_cores_from_socket(
 
 
 class LogicalCoreListFilter(LogicalCoreFilter):
-    """
-    Filter the input list of Logical Cores according to the input list of
-    lcore indices.
-    An empty LogicalCoreList won't filter anything.
+    """Filter the logical CPU cores by logical CPU core IDs.
+
+    This is a simple filter that looks at logical CPU IDs and keeps only the matching cores.
+
+    The input filter is :class:`LogicalCoreList`. An empty :class:`LogicalCoreList`
+    won't filter anything.
     """
 
     _filter_specifier: LogicalCoreList
 
     def filter(self) -> list[LogicalCore]:
+        """Filter based on logical CPU core ID.
+
+        Returns:
+            The filtered logical CPU cores.
+        """
         if not len(self._filter_specifier.lcore_list):
             return self._lcores_to_filter
 
@@ -279,6 +360,17 @@ def lcore_filter(
     filter_specifier: LogicalCoreCount | LogicalCoreList,
     ascending: bool,
 ) -> LogicalCoreFilter:
+    """Factory for using the right filter with `filter_specifier`.
+
+    Args:
+        core_list: The logical CPU cores to filter.
+        filter_specifier: The filter to use.
+        ascending: Sort cores in ascending order (lowest to highest IDs). If :data:`False`,
+            sort in descending order.
+
+    Returns:
+        The filter matching `filter_specifier`.
+    """
     if isinstance(filter_specifier, LogicalCoreList):
         return LogicalCoreListFilter(core_list, filter_specifier, ascending)
     elif isinstance(filter_specifier, LogicalCoreCount):
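
A minimal sketch of the factory and the count filter, assuming a hypothetical
single-socket topology with four physical cores and multithreading enabled::

    # 8 lcores: physical cores 0-3, each with two logical cores (i and i+4)
    cores = [LogicalCore(lcore=i, core=i % 4, socket=0, node=0) for i in range(8)]
    spec = LogicalCoreCount(lcores_per_core=1, cores_per_socket=2, socket_count=1)
    filtered = lcore_filter(cores, spec, ascending=True).filter()
    # Expect the first logical core of the first two physical cores: lcores 0 and 1.
    assert [int(lcore) for lcore in filtered] == [0, 1]
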
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v6 15/23] dts: os session docstring update
  2023-11-08 12:53         ` [PATCH v6 01/23] dts: code adjustments for " Juraj Linkeš
                             ` (12 preceding siblings ...)
  2023-11-08 12:53           ` [PATCH v6 14/23] dts: cpu " Juraj Linkeš
@ 2023-11-08 12:53           ` Juraj Linkeš
  2023-11-08 12:53           ` [PATCH v6 16/23] dts: posix and linux sessions " Juraj Linkeš
                             ` (7 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-08 12:53 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/os_session.py | 275 ++++++++++++++++------
 1 file changed, 208 insertions(+), 67 deletions(-)

diff --git a/dts/framework/testbed_model/os_session.py b/dts/framework/testbed_model/os_session.py
index 76e595a518..bad75d52e7 100644
--- a/dts/framework/testbed_model/os_session.py
+++ b/dts/framework/testbed_model/os_session.py
@@ -2,6 +2,29 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+"""OS-aware remote session.
+
+DPDK supports multiple operating systems. This module defines the common API that OS-unaware
+layers use and translates the API into OS-aware calls/utility usage.
+
+Note:
+    Running commands with administrative privileges requires OS awareness. This is the only layer
+    that's aware of OS differences, so this is where non-privileged commands get converted
+    to privileged commands.
+
+Example:
+    A user wishes to remove a directory on
+    a remote :class:`~framework.testbed_model.sut_node.SutNode`.
+    The :class:`~framework.testbed_model.sut_node.SutNode` object isn't aware what OS the node
+    is running - it delegates the OS translation logic
+    to :attr:`~framework.testbed_model.node.Node.main_session`. The SUT node calls
+    :meth:`~OSSession.remove_remote_dir` with a generic, OS-unaware path and
+    the :attr:`~framework.testbed_model.node.Node.main_session` translates that
+    to ``rm -rf`` if the node's OS is Linux and other commands for other OSs.
+    It also translates the path to match the underlying OS.
+"""
+
 from abc import ABC, abstractmethod
 from collections.abc import Iterable
 from ipaddress import IPv4Interface, IPv6Interface
@@ -28,10 +51,16 @@
 
 
 class OSSession(ABC):
-    """
-    The OS classes create a DTS node remote session and implement OS specific
+    """OS-unaware to OS-aware translation API definition.
+
+    The OSSession classes create a remote session to a DTS node and implement OS specific
     behavior. There are a few control methods implemented by the base class; the rest need
-    to be implemented by derived classes.
+    to be implemented by subclasses.
+
+    Attributes:
+        name: The name of the session.
+        remote_session: The remote session maintaining the connection to the node.
+        interactive_session: The interactive remote session maintaining the connection to the node.
     """
 
     _config: NodeConfiguration
@@ -46,6 +75,15 @@ def __init__(
         name: str,
         logger: DTSLOG,
     ):
+        """Initialize the OS-aware session.
+
+        Connect to the node right away and also create an interactive remote session.
+
+        Args:
+            node_config: The test run configuration of the node to connect to.
+            name: The name of the session.
+            logger: The logger instance this session will use.
+        """
         self._config = node_config
         self.name = name
         self._logger = logger
@@ -53,15 +91,15 @@ def __init__(
         self.interactive_session = create_interactive_session(node_config, logger)
 
     def close(self, force: bool = False) -> None:
-        """
-        Close the remote session.
+        """Close the underlying remote session.
+
+        Args:
+            force: Force the closure of the connection.
         """
         self.remote_session.close(force)
 
     def is_alive(self) -> bool:
-        """
-        Check whether the remote session is still responding.
-        """
+        """Check whether the underlying remote session is still responding."""
         return self.remote_session.is_alive()
 
     def send_command(
@@ -72,10 +110,23 @@ def send_command(
         verify: bool = False,
         env: dict | None = None,
     ) -> CommandResult:
-        """
-        An all-purpose API in case the command to be executed is already
-        OS-agnostic, such as when the path to the executed command has been
-        constructed beforehand.
+        """An all-purpose API for OS-agnostic commands.
+
+        This can be used to execute a portable command that runs the same way
+        on all operating systems, such as Python.
+
+        The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT`
+        environment variable configure the timeout of command execution.
+
+        Args:
+            command: The command to execute.
+            timeout: Wait at most this long in seconds to execute the command.
+            privileged: Whether to run the command with administrative privileges.
+            verify: If :data:`True`, check the exit code of the command.
+            env: A dictionary with environment variables to be used with the command execution.
+
+        Raises:
+            RemoteCommandExecutionError: If `verify` is :data:`True` and the command failed.
         """
         if privileged:
             command = self._get_privileged_command(command)
@@ -89,8 +140,20 @@ def create_interactive_shell(
         privileged: bool,
         app_args: str,
     ) -> InteractiveShellType:
-        """
-        See "create_interactive_shell" in SutNode
+        """Factory for interactive session handlers.
+
+        Instantiate `shell_cls` according to the remote OS specifics.
+
+        Args:
+            shell_cls: The class of the shell.
+            timeout: Timeout for reading output from the SSH channel. If no data is
+                received within the timeout while reading from the buffer,
+                an exception is thrown.
+            privileged: Whether to run the shell with administrative privileges.
+            app_args: The arguments to be passed to the application.
+
+        Returns:
+            An instance of the desired interactive application shell.
         """
         return shell_cls(
             self.interactive_session.session,
@@ -114,27 +177,42 @@ def _get_privileged_command(command: str) -> str:
 
     @abstractmethod
     def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePath:
-        """
-        Try to find DPDK remote dir in remote_dir.
+        """Try to find DPDK directory in `remote_dir`.
+
+        The directory is the one which is created after the extraction of the tarball. The files
+        are usually extracted into a directory starting with ``dpdk-``.
+
+        Returns:
+            The absolute path of the DPDK remote directory, empty path if not found.
         """
 
     @abstractmethod
     def get_remote_tmp_dir(self) -> PurePath:
-        """
-        Get the path of the temporary directory of the remote OS.
+        """Get the path of the temporary directory of the remote OS.
+
+        Returns:
+            The absolute path of the temporary directory.
         """
 
     @abstractmethod
     def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
-        """
-        Create extra environment variables needed for the target architecture. Get
-        information from the node if needed.
+        """Create extra environment variables needed for the target architecture.
+
+        Different architectures may require different configuration, such as setting 32-bit CFLAGS.
+
+        Returns:
+            A dictionary of environment variables and their values.
         """
 
     @abstractmethod
     def join_remote_path(self, *args: str | PurePath) -> PurePath:
-        """
-        Join path parts using the path separator that fits the remote OS.
+        """Join path parts using the path separator that fits the remote OS.
+
+        Args:
+            args: Any number of paths to join.
+
+        Returns:
+            The resulting joined path.
         """
 
     @abstractmethod
@@ -143,13 +221,13 @@ def copy_from(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
-        """Copy a file from the remote Node to the local filesystem.
+        """Copy a file from the remote node to the local filesystem.
 
-        Copy source_file from the remote Node associated with this remote
-        session to destination_file on the local filesystem.
+        Copy `source_file` from the remote node associated with this remote
+        session to `destination_file` on the local filesystem.
 
         Args:
-            source_file: the file on the remote Node.
+            source_file: the file on the remote node.
             destination_file: a file or directory path on the local filesystem.
         """
 
@@ -159,14 +237,14 @@ def copy_to(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
-        """Copy a file from local filesystem to the remote Node.
+        """Copy a file from local filesystem to the remote node.
 
-        Copy source_file from local filesystem to destination_file
-        on the remote Node associated with this remote session.
+        Copy `source_file` from local filesystem to `destination_file`
+        on the remote node associated with this remote session.
 
         Args:
             source_file: the file on the local filesystem.
-            destination_file: a file or directory path on the remote Node.
+            destination_file: a file or directory path on the remote node.
         """
 
     @abstractmethod
@@ -176,8 +254,12 @@ def remove_remote_dir(
         recursive: bool = True,
         force: bool = True,
     ) -> None:
-        """
-        Remove remote directory, by default remove recursively and forcefully.
+        """Remove remote directory, by default remove recursively and forcefully.
+
+        Args:
+            remote_dir_path: The path of the directory to remove.
+            recursive: If :data:`True`, also remove all contents inside the directory.
+            force: If :data:`True`, ignore all warnings and try to remove at all costs.
         """
 
     @abstractmethod
@@ -186,9 +268,12 @@ def extract_remote_tarball(
         remote_tarball_path: str | PurePath,
         expected_dir: str | PurePath | None = None,
     ) -> None:
-        """
-        Extract remote tarball in place. If expected_dir is a non-empty string, check
-        whether the dir exists after extracting the archive.
+        """Extract remote tarball in its remote directory.
+
+        Args:
+            remote_tarball_path: The path of the tarball on the remote node.
+            expected_dir: If non-empty, check whether `expected_dir` exists after extracting
+                the archive.
         """
 
     @abstractmethod
@@ -201,69 +286,119 @@ def build_dpdk(
         rebuild: bool = False,
         timeout: float = SETTINGS.compile_timeout,
     ) -> None:
-        """
-        Build DPDK in the input dir with specified environment variables and meson
-        arguments.
+        """Build DPDK on the remote node.
+
+        An extracted DPDK tarball must be present on the node. The build consists of two steps::
+
+            meson setup <meson args> remote_dpdk_dir remote_dpdk_build_dir
+            ninja -C remote_dpdk_build_dir
+
+        The :option:`--compile-timeout` command line argument and the :envvar:`DTS_COMPILE_TIMEOUT`
+        environment variable configure the timeout of DPDK build.
+
+        Args:
+            env_vars: Use these environment variables when building DPDK.
+            meson_args: Use these meson arguments when building DPDK.
+            remote_dpdk_dir: The directory on the remote node where DPDK will be built.
+            remote_dpdk_build_dir: The target build directory on the remote node.
+            rebuild: If :data:`True`, do a subsequent build with ``meson configure`` instead
+                of ``meson setup``.
+            timeout: Wait at most this long in seconds for the build to execute.
         """
 
     @abstractmethod
     def get_dpdk_version(self, version_path: str | PurePath) -> str:
-        """
-        Inspect DPDK version on the remote node from version_path.
+        """Inspect the DPDK version on the remote node.
+
+        Args:
+            version_path: The path to the VERSION file containing the DPDK version.
+
+        Returns:
+            The DPDK version.
         """
 
     @abstractmethod
     def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
-        """
-        Compose a list of LogicalCores present on the remote node.
-        If use_first_core is False, the first physical core won't be used.
+        r"""Get the list of :class:`~framework.testbed_model.cpu.LogicalCore`\s on the remote node.
+
+        Args:
+            use_first_core: If :data:`False`, the first physical core won't be used.
+
+        Returns:
+            The logical cores present on the node.
         """
 
     @abstractmethod
     def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
-        """
-        Kill and cleanup all DPDK apps identified by dpdk_prefix_list. If
-        dpdk_prefix_list is empty, attempt to find running DPDK apps to kill and clean.
+        """Kill and cleanup all DPDK apps.
+
+        Args:
+            dpdk_prefix_list: Kill all apps identified by `dpdk_prefix_list`.
+                If `dpdk_prefix_list` is empty, attempt to find running DPDK apps to kill and clean.
         """
 
     @abstractmethod
     def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
-        """
-        Get the DPDK file prefix that will be used when running DPDK apps.
+        """Make OS-specific modification to the DPDK file prefix.
+
+        Args:
+            dpdk_prefix: The OS-unaware file prefix.
+
+        Returns:
+            The OS-specific file prefix.
         """
 
     @abstractmethod
-    def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
-        """
-        Get the node's Hugepage Size, configure the specified amount of hugepages
+    def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
+        """Configure hugepages on the node.
+
+        Get the node's Hugepage Size, configure the specified count of hugepages
         if needed and mount the hugepages if needed.
-        If force_first_numa is True, configure hugepages just on the first socket.
+
+        Args:
+            hugepage_count: Configure this many hugepages.
+            force_first_numa: If :data:`True`, configure hugepages just on the first socket.
         """
 
     @abstractmethod
     def get_compiler_version(self, compiler_name: str) -> str:
-        """
-        Get installed version of compiler used for DPDK
+        """Get installed version of compiler used for DPDK.
+
+        Args:
+            compiler_name: The name of the compiler executable.
+
+        Returns:
+            The compiler's version.
         """
 
     @abstractmethod
     def get_node_info(self) -> NodeInfo:
-        """
-        Collect information about the node
+        """Collect additional information about the node.
+
+        Returns:
+            Node information.
         """
 
     @abstractmethod
     def update_ports(self, ports: list[Port]) -> None:
-        """
-        Get additional information about ports:
-            Logical name (e.g. enp7s0) if applicable
-            Mac address
+        """Get additional information about ports from the operating system and update them.
+
+        The additional information is:
+
+            * Logical name (e.g. ``enp7s0``) if applicable,
+            * Mac address.
+
+        Args:
+            ports: The ports to update.
         """
 
     @abstractmethod
     def configure_port_state(self, port: Port, enable: bool) -> None:
-        """
-        Enable/disable port.
+        """Enable/disable `port` in the operating system.
+
+        Args:
+            port: The port to configure.
+            enable: If :data:`True`, enable the port, otherwise shut it down.
         """
 
     @abstractmethod
@@ -273,12 +408,18 @@ def configure_port_ip_address(
         port: Port,
         delete: bool,
     ) -> None:
-        """
-        Configure (add or delete) an IP address of the input port.
+        """Configure an IP address on `port` in the operating system.
+
+        Args:
+            address: The address to configure.
+            port: The port to configure.
+            delete: If :data:`True`, remove the IP address, otherwise configure it.
         """
 
     @abstractmethod
     def configure_ipv4_forwarding(self, enable: bool) -> None:
-        """
-        Enable IPv4 forwarding in the underlying OS.
+        """Enable IPv4 forwarding in the operating system.
+
+        Args:
+            enable: If :data:`True`, enable the forwarding, otherwise disable it.
         """
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v6 16/23] dts: posix and linux sessions docstring update
  2023-11-08 12:53         ` [PATCH v6 01/23] dts: code adjustments for " Juraj Linkeš
                             ` (13 preceding siblings ...)
  2023-11-08 12:53           ` [PATCH v6 15/23] dts: os session " Juraj Linkeš
@ 2023-11-08 12:53           ` Juraj Linkeš
  2023-11-08 12:53           ` [PATCH v6 17/23] dts: node " Juraj Linkeš
                             ` (6 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-08 12:53 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/linux_session.py | 63 ++++++++++-----
 dts/framework/testbed_model/posix_session.py | 81 +++++++++++++++++---
 2 files changed, 113 insertions(+), 31 deletions(-)

diff --git a/dts/framework/testbed_model/linux_session.py b/dts/framework/testbed_model/linux_session.py
index f472bb8f0f..279954ff63 100644
--- a/dts/framework/testbed_model/linux_session.py
+++ b/dts/framework/testbed_model/linux_session.py
@@ -2,6 +2,13 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+"""Linux OS translator.
+
+Translate OS-unaware calls into Linux calls/utilities. Most Linux distributions are mostly
+compliant with POSIX standards, so this module only implements the parts that aren't
+covered by the POSIX translation layer.
+"""
+
 import json
 from ipaddress import IPv4Interface, IPv6Interface
 from typing import TypedDict, Union
@@ -17,43 +24,51 @@
 
 
 class LshwConfigurationOutput(TypedDict):
+    """The relevant parts of ``lshw``'s ``configuration`` section."""
+
+    #:
     link: str
 
 
 class LshwOutput(TypedDict):
-    """
-    A model of the relevant information from json lshw output, e.g.:
-    {
-    ...
-    "businfo" : "pci@0000:08:00.0",
-    "logicalname" : "enp8s0",
-    "version" : "00",
-    "serial" : "52:54:00:59:e1:ac",
-    ...
-    "configuration" : {
-      ...
-      "link" : "yes",
-      ...
-    },
-    ...
+    """A model of the relevant information from ``lshw``'s json output.
+
+    e.g.::
+
+        {
+        ...
+        "businfo" : "pci@0000:08:00.0",
+        "logicalname" : "enp8s0",
+        "version" : "00",
+        "serial" : "52:54:00:59:e1:ac",
+        ...
+        "configuration" : {
+          ...
+          "link" : "yes",
+          ...
+        },
+        ...
+        }
     """
 
+    #:
     businfo: str
+    #:
     logicalname: NotRequired[str]
+    #:
     serial: NotRequired[str]
+    #:
     configuration: LshwConfigurationOutput
 
 
 class LinuxSession(PosixSession):
-    """
-    The implementation of non-Posix compliant parts of Linux remote sessions.
-    """
+    """The implementation of non-Posix compliant parts of Linux."""
 
     @staticmethod
     def _get_privileged_command(command: str) -> str:
         return f"sudo -- sh -c '{command}'"
 
     def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
+        """Overrides :meth:`~.os_session.OSSession.get_remote_cpus`."""
         cpu_info = self.send_command("lscpu -p=CPU,CORE,SOCKET,NODE|grep -v \\#").stdout
         lcores = []
         for cpu_line in cpu_info.splitlines():
@@ -65,18 +80,20 @@ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
         return lcores
 
     def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
+        """Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`."""
         return dpdk_prefix
 
-    def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
+    def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
+        """Overrides :meth:`~.os_session.OSSession.setup_hugepages`."""
         self._logger.info("Getting Hugepage information.")
         hugepage_size = self._get_hugepage_size()
         hugepages_total = self._get_hugepages_total()
         self._numa_nodes = self._get_numa_nodes()
 
-        if force_first_numa or hugepages_total != hugepage_amount:
+        if force_first_numa or hugepages_total != hugepage_count:
             # when forcing numa, we need to clear existing hugepages regardless
             # of size, so they can be moved to the first numa node
-            self._configure_huge_pages(hugepage_amount, hugepage_size, force_first_numa)
+            self._configure_huge_pages(hugepage_count, hugepage_size, force_first_numa)
         else:
             self._logger.info("Hugepages already configured.")
         self._mount_huge_pages()
@@ -140,6 +157,7 @@ def _configure_huge_pages(
         )
 
     def update_ports(self, ports: list[Port]) -> None:
+        """Overrides :meth:`~.os_session.OSSession.update_ports`."""
         self._logger.debug("Gathering port info.")
         for port in ports:
             assert (
@@ -178,6 +196,7 @@ def _update_port_attr(
             )
 
     def configure_port_state(self, port: Port, enable: bool) -> None:
+        """Overrides :meth:`~.os_session.OSSession.configure_port_state`."""
         state = "up" if enable else "down"
         self.send_command(
             f"ip link set dev {port.logical_name} {state}", privileged=True
@@ -189,6 +208,7 @@ def configure_port_ip_address(
         port: Port,
         delete: bool,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.configure_port_ip_address`."""
         command = "del" if delete else "add"
         self.send_command(
             f"ip address {command} {address} dev {port.logical_name}",
@@ -197,5 +217,6 @@ def configure_port_ip_address(
         )
 
     def configure_ipv4_forwarding(self, enable: bool) -> None:
+        """Overrides :meth:`~.os_session.OSSession.configure_ipv4_forwarding`."""
         state = 1 if enable else 0
         self.send_command(f"sysctl -w net.ipv4.ip_forward={state}", privileged=True)
diff --git a/dts/framework/testbed_model/posix_session.py b/dts/framework/testbed_model/posix_session.py
index 1d1d5b1b26..a4824aa274 100644
--- a/dts/framework/testbed_model/posix_session.py
+++ b/dts/framework/testbed_model/posix_session.py
@@ -2,6 +2,15 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+"""POSIX compliant OS translator.
+
+Translates OS-unaware calls into POSIX compliant calls/utilities. POSIX is a set of standards
+for portability between Unix operating systems which not all Linux distributions
+(or the tools most frequently bundled with said distributions) adhere to. Most of Linux
+distributions are mostly compliant though.
+This intermediate module implements the common parts of mostly POSIX compliant distributions.
+"""
+
 import re
 from collections.abc import Iterable
 from pathlib import PurePath, PurePosixPath
@@ -15,13 +24,21 @@
 
 
 class PosixSession(OSSession):
-    """
-    An intermediary class implementing the Posix compliant parts of
-    Linux and other OS remote sessions.
-    """
+    """An intermediary class implementing the POSIX standard."""
 
     @staticmethod
     def combine_short_options(**opts: bool) -> str:
+        """Combine shell options into one argument.
+
+        These are options such as ``-x``, ``-v``, ``-f`` which are combined into ``-xvf``.
+
+        Args:
+            opts: The keys are option names (usually one letter) and the bool values indicate
+                whether to include the option in the resulting argument.
+
+        Returns:
+            The options combined into one argument.
+        """
         ret_opts = ""
         for opt, include in opts.items():
             if include:
@@ -33,17 +50,19 @@ def combine_short_options(**opts: bool) -> str:
         return ret_opts
 
     def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePosixPath:
+        """Overrides :meth:`~.os_session.OSSession.guess_dpdk_remote_dir`."""
         remote_guess = self.join_remote_path(remote_dir, "dpdk-*")
         result = self.send_command(f"ls -d {remote_guess} | tail -1")
         return PurePosixPath(result.stdout)
 
     def get_remote_tmp_dir(self) -> PurePosixPath:
+        """Overrides :meth:`~.os_session.OSSession.get_remote_tmp_dir`."""
         return PurePosixPath("/tmp")
 
     def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
-        """
-        Create extra environment variables needed for i686 arch build. Get information
-        from the node if needed.
+        """Overrides :meth:`~.os_session.OSSession.get_dpdk_build_env_vars`.
+
+        Supported architecture: ``i686``.
         """
         env_vars = {}
         if arch == Architecture.i686:
@@ -63,6 +82,7 @@ def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
         return env_vars
 
     def join_remote_path(self, *args: str | PurePath) -> PurePosixPath:
+        """Overrides :meth:`~.os_session.OSSession.join_remote_path`."""
         return PurePosixPath(*args)
 
     def copy_from(
@@ -70,6 +90,7 @@ def copy_from(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.copy_from`."""
         self.remote_session.copy_from(source_file, destination_file)
 
     def copy_to(
@@ -77,6 +98,7 @@ def copy_to(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.copy_to`."""
         self.remote_session.copy_to(source_file, destination_file)
 
     def remove_remote_dir(
@@ -85,6 +107,7 @@ def remove_remote_dir(
         recursive: bool = True,
         force: bool = True,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.remove_remote_dir`."""
         opts = PosixSession.combine_short_options(r=recursive, f=force)
         self.send_command(f"rm{opts} {remote_dir_path}")
 
@@ -93,6 +116,7 @@ def extract_remote_tarball(
         remote_tarball_path: str | PurePath,
         expected_dir: str | PurePath | None = None,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.extract_remote_tarball`."""
         self.send_command(
             f"tar xfm {remote_tarball_path} "
             f"-C {PurePosixPath(remote_tarball_path).parent}",
@@ -110,6 +134,7 @@ def build_dpdk(
         rebuild: bool = False,
         timeout: float = SETTINGS.compile_timeout,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.build_dpdk`."""
         try:
             if rebuild:
                 # reconfigure, then build
@@ -140,12 +165,14 @@ def build_dpdk(
             raise DPDKBuildError(f"DPDK build failed when doing '{e.command}'.")
 
     def get_dpdk_version(self, build_dir: str | PurePath) -> str:
+        """Overrides :meth:`~.os_session.OSSession.get_dpdk_version`."""
         out = self.send_command(
             f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True
         )
         return out.stdout
 
     def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
+        """Overrides :meth:`~.os_session.OSSession.kill_cleanup_dpdk_apps`."""
         self._logger.info("Cleaning up DPDK apps.")
         dpdk_runtime_dirs = self._get_dpdk_runtime_dirs(dpdk_prefix_list)
         if dpdk_runtime_dirs:
@@ -159,6 +186,14 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
     def _get_dpdk_runtime_dirs(
         self, dpdk_prefix_list: Iterable[str]
     ) -> list[PurePosixPath]:
+        """Find runtime directories DPDK apps are currently using.
+
+        Args:
+            dpdk_prefix_list: The prefixes DPDK apps were started with.
+
+        Returns:
+            The paths of DPDK apps' runtime dirs.
+        """
         prefix = PurePosixPath("/var", "run", "dpdk")
         if not dpdk_prefix_list:
             remote_prefixes = self._list_remote_dirs(prefix)
@@ -170,9 +205,13 @@ def _get_dpdk_runtime_dirs(
         return [PurePosixPath(prefix, dpdk_prefix) for dpdk_prefix in dpdk_prefix_list]
 
     def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None:
-        """
-        Return a list of directories of the remote_dir.
-        If remote_path doesn't exist, return None.
+        """Contents of remote_path.
+
+        Args:
+            remote_path: List the contents of this path.
+
+        Returns:
+            The contents of remote_path. If remote_path doesn't exist, return None.
         """
         out = self.send_command(
             f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'"
@@ -183,6 +222,17 @@ def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None:
             return out.splitlines()
 
     def _get_dpdk_pids(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> list[int]:
+        """Find PIDs of running DPDK apps.
+
+        Look at each "config" file found in dpdk_runtime_dirs and find the PIDs of processes
+        that opened those file.
+
+        Args:
+            dpdk_runtime_dirs: The paths of DPDK apps' runtime dirs.
+
+        Returns:
+            The PIDs of running DPDK apps.
+        """
         pids = []
         pid_regex = r"p(\d+)"
         for dpdk_runtime_dir in dpdk_runtime_dirs:
@@ -203,6 +253,14 @@ def _remote_files_exists(self, remote_path: PurePath) -> bool:
     def _check_dpdk_hugepages(
         self, dpdk_runtime_dirs: Iterable[str | PurePath]
     ) -> None:
+        """Check there aren't any leftover hugepages.
+
+        If any hugegapes are found, emit a warning. The hugepages are investigated in the
+        "hugepage_info" file of dpdk_runtime_dirs.
+
+        Args:
+            dpdk_runtime_dirs: The paths of DPDK apps' runtime dirs.
+        """
         for dpdk_runtime_dir in dpdk_runtime_dirs:
             hugepage_info = PurePosixPath(dpdk_runtime_dir, "hugepage_info")
             if self._remote_files_exists(hugepage_info):
@@ -220,9 +278,11 @@ def _remove_dpdk_runtime_dirs(
             self.remove_remote_dir(dpdk_runtime_dir)
 
     def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
+        """Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`."""
         return ""
 
     def get_compiler_version(self, compiler_name: str) -> str:
+        """Overrides :meth:`~.os_session.OSSession.get_compiler_version`."""
         match compiler_name:
             case "gcc":
                 return self.send_command(
@@ -240,6 +300,7 @@ def get_compiler_version(self, compiler_name: str) -> str:
                 raise ValueError(f"Unknown compiler {compiler_name}")
 
     def get_node_info(self) -> NodeInfo:
+        """Overrides :meth:`~.os_session.OSSession.get_node_info`."""
         os_release_info = self.send_command(
             "awk -F= '$1 ~ /^NAME$|^VERSION$/ {print $2}' /etc/os-release",
             SETTINGS.timeout,
-- 
2.34.1
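
A minimal, self-contained sketch of the short-option helper documented above may help;
the loop body elided by the hunk is assumed to append each enabled flag character:

    def combine_short_options(**opts: bool) -> str:
        """Combine boolean keyword flags into one short-option string, e.g. " -rf"."""
        ret_opts = ""
        for opt, include in opts.items():
            if include:
                ret_opts = f"{ret_opts}{opt}"  # assumed: append each enabled flag
        if ret_opts:
            ret_opts = f" -{ret_opts}"
        return ret_opts

    # combine_short_options(r=True, f=True) returns " -rf", so remove_remote_dir()
    # ends up sending "rm -rf <remote_dir_path>" to the remote node.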

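The PID discovery in _get_dpdk_pids hinges on the "p(\d+)" pattern; a minimal sketch of
the parsing, assuming a per-process listing where each line starting with "p" carries
one PID (the sample output below is hypothetical):

    import re

    pid_regex = r"p(\d+)"
    out = "p1234\np5678"  # hypothetical listing output, one "p<PID>" line per process
    pids = [int(match.group(1)) for match in re.finditer(pid_regex, out)]
    print(pids)  # [1234, 5678]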

^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v6 17/23] dts: node docstring update
  2023-11-08 12:53         ` [PATCH v6 01/23] dts: code adjustments for " Juraj Linkeš
                             ` (14 preceding siblings ...)
  2023-11-08 12:53           ` [PATCH v6 16/23] dts: posix and linux sessions " Juraj Linkeš
@ 2023-11-08 12:53           ` Juraj Linkeš
  2023-11-08 12:53           ` [PATCH v6 18/23] dts: sut and tg nodes " Juraj Linkeš
                             ` (5 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-08 12:53 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/node.py | 191 +++++++++++++++++++---------
 1 file changed, 131 insertions(+), 60 deletions(-)

diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index 7571e7b98d..abf86793a7 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -3,8 +3,13 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
-"""
-A node is a generic host that DTS connects to and manages.
+"""Common functionality for node management.
+
+A node is any host/server DTS connects to.
+
+The base class, :class:`Node`, provides functionality common to all nodes and is supposed
+to be extended by subclasses with functionality specific to each node type.
+The decorator :func:`Node.skip_setup` can be used without subclassing.
 """
 
 from abc import ABC
@@ -35,10 +40,22 @@
 
 
 class Node(ABC):
-    """
-    Basic class for node management. This class implements methods that
-    manage a node, such as information gathering (of CPU/PCI/NIC) and
-    environment setup.
+    """The base class for node management.
+
+    It shouldn't be instantiated, but rather subclassed.
+    It implements common methods to manage any node:
+
+        * Connection to the node,
+        * Hugepages setup.
+
+    Attributes:
+        main_session: The primary OS-aware remote session used to communicate with the node.
+        config: The node configuration.
+        name: The name of the node.
+        lcores: The list of logical cores that DTS can use on the node.
+            It's derived from logical cores present on the node and the test run configuration.
+        ports: The ports of this node specified in the test run configuration.
+        virtual_devices: The virtual devices used on the node.
     """
 
     main_session: OSSession
@@ -52,6 +69,17 @@ class Node(ABC):
     virtual_devices: list[VirtualDevice]
 
     def __init__(self, node_config: NodeConfiguration):
+        """Connect to the node and gather info during initialization.
+
+        Extra gathered information:
+
+        * The list of available logical CPUs. This is then filtered by
+          the ``lcores`` configuration in the YAML test run configuration file,
+        * Information about ports from the YAML test run configuration file.
+
+        Args:
+            node_config: The node's test run configuration.
+        """
         self.config = node_config
         self.name = node_config.name
         self._logger = getLogger(self.name)
@@ -60,7 +88,7 @@ def __init__(self, node_config: NodeConfiguration):
         self._logger.info(f"Connected to node: {self.name}")
 
         self._get_remote_cpus()
-        # filter the node lcores according to user config
+        # filter the node lcores according to the test run configuration
         self.lcores = LogicalCoreListFilter(
             self.lcores, LogicalCoreList(self.config.lcores)
         ).filter()
@@ -77,9 +105,14 @@ def _init_ports(self) -> None:
             self.configure_port_state(port)
 
     def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
-        """
-        Perform the execution setup that will be done for each execution
-        this node is part of.
+        """Execution setup steps.
+
+        Configure hugepages and call :meth:`_set_up_execution` where
+        the rest of the configuration steps (if any) are implemented.
+
+        Args:
+            execution_config: The execution test run configuration according to which
+                the setup steps will be taken.
         """
         self._setup_hugepages()
         self._set_up_execution(execution_config)
@@ -88,58 +121,74 @@ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
             self.virtual_devices.append(VirtualDevice(vdev))
 
     def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional execution setup steps for subclasses.
+
+        Subclasses should override this if they need additional execution setup steps.
         """
 
     def tear_down_execution(self) -> None:
-        """
-        Perform the execution teardown that will be done after each execution
-        this node is part of concludes.
+        """Execution teardown steps.
+
+        There are currently no execution teardown steps common to all DTS node types.
         """
         self.virtual_devices = []
         self._tear_down_execution()
 
     def _tear_down_execution(self) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional execution teardown steps for subclasses.
+
+        Subclasses should override this if they need additional execution teardown steps.
         """
 
     def set_up_build_target(
         self, build_target_config: BuildTargetConfiguration
     ) -> None:
-        """
-        Perform the build target setup that will be done for each build target
-        tested on this node.
+        """Build target setup steps.
+
+        There are currently no build target setup steps common to all DTS node types.
+
+        Args:
+            build_target_config: The build target test run configuration according to which
+                the setup steps will be taken.
         """
         self._set_up_build_target(build_target_config)
 
     def _set_up_build_target(
         self, build_target_config: BuildTargetConfiguration
     ) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional build target setup steps for subclasses.
+
+        Subclasses should override this if they need additional build target setup steps.
         """
 
     def tear_down_build_target(self) -> None:
-        """
-        Perform the build target teardown that will be done after each build target
-        tested on this node.
+        """Build target teardown steps.
+
+        There are currently no build target teardown steps common to all DTS node types.
         """
         self._tear_down_build_target()
 
     def _tear_down_build_target(self) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional build target teardown steps for subclasses.
+
+        Subclasses should override this if they need additional build target teardown steps.
         """
 
     def create_session(self, name: str) -> OSSession:
-        """
-        Create and return a new OSSession tailored to the remote OS.
+        """Create and return a new OS-aware remote session.
+
+        The returned session won't be used by the node creating it. The session must be used by
+        the caller. The session will be maintained for the entire lifecycle of the node object,
+        at the end of which the session will be cleaned up automatically.
+
+        Note:
+            Any number of these supplementary sessions may be created.
+
+        Args:
+            name: The name of the session.
+
+        Returns:
+            A new OS-aware remote session.
         """
         session_name = f"{self.name} {name}"
         connection = create_session(
@@ -157,19 +206,19 @@ def create_interactive_shell(
         privileged: bool = False,
         app_args: str = "",
     ) -> InteractiveShellType:
-        """Create a handler for an interactive session.
+        """Factory for interactive session handlers.
 
-        Instantiate shell_cls according to the remote OS specifics.
+        Instantiate `shell_cls` according to the remote OS specifics.
 
         Args:
             shell_cls: The class of the shell.
-            timeout: Timeout for reading output from the SSH channel. If you are
-                reading from the buffer and don't receive any data within the timeout
-                it will throw an error.
+            timeout: Timeout for reading output from the SSH channel. If no data is received
+                from the buffer within the timeout, an exception is raised.
             privileged: Whether to run the shell with administrative privileges.
             app_args: The arguments to be passed to the application.
+
         Returns:
-            Instance of the desired interactive application.
+            An instance of the desired interactive application shell.
         """
         if not shell_cls.dpdk_app:
             shell_cls.path = self.main_session.join_remote_path(shell_cls.path)
@@ -186,14 +235,22 @@ def filter_lcores(
         filter_specifier: LogicalCoreCount | LogicalCoreList,
         ascending: bool = True,
     ) -> list[LogicalCore]:
-        """
-        Filter the LogicalCores found on the Node according to
-        a LogicalCoreCount or a LogicalCoreList.
+        """Filter the node's logical cores that DTS can use.
+
+        Logical cores that DTS can use are the ones that are present on the node, but filtered
+        according to the test run configuration. The `filter_specifier` will filter cores from
+        those logical cores.
+
+        Args:
+            filter_specifier: Two different filters can be used, one that specifies the number
+                of logical cores per core, cores per socket and the number of sockets,
+                and another one that specifies a logical core list.
+            ascending: If :data:`True`, use cores with the lowest numerical id first and continue
+                in ascending order. If :data:`False`, start with the highest id and continue
+                in descending order. This ordering affects which sockets to consider first as well.
 
-        If ascending is True, use cores with the lowest numerical id first
-        and continue in ascending order. If False, start with the highest
-        id and continue in descending order. This ordering affects which
-        sockets to consider first as well.
+        Returns:
+            The filtered logical cores.
         """
         self._logger.debug(f"Filtering {filter_specifier} from {self.lcores}.")
         return lcore_filter(
@@ -203,17 +260,14 @@ def filter_lcores(
         ).filter()
 
     def _get_remote_cpus(self) -> None:
-        """
-        Scan CPUs in the remote OS and store a list of LogicalCores.
-        """
+        """Scan CPUs in the remote OS and store a list of LogicalCores."""
         self._logger.info("Getting CPU information.")
         self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
 
     def _setup_hugepages(self) -> None:
-        """
-        Setup hugepages on the Node. Different architectures can supply different
-        amounts of memory for hugepages and numa-based hugepage allocation may need
-        to be considered.
+        """Setup hugepages on the node.
+
+        Configure the hugepages only if they're specified in the node's test run configuration.
         """
         if self.config.hugepages:
             self.main_session.setup_hugepages(
@@ -221,8 +275,11 @@ def _setup_hugepages(self) -> None:
             )
 
     def configure_port_state(self, port: Port, enable: bool = True) -> None:
-        """
-        Enable/disable port.
+        """Enable/disable `port`.
+
+        Args:
+            port: The port to enable/disable.
+            enable: :data:`True` to enable, :data:`False` to disable.
         """
         self.main_session.configure_port_state(port, enable)
 
@@ -232,15 +289,17 @@ def configure_port_ip_address(
         port: Port,
         delete: bool = False,
     ) -> None:
-        """
-        Configure the IP address of a port on this node.
+        """Add an IP address to `port` on this node.
+
+        Args:
+            address: The IP address with mask in CIDR format. Can be either IPv4 or IPv6.
+            port: The port to which to add the address.
+            delete: If :data:`True`, will delete the address from the port instead of adding it.
         """
         self.main_session.configure_port_ip_address(address, port, delete)
 
     def close(self) -> None:
-        """
-        Close all connections and free other resources.
-        """
+        """Close all connections and free other resources."""
         if self.main_session:
             self.main_session.close()
         for session in self._other_sessions:
@@ -249,6 +308,11 @@ def close(self) -> None:
 
     @staticmethod
     def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
+        """Skip the decorated function.
+
+        The :option:`--skip-setup` command line argument and the :envvar:`DTS_SKIP_SETUP`
+        environment variable enable the decorator.
+        """
         if SETTINGS.skip_setup:
             return lambda *args: None
         else:
@@ -258,6 +322,13 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
 def create_session(
     node_config: NodeConfiguration, name: str, logger: DTSLOG
 ) -> OSSession:
+    """Factory for OS-aware sessions.
+
+    Args:
+        node_config: The test run configuration of the node to connect to.
+        name: The name of the session.
+        logger: The logger instance this session will use.
+    """
     match node_config.os:
         case OS.linux:
             return LinuxSession(node_config, name, logger)
-- 
2.34.1
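
A minimal, self-contained sketch of the skip-setup pattern that Node.skip_setup documents
above, with a stand-in SETTINGS object and a hypothetical Demo class (the real decorator
reads the DTS settings module):

    from types import SimpleNamespace
    from typing import Any, Callable

    SETTINGS = SimpleNamespace(skip_setup=True)  # stand-in for the DTS settings object

    def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
        # Evaluated once, at decoration time: when setup is skipped,
        # the decorated function is replaced with a no-op.
        if SETTINGS.skip_setup:
            return lambda *args: None
        return func

    class Demo:
        @skip_setup
        def copy_tarball(self) -> None:
            print("copying")  # never runs while skip_setup is True

    Demo().copy_tarball()  # no output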


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v6 18/23] dts: sut and tg nodes docstring update
  2023-11-08 12:53         ` [PATCH v6 01/23] dts: code adjustments for " Juraj Linkeš
                             ` (15 preceding siblings ...)
  2023-11-08 12:53           ` [PATCH v6 17/23] dts: node " Juraj Linkeš
@ 2023-11-08 12:53           ` Juraj Linkeš
  2023-11-08 12:53           ` [PATCH v6 19/23] dts: base traffic generators " Juraj Linkeš
                             ` (4 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-08 12:53 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/sut_node.py | 219 ++++++++++++++++--------
 dts/framework/testbed_model/tg_node.py  |  42 +++--
 2 files changed, 170 insertions(+), 91 deletions(-)

diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 4e33cf02ea..b57d48fd31 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -3,6 +3,14 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+"""System under test (DPDK + hardware) node.
+
+A system under test (SUT) is the combination of DPDK
+and the hardware we're testing with DPDK (NICs, crypto and other devices).
+An SUT node is where this SUT runs.
+"""
+
+
 import os
 import tarfile
 import time
@@ -26,6 +34,11 @@
 
 
 class EalParameters(object):
+    """The environment abstraction layer parameters.
+
+    Converting an instance to a string produces the EAL parameter string used on the command line.
+    """
+
     def __init__(
         self,
         lcore_list: LogicalCoreList,
@@ -35,21 +48,23 @@ def __init__(
         vdevs: list[VirtualDevice],
         other_eal_param: str,
     ):
-        """
-        Generate eal parameters character string;
-        :param lcore_list: the list of logical cores to use.
-        :param memory_channels: the number of memory channels to use.
-        :param prefix: set file prefix string, eg:
-                        prefix='vf'
-        :param no_pci: switch of disable PCI bus eg:
-                        no_pci=True
-        :param vdevs: virtual device list, eg:
-                        vdevs=[
-                            VirtualDevice('net_ring0'),
-                            VirtualDevice('net_ring1')
-                        ]
-        :param other_eal_param: user defined DPDK eal parameters, eg:
-                        other_eal_param='--single-file-segments'
+        """Initialize the parameters according to inputs.
+
+        Process the parameters into the format used on the command line.
+
+        Args:
+            lcore_list: The list of logical cores to use.
+            memory_channels: The number of memory channels to use.
+            prefix: Set the file prefix string with which to start DPDK, e.g.: ``prefix='vf'``.
+            no_pci: Switch to disable PCI bus e.g.: ``no_pci=True``.
+            vdevs: Virtual devices, e.g.::
+
+                vdevs=[
+                    VirtualDevice('net_ring0'),
+                    VirtualDevice('net_ring1')
+                ]
+            other_eal_param: User-defined DPDK EAL parameters, e.g.:
+                ``other_eal_param='--single-file-segments'``
         """
         self._lcore_list = f"-l {lcore_list}"
         self._memory_channels = f"-n {memory_channels}"
@@ -61,6 +76,7 @@ def __init__(
         self._other_eal_param = other_eal_param
 
     def __str__(self) -> str:
+        """Create the EAL string."""
         return (
             f"{self._lcore_list} "
             f"{self._memory_channels} "
@@ -72,11 +88,21 @@ def __str__(self) -> str:
 
 
 class SutNode(Node):
-    """
-    A class for managing connections to the System under Test, providing
-    methods that retrieve the necessary information about the node (such as
-    CPU, memory and NIC details) and configuration capabilities.
-    Another key capability is building DPDK according to given build target.
+    """The system under test node.
+
+    The SUT node extends :class:`Node` with DPDK specific features:
+
+        * DPDK build,
+        * Gathering of DPDK build info,
+        * Running DPDK apps, either interactively or as one-time execution,
+        * DPDK apps cleanup.
+
+    The :option:`--tarball` command line argument and the :envvar:`DTS_DPDK_TARBALL`
+    environment variable configure the path to the DPDK tarball
+    or the git commit ID, tag ID or tree ID to test.
+
+    Attributes:
+        config: The SUT node configuration.
     """
 
     config: SutNodeConfiguration
@@ -93,6 +119,11 @@ class SutNode(Node):
     _compiler_version: str | None
 
     def __init__(self, node_config: SutNodeConfiguration):
+        """Extend the constructor with SUT node specifics.
+
+        Args:
+            node_config: The SUT node's test run configuration.
+        """
         super(SutNode, self).__init__(node_config)
         self._dpdk_prefix_list = []
         self._build_target_config = None
@@ -111,6 +142,12 @@ def __init__(self, node_config: SutNodeConfiguration):
 
     @property
     def _remote_dpdk_dir(self) -> PurePath:
+        """The remote DPDK dir.
+
+        This internal property should be set after extracting the DPDK tarball. If it's not set,
+        that implies the DPDK setup step has been skipped, in which case we can guess where
+        a previous build was located.
+        """
         if self.__remote_dpdk_dir is None:
             self.__remote_dpdk_dir = self._guess_dpdk_remote_dir()
         return self.__remote_dpdk_dir
@@ -121,6 +158,11 @@ def _remote_dpdk_dir(self, value: PurePath) -> None:
 
     @property
     def remote_dpdk_build_dir(self) -> PurePath:
+        """The remote DPDK build directory.
+
+        This is the directory where DPDK was built.
+        We assume it was built in a subdirectory of the extracted tarball.
+        """
         if self._build_target_config:
             return self.main_session.join_remote_path(
                 self._remote_dpdk_dir, self._build_target_config.name
@@ -130,6 +172,7 @@ def remote_dpdk_build_dir(self) -> PurePath:
 
     @property
     def dpdk_version(self) -> str:
+        """Last built DPDK version."""
         if self._dpdk_version is None:
             self._dpdk_version = self.main_session.get_dpdk_version(
                 self._remote_dpdk_dir
@@ -138,12 +181,14 @@ def dpdk_version(self) -> str:
 
     @property
     def node_info(self) -> NodeInfo:
+        """Additional node information."""
         if self._node_info is None:
             self._node_info = self.main_session.get_node_info()
         return self._node_info
 
     @property
     def compiler_version(self) -> str:
+        """The node's compiler version."""
         if self._compiler_version is None:
             if self._build_target_config is not None:
                 self._compiler_version = self.main_session.get_compiler_version(
@@ -158,6 +203,11 @@ def compiler_version(self) -> str:
         return self._compiler_version
 
     def get_build_target_info(self) -> BuildTargetInfo:
+        """Get additional build target information.
+
+        Returns:
+            The build target information.
+        """
         return BuildTargetInfo(
             dpdk_version=self.dpdk_version, compiler_version=self.compiler_version
         )
@@ -168,8 +218,9 @@ def _guess_dpdk_remote_dir(self) -> PurePath:
     def _set_up_build_target(
         self, build_target_config: BuildTargetConfiguration
     ) -> None:
-        """
-        Setup DPDK on the SUT node.
+        """Setup DPDK on the SUT node.
+
+        Additional build target setup steps on top of those in :class:`Node`.
         """
         # we want to ensure that dpdk_version and compiler_version is reset for new
         # build targets
@@ -182,9 +233,7 @@ def _set_up_build_target(
     def _configure_build_target(
         self, build_target_config: BuildTargetConfiguration
     ) -> None:
-        """
-        Populate common environment variables and set build target config.
-        """
+        """Populate common environment variables and set build target config."""
         self._env_vars = {}
         self._build_target_config = build_target_config
         self._env_vars.update(
@@ -199,9 +248,7 @@ def _configure_build_target(
 
     @Node.skip_setup
     def _copy_dpdk_tarball(self) -> None:
-        """
-        Copy to and extract DPDK tarball on the SUT node.
-        """
+        """Copy to and extract DPDK tarball on the SUT node."""
         self._logger.info("Copying DPDK tarball to SUT.")
         self.main_session.copy_to(SETTINGS.dpdk_tarball_path, self._remote_tmp_dir)
 
@@ -232,8 +279,9 @@ def _copy_dpdk_tarball(self) -> None:
 
     @Node.skip_setup
     def _build_dpdk(self) -> None:
-        """
-        Build DPDK. Uses the already configured target. Assumes that the tarball has
+        """Build DPDK.
+
+        Uses the already configured target. Assumes that the tarball has
         already been copied to and extracted on the SUT node.
         """
         self.main_session.build_dpdk(
@@ -244,15 +292,19 @@ def _build_dpdk(self) -> None:
         )
 
     def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePath:
-        """
-        Build one or all DPDK apps. Requires DPDK to be already built on the SUT node.
-        When app_name is 'all', build all example apps.
-        When app_name is any other string, tries to build that example app.
-        Return the directory path of the built app. If building all apps, return
-        the path to the examples directory (where all apps reside).
-        The meson_dpdk_args are keyword arguments
-        found in meson_option.txt in root DPDK directory. Do not use -D with them,
-        for example: enable_kmods=True.
+        """Build one or all DPDK apps.
+
+        Requires DPDK to be already built on the SUT node.
+
+        Args:
+            app_name: The name of the DPDK app to build.
+                When `app_name` is ``all``, build all example apps.
+            meson_dpdk_args: The arguments found in ``meson_options.txt`` in the root DPDK directory.
+                Do not use ``-D`` with them, e.g.: ``enable_kmods=True``.
+
+        Returns:
+            The directory path of the built app. If building all apps, return
+            the path to the examples directory (where all apps reside).
         """
         self.main_session.build_dpdk(
             self._env_vars,
@@ -273,9 +325,7 @@ def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePa
         )
 
     def kill_cleanup_dpdk_apps(self) -> None:
-        """
-        Kill all dpdk applications on the SUT. Cleanup hugepages.
-        """
+        """Kill all dpdk applications on the SUT, then clean up hugepages."""
         if self._dpdk_kill_session and self._dpdk_kill_session.is_alive():
             # we can use the session if it exists and responds
             self._dpdk_kill_session.kill_cleanup_dpdk_apps(self._dpdk_prefix_list)
@@ -294,33 +344,34 @@ def create_eal_parameters(
         vdevs: list[VirtualDevice] | None = None,
         other_eal_param: str = "",
     ) -> "EalParameters":
-        """
-        Generate eal parameters character string;
-        :param lcore_filter_specifier: a number of lcores/cores/sockets to use
-                        or a list of lcore ids to use.
-                        The default will select one lcore for each of two cores
-                        on one socket, in ascending order of core ids.
-        :param ascending_cores: True, use cores with the lowest numerical id first
-                        and continue in ascending order. If False, start with the
-                        highest id and continue in descending order. This ordering
-                        affects which sockets to consider first as well.
-        :param prefix: set file prefix string, eg:
-                        prefix='vf'
-        :param append_prefix_timestamp: if True, will append a timestamp to
-                        DPDK file prefix.
-        :param no_pci: switch of disable PCI bus eg:
-                        no_pci=True
-        :param vdevs: virtual device list, eg:
-                        vdevs=[
-                            VirtualDevice('net_ring0'),
-                            VirtualDevice('net_ring1')
-                        ]
-        :param other_eal_param: user defined DPDK eal parameters, eg:
-                        other_eal_param='--single-file-segments'
-        :return: eal param string, eg:
-                '-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420';
-        """
+        """Compose the EAL parameters.
+
+        Process the list of cores and the DPDK prefix and pass that along with
+        the rest of the arguments.
 
+        Args:
+            lcore_filter_specifier: A number of lcores/cores/sockets to use
+                or a list of lcore ids to use.
+                The default will select one lcore for each of two cores
+                on one socket, in ascending order of core ids.
+            ascending_cores: Sort cores in ascending order (lowest to highest IDs).
+                If :data:`False`, sort in descending order.
+            prefix: Set the file prefix string with which to start DPDK, e.g.: ``prefix='vf'``.
+            append_prefix_timestamp: If :data:`True`, will append a timestamp to the DPDK file prefix.
+            no_pci: Switch to disable PCI bus e.g.: ``no_pci=True``.
+            vdevs: Virtual devices, e.g.::
+
+                vdevs=[
+                    VirtualDevice('net_ring0'),
+                    VirtualDevice('net_ring1')
+                ]
+            other_eal_param: User-defined DPDK EAL parameters, e.g.:
+                ``other_eal_param='--single-file-segments'``.
+
+        Returns:
+            An EAL param string, such as
+            ``-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420``.
+        """
         lcore_list = LogicalCoreList(
             self.filter_lcores(lcore_filter_specifier, ascending_cores)
         )
@@ -346,14 +397,29 @@ def create_eal_parameters(
     def run_dpdk_app(
         self, app_path: PurePath, eal_args: "EalParameters", timeout: float = 30
     ) -> CommandResult:
-        """
-        Run DPDK application on the remote node.
+        """Run DPDK application on the remote node.
+
+        The application is not run interactively - the command that starts the application
+        is executed and then the call waits for it to finish execution.
+
+        Args:
+            app_path: The remote path to the DPDK application.
+            eal_args: EAL parameters to run the DPDK application with.
+            timeout: Wait at most this long in seconds to execute the command.
+
+        Returns:
+            The result of the DPDK app execution.
         """
         return self.main_session.send_command(
             f"{app_path} {eal_args}", timeout, privileged=True, verify=True
         )
 
     def configure_ipv4_forwarding(self, enable: bool) -> None:
+        """Enable/disable IPv4 forwarding on the node.
+
+        Args:
+            enable: If :data:`True`, enable the forwarding, otherwise disable it.
+        """
         self.main_session.configure_ipv4_forwarding(enable)
 
     def create_interactive_shell(
@@ -363,9 +429,13 @@ def create_interactive_shell(
         privileged: bool = False,
         eal_parameters: EalParameters | str | None = None,
     ) -> InteractiveShellType:
-        """Factory method for creating a handler for an interactive session.
+        """Extend the factory for interactive session handlers.
+
+        The extensions are SUT node specific:
 
-        Instantiate shell_cls according to the remote OS specifics.
+            * The default for `eal_parameters`,
+            * The interactive shell path `shell_cls.path` is prepended with the path to the remote
+              DPDK build directory for DPDK apps.
 
         Args:
             shell_cls: The class of the shell.
@@ -375,9 +445,10 @@ def create_interactive_shell(
             privileged: Whether to run the shell with administrative privileges.
             eal_parameters: List of EAL parameters to use to launch the app. If this
                 isn't provided or an empty string is passed, it will default to calling
-                create_eal_parameters().
+                :meth:`create_eal_parameters`.
+
         Returns:
-            Instance of the desired interactive application.
+            An instance of the desired interactive application shell.
         """
         if not eal_parameters:
             eal_parameters = self.create_eal_parameters()
diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
index 166eb8430e..69eb33ccb1 100644
--- a/dts/framework/testbed_model/tg_node.py
+++ b/dts/framework/testbed_model/tg_node.py
@@ -5,13 +5,8 @@
 
 """Traffic generator node.
 
-This is the node where the traffic generator resides.
-The distinction between a node and a traffic generator is as follows:
-A node is a host that DTS connects to. It could be a baremetal server,
-a VM or a container.
-A traffic generator is software running on the node.
-A traffic generator node is a node running a traffic generator.
-A node can be a traffic generator node as well as system under test node.
+A traffic generator (TG) generates traffic that's sent towards the SUT node.
+A TG node is where the TG runs.
 """
 
 from scapy.packet import Packet  # type: ignore[import]
@@ -24,13 +19,16 @@
 
 
 class TGNode(Node):
-    """Manage connections to a node with a traffic generator.
+    """The traffic generator node.
 
-    Apart from basic node management capabilities, the Traffic Generator node has
-    specialized methods for handling the traffic generator running on it.
+    The TG node extends :class:`Node` with TG specific features:
 
-    Arguments:
-        node_config: The user configuration of the traffic generator node.
+        * Traffic generator initialization,
+        * Sending traffic and receiving packets,
+        * Sending traffic without receiving packets.
+
+    Not all traffic generators are capable of capturing traffic, which is why there
+    must be a way to send traffic without capturing it.
 
     Attributes:
         traffic_generator: The traffic generator running on the node.
@@ -39,6 +37,13 @@ class TGNode(Node):
     traffic_generator: CapturingTrafficGenerator
 
     def __init__(self, node_config: TGNodeConfiguration):
+        """Extend the constructor with TG node specifics.
+
+        Initialize the traffic generator on the TG node.
+
+        Args:
+            node_config: The TG node's test run configuration.
+        """
         super(TGNode, self).__init__(node_config)
         self.traffic_generator = create_traffic_generator(
             self, node_config.traffic_generator
@@ -52,17 +57,17 @@ def send_packet_and_capture(
         receive_port: Port,
         duration: float = 1,
     ) -> list[Packet]:
-        """Send a packet, return received traffic.
+        """Send `packet`, return received traffic.
 
-        Send a packet on the send_port and then return all traffic captured
-        on the receive_port for the given duration. Also record the captured traffic
+        Send `packet` on `send_port` and then return all traffic captured
+        on `receive_port` for the given duration. Also record the captured traffic
         in a pcap file.
 
         Args:
             packet: The packet to send.
             send_port: The egress port on the TG node.
             receive_port: The ingress port in the TG node.
-            duration: Capture traffic for this amount of time after sending the packet.
+            duration: Capture traffic for this amount of time after sending `packet`.
 
         Returns:
              A list of received packets. May be empty if no packets are captured.
@@ -72,6 +77,9 @@ def send_packet_and_capture(
         )
 
     def close(self) -> None:
-        """Free all resources used by the node"""
+        """Free all resources used by the node.
+
+        This extends the superclass method with TG cleanup.
+        """
         self.traffic_generator.close()
         super(TGNode, self).close()
-- 
2.34.1
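
A trimmed-down sketch of the EAL string composition that the EalParameters docstrings
above describe; the class name and reduced field set are illustrative only:

    class EalParametersSketch:
        def __init__(self, lcore_list: str, memory_channels: int,
                     prefix: str, no_pci: bool, other_eal_param: str):
            self._lcore_list = f"-l {lcore_list}"
            self._memory_channels = f"-n {memory_channels}"
            self._prefix = f"--file-prefix={prefix}" if prefix else ""
            self._no_pci = "--no-pci" if no_pci else ""
            self._other_eal_param = other_eal_param

        def __str__(self) -> str:
            # Joining the pre-formatted pieces yields the command line EAL string.
            return (f"{self._lcore_list} {self._memory_channels} "
                    f"{self._prefix} {self._no_pci} {self._other_eal_param}")

    print(EalParametersSketch("0-3", 4, "vf", False, "--single-file-segments"))
    # -l 0-3 -n 4 --file-prefix=vf  --single-file-segments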


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v6 19/23] dts: base traffic generators docstring update
  2023-11-08 12:53         ` [PATCH v6 01/23] dts: code adjustments for " Juraj Linkeš
                             ` (16 preceding siblings ...)
  2023-11-08 12:53           ` [PATCH v6 18/23] dts: sut and tg nodes " Juraj Linkeš
@ 2023-11-08 12:53           ` Juraj Linkeš
  2023-11-08 12:53           ` [PATCH v6 20/23] dts: scapy tg " Juraj Linkeš
                             ` (3 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-08 12:53 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 .../traffic_generator/__init__.py             | 22 ++++++++-
 .../capturing_traffic_generator.py            | 46 +++++++++++--------
 .../traffic_generator/traffic_generator.py    | 33 +++++++------
 3 files changed, 68 insertions(+), 33 deletions(-)

diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
index 11bfa1ee0f..51cca77da4 100644
--- a/dts/framework/testbed_model/traffic_generator/__init__.py
+++ b/dts/framework/testbed_model/traffic_generator/__init__.py
@@ -1,6 +1,19 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""DTS traffic generators.
+
+A traffic generator is capable of generating traffic and then monitoring the returning traffic.
+A traffic generator may just count the number of received packets,
+or it may additionally capture individual packets.
+
+A traffic generator may be software running on generic hardware or it may be specialized hardware.
+
+The traffic generators that only count the number of received packets are suitable only for
+performance testing. In functional testing, we need to be able to dissect each arrived packet
+and a capturing traffic generator is required.
+"""
+
 from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType
 from framework.exception import ConfigurationError
 from framework.testbed_model.node import Node
@@ -12,8 +25,15 @@
 def create_traffic_generator(
     tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig
 ) -> CapturingTrafficGenerator:
-    """A factory function for creating traffic generator object from user config."""
+    """The factory function for creating traffic generator objects from the test run configuration.
+
+    Args:
+        tg_node: The traffic generator node where the created traffic generator will be running.
+        traffic_generator_config: The traffic generator config.
 
+    Returns:
+        A traffic generator capable of capturing received packets.
+    """
     match traffic_generator_config.traffic_generator_type:
         case TrafficGeneratorType.SCAPY:
             return ScapyTrafficGenerator(tg_node, traffic_generator_config)
diff --git a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
index e521211ef0..b0a43ad003 100644
--- a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
+++ b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
@@ -23,19 +23,22 @@
 
 
 def _get_default_capture_name() -> str:
-    """
-    This is the function used for the default implementation of capture names.
-    """
     return str(uuid.uuid4())
 
 
 class CapturingTrafficGenerator(TrafficGenerator):
     """Capture packets after sending traffic.
 
-    A mixin interface which enables a packet generator to declare that it can capture
+    The intermediary interface which enables a packet generator to declare that it can capture
     packets and return them to the user.
 
+    Similarly to
+    :class:`~framework.testbed_model.traffic_generator.traffic_generator.TrafficGenerator`,
+    this class exposes the public methods specific to capturing traffic generators and defines
+    a private method that subclasses must implement with the traffic generation and capturing logic.
+
     The methods of capturing traffic generators obey the following workflow:
+
         1. send packets
         2. capture packets
         3. write the capture to a .pcap file
@@ -44,6 +47,7 @@ class CapturingTrafficGenerator(TrafficGenerator):
 
     @property
     def is_capturing(self) -> bool:
+        """This traffic generator can capture traffic."""
         return True
 
     def send_packet_and_capture(
@@ -54,11 +58,12 @@ def send_packet_and_capture(
         duration: float,
         capture_name: str = _get_default_capture_name(),
     ) -> list[Packet]:
-        """Send a packet, return received traffic.
+        """Send `packet` and capture received traffic.
+
+        Send `packet` on `send_port` and then return all traffic captured
+        on `receive_port` for the given `duration`.
 
-        Send a packet on the send_port and then return all traffic captured
-        on the receive_port for the given duration. Also record the captured traffic
-        in a pcap file.
+        The captured traffic is recorded in the `capture_name`.pcap file.
 
         Args:
             packet: The packet to send.
@@ -68,7 +73,7 @@ def send_packet_and_capture(
             capture_name: The name of the .pcap file where to store the capture.
 
         Returns:
-             A list of received packets. May be empty if no packets are captured.
+             The received packets. May be empty if no packets are captured.
         """
         return self.send_packets_and_capture(
             [packet], send_port, receive_port, duration, capture_name
@@ -82,11 +87,14 @@ def send_packets_and_capture(
         duration: float,
         capture_name: str = _get_default_capture_name(),
     ) -> list[Packet]:
-        """Send packets, return received traffic.
+        """Send `packets` and capture received traffic.
 
-        Send packets on the send_port and then return all traffic captured
-        on the receive_port for the given duration. Also record the captured traffic
-        in a pcap file.
+        Send `packets` on `send_port` and then return all traffic captured
+        on `receive_port` for the given `duration`.
+
+        The captured traffic is recorded in the `capture_name`.pcap file. The target directory
+        can be configured with the :option:`--output-dir` command line argument or
+        the :envvar:`DTS_OUTPUT_DIR` environment variable.
 
         Args:
             packets: The packets to send.
@@ -96,7 +104,7 @@ def send_packets_and_capture(
             capture_name: The name of the .pcap file where to store the capture.
 
         Returns:
-             A list of received packets. May be empty if no packets are captured.
+             The received packets. May be empty if no packets are captured.
         """
         self._logger.debug(get_packet_summaries(packets))
         self._logger.debug(
@@ -124,10 +132,12 @@ def _send_packets_and_capture(
         receive_port: Port,
         duration: float,
     ) -> list[Packet]:
-        """
-        The extended classes must implement this method which
-        sends packets on send_port and receives packets on the receive_port
-        for the specified duration. It must be able to handle no received packets.
+        """The implementation of :method:`send_packets_and_capture`.
+
+        The subclasses must implement this method which sends `packets` on `send_port`
+        and receives packets on `receive_port` for the specified `duration`.
+
+        It must be able to handle no received packets.
         """
 
     def _write_capture_from_packets(
diff --git a/dts/framework/testbed_model/traffic_generator/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
index ea7c3963da..ed396c6a2f 100644
--- a/dts/framework/testbed_model/traffic_generator/traffic_generator.py
+++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
@@ -22,7 +22,8 @@
 class TrafficGenerator(ABC):
     """The base traffic generator.
 
-    Defines the few basic methods that each traffic generator must implement.
+    Exposes the common public methods of all traffic generators and defines private methods
+    that subclasses must implement with the traffic generation logic.
     """
 
     _config: TrafficGeneratorConfig
@@ -30,6 +31,12 @@ class TrafficGenerator(ABC):
     _logger: DTSLOG
 
     def __init__(self, tg_node: Node, config: TrafficGeneratorConfig):
+        """Initialize the traffic generator.
+
+        Args:
+            tg_node: The traffic generator node where the created traffic generator will be running.
+            config: The traffic generator's test run configuration.
+        """
         self._config = config
         self._tg_node = tg_node
         self._logger = getLogger(
@@ -37,9 +44,9 @@ def __init__(self, tg_node: Node, config: TrafficGeneratorConfig):
         )
 
     def send_packet(self, packet: Packet, port: Port) -> None:
-        """Send a packet and block until it is fully sent.
+        """Send `packet` and block until it is fully sent.
 
-        What fully sent means is defined by the traffic generator.
+        Send `packet` on `port`, then wait until `packet` is fully sent.
 
         Args:
             packet: The packet to send.
@@ -48,9 +55,9 @@ def send_packet(self, packet: Packet, port: Port) -> None:
         self.send_packets([packet], port)
 
     def send_packets(self, packets: list[Packet], port: Port) -> None:
-        """Send packets and block until they are fully sent.
+        """Send `packets` and block until they are fully sent.
 
-        What fully sent means is defined by the traffic generator.
+        Send `packets` on `port`, then wait until `packets` are fully sent.
 
         Args:
             packets: The packets to send.
@@ -62,19 +69,17 @@ def send_packets(self, packets: list[Packet], port: Port) -> None:
 
     @abstractmethod
     def _send_packets(self, packets: list[Packet], port: Port) -> None:
-        """
-        The extended classes must implement this method which
-        sends packets on send_port. The method should block until all packets
-        are fully sent.
+        """The implementation of :method:`send_packets`.
+
+        The subclasses must implement this method which sends `packets` on `port`.
+        The method should block until all `packets` are fully sent.
+
+        What fully sent means is defined by the traffic generator.
         """
 
     @property
     def is_capturing(self) -> bool:
-        """Whether this traffic generator can capture traffic.
-
-        Returns:
-            True if the traffic generator can capture traffic, False otherwise.
-        """
+        """This traffic generator can't capture traffic."""
         return False
 
     @abstractmethod
-- 
2.34.1
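
The public-wrapper/private-implementation split these docstrings describe is the template
method pattern; a minimal sketch with simplified types and hypothetical class names:

    from abc import ABC, abstractmethod

    class TrafficGeneratorSketch(ABC):
        def send_packets(self, packets: list[str], port: str) -> None:
            # The public method does the common bookkeeping (here just a log)...
            print(f"Sending packet(s): {packets} on {port}.")
            # ...and delegates the generator-specific work to the subclass.
            self._send_packets(packets, port)

        @abstractmethod
        def _send_packets(self, packets: list[str], port: str) -> None:
            """Subclasses implement the actual sending logic here."""

        @property
        def is_capturing(self) -> bool:
            # Base generators only count packets; capturing subclasses override this.
            return False

    class CountingSketch(TrafficGeneratorSketch):
        def _send_packets(self, packets: list[str], port: str) -> None:
            pass  # a generator that only sends; is_capturing stays False

    CountingSketch().send_packets(["pkt0"], "port0")  # prints the log line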


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v6 20/23] dts: scapy tg docstring update
  2023-11-08 12:53         ` [PATCH v6 01/23] dts: code adjustments for " Juraj Linkeš
                             ` (17 preceding siblings ...)
  2023-11-08 12:53           ` [PATCH v6 19/23] dts: base traffic generators " Juraj Linkeš
@ 2023-11-08 12:53           ` Juraj Linkeš
  2023-11-08 12:53           ` [PATCH v6 21/23] dts: test suites " Juraj Linkeš
                             ` (2 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-08 12:53 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 .../testbed_model/traffic_generator/scapy.py  | 91 +++++++++++--------
 1 file changed, 54 insertions(+), 37 deletions(-)

diff --git a/dts/framework/testbed_model/traffic_generator/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py
index 51864b6e6b..d0fe03055a 100644
--- a/dts/framework/testbed_model/traffic_generator/scapy.py
+++ b/dts/framework/testbed_model/traffic_generator/scapy.py
@@ -2,14 +2,15 @@
 # Copyright(c) 2022 University of New Hampshire
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""Scapy traffic generator.
+"""The Scapy traffic generator.
 
-Traffic generator used for functional testing, implemented using the Scapy library.
+A traffic generator used for functional testing, implemented with
+`the Scapy library <https://scapy.readthedocs.io/en/latest/>`_.
 The traffic generator uses an XML-RPC server to run Scapy on the remote TG node.
 
-The XML-RPC server runs in an interactive remote SSH session running Python console,
-where we start the server. The communication with the server is facilitated with
-a local server proxy.
+The traffic generator uses the :mod:`xmlrpc.server` module to run an XML-RPC server
+in an interactive remote Python SSH session. The communication with the server is facilitated
+with a local server proxy from the :mod:`xmlrpc.client` module.
 """
 
 import inspect
@@ -69,20 +70,20 @@ def scapy_send_packets_and_capture(
     recv_iface: str,
     duration: float,
 ) -> list[bytes]:
-    """RPC function to send and capture packets.
+    """The RPC function to send and capture packets.
 
-    The function is meant to be executed on the remote TG node.
+    The function is meant to be executed on the remote TG node via the server proxy.
 
     Args:
         xmlrpc_packets: The packets to send. These need to be converted to
-            xmlrpc.client.Binary before sending to the remote server.
+            :class:`~xmlrpc.client.Binary` objects before sending to the remote server.
         send_iface: The logical name of the egress interface.
         recv_iface: The logical name of the ingress interface.
         duration: Capture for this amount of time, in seconds.
 
     Returns:
         A list of bytes. Each item in the list represents one packet, which needs
-            to be converted back upon transfer from the remote node.
+        to be converted back upon transfer from the remote node.
     """
     scapy_packets = [scapy.all.Packet(packet.data) for packet in xmlrpc_packets]
     sniffer = scapy.all.AsyncSniffer(
@@ -98,19 +99,15 @@ def scapy_send_packets_and_capture(
 def scapy_send_packets(
     xmlrpc_packets: list[xmlrpc.client.Binary], send_iface: str
 ) -> None:
-    """RPC function to send packets.
+    """The RPC function to send packets.
 
-    The function is meant to be executed on the remote TG node.
-    It doesn't return anything, only sends packets.
+    The function is meant to be executed on the remote TG node via the server proxy.
+    It only sends `xmlrpc_packets`, without capturing them.
 
     Args:
         xmlrpc_packets: The packets to send. These need to be converted to
-            xmlrpc.client.Binary before sending to the remote server.
+            :class:`~xmlrpc.client.Binary` objects before sending to the remote server.
         send_iface: The logical name of the egress interface.
-
-    Returns:
-        A list of bytes. Each item in the list represents one packet, which needs
-            to be converted back upon transfer from the remote node.
     """
     scapy_packets = [scapy.all.Packet(packet.data) for packet in xmlrpc_packets]
     scapy.all.sendp(scapy_packets, iface=send_iface, realtime=True, verbose=True)
@@ -130,11 +127,19 @@ def scapy_send_packets(
 
 
 class QuittableXMLRPCServer(SimpleXMLRPCServer):
-    """Basic XML-RPC server that may be extended
-    by functions serializable by the marshal module.
+    r"""Basic XML-RPC server.
+
+    The server may be augmented with functions serializable by the :mod:`marshal` module.
     """
 
     def __init__(self, *args, **kwargs):
+        """Extend the XML-RPC server initialization.
+
+        Args:
+            args: The positional arguments that will be passed to the superclass's constructor.
+            kwargs: The keyword arguments that will be passed to the superclass's constructor.
+                The `allow_none` argument will be set to ``True``.
+        """
         kwargs["allow_none"] = True
         super().__init__(*args, **kwargs)
         self.register_introspection_functions()
@@ -142,13 +147,12 @@ def __init__(self, *args, **kwargs):
         self.register_function(self.add_rpc_function)
 
     def quit(self) -> None:
+        """Quit the server."""
         self._BaseServer__shutdown_request = True
         return None
 
     def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> None:
-        """Add a function to the server.
-
-        This is meant to be executed remotely.
+        """Add a function to the server from the local server proxy.
 
         Args:
               name: The name of the function.
@@ -159,6 +163,11 @@ def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> N
         self.register_function(function)
 
     def serve_forever(self, poll_interval: float = 0.5) -> None:
+        """Extend the superclass method with an additional print.
+
+        Once executed in the local server proxy, the print gives us a clear string to expect
+        when starting the server; seeing it confirms the method was executed on the XML-RPC server.
+        """
         print("XMLRPC OK")
         super().serve_forever(poll_interval)
 
@@ -166,19 +175,12 @@ def serve_forever(self, poll_interval: float = 0.5) -> None:
 class ScapyTrafficGenerator(CapturingTrafficGenerator):
     """Provides access to scapy functions via an RPC interface.
 
-    The traffic generator first starts an XML-RPC on the remote TG node.
-    Then it populates the server with functions which use the Scapy library
-    to send/receive traffic.
-
-    Any packets sent to the remote server are first converted to bytes.
-    They are received as xmlrpc.client.Binary objects on the server side.
-    When the server sends the packets back, they are also received as
-    xmlrpc.client.Binary object on the client side, are converted back to Scapy
-    packets and only then returned from the methods.
+    The class extends the base with remote execution of scapy functions.
 
-    Arguments:
-        tg_node: The node where the traffic generator resides.
-        config: The user configuration of the traffic generator.
+    Any packets sent to the remote server are first converted to bytes. They are received as
+    :class:`~xmlrpc.client.Binary` objects on the server side. When the server sends the packets
+    back, they are also received as :class:`~xmlrpc.client.Binary` objects on the client side, are
+    converted back to :class:`scapy.packet.Packet` objects and only then returned from the methods.
 
     Attributes:
         session: The exclusive interactive remote session created by the Scapy
@@ -192,6 +194,22 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator):
     _config: ScapyTrafficGeneratorConfig
 
     def __init__(self, tg_node: Node, config: ScapyTrafficGeneratorConfig):
+        """Extend the constructor with Scapy TG specifics.
+
+        The traffic generator first starts an XML-RPC server on the remote `tg_node`.
+        Then it populates the server with functions which use the Scapy library
+        to send/receive traffic:
+
+            * :func:`scapy_send_packets_and_capture`
+            * :func:`scapy_send_packets`
+
+        To enable verbose logging from the xmlrpc client, use the :option:`--verbose`
+        command line argument or the :envvar:`DTS_VERBOSE` environment variable.
+
+        Args:
+            tg_node: The node where the traffic generator resides.
+            config: The traffic generator's test run configuration.
+        """
         super().__init__(tg_node, config)
 
         assert (
@@ -237,10 +255,8 @@ def _start_xmlrpc_server_in_remote_python(self, listen_port: int) -> None:
             [line for line in src.splitlines() if not line.isspace() and line != ""]
         )
 
-        spacing = "\n" * 4
-
         # execute it in the python terminal
-        self.session.send_command(spacing + src + spacing)
+        self.session.send_command(src + "\n")
         self.session.send_command(
             f"server = QuittableXMLRPCServer(('0.0.0.0', {listen_port}));"
             f"server.serve_forever()",
@@ -274,6 +290,7 @@ def _send_packets_and_capture(
         return scapy_packets
 
     def close(self) -> None:
+        """Close the traffic generator."""
         try:
             self.rpc_server_proxy.quit()
         except ConnectionRefusedError:
-- 
2.34.1



* [PATCH v6 21/23] dts: test suites docstring update
  2023-11-08 12:53         ` [PATCH v6 01/23] dts: code adjustments for " Juraj Linkeš
                             ` (18 preceding siblings ...)
  2023-11-08 12:53           ` [PATCH v6 20/23] dts: scapy tg " Juraj Linkeš
@ 2023-11-08 12:53           ` Juraj Linkeš
  2023-11-08 12:53           ` [PATCH v6 22/23] dts: add doc generation dependencies Juraj Linkeš
  2023-11-08 12:53           ` [PATCH v6 23/23] dts: add doc generation Juraj Linkeš
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-08 12:53 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Format the test suite docstrings according to the Google format and
PEP 257, with slight deviations.
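
On top of the Google sections, the suites keep custom Setup:, Steps: and
Verify: sections. A rough sketch of the resulting shape (the suite below is
illustrative, not part of DTS, and the import assumes the usual TestSuite
location; only the section layout is taken from the real suites):

    from framework.test_suite import TestSuite  # assumed import path

    class TestExample(TestSuite):
        """Example test suite (illustrative only)."""

        def test_example(self) -> None:
            """Summarize the test case in one line.

            Steps:
                Run the tested application.
            Verify:
                The expected output appears.
            """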

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/tests/TestSuite_hello_world.py | 16 +++++----
 dts/tests/TestSuite_os_udp.py      | 16 +++++----
 dts/tests/TestSuite_smoke_tests.py | 53 +++++++++++++++++++++++++++---
 3 files changed, 68 insertions(+), 17 deletions(-)

diff --git a/dts/tests/TestSuite_hello_world.py b/dts/tests/TestSuite_hello_world.py
index 7e3d95c0cf..662a8f8726 100644
--- a/dts/tests/TestSuite_hello_world.py
+++ b/dts/tests/TestSuite_hello_world.py
@@ -1,7 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2010-2014 Intel Corporation
 
-"""
+"""The DPDK hello world app test suite.
+
 Run the helloworld example app and verify it prints a message for each used core.
 No other EAL parameters apart from cores are used.
 """
@@ -15,22 +16,25 @@
 
 
 class TestHelloWorld(TestSuite):
+    """DPDK hello world app test suite."""
+
     def set_up_suite(self) -> None:
-        """
+        """Set up the test suite.
+
         Setup:
             Build the app we're about to test - helloworld.
         """
         self.app_helloworld_path = self.sut_node.build_dpdk_app("helloworld")
 
     def test_hello_world_single_core(self) -> None:
-        """
+        """Single core test case.
+
         Steps:
             Run the helloworld app on the first usable logical core.
         Verify:
             The app prints a message from the used core:
             "hello from core <core_id>"
         """
-
         # get the first usable core
         lcore_amount = LogicalCoreCount(1, 1, 1)
         lcores = LogicalCoreCountFilter(self.sut_node.lcores, lcore_amount).filter()
@@ -44,14 +48,14 @@ def test_hello_world_single_core(self) -> None:
         )
 
     def test_hello_world_all_cores(self) -> None:
-        """
+        """All cores test case.
+
         Steps:
             Run the helloworld app on all usable logical cores.
         Verify:
             The app prints a message from all used cores:
             "hello from core <core_id>"
         """
-
         # get the maximum logical core number
         eal_para = self.sut_node.create_eal_parameters(
             lcore_filter_specifier=LogicalCoreList(self.sut_node.lcores)
diff --git a/dts/tests/TestSuite_os_udp.py b/dts/tests/TestSuite_os_udp.py
index 9b5f39711d..f99c4d76e3 100644
--- a/dts/tests/TestSuite_os_udp.py
+++ b/dts/tests/TestSuite_os_udp.py
@@ -1,7 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
+"""Basic IPv4 OS routing test suite.
+
 Configure SUT node to route traffic from if1 to if2.
 Send a packet to the SUT node, verify it comes back on the second port on the TG node.
 """
@@ -13,22 +14,24 @@
 
 
 class TestOSUdp(TestSuite):
+    """IPv4 UDP OS routing test suite."""
+
     def set_up_suite(self) -> None:
-        """
+        """Set up the test suite.
+
         Setup:
             Configure SUT ports and SUT to route traffic from if1 to if2.
         """
-
         self.configure_testbed_ipv4()
 
     def test_os_udp(self) -> None:
-        """
+        """Basic UDP IPv4 traffic test case.
+
         Steps:
             Send a UDP packet.
         Verify:
             The packet with proper addresses arrives at the other TG port.
         """
-
         packet = Ether() / IP() / UDP()
 
         received_packets = self.send_packet_and_capture(packet)
@@ -38,7 +41,8 @@ def test_os_udp(self) -> None:
         self.verify_packets(expected_packet, received_packets)
 
     def tear_down_suite(self) -> None:
-        """
+        """Tear down the test suite.
+
         Teardown:
             Remove the SUT port configuration configured in setup.
         """
diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py
index 4a269df75b..36ff10a862 100644
--- a/dts/tests/TestSuite_smoke_tests.py
+++ b/dts/tests/TestSuite_smoke_tests.py
@@ -1,6 +1,17 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 University of New Hampshire
 
+"""Smoke test suite.
+
+Smoke tests are a class of tests which are used for validating a minimal set of important features.
+These are the most important features without which (or when they're faulty) the software wouldn't
+work properly. Thus, if any failure occurs while testing these features,
+there isn't that much of a reason to continue testing, as the software is fundamentally broken.
+
+These tests don't have to include only DPDK tests, as the reason for failures could be
+in the infrastructure (a faulty link between NICs or a misconfiguration).
+"""
+
 import re
 
 from framework.config import PortConfig
@@ -11,13 +22,25 @@
 
 
 class SmokeTests(TestSuite):
+    """DPDK and infrastructure smoke test suite.
+
+    The test cases validate the most basic DPDK functionality needed for all other test suites.
+    The infrastructure also needs to be tested, as that is also used by all other test suites.
+
+    Attributes:
+        is_blocking: This test suite will block the execution of all other test suites
+            in the build target after it.
+        nics_in_node: The NICs present on the SUT node.
+    """
+
     is_blocking = True
     # dicts in this list are expected to have two keys:
     # "pci_address" and "current_driver"
     nics_in_node: list[PortConfig] = []
 
     def set_up_suite(self) -> None:
-        """
+        """Set up the test suite.
+
         Setup:
             Set the build directory path and generate a list of NICs in the SUT node.
         """
@@ -25,7 +48,13 @@ def set_up_suite(self) -> None:
         self.nics_in_node = self.sut_node.config.ports
 
     def test_unit_tests(self) -> None:
-        """
+        """DPDK meson fast-tests unit tests.
+
+        The DPDK unit tests are basic tests that indicate regressions and other critical failures.
+        These need to be addressed before other testing.
+
+        The fast-tests unit tests are a subset with only the most basic tests.
+
         Test:
             Run the fast-test unit-test suite through meson.
         """
@@ -37,7 +66,14 @@ def test_unit_tests(self) -> None:
         )
 
     def test_driver_tests(self) -> None:
-        """
+        """DPDK meson driver-tests unit tests.
+
+        The DPDK unit tests are basic tests that indicate regressions and other critical failures.
+        These need to be addressed before other testing.
+
+        The driver-tests unit tests are a subset that test only drivers. These may be run
+        with virtual devices as well.
+
         Test:
             Run the driver-test unit-test suite through meson.
         """
@@ -63,7 +99,10 @@ def test_driver_tests(self) -> None:
         )
 
     def test_devices_listed_in_testpmd(self) -> None:
-        """
+        """Testpmd device discovery.
+
+        If the configured devices can't be found in testpmd, they can't be tested.
+
         Test:
             Uses testpmd driver to verify that devices have been found by testpmd.
         """
@@ -79,7 +118,11 @@ def test_devices_listed_in_testpmd(self) -> None:
             )
 
     def test_device_bound_to_driver(self) -> None:
-        """
+        """Device driver in OS.
+
+        The devices must be bound to the proper driver, otherwise they can't be used by DPDK
+        or the traffic generators.
+
         Test:
             Ensure that all drivers listed in the config are bound to the correct
             driver.
-- 
2.34.1



* [PATCH v6 22/23] dts: add doc generation dependencies
  2023-11-08 12:53         ` [PATCH v6 01/23] dts: code adjustments for " Juraj Linkeš
                             ` (19 preceding siblings ...)
  2023-11-08 12:53           ` [PATCH v6 21/23] dts: test suites " Juraj Linkeš
@ 2023-11-08 12:53           ` Juraj Linkeš
  2023-11-08 16:00             ` Yoan Picchi
  2023-11-08 12:53           ` [PATCH v6 23/23] dts: add doc generation Juraj Linkeš
  21 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-08 12:53 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Sphinx imports every Python module when generating documentation from
docstrings, meaning all DTS dependencies, including the Python version,
must be satisfied.
By adding Sphinx to the DTS dependencies, we make sure that the proper
Python version and dependencies are used when Sphinx is executed.
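
To illustrate, generating docs for a module boils down to importing it,
roughly as follows (a minimal sketch; the failure handling is illustrative,
while the module name is the one already documented in DTS):

    import importlib

    # Autodoc imports the documented module, pulling in every third-party
    # package it uses; an unsatisfied dependency aborts the whole build.
    try:
        importlib.import_module("framework.testbed_model.node")
    except ModuleNotFoundError as err:
        raise SystemExit(f"docs build would fail: {err}")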

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/poetry.lock    | 499 ++++++++++++++++++++++++++++++++++++++++++++-
 dts/pyproject.toml |   7 +
 2 files changed, 505 insertions(+), 1 deletion(-)

diff --git a/dts/poetry.lock b/dts/poetry.lock
index a734fa71f0..dea98f6913 100644
--- a/dts/poetry.lock
+++ b/dts/poetry.lock
@@ -1,5 +1,16 @@
 # This file is automatically @generated by Poetry 1.5.1 and should not be changed by hand.
 
+[[package]]
+name = "alabaster"
+version = "0.7.13"
+description = "A configurable sidebar-enabled Sphinx theme"
+optional = false
+python-versions = ">=3.6"
+files = [
+    {file = "alabaster-0.7.13-py3-none-any.whl", hash = "sha256:1ee19aca801bbabb5ba3f5f258e4422dfa86f82f3e9cefb0859b283cdd7f62a3"},
+    {file = "alabaster-0.7.13.tar.gz", hash = "sha256:a27a4a084d5e690e16e01e03ad2b2e552c61a65469419b907243193de1a84ae2"},
+]
+
 [[package]]
 name = "attrs"
 version = "23.1.0"
@@ -18,6 +29,23 @@ docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-
 tests = ["attrs[tests-no-zope]", "zope-interface"]
 tests-no-zope = ["cloudpickle", "hypothesis", "mypy (>=1.1.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
 
+[[package]]
+name = "babel"
+version = "2.13.1"
+description = "Internationalization utilities"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "Babel-2.13.1-py3-none-any.whl", hash = "sha256:7077a4984b02b6727ac10f1f7294484f737443d7e2e66c5e4380e41a3ae0b4ed"},
+    {file = "Babel-2.13.1.tar.gz", hash = "sha256:33e0952d7dd6374af8dbf6768cc4ddf3ccfefc244f9986d4074704f2fbd18900"},
+]
+
+[package.dependencies]
+setuptools = {version = "*", markers = "python_version >= \"3.12\""}
+
+[package.extras]
+dev = ["freezegun (>=1.0,<2.0)", "pytest (>=6.0)", "pytest-cov"]
+
 [[package]]
 name = "bcrypt"
 version = "4.0.1"
@@ -86,6 +114,17 @@ d = ["aiohttp (>=3.7.4)"]
 jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
 uvloop = ["uvloop (>=0.15.2)"]
 
+[[package]]
+name = "certifi"
+version = "2023.7.22"
+description = "Python package for providing Mozilla's CA Bundle."
+optional = false
+python-versions = ">=3.6"
+files = [
+    {file = "certifi-2023.7.22-py3-none-any.whl", hash = "sha256:92d6037539857d8206b8f6ae472e8b77db8058fec5937a1ef3f54304089edbb9"},
+    {file = "certifi-2023.7.22.tar.gz", hash = "sha256:539cc1d13202e33ca466e88b2807e29f4c13049d6d87031a3c110744495cb082"},
+]
+
 [[package]]
 name = "cffi"
 version = "1.15.1"
@@ -162,6 +201,105 @@ files = [
 [package.dependencies]
 pycparser = "*"
 
+[[package]]
+name = "charset-normalizer"
+version = "3.3.2"
+description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
+optional = false
+python-versions = ">=3.7.0"
+files = [
+    {file = "charset-normalizer-3.3.2.tar.gz", hash = "sha256:f30c3cb33b24454a82faecaf01b19c18562b1e89558fb6c56de4d9118a032fd5"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:25baf083bf6f6b341f4121c2f3c548875ee6f5339300e08be3f2b2ba1721cdd3"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:06435b539f889b1f6f4ac1758871aae42dc3a8c0e24ac9e60c2384973ad73027"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9063e24fdb1e498ab71cb7419e24622516c4a04476b17a2dab57e8baa30d6e03"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6897af51655e3691ff853668779c7bad41579facacf5fd7253b0133308cf000d"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1d3193f4a680c64b4b6a9115943538edb896edc190f0b222e73761716519268e"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cd70574b12bb8a4d2aaa0094515df2463cb429d8536cfb6c7ce983246983e5a6"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8465322196c8b4d7ab6d1e049e4c5cb460d0394da4a27d23cc242fbf0034b6b5"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a9a8e9031d613fd2009c182b69c7b2c1ef8239a0efb1df3f7c8da66d5dd3d537"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:beb58fe5cdb101e3a055192ac291b7a21e3b7ef4f67fa1d74e331a7f2124341c"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:e06ed3eb3218bc64786f7db41917d4e686cc4856944f53d5bdf83a6884432e12"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:2e81c7b9c8979ce92ed306c249d46894776a909505d8f5a4ba55b14206e3222f"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:572c3763a264ba47b3cf708a44ce965d98555f618ca42c926a9c1616d8f34269"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:fd1abc0d89e30cc4e02e4064dc67fcc51bd941eb395c502aac3ec19fab46b519"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-win32.whl", hash = "sha256:3d47fa203a7bd9c5b6cee4736ee84ca03b8ef23193c0d1ca99b5089f72645c73"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-win_amd64.whl", hash = "sha256:10955842570876604d404661fbccbc9c7e684caf432c09c715ec38fbae45ae09"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:802fe99cca7457642125a8a88a084cef28ff0cf9407060f7b93dca5aa25480db"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:573f6eac48f4769d667c4442081b1794f52919e7edada77495aaed9236d13a96"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:549a3a73da901d5bc3ce8d24e0600d1fa85524c10287f6004fbab87672bf3e1e"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f27273b60488abe721a075bcca6d7f3964f9f6f067c8c4c605743023d7d3944f"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1ceae2f17a9c33cb48e3263960dc5fc8005351ee19db217e9b1bb15d28c02574"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:65f6f63034100ead094b8744b3b97965785388f308a64cf8d7c34f2f2e5be0c4"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:753f10e867343b4511128c6ed8c82f7bec3bd026875576dfd88483c5c73b2fd8"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4a78b2b446bd7c934f5dcedc588903fb2f5eec172f3d29e52a9096a43722adfc"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e537484df0d8f426ce2afb2d0f8e1c3d0b114b83f8850e5f2fbea0e797bd82ae"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:eb6904c354526e758fda7167b33005998fb68c46fbc10e013ca97f21ca5c8887"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:deb6be0ac38ece9ba87dea880e438f25ca3eddfac8b002a2ec3d9183a454e8ae"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:4ab2fe47fae9e0f9dee8c04187ce5d09f48eabe611be8259444906793ab7cbce"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:80402cd6ee291dcb72644d6eac93785fe2c8b9cb30893c1af5b8fdd753b9d40f"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-win32.whl", hash = "sha256:7cd13a2e3ddeed6913a65e66e94b51d80a041145a026c27e6bb76c31a853c6ab"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-win_amd64.whl", hash = "sha256:663946639d296df6a2bb2aa51b60a2454ca1cb29835324c640dafb5ff2131a77"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:0b2b64d2bb6d3fb9112bafa732def486049e63de9618b5843bcdd081d8144cd8"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:ddbb2551d7e0102e7252db79ba445cdab71b26640817ab1e3e3648dad515003b"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:55086ee1064215781fff39a1af09518bc9255b50d6333f2e4c74ca09fac6a8f6"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8f4a014bc36d3c57402e2977dada34f9c12300af536839dc38c0beab8878f38a"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a10af20b82360ab00827f916a6058451b723b4e65030c5a18577c8b2de5b3389"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8d756e44e94489e49571086ef83b2bb8ce311e730092d2c34ca8f7d925cb20aa"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:90d558489962fd4918143277a773316e56c72da56ec7aa3dc3dbbe20fdfed15b"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6ac7ffc7ad6d040517be39eb591cac5ff87416c2537df6ba3cba3bae290c0fed"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:7ed9e526742851e8d5cc9e6cf41427dfc6068d4f5a3bb03659444b4cabf6bc26"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:8bdb58ff7ba23002a4c5808d608e4e6c687175724f54a5dade5fa8c67b604e4d"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_ppc64le.whl", hash = "sha256:6b3251890fff30ee142c44144871185dbe13b11bab478a88887a639655be1068"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_s390x.whl", hash = "sha256:b4a23f61ce87adf89be746c8a8974fe1c823c891d8f86eb218bb957c924bb143"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:efcb3f6676480691518c177e3b465bcddf57cea040302f9f4e6e191af91174d4"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-win32.whl", hash = "sha256:d965bba47ddeec8cd560687584e88cf699fd28f192ceb452d1d7ee807c5597b7"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-win_amd64.whl", hash = "sha256:96b02a3dc4381e5494fad39be677abcb5e6634bf7b4fa83a6dd3112607547001"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:95f2a5796329323b8f0512e09dbb7a1860c46a39da62ecb2324f116fa8fdc85c"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c002b4ffc0be611f0d9da932eb0f704fe2602a9a949d1f738e4c34c75b0863d5"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a981a536974bbc7a512cf44ed14938cf01030a99e9b3a06dd59578882f06f985"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3287761bc4ee9e33561a7e058c72ac0938c4f57fe49a09eae428fd88aafe7bb6"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:42cb296636fcc8b0644486d15c12376cb9fa75443e00fb25de0b8602e64c1714"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0a55554a2fa0d408816b3b5cedf0045f4b8e1a6065aec45849de2d6f3f8e9786"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:c083af607d2515612056a31f0a8d9e0fcb5876b7bfc0abad3ecd275bc4ebc2d5"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:87d1351268731db79e0f8e745d92493ee2841c974128ef629dc518b937d9194c"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:bd8f7df7d12c2db9fab40bdd87a7c09b1530128315d047a086fa3ae3435cb3a8"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:c180f51afb394e165eafe4ac2936a14bee3eb10debc9d9e4db8958fe36afe711"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:8c622a5fe39a48f78944a87d4fb8a53ee07344641b0562c540d840748571b811"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-win32.whl", hash = "sha256:db364eca23f876da6f9e16c9da0df51aa4f104a972735574842618b8c6d999d4"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-win_amd64.whl", hash = "sha256:86216b5cee4b06df986d214f664305142d9c76df9b6512be2738aa72a2048f99"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:6463effa3186ea09411d50efc7d85360b38d5f09b870c48e4600f63af490e56a"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:6c4caeef8fa63d06bd437cd4bdcf3ffefe6738fb1b25951440d80dc7df8c03ac"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:37e55c8e51c236f95b033f6fb391d7d7970ba5fe7ff453dad675e88cf303377a"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fb69256e180cb6c8a894fee62b3afebae785babc1ee98b81cdf68bbca1987f33"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ae5f4161f18c61806f411a13b0310bea87f987c7d2ecdbdaad0e94eb2e404238"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b2b0a0c0517616b6869869f8c581d4eb2dd83a4d79e0ebcb7d373ef9956aeb0a"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:45485e01ff4d3630ec0d9617310448a8702f70e9c01906b0d0118bdf9d124cf2"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:eb00ed941194665c332bf8e078baf037d6c35d7c4f3102ea2d4f16ca94a26dc8"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:2127566c664442652f024c837091890cb1942c30937add288223dc895793f898"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:a50aebfa173e157099939b17f18600f72f84eed3049e743b68ad15bd69b6bf99"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:4d0d1650369165a14e14e1e47b372cfcb31d6ab44e6e33cb2d4e57265290044d"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:923c0c831b7cfcb071580d3f46c4baf50f174be571576556269530f4bbd79d04"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:06a81e93cd441c56a9b65d8e1d043daeb97a3d0856d177d5c90ba85acb3db087"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-win32.whl", hash = "sha256:6ef1d82a3af9d3eecdba2321dc1b3c238245d890843e040e41e470ffa64c3e25"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-win_amd64.whl", hash = "sha256:eb8821e09e916165e160797a6c17edda0679379a4be5c716c260e836e122f54b"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:c235ebd9baae02f1b77bcea61bce332cb4331dc3617d254df3323aa01ab47bd4"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:5b4c145409bef602a690e7cfad0a15a55c13320ff7a3ad7ca59c13bb8ba4d45d"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:68d1f8a9e9e37c1223b656399be5d6b448dea850bed7d0f87a8311f1ff3dabb0"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:22afcb9f253dac0696b5a4be4a1c0f8762f8239e21b99680099abd9b2b1b2269"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e27ad930a842b4c5eb8ac0016b0a54f5aebbe679340c26101df33424142c143c"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1f79682fbe303db92bc2b1136016a38a42e835d932bab5b3b1bfcfbf0640e519"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b261ccdec7821281dade748d088bb6e9b69e6d15b30652b74cbbac25e280b796"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:122c7fa62b130ed55f8f285bfd56d5f4b4a5b503609d181f9ad85e55c89f4185"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:d0eccceffcb53201b5bfebb52600a5fb483a20b61da9dbc885f8b103cbe7598c"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:9f96df6923e21816da7e0ad3fd47dd8f94b2a5ce594e00677c0013018b813458"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:7f04c839ed0b6b98b1a7501a002144b76c18fb1c1850c8b98d458ac269e26ed2"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:34d1c8da1e78d2e001f363791c98a272bb734000fcef47a491c1e3b0505657a8"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:ff8fa367d09b717b2a17a052544193ad76cd49979c805768879cb63d9ca50561"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-win32.whl", hash = "sha256:aed38f6e4fb3f5d6bf81bfa990a07806be9d83cf7bacef998ab1a9bd660a581f"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-win_amd64.whl", hash = "sha256:b01b88d45a6fcb69667cd6d2f7a9aeb4bf53760d7fc536bf679ec94fe9f3ff3d"},
+    {file = "charset_normalizer-3.3.2-py3-none-any.whl", hash = "sha256:3e4d1f6587322d2788836a99c69062fbb091331ec940e02d12d179c1d53e25fc"},
+]
+
 [[package]]
 name = "click"
 version = "8.1.6"
@@ -232,6 +370,17 @@ ssh = ["bcrypt (>=3.1.5)"]
 test = ["pretend", "pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-xdist"]
 test-randomorder = ["pytest-randomly"]
 
+[[package]]
+name = "docutils"
+version = "0.18.1"
+description = "Docutils -- Python Documentation Utilities"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
+files = [
+    {file = "docutils-0.18.1-py2.py3-none-any.whl", hash = "sha256:23010f129180089fbcd3bc08cfefccb3b890b0050e1ca00c867036e9d161b98c"},
+    {file = "docutils-0.18.1.tar.gz", hash = "sha256:679987caf361a7539d76e584cbeddc311e3aee937877c87346f31debc63e9d06"},
+]
+
 [[package]]
 name = "fabric"
 version = "2.7.1"
@@ -252,6 +401,28 @@ pathlib2 = "*"
 pytest = ["mock (>=2.0.0,<3.0)", "pytest (>=3.2.5,<4.0)"]
 testing = ["mock (>=2.0.0,<3.0)"]
 
+[[package]]
+name = "idna"
+version = "3.4"
+description = "Internationalized Domain Names in Applications (IDNA)"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "idna-3.4-py3-none-any.whl", hash = "sha256:90b77e79eaa3eba6de819a0c442c0b4ceefc341a7a2ab77d7562bf49f425c5c2"},
+    {file = "idna-3.4.tar.gz", hash = "sha256:814f528e8dead7d329833b91c5faa87d60bf71824cd12a7530b5526063d02cb4"},
+]
+
+[[package]]
+name = "imagesize"
+version = "1.4.1"
+description = "Getting image size from png/jpeg/jpeg2000/gif file"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+files = [
+    {file = "imagesize-1.4.1-py2.py3-none-any.whl", hash = "sha256:0d8d18d08f840c19d0ee7ca1fd82490fdc3729b7ac93f49870406ddde8ef8d8b"},
+    {file = "imagesize-1.4.1.tar.gz", hash = "sha256:69150444affb9cb0d5cc5a92b3676f0b2fb7cd9ae39e947a5e11a36b4497cd4a"},
+]
+
 [[package]]
 name = "invoke"
 version = "1.7.3"
@@ -280,6 +451,23 @@ pipfile-deprecated-finder = ["pip-shims (>=0.5.2)", "pipreqs", "requirementslib"
 plugins = ["setuptools"]
 requirements-deprecated-finder = ["pip-api", "pipreqs"]
 
+[[package]]
+name = "jinja2"
+version = "3.1.2"
+description = "A very fast and expressive template engine."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"},
+    {file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"},
+]
+
+[package.dependencies]
+MarkupSafe = ">=2.0"
+
+[package.extras]
+i18n = ["Babel (>=2.7)"]
+
 [[package]]
 name = "jsonpatch"
 version = "1.33"
@@ -340,6 +528,65 @@ files = [
 [package.dependencies]
 referencing = ">=0.28.0"
 
+[[package]]
+name = "markupsafe"
+version = "2.1.3"
+description = "Safely add untrusted strings to HTML/XML markup."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "MarkupSafe-2.1.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:cd0f502fe016460680cd20aaa5a76d241d6f35a1c3350c474bac1273803893fa"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e09031c87a1e51556fdcb46e5bd4f59dfb743061cf93c4d6831bf894f125eb57"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:68e78619a61ecf91e76aa3e6e8e33fc4894a2bebe93410754bd28fce0a8a4f9f"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:65c1a9bcdadc6c28eecee2c119465aebff8f7a584dd719facdd9e825ec61ab52"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:525808b8019e36eb524b8c68acdd63a37e75714eac50e988180b169d64480a00"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:962f82a3086483f5e5f64dbad880d31038b698494799b097bc59c2edf392fce6"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:aa7bd130efab1c280bed0f45501b7c8795f9fdbeb02e965371bbef3523627779"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:c9c804664ebe8f83a211cace637506669e7890fec1b4195b505c214e50dd4eb7"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-win32.whl", hash = "sha256:10bbfe99883db80bdbaff2dcf681dfc6533a614f700da1287707e8a5d78a8431"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-win_amd64.whl", hash = "sha256:1577735524cdad32f9f694208aa75e422adba74f1baee7551620e43a3141f559"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:ad9e82fb8f09ade1c3e1b996a6337afac2b8b9e365f926f5a61aacc71adc5b3c"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3c0fae6c3be832a0a0473ac912810b2877c8cb9d76ca48de1ed31e1c68386575"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b076b6226fb84157e3f7c971a47ff3a679d837cf338547532ab866c57930dbee"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bfce63a9e7834b12b87c64d6b155fdd9b3b96191b6bd334bf37db7ff1fe457f2"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:338ae27d6b8745585f87218a3f23f1512dbf52c26c28e322dbe54bcede54ccb9"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e4dd52d80b8c83fdce44e12478ad2e85c64ea965e75d66dbeafb0a3e77308fcc"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:df0be2b576a7abbf737b1575f048c23fb1d769f267ec4358296f31c2479db8f9"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:5bbe06f8eeafd38e5d0a4894ffec89378b6c6a625ff57e3028921f8ff59318ac"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-win32.whl", hash = "sha256:dd15ff04ffd7e05ffcb7fe79f1b98041b8ea30ae9234aed2a9168b5797c3effb"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl", hash = "sha256:134da1eca9ec0ae528110ccc9e48041e0828d79f24121a1a146161103c76e686"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:8e254ae696c88d98da6555f5ace2279cf7cd5b3f52be2b5cf97feafe883b58d2"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cb0932dc158471523c9637e807d9bfb93e06a95cbf010f1a38b98623b929ef2b"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9402b03f1a1b4dc4c19845e5c749e3ab82d5078d16a2a4c2cd2df62d57bb0707"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ca379055a47383d02a5400cb0d110cef0a776fc644cda797db0c5696cfd7e18e"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:b7ff0f54cb4ff66dd38bebd335a38e2c22c41a8ee45aa608efc890ac3e3931bc"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:c011a4149cfbcf9f03994ec2edffcb8b1dc2d2aede7ca243746df97a5d41ce48"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:56d9f2ecac662ca1611d183feb03a3fa4406469dafe241673d521dd5ae92a155"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-win32.whl", hash = "sha256:8758846a7e80910096950b67071243da3e5a20ed2546e6392603c096778d48e0"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-win_amd64.whl", hash = "sha256:787003c0ddb00500e49a10f2844fac87aa6ce977b90b0feaaf9de23c22508b24"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:2ef12179d3a291be237280175b542c07a36e7f60718296278d8593d21ca937d4"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:2c1b19b3aaacc6e57b7e25710ff571c24d6c3613a45e905b1fde04d691b98ee0"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8afafd99945ead6e075b973fefa56379c5b5c53fd8937dad92c662da5d8fd5ee"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8c41976a29d078bb235fea9b2ecd3da465df42a562910f9022f1a03107bd02be"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d080e0a5eb2529460b30190fcfcc4199bd7f827663f858a226a81bc27beaa97e"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:69c0f17e9f5a7afdf2cc9fb2d1ce6aabdb3bafb7f38017c0b77862bcec2bbad8"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:504b320cd4b7eff6f968eddf81127112db685e81f7e36e75f9f84f0df46041c3"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:42de32b22b6b804f42c5d98be4f7e5e977ecdd9ee9b660fda1a3edf03b11792d"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-win32.whl", hash = "sha256:ceb01949af7121f9fc39f7d27f91be8546f3fb112c608bc4029aef0bab86a2a5"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-win_amd64.whl", hash = "sha256:1b40069d487e7edb2676d3fbdb2b0829ffa2cd63a2ec26c4938b2d34391b4ecc"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8023faf4e01efadfa183e863fefde0046de576c6f14659e8782065bcece22198"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6b2b56950d93e41f33b4223ead100ea0fe11f8e6ee5f641eb753ce4b77a7042b"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9dcdfd0eaf283af041973bff14a2e143b8bd64e069f4c383416ecd79a81aab58"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:05fb21170423db021895e1ea1e1f3ab3adb85d1c2333cbc2310f2a26bc77272e"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:282c2cb35b5b673bbcadb33a585408104df04f14b2d9b01d4c345a3b92861c2c"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:ab4a0df41e7c16a1392727727e7998a467472d0ad65f3ad5e6e765015df08636"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:7ef3cb2ebbf91e330e3bb937efada0edd9003683db6b57bb108c4001f37a02ea"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:0a4e4a1aff6c7ac4cd55792abf96c915634c2b97e3cc1c7129578aa68ebd754e"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-win32.whl", hash = "sha256:fec21693218efe39aa7f8599346e90c705afa52c5b31ae019b2e57e8f6542bb2"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-win_amd64.whl", hash = "sha256:3fd4abcb888d15a94f32b75d8fd18ee162ca0c064f35b11134be77050296d6ba"},
+    {file = "MarkupSafe-2.1.3.tar.gz", hash = "sha256:af598ed32d6ae86f1b747b82783958b1a4ab8f617b06fe68795c7f026abbdcad"},
+]
+
 [[package]]
 name = "mccabe"
 version = "0.7.0"
@@ -404,6 +651,17 @@ files = [
     {file = "mypy_extensions-1.0.0.tar.gz", hash = "sha256:75dbf8955dc00442a438fc4d0666508a9a97b6bd41aa2f0ffe9d2f2725af0782"},
 ]
 
+[[package]]
+name = "packaging"
+version = "23.2"
+description = "Core utilities for Python packages"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "packaging-23.2-py3-none-any.whl", hash = "sha256:8c491190033a9af7e1d931d0b5dacc2ef47509b34dd0de67ed209b5203fc88c7"},
+    {file = "packaging-23.2.tar.gz", hash = "sha256:048fb0e9405036518eaaf48a55953c750c11e1a1b68e0dd1a9d62ed0c092cfc5"},
+]
+
 [[package]]
 name = "paramiko"
 version = "3.2.0"
@@ -515,6 +773,20 @@ files = [
     {file = "pyflakes-2.5.0.tar.gz", hash = "sha256:491feb020dca48ccc562a8c0cbe8df07ee13078df59813b83959cbdada312ea3"},
 ]
 
+[[package]]
+name = "pygments"
+version = "2.16.1"
+description = "Pygments is a syntax highlighting package written in Python."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "Pygments-2.16.1-py3-none-any.whl", hash = "sha256:13fc09fa63bc8d8671a6d247e1eb303c4b343eaee81d861f3404db2935653692"},
+    {file = "Pygments-2.16.1.tar.gz", hash = "sha256:1daff0494820c69bc8941e407aa20f577374ee88364ee10a98fdbe0aece96e29"},
+]
+
+[package.extras]
+plugins = ["importlib-metadata"]
+
 [[package]]
 name = "pylama"
 version = "8.4.1"
@@ -632,6 +904,27 @@ files = [
 attrs = ">=22.2.0"
 rpds-py = ">=0.7.0"
 
+[[package]]
+name = "requests"
+version = "2.31.0"
+description = "Python HTTP for Humans."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "requests-2.31.0-py3-none-any.whl", hash = "sha256:58cd2187c01e70e6e26505bca751777aa9f2ee0b7f4300988b709f44e013003f"},
+    {file = "requests-2.31.0.tar.gz", hash = "sha256:942c5a758f98d790eaed1a29cb6eefc7ffb0d1cf7af05c3d2791656dbd6ad1e1"},
+]
+
+[package.dependencies]
+certifi = ">=2017.4.17"
+charset-normalizer = ">=2,<4"
+idna = ">=2.5,<4"
+urllib3 = ">=1.21.1,<3"
+
+[package.extras]
+socks = ["PySocks (>=1.5.6,!=1.5.7)"]
+use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"]
+
 [[package]]
 name = "rpds-py"
 version = "0.9.2"
@@ -753,6 +1046,22 @@ basic = ["ipython"]
 complete = ["cryptography (>=2.0)", "ipython", "matplotlib", "pyx"]
 docs = ["sphinx (>=3.0.0)", "sphinx_rtd_theme (>=0.4.3)", "tox (>=3.0.0)"]
 
+[[package]]
+name = "setuptools"
+version = "68.2.2"
+description = "Easily download, build, install, upgrade, and uninstall Python packages"
+optional = false
+python-versions = ">=3.8"
+files = [
+    {file = "setuptools-68.2.2-py3-none-any.whl", hash = "sha256:b454a35605876da60632df1a60f736524eb73cc47bbc9f3f1ef1b644de74fd2a"},
+    {file = "setuptools-68.2.2.tar.gz", hash = "sha256:4ac1475276d2f1c48684874089fefcd83bd7162ddaafb81fac866ba0db282a87"},
+]
+
+[package.extras]
+docs = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "pygments-github-lexers (==0.0.5)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-favicon", "sphinx-hoverxref (<2)", "sphinx-inline-tabs", "sphinx-lint", "sphinx-notfound-page (>=1,<2)", "sphinx-reredirects", "sphinxcontrib-towncrier"]
+testing = ["build[virtualenv]", "filelock (>=3.4.0)", "flake8-2020", "ini2toml[lite] (>=0.9)", "jaraco.develop (>=7.21)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "pip (>=19.1)", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-mypy (>=0.9.1)", "pytest-perf", "pytest-ruff", "pytest-timeout", "pytest-xdist", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"]
+testing-integration = ["build[virtualenv] (>=1.0.3)", "filelock (>=3.4.0)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "packaging (>=23.1)", "pytest", "pytest-enabler", "pytest-xdist", "tomli", "virtualenv (>=13.0.0)", "wheel"]
+
 [[package]]
 name = "six"
 version = "1.16.0"
@@ -775,6 +1084,177 @@ files = [
     {file = "snowballstemmer-2.2.0.tar.gz", hash = "sha256:09b16deb8547d3412ad7b590689584cd0fe25ec8db3be37788be3810cbf19cb1"},
 ]
 
+[[package]]
+name = "sphinx"
+version = "6.2.1"
+description = "Python documentation generator"
+optional = false
+python-versions = ">=3.8"
+files = [
+    {file = "Sphinx-6.2.1.tar.gz", hash = "sha256:6d56a34697bb749ffa0152feafc4b19836c755d90a7c59b72bc7dfd371b9cc6b"},
+    {file = "sphinx-6.2.1-py3-none-any.whl", hash = "sha256:97787ff1fa3256a3eef9eda523a63dbf299f7b47e053cfcf684a1c2a8380c912"},
+]
+
+[package.dependencies]
+alabaster = ">=0.7,<0.8"
+babel = ">=2.9"
+colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""}
+docutils = ">=0.18.1,<0.20"
+imagesize = ">=1.3"
+Jinja2 = ">=3.0"
+packaging = ">=21.0"
+Pygments = ">=2.13"
+requests = ">=2.25.0"
+snowballstemmer = ">=2.0"
+sphinxcontrib-applehelp = "*"
+sphinxcontrib-devhelp = "*"
+sphinxcontrib-htmlhelp = ">=2.0.0"
+sphinxcontrib-jsmath = "*"
+sphinxcontrib-qthelp = "*"
+sphinxcontrib-serializinghtml = ">=1.1.5"
+
+[package.extras]
+docs = ["sphinxcontrib-websupport"]
+lint = ["docutils-stubs", "flake8 (>=3.5.0)", "flake8-simplify", "isort", "mypy (>=0.990)", "ruff", "sphinx-lint", "types-requests"]
+test = ["cython", "filelock", "html5lib", "pytest (>=4.6)"]
+
+[[package]]
+name = "sphinx-rtd-theme"
+version = "1.2.2"
+description = "Read the Docs theme for Sphinx"
+optional = false
+python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7"
+files = [
+    {file = "sphinx_rtd_theme-1.2.2-py2.py3-none-any.whl", hash = "sha256:6a7e7d8af34eb8fc57d52a09c6b6b9c46ff44aea5951bc831eeb9245378f3689"},
+    {file = "sphinx_rtd_theme-1.2.2.tar.gz", hash = "sha256:01c5c5a72e2d025bd23d1f06c59a4831b06e6ce6c01fdd5ebfe9986c0a880fc7"},
+]
+
+[package.dependencies]
+docutils = "<0.19"
+sphinx = ">=1.6,<7"
+sphinxcontrib-jquery = ">=4,<5"
+
+[package.extras]
+dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client", "wheel"]
+
+[[package]]
+name = "sphinxcontrib-applehelp"
+version = "1.0.7"
+description = "sphinxcontrib-applehelp is a Sphinx extension which outputs Apple help books"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_applehelp-1.0.7-py3-none-any.whl", hash = "sha256:094c4d56209d1734e7d252f6e0b3ccc090bd52ee56807a5d9315b19c122ab15d"},
+    {file = "sphinxcontrib_applehelp-1.0.7.tar.gz", hash = "sha256:39fdc8d762d33b01a7d8f026a3b7d71563ea3b72787d5f00ad8465bd9d6dfbfa"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-devhelp"
+version = "1.0.5"
+description = "sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp documents"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_devhelp-1.0.5-py3-none-any.whl", hash = "sha256:fe8009aed765188f08fcaadbb3ea0d90ce8ae2d76710b7e29ea7d047177dae2f"},
+    {file = "sphinxcontrib_devhelp-1.0.5.tar.gz", hash = "sha256:63b41e0d38207ca40ebbeabcf4d8e51f76c03e78cd61abe118cf4435c73d4212"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-htmlhelp"
+version = "2.0.4"
+description = "sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_htmlhelp-2.0.4-py3-none-any.whl", hash = "sha256:8001661c077a73c29beaf4a79968d0726103c5605e27db92b9ebed8bab1359e9"},
+    {file = "sphinxcontrib_htmlhelp-2.0.4.tar.gz", hash = "sha256:6c26a118a05b76000738429b724a0568dbde5b72391a688577da08f11891092a"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["html5lib", "pytest"]
+
+[[package]]
+name = "sphinxcontrib-jquery"
+version = "4.1"
+description = "Extension to include jQuery on newer Sphinx releases"
+optional = false
+python-versions = ">=2.7"
+files = [
+    {file = "sphinxcontrib-jquery-4.1.tar.gz", hash = "sha256:1620739f04e36a2c779f1a131a2dfd49b2fd07351bf1968ced074365933abc7a"},
+    {file = "sphinxcontrib_jquery-4.1-py2.py3-none-any.whl", hash = "sha256:f936030d7d0147dd026a4f2b5a57343d233f1fc7b363f68b3d4f1cb0993878ae"},
+]
+
+[package.dependencies]
+Sphinx = ">=1.8"
+
+[[package]]
+name = "sphinxcontrib-jsmath"
+version = "1.0.1"
+description = "A sphinx extension which renders display math in HTML via JavaScript"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "sphinxcontrib-jsmath-1.0.1.tar.gz", hash = "sha256:a9925e4a4587247ed2191a22df5f6970656cb8ca2bd6284309578f2153e0c4b8"},
+    {file = "sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl", hash = "sha256:2ec2eaebfb78f3f2078e73666b1415417a116cc848b72e5172e596c871103178"},
+]
+
+[package.extras]
+test = ["flake8", "mypy", "pytest"]
+
+[[package]]
+name = "sphinxcontrib-qthelp"
+version = "1.0.6"
+description = "sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp documents"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_qthelp-1.0.6-py3-none-any.whl", hash = "sha256:bf76886ee7470b934e363da7a954ea2825650013d367728588732c7350f49ea4"},
+    {file = "sphinxcontrib_qthelp-1.0.6.tar.gz", hash = "sha256:62b9d1a186ab7f5ee3356d906f648cacb7a6bdb94d201ee7adf26db55092982d"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-serializinghtml"
+version = "1.1.9"
+description = "sphinxcontrib-serializinghtml is a sphinx extension which outputs \"serialized\" HTML files (json and pickle)"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_serializinghtml-1.1.9-py3-none-any.whl", hash = "sha256:9b36e503703ff04f20e9675771df105e58aa029cfcbc23b8ed716019b7416ae1"},
+    {file = "sphinxcontrib_serializinghtml-1.1.9.tar.gz", hash = "sha256:0c64ff898339e1fac29abd2bf5f11078f3ec413cfe9c046d3120d7ca65530b54"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
 [[package]]
 name = "toml"
 version = "0.10.2"
@@ -819,6 +1299,23 @@ files = [
     {file = "typing_extensions-4.7.1.tar.gz", hash = "sha256:b75ddc264f0ba5615db7ba217daeb99701ad295353c45f9e95963337ceeeffb2"},
 ]
 
+[[package]]
+name = "urllib3"
+version = "2.0.7"
+description = "HTTP library with thread-safe connection pooling, file post, and more."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "urllib3-2.0.7-py3-none-any.whl", hash = "sha256:fdb6d215c776278489906c2f8916e6e7d4f5a9b602ccbcfdf7f016fc8da0596e"},
+    {file = "urllib3-2.0.7.tar.gz", hash = "sha256:c97dfde1f7bd43a71c8d2a58e369e9b2bf692d1334ea9f9cae55add7d0dd0f84"},
+]
+
+[package.extras]
+brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)"]
+secure = ["certifi", "cryptography (>=1.9)", "idna (>=2.0.0)", "pyopenssl (>=17.1.0)", "urllib3-secure-extra"]
+socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"]
+zstd = ["zstandard (>=0.18.0)"]
+
 [[package]]
 name = "warlock"
 version = "2.0.1"
@@ -837,4 +1334,4 @@ jsonschema = ">=4,<5"
 [metadata]
 lock-version = "2.0"
 python-versions = "^3.10"
-content-hash = "3501e97b3dadc19fe8ae179fe21b1edd2488001da9a8e86ff2bca0b86b99b89b"
+content-hash = "5faad2e53833e9b8a353ad3554c58de991801a9ebe8f9712fc9c839b35e7a789"
diff --git a/dts/pyproject.toml b/dts/pyproject.toml
index 3943c87c87..98df431b3b 100644
--- a/dts/pyproject.toml
+++ b/dts/pyproject.toml
@@ -35,6 +35,13 @@ pylama = "^8.4.1"
 pyflakes = "^2.5.0"
 toml = "^0.10.2"
 
+[tool.poetry.group.docs]
+optional = true
+
+[tool.poetry.group.docs.dependencies]
+sphinx = "<7"
+sphinx-rtd-theme = "^1.2.2"
+
 [build-system]
 requires = ["poetry-core>=1.0.0"]
 build-backend = "poetry.core.masonry.api"
-- 
2.34.1



* [PATCH v6 23/23] dts: add doc generation
  2023-11-08 12:53         ` [PATCH v6 01/23] dts: code adjustments for " Juraj Linkeš
                             ` (20 preceding siblings ...)
  2023-11-08 12:53           ` [PATCH v6 22/23] dts: add doc generation dependencies Juraj Linkeš
@ 2023-11-08 12:53           ` Juraj Linkeš
  2023-11-15 13:09             ` [PATCH v7 00/21] dts: docstrings update Juraj Linkeš
  2023-11-15 13:36             ` [PATCH v1 0/2] dts: api docs generation Juraj Linkeš
  21 siblings, 2 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-08 12:53 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

The tool used to generate developer docs is Sphinx, which is already
used in DPDK. The same configuration is used to preserve style, but it's
been augmented with doc-generating configuration and a change to how the
sidebar displays the content hierarchy.

Sphinx generates the documentation from Python docstrings. The docstring
format is the Google format [0] which requires the sphinx.ext.napoleon
extension. The other extension, sphinx.ext.intersphinx, enables linking
to objects in external documentation, such as the Python documentation.

There are two requirements for building DTS docs:
* The same Python version as DTS or higher, because Sphinx imports the
  code.
* The same Python packages as DTS, for the same reason.

[0] https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings
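
For reference, a Google-format docstring that sphinx.ext.napoleon parses
looks roughly like this (the function is a made-up example, not DTS code):

    def count_usable_lcores(lcores: list[int], limit: int) -> int:
        """Count the logical cores usable for a test run.

        Args:
            lcores: The logical core ids discovered on the node.
            limit: The maximum number of cores to consider.

        Returns:
            The number of usable cores, at most `limit`.
        """
        return min(len(lcores), limit)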

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 buildtools/call-sphinx-build.py | 29 ++++++++++------
 doc/api/meson.build             |  1 +
 doc/guides/conf.py              | 34 ++++++++++++++++---
 doc/guides/meson.build          |  1 +
 doc/guides/tools/dts.rst        | 32 +++++++++++++++++-
 dts/doc/conf_yaml_schema.json   |  1 +
 dts/doc/index.rst               | 17 ++++++++++
 dts/doc/meson.build             | 60 +++++++++++++++++++++++++++++++++
 dts/meson.build                 | 16 +++++++++
 meson.build                     |  1 +
 10 files changed, 176 insertions(+), 16 deletions(-)
 create mode 120000 dts/doc/conf_yaml_schema.json
 create mode 100644 dts/doc/index.rst
 create mode 100644 dts/doc/meson.build
 create mode 100644 dts/meson.build

diff --git a/buildtools/call-sphinx-build.py b/buildtools/call-sphinx-build.py
index 39a60d09fa..c2f3acfb1d 100755
--- a/buildtools/call-sphinx-build.py
+++ b/buildtools/call-sphinx-build.py
@@ -3,37 +3,46 @@
 # Copyright(c) 2019 Intel Corporation
 #
 
+import argparse
 import sys
 import os
 from os.path import join
 from subprocess import run, PIPE, STDOUT
 from packaging.version import Version
 
-# assign parameters to variables
-(sphinx, version, src, dst, *extra_args) = sys.argv[1:]
+parser = argparse.ArgumentParser()
+parser.add_argument('sphinx')
+parser.add_argument('version')
+parser.add_argument('src')
+parser.add_argument('dst')
+parser.add_argument('--dts-root', default='.')
+args, extra_args = parser.parse_known_args()
 
 # set the version in environment for sphinx to pick up
-os.environ['DPDK_VERSION'] = version
+os.environ['DPDK_VERSION'] = args.version
+os.environ['DTS_ROOT'] = args.dts_root
 
 # for sphinx version >= 1.7 add parallelism using "-j auto"
-ver = run([sphinx, '--version'], stdout=PIPE,
+ver = run([args.sphinx, '--version'], stdout=PIPE,
           stderr=STDOUT).stdout.decode().split()[-1]
-sphinx_cmd = [sphinx] + extra_args
+sphinx_cmd = [args.sphinx] + extra_args
 if Version(ver) >= Version('1.7'):
     sphinx_cmd += ['-j', 'auto']
 
 # find all the files sphinx will process so we can write them as dependencies
 srcfiles = []
-for root, dirs, files in os.walk(src):
+for root, dirs, files in os.walk(args.src):
     srcfiles.extend([join(root, f) for f in files])
 
 # run sphinx, putting the html output in a "html" directory
-with open(join(dst, 'sphinx_html.out'), 'w') as out:
-    process = run(sphinx_cmd + ['-b', 'html', src, join(dst, 'html')],
-                  stdout=out)
+with open(join(args.dst, 'sphinx_html.out'), 'w') as out:
+    process = run(
+        sphinx_cmd + ['-b', 'html', args.src, join(args.dst, 'html')],
+        stdout=out
+    )
 
 # create a gcc format .d file giving all the dependencies of this doc build
-with open(join(dst, '.html.d'), 'w') as d:
+with open(join(args.dst, '.html.d'), 'w') as d:
     d.write('html: ' + ' '.join(srcfiles) + '\n')
 
 sys.exit(process.returncode)
diff --git a/doc/api/meson.build b/doc/api/meson.build
index 5b50692df9..92fe10d9e7 100644
--- a/doc/api/meson.build
+++ b/doc/api/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Luca Boccassi <bluca@debian.org>
 
+doc_api_build_dir = meson.current_build_dir()
 doxygen = find_program('doxygen', required: get_option('enable_docs'))
 
 if not doxygen.found()
diff --git a/doc/guides/conf.py b/doc/guides/conf.py
index 0f7ff5282d..169b1d24bc 100644
--- a/doc/guides/conf.py
+++ b/doc/guides/conf.py
@@ -7,10 +7,9 @@
 from sphinx import __version__ as sphinx_version
 from os import listdir
 from os import environ
-from os.path import basename
-from os.path import dirname
+from os.path import basename, dirname
 from os.path import join as path_join
-from sys import argv, stderr
+from sys import argv, stderr, path
 
 import configparser
 
@@ -24,6 +23,31 @@
           file=stderr)
     pass
 
+extensions = ['sphinx.ext.napoleon', 'sphinx.ext.intersphinx']
+
+# Python docstring options
+autodoc_default_options = {
+    'members': True,
+    'member-order': 'bysource',
+    'show-inheritance': True,
+}
+autodoc_class_signature = 'separated'
+autodoc_typehints = 'both'
+autodoc_typehints_format = 'short'
+autodoc_typehints_description_target = 'documented'
+napoleon_numpy_docstring = False
+napoleon_attr_annotations = True
+napoleon_preprocess_types = True
+add_module_names = False
+toc_object_entries = False
+intersphinx_mapping = {'python': ('https://docs.python.org/3', None)}
+
+# Sidebar config
+html_theme_options = {
+    'collapse_navigation': False,
+    'navigation_depth': -1,
+}
+
 stop_on_error = ('-W' in argv)
 
 project = 'Data Plane Development Kit'
@@ -35,8 +59,8 @@
 html_show_copyright = False
 highlight_language = 'none'
 
-release = environ.setdefault('DPDK_VERSION', "None")
-version = release
+path.append(environ.get('DTS_ROOT'))
+version = environ.setdefault('DPDK_VERSION', "None")
 
 master_doc = 'index'
 
diff --git a/doc/guides/meson.build b/doc/guides/meson.build
index 51f81da2e3..8933d75f6b 100644
--- a/doc/guides/meson.build
+++ b/doc/guides/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Intel Corporation
 
+doc_guides_source_dir = meson.current_source_dir()
 sphinx = find_program('sphinx-build', required: get_option('enable_docs'))
 
 if not sphinx.found()
diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
index cd771a428c..77d9434c1c 100644
--- a/doc/guides/tools/dts.rst
+++ b/doc/guides/tools/dts.rst
@@ -283,7 +283,10 @@ When adding code to the DTS framework, pay attention to the rest of the code
 and try not to divert much from it. The :ref:`DTS developer tools <dts_dev_tools>` will issue
 warnings when some of the basics are not met.
 
-The code must be properly documented with docstrings. The style must conform to
+The API documentation, which is a helpful reference when developing, may be accessed
+in the code directly or generated with the :ref:`API docs build steps <building_api_docs>`.
+
+To that end, the code must be properly documented with docstrings. The style must conform to
 the `Google style <https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings>`_.
 See an example of the style
 `here <https://www.sphinx-doc.org/en/master/usage/extensions/example_google.html>`_.
@@ -408,3 +411,30 @@ There are three tools used in DTS to help with code checking, style and formatti
 These three tools are all used in ``devtools/dts-check-format.sh``,
 the DTS code check and format script.
 Refer to the script for usage: ``devtools/dts-check-format.sh -h``.
+
+
+.. _building_api_docs:
+
+Building DTS API docs
+---------------------
+
+To build DTS API docs, install the dependencies with Poetry, then enter its shell:
+
+.. code-block:: console
+
+   poetry install --with docs
+   poetry shell
+
+The documentation is built using the standard DPDK build system. After executing the meson command
+and entering Poetry's shell, build the documentation with:
+
+.. code-block:: console
+
+   ninja -C build dts-doc
+
+The output is generated in ``build/doc/api/dts/html``.
+
+.. Note::
+
+   Make sure to fix any Sphinx warnings when adding or updating docstrings. Also make sure to run
+   the ``devtools/dts-check-format.sh`` script and address any issues it finds.
diff --git a/dts/doc/conf_yaml_schema.json b/dts/doc/conf_yaml_schema.json
new file mode 120000
index 0000000000..d89eb81b72
--- /dev/null
+++ b/dts/doc/conf_yaml_schema.json
@@ -0,0 +1 @@
+../framework/config/conf_yaml_schema.json
\ No newline at end of file
diff --git a/dts/doc/index.rst b/dts/doc/index.rst
new file mode 100644
index 0000000000..f5dcd553f2
--- /dev/null
+++ b/dts/doc/index.rst
@@ -0,0 +1,17 @@
+.. DPDK Test Suite documentation.
+
+Welcome to DPDK Test Suite's documentation!
+===========================================
+
+.. toctree::
+   :titlesonly:
+   :caption: Contents:
+
+   framework
+
+Indices and tables
+==================
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`
diff --git a/dts/doc/meson.build b/dts/doc/meson.build
new file mode 100644
index 0000000000..e11ab83843
--- /dev/null
+++ b/dts/doc/meson.build
@@ -0,0 +1,60 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+sphinx = find_program('sphinx-build', required: false)
+sphinx_apidoc = find_program('sphinx-apidoc', required: false)
+
+if not sphinx.found() or not sphinx_apidoc.found()
+    subdir_done()
+endif
+
+dts_api_framework_dir = join_paths(dts_dir, 'framework')
+dts_api_build_dir = join_paths(doc_api_build_dir, 'dts')
+if meson.version().version_compare('>=0.57.0')
+    dts_api_src = custom_target('dts_api_src',
+            output: 'modules.rst',
+            env: {'SPHINX_APIDOC_OPTIONS': 'members,show-inheritance'},
+            command: [sphinx_apidoc, '--append-syspath', '--force',
+                '--module-first', '--separate', '-V', meson.project_version(),
+                '--output-dir', dts_api_build_dir, '--no-toc', '--implicit-namespaces',
+                dts_api_framework_dir],
+            build_by_default: false)
+else
+    dts_api_src = custom_target('dts_api_src',
+            output: 'modules.rst',
+            command: ['SPHINX_APIDOC_OPTIONS=members,show-inheritance',
+                sphinx_apidoc, '--append-syspath', '--force',
+                '--module-first', '--separate', '-V', meson.project_version(),
+                '--output-dir', dts_api_build_dir, '--no-toc', '--implicit-namespaces',
+                dts_api_framework_dir],
+            build_by_default: false)
+endif
+doc_targets += dts_api_src
+doc_target_names += 'DTS_API_sphinx_sources'
+
+cp = find_program('cp')
+cp_index = custom_target('cp_index',
+        input: ['index.rst', 'conf_yaml_schema.json'],
+        output: 'index.rst',
+        depends: dts_api_src,
+        command: [cp, '--dereference', '@INPUT@', dts_api_build_dir],
+        build_by_default: false)
+doc_targets += cp_index
+doc_target_names += 'DTS_API_sphinx_index'
+
+extra_sphinx_args = ['-E', '-c', doc_guides_source_dir, '--dts-root', dts_dir]
+if get_option('werror')
+    extra_sphinx_args += '-W'
+endif
+
+htmldir = join_paths(get_option('datadir'), 'doc', 'dpdk')
+dts_api_html = custom_target('dts_api_html',
+        output: 'html',
+        depends: cp_index,
+        command: [sphinx_wrapper, sphinx, meson.project_version(),
+            dts_api_build_dir, dts_api_build_dir, extra_sphinx_args],
+        build_by_default: false,
+        install: false,
+        install_dir: htmldir)
+doc_targets += dts_api_html
+doc_target_names += 'DTS_API_HTML'
diff --git a/dts/meson.build b/dts/meson.build
new file mode 100644
index 0000000000..e8ce0f06ac
--- /dev/null
+++ b/dts/meson.build
@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+doc_targets = []
+doc_target_names = []
+dts_dir = meson.current_source_dir()
+
+subdir('doc')
+
+if doc_targets.length() == 0
+    message = 'No docs targets found'
+else
+    message = 'Built docs:'
+endif
+run_target('dts-doc', command: [echo, message, doc_target_names],
+    depends: doc_targets)
diff --git a/meson.build b/meson.build
index 2e6e546d20..c391bf8c71 100644
--- a/meson.build
+++ b/meson.build
@@ -87,6 +87,7 @@ subdir('app')
 
 # build docs
 subdir('doc')
+subdir('dts')
 
 # build any examples explicitly requested - useful for developers - and
 # install any example code into the appropriate install path
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v5 01/23] dts: code adjustments for doc generation
  2023-11-06 17:15         ` [PATCH v5 01/23] dts: code adjustments for doc generation Juraj Linkeš
@ 2023-11-08 13:35           ` Yoan Picchi
  2023-11-15  7:46             ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Yoan Picchi @ 2023-11-08 13:35 UTC (permalink / raw)
  To: Juraj Linkeš,
	thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek
  Cc: dev

On 11/6/23 17:15, Juraj Linkeš wrote:
> The standard Python tool for generating API documentation, Sphinx,
> imports modules one-by-one when generating the documentation. This
> requires code changes:
> * properly guarding argument parsing in the if __name__ == '__main__'
>    block,
> * the logger used by DTS runner underwent the same treatment so that it
>    doesn't create log files outside of a DTS run,
> * however, DTS uses the arguments to construct an object holding global
>    variables. The defaults for the global variables needed to be moved
>    out of argument parsing,
> * importing the remote_session module from framework resulted in
>    circular imports because of one module trying to import another
>    module. This is fixed by reorganizing the code,
> * some code reorganization was done because the resulting structure
>    makes more sense, improving documentation clarity.
> 
> There are some other changes which are documentation-related:
> * added missing type annotations so they appear in the generated docs,
> * reordered arguments in some methods,
> * removed superfluous arguments and attributes,
> * changed some functions/methods/attributes from public to private and
>    vice versa.
> 
> All of the above appear in the generated documentation and, with them,
> the documentation is improved.
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>   dts/framework/config/__init__.py              | 10 ++-
>   dts/framework/dts.py                          | 33 +++++--
>   dts/framework/exception.py                    | 54 +++++-------
>   dts/framework/remote_session/__init__.py      | 41 ++++-----
>   .../interactive_remote_session.py             |  0
>   .../{remote => }/interactive_shell.py         |  0
>   .../{remote => }/python_shell.py              |  0
>   .../remote_session/remote/__init__.py         | 27 ------
>   .../{remote => }/remote_session.py            |  0
>   .../{remote => }/ssh_session.py               | 12 +--
>   .../{remote => }/testpmd_shell.py             |  0
>   dts/framework/settings.py                     | 87 +++++++++++--------
>   dts/framework/test_result.py                  |  4 +-
>   dts/framework/test_suite.py                   |  7 +-
>   dts/framework/testbed_model/__init__.py       | 12 +--
>   dts/framework/testbed_model/{hw => }/cpu.py   | 13 +++
>   dts/framework/testbed_model/hw/__init__.py    | 27 ------
>   .../linux_session.py                          |  6 +-
>   dts/framework/testbed_model/node.py           | 26 ++++--
>   .../os_session.py                             | 22 ++---
>   dts/framework/testbed_model/{hw => }/port.py  |  0
>   .../posix_session.py                          |  4 +-
>   dts/framework/testbed_model/sut_node.py       |  8 +-
>   dts/framework/testbed_model/tg_node.py        | 30 +------
>   .../traffic_generator/__init__.py             | 24 +++++
>   .../capturing_traffic_generator.py            |  6 +-
>   .../{ => traffic_generator}/scapy.py          | 23 ++---
>   .../traffic_generator.py                      | 16 +++-
>   .../testbed_model/{hw => }/virtual_device.py  |  0
>   dts/framework/utils.py                        | 46 +++-------
>   dts/main.py                                   |  9 +-
>   31 files changed, 259 insertions(+), 288 deletions(-)
>   rename dts/framework/remote_session/{remote => }/interactive_remote_session.py (100%)
>   rename dts/framework/remote_session/{remote => }/interactive_shell.py (100%)
>   rename dts/framework/remote_session/{remote => }/python_shell.py (100%)
>   delete mode 100644 dts/framework/remote_session/remote/__init__.py
>   rename dts/framework/remote_session/{remote => }/remote_session.py (100%)
>   rename dts/framework/remote_session/{remote => }/ssh_session.py (91%)
>   rename dts/framework/remote_session/{remote => }/testpmd_shell.py (100%)
>   rename dts/framework/testbed_model/{hw => }/cpu.py (95%)
>   delete mode 100644 dts/framework/testbed_model/hw/__init__.py
>   rename dts/framework/{remote_session => testbed_model}/linux_session.py (97%)
>   rename dts/framework/{remote_session => testbed_model}/os_session.py (95%)
>   rename dts/framework/testbed_model/{hw => }/port.py (100%)
>   rename dts/framework/{remote_session => testbed_model}/posix_session.py (98%)
>   create mode 100644 dts/framework/testbed_model/traffic_generator/__init__.py
>   rename dts/framework/testbed_model/{ => traffic_generator}/capturing_traffic_generator.py (96%)
>   rename dts/framework/testbed_model/{ => traffic_generator}/scapy.py (95%)
>   rename dts/framework/testbed_model/{ => traffic_generator}/traffic_generator.py (80%)
>   rename dts/framework/testbed_model/{hw => }/virtual_device.py (100%)
> 
> diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
> index cb7e00ba34..2044c82611 100644
> --- a/dts/framework/config/__init__.py
> +++ b/dts/framework/config/__init__.py
> @@ -17,6 +17,7 @@
>   import warlock  # type: ignore[import]
>   import yaml
>   
> +from framework.exception import ConfigurationError
>   from framework.settings import SETTINGS
>   from framework.utils import StrEnum
>   
> @@ -89,7 +90,7 @@ class TrafficGeneratorConfig:
>       traffic_generator_type: TrafficGeneratorType
>   
>       @staticmethod
> -    def from_dict(d: dict):
> +    def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig":

This function looks to be designed to support more traffic generators 
than just Scapy, so setting its return type specifically to the Scapy 
config looks wrong. Shouldn't it be a more generic traffic generator 
type, like you did in create_traffic_generator()?
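
Something like this, for instance (just a sketch; the body is unchanged 
from the patch, only the return annotation differs):

    @staticmethod
    def from_dict(d: dict) -> "TrafficGeneratorConfig":
        match TrafficGeneratorType(d["type"]):
            case TrafficGeneratorType.SCAPY:
                return ScapyTrafficGeneratorConfig(
                    traffic_generator_type=TrafficGeneratorType.SCAPY
                )
            case _:
                raise ConfigurationError(
                    f'Unknown traffic generator type "{d["type"]}".'
                )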

>           # This looks useless now, but is designed to allow expansion to traffic
>           # generators that require more configuration later.
>           match TrafficGeneratorType(d["type"]):
> @@ -97,6 +98,10 @@ def from_dict(d: dict):
>                   return ScapyTrafficGeneratorConfig(
>                       traffic_generator_type=TrafficGeneratorType.SCAPY
>                   )
> +            case _:
> +                raise ConfigurationError(
> +                    f'Unknown traffic generator type "{d["type"]}".'
> +                )
>   
>   
>   @dataclass(slots=True, frozen=True)
> @@ -324,6 +329,3 @@ def load_config() -> Configuration:
>       config: dict[str, Any] = warlock.model_factory(schema, name="_Config")(config_data)
>       config_obj: Configuration = Configuration.from_dict(dict(config))
>       return config_obj
> -
> -
> -CONFIGURATION = load_config()
> diff --git a/dts/framework/dts.py b/dts/framework/dts.py
> index f773f0c38d..4c7fb0c40a 100644
> --- a/dts/framework/dts.py
> +++ b/dts/framework/dts.py
> @@ -6,19 +6,19 @@
>   import sys
>   
>   from .config import (
> -    CONFIGURATION,
>       BuildTargetConfiguration,
>       ExecutionConfiguration,
>       TestSuiteConfig,
> +    load_config,
>   )
>   from .exception import BlockingTestSuiteError
>   from .logger import DTSLOG, getLogger
>   from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result
>   from .test_suite import get_test_suites
>   from .testbed_model import SutNode, TGNode
> -from .utils import check_dts_python_version
>   
> -dts_logger: DTSLOG = getLogger("DTSRunner")
> +# dummy defaults to satisfy linters
> +dts_logger: DTSLOG = None  # type: ignore[assignment]
>   result: DTSResult = DTSResult(dts_logger)
>   
>   
> @@ -30,14 +30,18 @@ def run_all() -> None:
>       global dts_logger
>       global result
>   
> +    # create a regular DTS logger and create a new result with it
> +    dts_logger = getLogger("DTSRunner")
> +    result = DTSResult(dts_logger)
> +
>       # check the python version of the server that run dts
> -    check_dts_python_version()
> +    _check_dts_python_version()
>   
>       sut_nodes: dict[str, SutNode] = {}
>       tg_nodes: dict[str, TGNode] = {}
>       try:
>           # for all Execution sections
> -        for execution in CONFIGURATION.executions:
> +        for execution in load_config().executions:
>               sut_node = sut_nodes.get(execution.system_under_test_node.name)
>               tg_node = tg_nodes.get(execution.traffic_generator_node.name)
>   
> @@ -82,6 +86,25 @@ def run_all() -> None:
>       _exit_dts()
>   
>   
> +def _check_dts_python_version() -> None:
> +    def RED(text: str) -> str:
> +        return f"\u001B[31;1m{str(text)}\u001B[0m"
> +
> +    if sys.version_info.major < 3 or (
> +        sys.version_info.major == 3 and sys.version_info.minor < 10
> +    ):
> +        print(
> +            RED(
> +                (
> +                    "WARNING: DTS execution node's python version is lower than"
> +                    " python 3.10, is deprecated and will not work in future releases."
> +                )
> +            ),
> +            file=sys.stderr,
> +        )
> +        print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
> +
> +
>   def _run_execution(
>       sut_node: SutNode,
>       tg_node: TGNode,
> diff --git a/dts/framework/exception.py b/dts/framework/exception.py
> index 001a5a5496..7489c03570 100644
> --- a/dts/framework/exception.py
> +++ b/dts/framework/exception.py
> @@ -42,19 +42,14 @@ class SSHTimeoutError(DTSError):
>       Command execution timeout.
>       """
>   
> -    command: str
> -    output: str
>       severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
> +    _command: str
>   
> -    def __init__(self, command: str, output: str):
> -        self.command = command
> -        self.output = output
> +    def __init__(self, command: str):
> +        self._command = command
>   
>       def __str__(self) -> str:
> -        return f"TIMEOUT on {self.command}"
> -
> -    def get_output(self) -> str:
> -        return self.output
> +        return f"TIMEOUT on {self._command}"
>   
>   
>   class SSHConnectionError(DTSError):
> @@ -62,18 +57,18 @@ class SSHConnectionError(DTSError):
>       SSH connection error.
>       """
>   
> -    host: str
> -    errors: list[str]
>       severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
> +    _host: str
> +    _errors: list[str]
>   
>       def __init__(self, host: str, errors: list[str] | None = None):
> -        self.host = host
> -        self.errors = [] if errors is None else errors
> +        self._host = host
> +        self._errors = [] if errors is None else errors
>   
>       def __str__(self) -> str:
> -        message = f"Error trying to connect with {self.host}."
> -        if self.errors:
> -            message += f" Errors encountered while retrying: {', '.join(self.errors)}"
> +        message = f"Error trying to connect with {self._host}."
> +        if self._errors:
> +            message += f" Errors encountered while retrying: {', '.join(self._errors)}"
>   
>           return message
>   
> @@ -84,14 +79,14 @@ class SSHSessionDeadError(DTSError):
>       It can no longer be used.
>       """
>   
> -    host: str
>       severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
> +    _host: str
>   
>       def __init__(self, host: str):
> -        self.host = host
> +        self._host = host
>   
>       def __str__(self) -> str:
> -        return f"SSH session with {self.host} has died"
> +        return f"SSH session with {self._host} has died"
>   
>   
>   class ConfigurationError(DTSError):
> @@ -107,18 +102,18 @@ class RemoteCommandExecutionError(DTSError):
>       Raised when a command executed on a Node returns a non-zero exit status.
>       """
>   
> -    command: str
> -    command_return_code: int
>       severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
> +    command: str

Did you forget the _?

> +    _command_return_code: int
>   
>       def __init__(self, command: str, command_return_code: int):
>           self.command = command
> -        self.command_return_code = command_return_code
> +        self._command_return_code = command_return_code
>   
>       def __str__(self) -> str:
>           return (
>               f"Command {self.command} returned a non-zero exit code: "
> -            f"{self.command_return_code}"
> +            f"{self._command_return_code}"
>           )
>   
>   
> @@ -143,22 +138,15 @@ class TestCaseVerifyError(DTSError):
>       Used in test cases to verify the expected behavior.
>       """
>   
> -    value: str
>       severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR
>   
> -    def __init__(self, value: str):
> -        self.value = value
> -
> -    def __str__(self) -> str:
> -        return repr(self.value)
> -
>   
>   class BlockingTestSuiteError(DTSError):
> -    suite_name: str
>       severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR
> +    _suite_name: str
>   
>       def __init__(self, suite_name: str) -> None:
> -        self.suite_name = suite_name
> +        self._suite_name = suite_name
>   
>       def __str__(self) -> str:
> -        return f"Blocking suite {self.suite_name} failed."
> +        return f"Blocking suite {self._suite_name} failed."
> diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
> index 00b6d1f03a..5e7ddb2b05 100644
> --- a/dts/framework/remote_session/__init__.py
> +++ b/dts/framework/remote_session/__init__.py
> @@ -12,29 +12,24 @@
>   
>   # pylama:ignore=W0611
>   
> -from framework.config import OS, NodeConfiguration
> -from framework.exception import ConfigurationError
> +from framework.config import NodeConfiguration
>   from framework.logger import DTSLOG
>   
> -from .linux_session import LinuxSession
> -from .os_session import InteractiveShellType, OSSession
> -from .remote import (
> -    CommandResult,
> -    InteractiveRemoteSession,
> -    InteractiveShell,
> -    PythonShell,
> -    RemoteSession,
> -    SSHSession,
> -    TestPmdDevice,
> -    TestPmdShell,
> -)
> -
> -
> -def create_session(
> +from .interactive_remote_session import InteractiveRemoteSession
> +from .interactive_shell import InteractiveShell
> +from .python_shell import PythonShell
> +from .remote_session import CommandResult, RemoteSession
> +from .ssh_session import SSHSession
> +from .testpmd_shell import TestPmdShell
> +
> +
> +def create_remote_session(
>       node_config: NodeConfiguration, name: str, logger: DTSLOG
> -) -> OSSession:
> -    match node_config.os:
> -        case OS.linux:
> -            return LinuxSession(node_config, name, logger)
> -        case _:
> -            raise ConfigurationError(f"Unsupported OS {node_config.os}")
> +) -> RemoteSession:
> +    return SSHSession(node_config, name, logger)
> +
> +
> +def create_interactive_session(
> +    node_config: NodeConfiguration, logger: DTSLOG
> +) -> InteractiveRemoteSession:
> +    return InteractiveRemoteSession(node_config, logger)
> diff --git a/dts/framework/remote_session/remote/interactive_remote_session.py b/dts/framework/remote_session/interactive_remote_session.py
> similarity index 100%
> rename from dts/framework/remote_session/remote/interactive_remote_session.py
> rename to dts/framework/remote_session/interactive_remote_session.py
> diff --git a/dts/framework/remote_session/remote/interactive_shell.py b/dts/framework/remote_session/interactive_shell.py
> similarity index 100%
> rename from dts/framework/remote_session/remote/interactive_shell.py
> rename to dts/framework/remote_session/interactive_shell.py
> diff --git a/dts/framework/remote_session/remote/python_shell.py b/dts/framework/remote_session/python_shell.py
> similarity index 100%
> rename from dts/framework/remote_session/remote/python_shell.py
> rename to dts/framework/remote_session/python_shell.py
> diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
> deleted file mode 100644
> index 06403691a5..0000000000
> --- a/dts/framework/remote_session/remote/__init__.py
> +++ /dev/null
> @@ -1,27 +0,0 @@
> -# SPDX-License-Identifier: BSD-3-Clause
> -# Copyright(c) 2023 PANTHEON.tech s.r.o.
> -# Copyright(c) 2023 University of New Hampshire
> -
> -# pylama:ignore=W0611
> -
> -from framework.config import NodeConfiguration
> -from framework.logger import DTSLOG
> -
> -from .interactive_remote_session import InteractiveRemoteSession
> -from .interactive_shell import InteractiveShell
> -from .python_shell import PythonShell
> -from .remote_session import CommandResult, RemoteSession
> -from .ssh_session import SSHSession
> -from .testpmd_shell import TestPmdDevice, TestPmdShell
> -
> -
> -def create_remote_session(
> -    node_config: NodeConfiguration, name: str, logger: DTSLOG
> -) -> RemoteSession:
> -    return SSHSession(node_config, name, logger)
> -
> -
> -def create_interactive_session(
> -    node_config: NodeConfiguration, logger: DTSLOG
> -) -> InteractiveRemoteSession:
> -    return InteractiveRemoteSession(node_config, logger)
> diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote_session.py
> similarity index 100%
> rename from dts/framework/remote_session/remote/remote_session.py
> rename to dts/framework/remote_session/remote_session.py
> diff --git a/dts/framework/remote_session/remote/ssh_session.py b/dts/framework/remote_session/ssh_session.py
> similarity index 91%
> rename from dts/framework/remote_session/remote/ssh_session.py
> rename to dts/framework/remote_session/ssh_session.py
> index 8d127f1601..cee11d14d6 100644
> --- a/dts/framework/remote_session/remote/ssh_session.py
> +++ b/dts/framework/remote_session/ssh_session.py
> @@ -18,9 +18,7 @@
>       SSHException,
>   )
>   
> -from framework.config import NodeConfiguration
>   from framework.exception import SSHConnectionError, SSHSessionDeadError, SSHTimeoutError
> -from framework.logger import DTSLOG
>   
>   from .remote_session import CommandResult, RemoteSession
>   
> @@ -45,14 +43,6 @@ class SSHSession(RemoteSession):
>   
>       session: Connection
>   
> -    def __init__(
> -        self,
> -        node_config: NodeConfiguration,
> -        session_name: str,
> -        logger: DTSLOG,
> -    ):
> -        super(SSHSession, self).__init__(node_config, session_name, logger)
> -
>       def _connect(self) -> None:
>           errors = []
>           retry_attempts = 10
> @@ -117,7 +107,7 @@ def _send_command(
>   
>           except CommandTimedOut as e:
>               self._logger.exception(e)
> -            raise SSHTimeoutError(command, e.result.stderr) from e
> +            raise SSHTimeoutError(command) from e
>   
>           return CommandResult(
>               self.name, command, output.stdout, output.stderr, output.return_code
> diff --git a/dts/framework/remote_session/remote/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py
> similarity index 100%
> rename from dts/framework/remote_session/remote/testpmd_shell.py
> rename to dts/framework/remote_session/testpmd_shell.py
> diff --git a/dts/framework/settings.py b/dts/framework/settings.py
> index cfa39d011b..7f5841d073 100644
> --- a/dts/framework/settings.py
> +++ b/dts/framework/settings.py
> @@ -6,7 +6,7 @@
>   import argparse
>   import os
>   from collections.abc import Callable, Iterable, Sequence
> -from dataclasses import dataclass
> +from dataclasses import dataclass, field
>   from pathlib import Path
>   from typing import Any, TypeVar
>   
> @@ -22,8 +22,8 @@ def __init__(
>               option_strings: Sequence[str],
>               dest: str,
>               nargs: str | int | None = None,
> -            const: str | None = None,
> -            default: str = None,
> +            const: bool | None = None,
> +            default: Any = None,
>               type: Callable[[str], _T | argparse.FileType | None] = None,
>               choices: Iterable[_T] | None = None,
>               required: bool = False,
> @@ -32,6 +32,12 @@ def __init__(
>           ) -> None:
>               env_var_value = os.environ.get(env_var)
>               default = env_var_value or default
> +            if const is not None:
> +                nargs = 0
> +                default = const if env_var_value else default
> +                type = None
> +                choices = None
> +                metavar = None
>               super(_EnvironmentArgument, self).__init__(
>                   option_strings,
>                   dest,
> @@ -52,22 +58,28 @@ def __call__(
>               values: Any,
>               option_string: str = None,
>           ) -> None:
> -            setattr(namespace, self.dest, values)
> +            if self.const is not None:
> +                setattr(namespace, self.dest, self.const)
> +            else:
> +                setattr(namespace, self.dest, values)
>   
>       return _EnvironmentArgument
>   
>   
> -@dataclass(slots=True, frozen=True)
> -class _Settings:
> -    config_file_path: str
> -    output_dir: str
> -    timeout: float
> -    verbose: bool
> -    skip_setup: bool
> -    dpdk_tarball_path: Path
> -    compile_timeout: float
> -    test_cases: list
> -    re_run: int
> +@dataclass(slots=True)
> +class Settings:
> +    config_file_path: Path = Path(__file__).parent.parent.joinpath("conf.yaml")
> +    output_dir: str = "output"
> +    timeout: float = 15
> +    verbose: bool = False
> +    skip_setup: bool = False
> +    dpdk_tarball_path: Path | str = "dpdk.tar.xz"
> +    compile_timeout: float = 1200
> +    test_cases: list[str] = field(default_factory=list)
> +    re_run: int = 0
> +
> +
> +SETTINGS: Settings = Settings()
>   
>   
>   def _get_parser() -> argparse.ArgumentParser:
> @@ -81,7 +93,8 @@ def _get_parser() -> argparse.ArgumentParser:
>       parser.add_argument(
>           "--config-file",
>           action=_env_arg("DTS_CFG_FILE"),
> -        default="conf.yaml",
> +        default=SETTINGS.config_file_path,
> +        type=Path,
>           help="[DTS_CFG_FILE] configuration file that describes the test cases, SUTs "
>           "and targets.",
>       )
> @@ -90,7 +103,7 @@ def _get_parser() -> argparse.ArgumentParser:
>           "--output-dir",
>           "--output",
>           action=_env_arg("DTS_OUTPUT_DIR"),
> -        default="output",
> +        default=SETTINGS.output_dir,
>           help="[DTS_OUTPUT_DIR] Output directory where dts logs and results are saved.",
>       )
>   
> @@ -98,7 +111,7 @@ def _get_parser() -> argparse.ArgumentParser:
>           "-t",
>           "--timeout",
>           action=_env_arg("DTS_TIMEOUT"),
> -        default=15,
> +        default=SETTINGS.timeout,
>           type=float,
>           help="[DTS_TIMEOUT] The default timeout for all DTS operations except for "
>           "compiling DPDK.",
> @@ -108,8 +121,9 @@ def _get_parser() -> argparse.ArgumentParser:
>           "-v",
>           "--verbose",
>           action=_env_arg("DTS_VERBOSE"),
> -        default="N",
> -        help="[DTS_VERBOSE] Set to 'Y' to enable verbose output, logging all messages "
> +        default=SETTINGS.verbose,
> +        const=True,
> +        help="[DTS_VERBOSE] Specify to enable verbose output, logging all messages "
>           "to the console.",
>       )
>   
> @@ -117,8 +131,8 @@ def _get_parser() -> argparse.ArgumentParser:
>           "-s",
>           "--skip-setup",
>           action=_env_arg("DTS_SKIP_SETUP"),
> -        default="N",
> -        help="[DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT and TG nodes.",
> +        const=True,
> +        help="[DTS_SKIP_SETUP] Specify to skip all setup steps on SUT and TG nodes.",
>       )
>   
>       parser.add_argument(
> @@ -126,7 +140,7 @@ def _get_parser() -> argparse.ArgumentParser:
>           "--snapshot",
>           "--git-ref",
>           action=_env_arg("DTS_DPDK_TARBALL"),
> -        default="dpdk.tar.xz",
> +        default=SETTINGS.dpdk_tarball_path,
>           type=Path,
>           help="[DTS_DPDK_TARBALL] Path to DPDK source code tarball or a git commit ID, "
>           "tag ID or tree ID to test. To test local changes, first commit them, "
> @@ -136,7 +150,7 @@ def _get_parser() -> argparse.ArgumentParser:
>       parser.add_argument(
>           "--compile-timeout",
>           action=_env_arg("DTS_COMPILE_TIMEOUT"),
> -        default=1200,
> +        default=SETTINGS.compile_timeout,
>           type=float,
>           help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
>       )
> @@ -153,7 +167,7 @@ def _get_parser() -> argparse.ArgumentParser:
>           "--re-run",
>           "--re_run",
>           action=_env_arg("DTS_RERUN"),
> -        default=0,
> +        default=SETTINGS.re_run,
>           type=int,
>           help="[DTS_RERUN] Re-run each test case the specified amount of times "
>           "if a test failure occurs",
> @@ -162,23 +176,22 @@ def _get_parser() -> argparse.ArgumentParser:
>       return parser
>   
>   
> -def _get_settings() -> _Settings:
> +def get_settings() -> Settings:
>       parsed_args = _get_parser().parse_args()
> -    return _Settings(
> +    return Settings(

That means we're parsing and creating a new Settings object every time 
we try to read the settings? Shouldn't we just save it and return a 
copy? That seems to have been the old behavior; any reason to change it?

Related to this, this does mean that the previously created settings 
variable is only used to set up the parser, so it might need to be 
renamed to default_setting if it doesn't get reused.
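
A cached variant might look something like this (just a sketch reusing 
the names from this patch; _settings_from_parsed_args is a hypothetical 
helper standing in for the body of get_settings above):

    _cached_settings: Settings | None = None

    def get_settings() -> Settings:
        # Parse the arguments only on the first call and reuse the
        # result afterwards.
        global _cached_settings
        if _cached_settings is None:
            parsed_args = _get_parser().parse_args()
            _cached_settings = _settings_from_parsed_args(parsed_args)
        return _cached_settings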

>           config_file_path=parsed_args.config_file,
>           output_dir=parsed_args.output_dir,
>           timeout=parsed_args.timeout,
> -        verbose=(parsed_args.verbose == "Y"),
> -        skip_setup=(parsed_args.skip_setup == "Y"),
> +        verbose=parsed_args.verbose,
> +        skip_setup=parsed_args.skip_setup,
>           dpdk_tarball_path=Path(
> -            DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir)
> -        )
> -        if not os.path.exists(parsed_args.tarball)
> -        else Path(parsed_args.tarball),
> +            Path(DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir))
> +            if not os.path.exists(parsed_args.tarball)
> +            else Path(parsed_args.tarball)
> +        ),
>           compile_timeout=parsed_args.compile_timeout,
> -        test_cases=parsed_args.test_cases.split(",") if parsed_args.test_cases else [],
> +        test_cases=(
> +            parsed_args.test_cases.split(",") if parsed_args.test_cases else []
> +        ),
>           re_run=parsed_args.re_run,
>       )
> -
> -
> -SETTINGS: _Settings = _get_settings()
> diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
> index f0fbe80f6f..603e18872c 100644
> --- a/dts/framework/test_result.py
> +++ b/dts/framework/test_result.py
> @@ -254,7 +254,7 @@ def add_build_target(
>           self._inner_results.append(build_target_result)
>           return build_target_result
>   
> -    def add_sut_info(self, sut_info: NodeInfo):
> +    def add_sut_info(self, sut_info: NodeInfo) -> None:
>           self.sut_os_name = sut_info.os_name
>           self.sut_os_version = sut_info.os_version
>           self.sut_kernel_version = sut_info.kernel_version
> @@ -297,7 +297,7 @@ def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult:
>           self._inner_results.append(execution_result)
>           return execution_result
>   
> -    def add_error(self, error) -> None:
> +    def add_error(self, error: Exception) -> None:
>           self._errors.append(error)
>   
>       def process(self) -> None:
> diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
> index 3b890c0451..d53553bf34 100644
> --- a/dts/framework/test_suite.py
> +++ b/dts/framework/test_suite.py
> @@ -11,7 +11,7 @@
>   import re
>   from ipaddress import IPv4Interface, IPv6Interface, ip_interface
>   from types import MethodType
> -from typing import Union
> +from typing import Any, Union
>   
>   from scapy.layers.inet import IP  # type: ignore[import]
>   from scapy.layers.l2 import Ether  # type: ignore[import]
> @@ -26,8 +26,7 @@
>   from .logger import DTSLOG, getLogger
>   from .settings import SETTINGS
>   from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult
> -from .testbed_model import SutNode, TGNode
> -from .testbed_model.hw.port import Port, PortLink
> +from .testbed_model import Port, PortLink, SutNode, TGNode
>   from .utils import get_packet_summaries
>   
>   
> @@ -453,7 +452,7 @@ def _execute_test_case(
>   
>   
>   def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
> -    def is_test_suite(object) -> bool:
> +    def is_test_suite(object: Any) -> bool:
>           try:
>               if issubclass(object, TestSuite) and object is not TestSuite:
>                   return True
> diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
> index 5cbb859e47..8ced05653b 100644
> --- a/dts/framework/testbed_model/__init__.py
> +++ b/dts/framework/testbed_model/__init__.py
> @@ -9,15 +9,9 @@
>   
>   # pylama:ignore=W0611
>   
> -from .hw import (
> -    LogicalCore,
> -    LogicalCoreCount,
> -    LogicalCoreCountFilter,
> -    LogicalCoreList,
> -    LogicalCoreListFilter,
> -    VirtualDevice,
> -    lcore_filter,
> -)
> +from .cpu import LogicalCoreCount, LogicalCoreCountFilter, LogicalCoreList
>   from .node import Node
> +from .port import Port, PortLink
>   from .sut_node import SutNode
>   from .tg_node import TGNode
> +from .virtual_device import VirtualDevice
> diff --git a/dts/framework/testbed_model/hw/cpu.py b/dts/framework/testbed_model/cpu.py
> similarity index 95%
> rename from dts/framework/testbed_model/hw/cpu.py
> rename to dts/framework/testbed_model/cpu.py
> index d1918a12dc..8fe785dfe4 100644
> --- a/dts/framework/testbed_model/hw/cpu.py
> +++ b/dts/framework/testbed_model/cpu.py
> @@ -272,3 +272,16 @@ def filter(self) -> list[LogicalCore]:
>               )
>   
>           return filtered_lcores
> +
> +
> +def lcore_filter(
> +    core_list: list[LogicalCore],
> +    filter_specifier: LogicalCoreCount | LogicalCoreList,
> +    ascending: bool,
> +) -> LogicalCoreFilter:
> +    if isinstance(filter_specifier, LogicalCoreList):
> +        return LogicalCoreListFilter(core_list, filter_specifier, ascending)
> +    elif isinstance(filter_specifier, LogicalCoreCount):
> +        return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
> +    else:
> +        raise ValueError(f"Unsupported filter {filter_specifier}")
> diff --git a/dts/framework/testbed_model/hw/__init__.py b/dts/framework/testbed_model/hw/__init__.py
> deleted file mode 100644
> index 88ccac0b0e..0000000000
> --- a/dts/framework/testbed_model/hw/__init__.py
> +++ /dev/null
> @@ -1,27 +0,0 @@
> -# SPDX-License-Identifier: BSD-3-Clause
> -# Copyright(c) 2023 PANTHEON.tech s.r.o.
> -
> -# pylama:ignore=W0611
> -
> -from .cpu import (
> -    LogicalCore,
> -    LogicalCoreCount,
> -    LogicalCoreCountFilter,
> -    LogicalCoreFilter,
> -    LogicalCoreList,
> -    LogicalCoreListFilter,
> -)
> -from .virtual_device import VirtualDevice
> -
> -
> -def lcore_filter(
> -    core_list: list[LogicalCore],
> -    filter_specifier: LogicalCoreCount | LogicalCoreList,
> -    ascending: bool,
> -) -> LogicalCoreFilter:
> -    if isinstance(filter_specifier, LogicalCoreList):
> -        return LogicalCoreListFilter(core_list, filter_specifier, ascending)
> -    elif isinstance(filter_specifier, LogicalCoreCount):
> -        return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
> -    else:
> -        raise ValueError(f"Unsupported filter r{filter_specifier}")
> diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/testbed_model/linux_session.py
> similarity index 97%
> rename from dts/framework/remote_session/linux_session.py
> rename to dts/framework/testbed_model/linux_session.py
> index a3f1a6bf3b..f472bb8f0f 100644
> --- a/dts/framework/remote_session/linux_session.py
> +++ b/dts/framework/testbed_model/linux_session.py
> @@ -9,10 +9,10 @@
>   from typing_extensions import NotRequired
>   
>   from framework.exception import RemoteCommandExecutionError
> -from framework.testbed_model import LogicalCore
> -from framework.testbed_model.hw.port import Port
>   from framework.utils import expand_range
>   
> +from .cpu import LogicalCore
> +from .port import Port
>   from .posix_session import PosixSession
>   
>   
> @@ -64,7 +64,7 @@ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
>               lcores.append(LogicalCore(lcore, core, socket, node))
>           return lcores
>   
> -    def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
> +    def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
>           return dpdk_prefix
>   
>       def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
> diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
> index fc01e0bf8e..7571e7b98d 100644
> --- a/dts/framework/testbed_model/node.py
> +++ b/dts/framework/testbed_model/node.py
> @@ -12,23 +12,26 @@
>   from typing import Any, Callable, Type, Union
>   
>   from framework.config import (
> +    OS,
>       BuildTargetConfiguration,
>       ExecutionConfiguration,
>       NodeConfiguration,
>   )
> +from framework.exception import ConfigurationError
>   from framework.logger import DTSLOG, getLogger
> -from framework.remote_session import InteractiveShellType, OSSession, create_session
>   from framework.settings import SETTINGS
>   
> -from .hw import (
> +from .cpu import (
>       LogicalCore,
>       LogicalCoreCount,
>       LogicalCoreList,
>       LogicalCoreListFilter,
> -    VirtualDevice,
>       lcore_filter,
>   )
> -from .hw.port import Port
> +from .linux_session import LinuxSession
> +from .os_session import InteractiveShellType, OSSession
> +from .port import Port
> +from .virtual_device import VirtualDevice
>   
>   
>   class Node(ABC):
> @@ -69,6 +72,7 @@ def __init__(self, node_config: NodeConfiguration):
>       def _init_ports(self) -> None:
>           self.ports = [Port(self.name, port_config) for port_config in self.config.ports]
>           self.main_session.update_ports(self.ports)
> +

Is the newline intended?

>           for port in self.ports:
>               self.configure_port_state(port)
>   
> @@ -172,9 +176,9 @@ def create_interactive_shell(
>   
>           return self.main_session.create_interactive_shell(
>               shell_cls,
> -            app_args,
>               timeout,
>               privileged,
> +            app_args,
>           )
>   
>       def filter_lcores(
> @@ -205,7 +209,7 @@ def _get_remote_cpus(self) -> None:
>           self._logger.info("Getting CPU information.")
>           self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
>   
> -    def _setup_hugepages(self):
> +    def _setup_hugepages(self) -> None:
>           """
>           Setup hugepages on the Node. Different architectures can supply different
>           amounts of memory for hugepages and numa-based hugepage allocation may need
> @@ -249,3 +253,13 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
>               return lambda *args: None
>           else:
>               return func
> +
> +
> +def create_session(
> +    node_config: NodeConfiguration, name: str, logger: DTSLOG
> +) -> OSSession:
> +    match node_config.os:
> +        case OS.linux:
> +            return LinuxSession(node_config, name, logger)
> +        case _:
> +            raise ConfigurationError(f"Unsupported OS {node_config.os}")
> diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/testbed_model/os_session.py
> similarity index 95%
> rename from dts/framework/remote_session/os_session.py
> rename to dts/framework/testbed_model/os_session.py
> index 8a709eac1c..76e595a518 100644
> --- a/dts/framework/remote_session/os_session.py
> +++ b/dts/framework/testbed_model/os_session.py
> @@ -10,19 +10,19 @@
>   
>   from framework.config import Architecture, NodeConfiguration, NodeInfo
>   from framework.logger import DTSLOG
> -from framework.remote_session.remote import InteractiveShell
> -from framework.settings import SETTINGS
> -from framework.testbed_model import LogicalCore
> -from framework.testbed_model.hw.port import Port
> -from framework.utils import MesonArgs
> -
> -from .remote import (
> +from framework.remote_session import (
>       CommandResult,
>       InteractiveRemoteSession,
> +    InteractiveShell,
>       RemoteSession,
>       create_interactive_session,
>       create_remote_session,
>   )
> +from framework.settings import SETTINGS
> +from framework.utils import MesonArgs
> +
> +from .cpu import LogicalCore
> +from .port import Port
>   
>   InteractiveShellType = TypeVar("InteractiveShellType", bound=InteractiveShell)
>   
> @@ -85,9 +85,9 @@ def send_command(
>       def create_interactive_shell(
>           self,
>           shell_cls: Type[InteractiveShellType],
> -        eal_parameters: str,
>           timeout: float,
>           privileged: bool,
> +        app_args: str,

Is there a reason why the argument position got changed? I'd guess 
because it's more idomatic to have the extra arg at the end, but I just 
want to make sure it's intended.

>       ) -> InteractiveShellType:
>           """
>           See "create_interactive_shell" in SutNode
> @@ -96,7 +96,7 @@ def create_interactive_shell(
>               self.interactive_session.session,
>               self._logger,
>               self._get_privileged_command if privileged else None,
> -            eal_parameters,
> +            app_args,
>               timeout,
>           )
>   
> @@ -113,7 +113,7 @@ def _get_privileged_command(command: str) -> str:
>           """
>   
>       @abstractmethod
> -    def guess_dpdk_remote_dir(self, remote_dir) -> PurePath:
> +    def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePath:
>           """
>           Try to find DPDK remote dir in remote_dir.
>           """
> @@ -227,7 +227,7 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
>           """
>   
>       @abstractmethod
> -    def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
> +    def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
>           """
>           Get the DPDK file prefix that will be used when running DPDK apps.
>           """
> diff --git a/dts/framework/testbed_model/hw/port.py b/dts/framework/testbed_model/port.py
> similarity index 100%
> rename from dts/framework/testbed_model/hw/port.py
> rename to dts/framework/testbed_model/port.py
> diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/testbed_model/posix_session.py
> similarity index 98%
> rename from dts/framework/remote_session/posix_session.py
> rename to dts/framework/testbed_model/posix_session.py
> index 5da0516e05..1d1d5b1b26 100644
> --- a/dts/framework/remote_session/posix_session.py
> +++ b/dts/framework/testbed_model/posix_session.py
> @@ -32,7 +32,7 @@ def combine_short_options(**opts: bool) -> str:
>   
>           return ret_opts
>   
> -    def guess_dpdk_remote_dir(self, remote_dir) -> PurePosixPath:
> +    def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePosixPath:
>           remote_guess = self.join_remote_path(remote_dir, "dpdk-*")
>           result = self.send_command(f"ls -d {remote_guess} | tail -1")
>           return PurePosixPath(result.stdout)
> @@ -219,7 +219,7 @@ def _remove_dpdk_runtime_dirs(
>           for dpdk_runtime_dir in dpdk_runtime_dirs:
>               self.remove_remote_dir(dpdk_runtime_dir)
>   
> -    def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
> +    def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
>           return ""
>   
>       def get_compiler_version(self, compiler_name: str) -> str:
> diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
> index 202aebfd06..4e33cf02ea 100644
> --- a/dts/framework/testbed_model/sut_node.py
> +++ b/dts/framework/testbed_model/sut_node.py
> @@ -15,12 +15,14 @@
>       NodeInfo,
>       SutNodeConfiguration,
>   )
> -from framework.remote_session import CommandResult, InteractiveShellType, OSSession
> +from framework.remote_session import CommandResult
>   from framework.settings import SETTINGS
>   from framework.utils import MesonArgs
>   
> -from .hw import LogicalCoreCount, LogicalCoreList, VirtualDevice
> +from .cpu import LogicalCoreCount, LogicalCoreList
>   from .node import Node
> +from .os_session import InteractiveShellType, OSSession
> +from .virtual_device import VirtualDevice
>   
>   
>   class EalParameters(object):
> @@ -289,7 +291,7 @@ def create_eal_parameters(
>           prefix: str = "dpdk",
>           append_prefix_timestamp: bool = True,
>           no_pci: bool = False,
> -        vdevs: list[VirtualDevice] = None,
> +        vdevs: list[VirtualDevice] | None = None,
>           other_eal_param: str = "",
>       ) -> "EalParameters":
>           """
> diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
> index 27025cfa31..166eb8430e 100644
> --- a/dts/framework/testbed_model/tg_node.py
> +++ b/dts/framework/testbed_model/tg_node.py
> @@ -16,16 +16,11 @@
>   
>   from scapy.packet import Packet  # type: ignore[import]
>   
> -from framework.config import (
> -    ScapyTrafficGeneratorConfig,
> -    TGNodeConfiguration,
> -    TrafficGeneratorType,
> -)
> -from framework.exception import ConfigurationError
> -
> -from .capturing_traffic_generator import CapturingTrafficGenerator
> -from .hw.port import Port
> +from framework.config import TGNodeConfiguration
> +
>   from .node import Node
> +from .port import Port
> +from .traffic_generator import CapturingTrafficGenerator, create_traffic_generator
>   
>   
>   class TGNode(Node):
> @@ -80,20 +75,3 @@ def close(self) -> None:
>           """Free all resources used by the node"""
>           self.traffic_generator.close()
>           super(TGNode, self).close()
> -
> -
> -def create_traffic_generator(
> -    tg_node: TGNode, traffic_generator_config: ScapyTrafficGeneratorConfig
> -) -> CapturingTrafficGenerator:
> -    """A factory function for creating traffic generator object from user config."""
> -
> -    from .scapy import ScapyTrafficGenerator
> -
> -    match traffic_generator_config.traffic_generator_type:
> -        case TrafficGeneratorType.SCAPY:
> -            return ScapyTrafficGenerator(tg_node, traffic_generator_config)
> -        case _:
> -            raise ConfigurationError(
> -                "Unknown traffic generator: "
> -                f"{traffic_generator_config.traffic_generator_type}"
> -            )
> diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
> new file mode 100644
> index 0000000000..11bfa1ee0f
> --- /dev/null
> +++ b/dts/framework/testbed_model/traffic_generator/__init__.py
> @@ -0,0 +1,24 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2023 PANTHEON.tech s.r.o.
> +
> +from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType
> +from framework.exception import ConfigurationError
> +from framework.testbed_model.node import Node
> +
> +from .capturing_traffic_generator import CapturingTrafficGenerator
> +from .scapy import ScapyTrafficGenerator
> +
> +
> +def create_traffic_generator(
> +    tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig
> +) -> CapturingTrafficGenerator:
> +    """A factory function for creating traffic generator object from user config."""
> +
> +    match traffic_generator_config.traffic_generator_type:
> +        case TrafficGeneratorType.SCAPY:
> +            return ScapyTrafficGenerator(tg_node, traffic_generator_config)
> +        case _:
> +            raise ConfigurationError(
> +                "Unknown traffic generator: "
> +                f"{traffic_generator_config.traffic_generator_type}"
> +            )
> diff --git a/dts/framework/testbed_model/capturing_traffic_generator.py b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
> similarity index 96%
> rename from dts/framework/testbed_model/capturing_traffic_generator.py
> rename to dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
> index ab98987f8e..e521211ef0 100644
> --- a/dts/framework/testbed_model/capturing_traffic_generator.py
> +++ b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
> @@ -16,9 +16,9 @@
>   from scapy.packet import Packet  # type: ignore[import]
>   
>   from framework.settings import SETTINGS
> +from framework.testbed_model.port import Port
>   from framework.utils import get_packet_summaries
>   
> -from .hw.port import Port
>   from .traffic_generator import TrafficGenerator
>   
>   
> @@ -130,7 +130,9 @@ def _send_packets_and_capture(
>           for the specified duration. It must be able to handle no received packets.
>           """
>   
> -    def _write_capture_from_packets(self, capture_name: str, packets: list[Packet]):
> +    def _write_capture_from_packets(
> +        self, capture_name: str, packets: list[Packet]
> +    ) -> None:
>           file_name = f"{SETTINGS.output_dir}/{capture_name}.pcap"
>           self._logger.debug(f"Writing packets to {file_name}.")
>           scapy.utils.wrpcap(file_name, packets)
> diff --git a/dts/framework/testbed_model/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py
> similarity index 95%
> rename from dts/framework/testbed_model/scapy.py
> rename to dts/framework/testbed_model/traffic_generator/scapy.py
> index af0d4dbb25..51864b6e6b 100644
> --- a/dts/framework/testbed_model/scapy.py
> +++ b/dts/framework/testbed_model/traffic_generator/scapy.py
> @@ -24,16 +24,15 @@
>   from scapy.packet import Packet  # type: ignore[import]
>   
>   from framework.config import OS, ScapyTrafficGeneratorConfig
> -from framework.logger import DTSLOG, getLogger
>   from framework.remote_session import PythonShell
>   from framework.settings import SETTINGS
> +from framework.testbed_model.node import Node
> +from framework.testbed_model.port import Port
>   
>   from .capturing_traffic_generator import (
>       CapturingTrafficGenerator,
>       _get_default_capture_name,
>   )
> -from .hw.port import Port
> -from .tg_node import TGNode
>   
>   """
>   ========= BEGIN RPC FUNCTIONS =========
> @@ -146,7 +145,7 @@ def quit(self) -> None:
>           self._BaseServer__shutdown_request = True
>           return None
>   
> -    def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary):
> +    def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> None:
>           """Add a function to the server.
>   
>           This is meant to be executed remotely.
> @@ -191,15 +190,9 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator):
>       session: PythonShell
>       rpc_server_proxy: xmlrpc.client.ServerProxy
>       _config: ScapyTrafficGeneratorConfig
> -    _tg_node: TGNode
> -    _logger: DTSLOG
> -
> -    def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig):
> -        self._config = config
> -        self._tg_node = tg_node
> -        self._logger = getLogger(
> -            f"{self._tg_node.name} {self._config.traffic_generator_type}"
> -        )
> +
> +    def __init__(self, tg_node: Node, config: ScapyTrafficGeneratorConfig):
> +        super().__init__(tg_node, config)
>   
>           assert (
>               self._tg_node.config.os == OS.linux
> @@ -235,7 +228,7 @@ def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig):
>               function_bytes = marshal.dumps(function.__code__)
>               self.rpc_server_proxy.add_rpc_function(function.__name__, function_bytes)
>   
> -    def _start_xmlrpc_server_in_remote_python(self, listen_port: int):
> +    def _start_xmlrpc_server_in_remote_python(self, listen_port: int) -> None:
>           # load the source of the function
>           src = inspect.getsource(QuittableXMLRPCServer)
>           # Lines with only whitespace break the repl if in the middle of a function
> @@ -280,7 +273,7 @@ def _send_packets_and_capture(
>           scapy_packets = [Ether(packet.data) for packet in xmlrpc_packets]
>           return scapy_packets
>   
> -    def close(self):
> +    def close(self) -> None:
>           try:
>               self.rpc_server_proxy.quit()
>           except ConnectionRefusedError:
> diff --git a/dts/framework/testbed_model/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
> similarity index 80%
> rename from dts/framework/testbed_model/traffic_generator.py
> rename to dts/framework/testbed_model/traffic_generator/traffic_generator.py
> index 28c35d3ce4..ea7c3963da 100644
> --- a/dts/framework/testbed_model/traffic_generator.py
> +++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
> @@ -12,11 +12,12 @@
>   
>   from scapy.packet import Packet  # type: ignore[import]
>   
> -from framework.logger import DTSLOG
> +from framework.config import TrafficGeneratorConfig
> +from framework.logger import DTSLOG, getLogger
> +from framework.testbed_model.node import Node
> +from framework.testbed_model.port import Port
>   from framework.utils import get_packet_summaries
>   
> -from .hw.port import Port
> -
>   
>   class TrafficGenerator(ABC):
>       """The base traffic generator.
> @@ -24,8 +25,17 @@ class TrafficGenerator(ABC):
>       Defines the few basic methods that each traffic generator must implement.
>       """
>   
> +    _config: TrafficGeneratorConfig
> +    _tg_node: Node
>       _logger: DTSLOG
>   
> +    def __init__(self, tg_node: Node, config: TrafficGeneratorConfig):
> +        self._config = config
> +        self._tg_node = tg_node
> +        self._logger = getLogger(
> +            f"{self._tg_node.name} {self._config.traffic_generator_type}"
> +        )
> +
>       def send_packet(self, packet: Packet, port: Port) -> None:
>           """Send a packet and block until it is fully sent.
>   
> diff --git a/dts/framework/testbed_model/hw/virtual_device.py b/dts/framework/testbed_model/virtual_device.py
> similarity index 100%
> rename from dts/framework/testbed_model/hw/virtual_device.py
> rename to dts/framework/testbed_model/virtual_device.py
> diff --git a/dts/framework/utils.py b/dts/framework/utils.py
> index d27c2c5b5f..f0c916471c 100644
> --- a/dts/framework/utils.py
> +++ b/dts/framework/utils.py
> @@ -7,7 +7,6 @@
>   import json
>   import os
>   import subprocess
> -import sys
>   from enum import Enum
>   from pathlib import Path
>   from subprocess import SubprocessError
> @@ -16,35 +15,7 @@
>   
>   from .exception import ConfigurationError
>   
> -
> -class StrEnum(Enum):
> -    @staticmethod
> -    def _generate_next_value_(
> -        name: str, start: int, count: int, last_values: object
> -    ) -> str:
> -        return name
> -
> -    def __str__(self) -> str:
> -        return self.name
> -
> -
> -REGEX_FOR_PCI_ADDRESS = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/"
> -
> -
> -def check_dts_python_version() -> None:
> -    if sys.version_info.major < 3 or (
> -        sys.version_info.major == 3 and sys.version_info.minor < 10
> -    ):
> -        print(
> -            RED(
> -                (
> -                    "WARNING: DTS execution node's python version is lower than"
> -                    "python 3.10, is deprecated and will not work in future releases."
> -                )
> -            ),
> -            file=sys.stderr,
> -        )
> -        print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
> +REGEX_FOR_PCI_ADDRESS: str = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/"
>   
>   
>   def expand_range(range_str: str) -> list[int]:
> @@ -67,7 +38,7 @@ def expand_range(range_str: str) -> list[int]:
>       return expanded_range
>   
>   
> -def get_packet_summaries(packets: list[Packet]):
> +def get_packet_summaries(packets: list[Packet]) -> str:
>       if len(packets) == 1:
>           packet_summaries = packets[0].summary()
>       else:
> @@ -77,8 +48,15 @@ def get_packet_summaries(packets: list[Packet]):
>       return f"Packet contents: \n{packet_summaries}"
>   
>   
> -def RED(text: str) -> str:
> -    return f"\u001B[31;1m{str(text)}\u001B[0m"
> +class StrEnum(Enum):
> +    @staticmethod
> +    def _generate_next_value_(
> +        name: str, start: int, count: int, last_values: object
> +    ) -> str:
> +        return name

I don't understand this function. I don't see it called anywhere, and
the parameters appear unused?
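
Is it perhaps the hook that enum.auto() calls? A minimal sketch of how
I'd expect it to be exercised (the hook signature is fixed by the enum
machinery, which would explain the unused parameters):

    from enum import Enum, auto

    class StrEnum(Enum):
        @staticmethod
        def _generate_next_value_(name, start, count, last_values):
            # invoked by auto(); returning the member name makes the
            # value equal to the name
            return name

        def __str__(self):
            return self.name

    class OS(StrEnum):
        linux = auto()

    assert OS.linux.value == "linux" and str(OS.linux) == "linux"

If that's the case, a short comment pointing at auto() would help.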

> +
> +    def __str__(self) -> str:
> +        return self.name
>   
>   
>   class MesonArgs(object):
> @@ -225,5 +203,5 @@ def _delete_tarball(self) -> None:
>           if self._tarball_path and os.path.exists(self._tarball_path):
>               os.remove(self._tarball_path)
>   
> -    def __fspath__(self):
> +    def __fspath__(self) -> str:
>           return str(self._tarball_path)
> diff --git a/dts/main.py b/dts/main.py
> index 43311fa847..5d4714b0c3 100755
> --- a/dts/main.py
> +++ b/dts/main.py
> @@ -10,10 +10,17 @@
>   
>   import logging
>   
> -from framework import dts
> +from framework import settings
>   
>   
>   def main() -> None:
> +    """Set DTS settings, then run DTS.
> +
> +    The DTS settings are taken from the command line arguments and the environment variables.
> +    """
> +    settings.SETTINGS = settings.get_settings()
> +    from framework import dts

Why is the import *inside* main()?
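
Is it to make sure settings.SETTINGS is assigned before any framework
module that reads it at import time gets loaded? A self-contained
sketch of that ordering hazard (modules emulated with types.ModuleType,
names illustrative):

    import types

    settings = types.ModuleType("settings")
    settings.SETTINGS = None  # placeholder until get_settings() runs

    def load_consumer():
        # emulates `from framework import dts`: the consumer caches
        # SETTINGS at load (import) time
        consumer = types.ModuleType("consumer")
        consumer.CACHED = settings.SETTINGS
        return consumer

    early = load_consumer()         # imported before settings are parsed
    settings.SETTINGS = "parsed"    # what main() does via get_settings()
    late = load_consumer()          # the deferred import in this hunk

    assert early.CACHED is None     # stale placeholder
    assert late.CACHED == "parsed"  # sees the real settings

If so, a short comment in main() would make the constraint obvious.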

> +
>       dts.run_all()
>   
>   


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v6 22/23] dts: add doc generation dependencies
  2023-11-08 12:53           ` [PATCH v6 22/23] dts: add doc generation dependencies Juraj Linkeš
@ 2023-11-08 16:00             ` Yoan Picchi
  2023-11-15 10:00               ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Yoan Picchi @ 2023-11-08 16:00 UTC (permalink / raw)
  To: Juraj Linkeš,
	thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek
  Cc: dev

On 11/8/23 12:53, Juraj Linkeš wrote:
> Sphinx imports every Python module when generating documentation from
> docstrings, meaning all dts dependencies, including Python version,
> must be satisfied.
> By adding Sphinx to dts dependencies we make sure that the proper
> Python version and dependencies are used when Sphinx is executed.
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>   dts/poetry.lock    | 499 ++++++++++++++++++++++++++++++++++++++++++++-
>   dts/pyproject.toml |   7 +
>   2 files changed, 505 insertions(+), 1 deletion(-)
> 
> diff --git a/dts/poetry.lock b/dts/poetry.lock
> index a734fa71f0..dea98f6913 100644
> --- a/dts/poetry.lock
> +++ b/dts/poetry.lock
> @@ -1,5 +1,16 @@
>   # This file is automatically @generated by Poetry 1.5.1 and should not be changed by hand.
>   
> +[[package]]
> +name = "alabaster"
> +version = "0.7.13"
> +description = "A configurable sidebar-enabled Sphinx theme"
> +optional = false
> +python-versions = ">=3.6"
> +files = [
> +    {file = "alabaster-0.7.13-py3-none-any.whl", hash = "sha256:1ee19aca801bbabb5ba3f5f258e4422dfa86f82f3e9cefb0859b283cdd7f62a3"},
> +    {file = "alabaster-0.7.13.tar.gz", hash = "sha256:a27a4a084d5e690e16e01e03ad2b2e552c61a65469419b907243193de1a84ae2"},
> +]
> +
>   [[package]]
>   name = "attrs"
>   version = "23.1.0"
> @@ -18,6 +29,23 @@ docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-
>   tests = ["attrs[tests-no-zope]", "zope-interface"]
>   tests-no-zope = ["cloudpickle", "hypothesis", "mypy (>=1.1.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
>   
> +[[package]]
> +name = "babel"
> +version = "2.13.1"
> +description = "Internationalization utilities"
> +optional = false
> +python-versions = ">=3.7"
> +files = [
> +    {file = "Babel-2.13.1-py3-none-any.whl", hash = "sha256:7077a4984b02b6727ac10f1f7294484f737443d7e2e66c5e4380e41a3ae0b4ed"},
> +    {file = "Babel-2.13.1.tar.gz", hash = "sha256:33e0952d7dd6374af8dbf6768cc4ddf3ccfefc244f9986d4074704f2fbd18900"},
> +]
> +
> +[package.dependencies]
> +setuptools = {version = "*", markers = "python_version >= \"3.12\""}
> +
> +[package.extras]
> +dev = ["freezegun (>=1.0,<2.0)", "pytest (>=6.0)", "pytest-cov"]
> +
>   [[package]]
>   name = "bcrypt"
>   version = "4.0.1"
> @@ -86,6 +114,17 @@ d = ["aiohttp (>=3.7.4)"]
>   jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
>   uvloop = ["uvloop (>=0.15.2)"]
>   
> +[[package]]
> +name = "certifi"
> +version = "2023.7.22"
> +description = "Python package for providing Mozilla's CA Bundle."
> +optional = false
> +python-versions = ">=3.6"
> +files = [
> +    {file = "certifi-2023.7.22-py3-none-any.whl", hash = "sha256:92d6037539857d8206b8f6ae472e8b77db8058fec5937a1ef3f54304089edbb9"},
> +    {file = "certifi-2023.7.22.tar.gz", hash = "sha256:539cc1d13202e33ca466e88b2807e29f4c13049d6d87031a3c110744495cb082"},
> +]
> +
>   [[package]]
>   name = "cffi"
>   version = "1.15.1"
> @@ -162,6 +201,105 @@ files = [
>   [package.dependencies]
>   pycparser = "*"
>   
> +[[package]]
> +name = "charset-normalizer"
> +version = "3.3.2"
> +description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
> +optional = false
> +python-versions = ">=3.7.0"
> +files = [
> +    {file = "charset-normalizer-3.3.2.tar.gz", hash = "sha256:f30c3cb33b24454a82faecaf01b19c18562b1e89558fb6c56de4d9118a032fd5"},
> +    {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:25baf083bf6f6b341f4121c2f3c548875ee6f5339300e08be3f2b2ba1721cdd3"},
> +    {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:06435b539f889b1f6f4ac1758871aae42dc3a8c0e24ac9e60c2384973ad73027"},
> +    {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9063e24fdb1e498ab71cb7419e24622516c4a04476b17a2dab57e8baa30d6e03"},
> +    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6897af51655e3691ff853668779c7bad41579facacf5fd7253b0133308cf000d"},
> +    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1d3193f4a680c64b4b6a9115943538edb896edc190f0b222e73761716519268e"},
> +    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cd70574b12bb8a4d2aaa0094515df2463cb429d8536cfb6c7ce983246983e5a6"},
> +    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8465322196c8b4d7ab6d1e049e4c5cb460d0394da4a27d23cc242fbf0034b6b5"},
> +    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a9a8e9031d613fd2009c182b69c7b2c1ef8239a0efb1df3f7c8da66d5dd3d537"},
> +    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:beb58fe5cdb101e3a055192ac291b7a21e3b7ef4f67fa1d74e331a7f2124341c"},
> +    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:e06ed3eb3218bc64786f7db41917d4e686cc4856944f53d5bdf83a6884432e12"},
> +    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:2e81c7b9c8979ce92ed306c249d46894776a909505d8f5a4ba55b14206e3222f"},
> +    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:572c3763a264ba47b3cf708a44ce965d98555f618ca42c926a9c1616d8f34269"},
> +    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:fd1abc0d89e30cc4e02e4064dc67fcc51bd941eb395c502aac3ec19fab46b519"},
> +    {file = "charset_normalizer-3.3.2-cp310-cp310-win32.whl", hash = "sha256:3d47fa203a7bd9c5b6cee4736ee84ca03b8ef23193c0d1ca99b5089f72645c73"},
> +    {file = "charset_normalizer-3.3.2-cp310-cp310-win_amd64.whl", hash = "sha256:10955842570876604d404661fbccbc9c7e684caf432c09c715ec38fbae45ae09"},
> +    {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:802fe99cca7457642125a8a88a084cef28ff0cf9407060f7b93dca5aa25480db"},
> +    {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:573f6eac48f4769d667c4442081b1794f52919e7edada77495aaed9236d13a96"},
> +    {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:549a3a73da901d5bc3ce8d24e0600d1fa85524c10287f6004fbab87672bf3e1e"},
> +    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f27273b60488abe721a075bcca6d7f3964f9f6f067c8c4c605743023d7d3944f"},
> +    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1ceae2f17a9c33cb48e3263960dc5fc8005351ee19db217e9b1bb15d28c02574"},
> +    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:65f6f63034100ead094b8744b3b97965785388f308a64cf8d7c34f2f2e5be0c4"},
> +    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:753f10e867343b4511128c6ed8c82f7bec3bd026875576dfd88483c5c73b2fd8"},
> +    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4a78b2b446bd7c934f5dcedc588903fb2f5eec172f3d29e52a9096a43722adfc"},
> +    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e537484df0d8f426ce2afb2d0f8e1c3d0b114b83f8850e5f2fbea0e797bd82ae"},
> +    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:eb6904c354526e758fda7167b33005998fb68c46fbc10e013ca97f21ca5c8887"},
> +    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:deb6be0ac38ece9ba87dea880e438f25ca3eddfac8b002a2ec3d9183a454e8ae"},
> +    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:4ab2fe47fae9e0f9dee8c04187ce5d09f48eabe611be8259444906793ab7cbce"},
> +    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:80402cd6ee291dcb72644d6eac93785fe2c8b9cb30893c1af5b8fdd753b9d40f"},
> +    {file = "charset_normalizer-3.3.2-cp311-cp311-win32.whl", hash = "sha256:7cd13a2e3ddeed6913a65e66e94b51d80a041145a026c27e6bb76c31a853c6ab"},
> +    {file = "charset_normalizer-3.3.2-cp311-cp311-win_amd64.whl", hash = "sha256:663946639d296df6a2bb2aa51b60a2454ca1cb29835324c640dafb5ff2131a77"},
> +    {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:0b2b64d2bb6d3fb9112bafa732def486049e63de9618b5843bcdd081d8144cd8"},
> +    {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:ddbb2551d7e0102e7252db79ba445cdab71b26640817ab1e3e3648dad515003b"},
> +    {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:55086ee1064215781fff39a1af09518bc9255b50d6333f2e4c74ca09fac6a8f6"},
> +    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8f4a014bc36d3c57402e2977dada34f9c12300af536839dc38c0beab8878f38a"},
> +    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a10af20b82360ab00827f916a6058451b723b4e65030c5a18577c8b2de5b3389"},
> +    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8d756e44e94489e49571086ef83b2bb8ce311e730092d2c34ca8f7d925cb20aa"},
> +    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:90d558489962fd4918143277a773316e56c72da56ec7aa3dc3dbbe20fdfed15b"},
> +    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6ac7ffc7ad6d040517be39eb591cac5ff87416c2537df6ba3cba3bae290c0fed"},
> +    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:7ed9e526742851e8d5cc9e6cf41427dfc6068d4f5a3bb03659444b4cabf6bc26"},
> +    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:8bdb58ff7ba23002a4c5808d608e4e6c687175724f54a5dade5fa8c67b604e4d"},
> +    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_ppc64le.whl", hash = "sha256:6b3251890fff30ee142c44144871185dbe13b11bab478a88887a639655be1068"},
> +    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_s390x.whl", hash = "sha256:b4a23f61ce87adf89be746c8a8974fe1c823c891d8f86eb218bb957c924bb143"},
> +    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:efcb3f6676480691518c177e3b465bcddf57cea040302f9f4e6e191af91174d4"},
> +    {file = "charset_normalizer-3.3.2-cp312-cp312-win32.whl", hash = "sha256:d965bba47ddeec8cd560687584e88cf699fd28f192ceb452d1d7ee807c5597b7"},
> +    {file = "charset_normalizer-3.3.2-cp312-cp312-win_amd64.whl", hash = "sha256:96b02a3dc4381e5494fad39be677abcb5e6634bf7b4fa83a6dd3112607547001"},
> +    {file = "charset_normalizer-3.3.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:95f2a5796329323b8f0512e09dbb7a1860c46a39da62ecb2324f116fa8fdc85c"},
> +    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c002b4ffc0be611f0d9da932eb0f704fe2602a9a949d1f738e4c34c75b0863d5"},
> +    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a981a536974bbc7a512cf44ed14938cf01030a99e9b3a06dd59578882f06f985"},
> +    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3287761bc4ee9e33561a7e058c72ac0938c4f57fe49a09eae428fd88aafe7bb6"},
> +    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:42cb296636fcc8b0644486d15c12376cb9fa75443e00fb25de0b8602e64c1714"},
> +    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0a55554a2fa0d408816b3b5cedf0045f4b8e1a6065aec45849de2d6f3f8e9786"},
> +    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:c083af607d2515612056a31f0a8d9e0fcb5876b7bfc0abad3ecd275bc4ebc2d5"},
> +    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:87d1351268731db79e0f8e745d92493ee2841c974128ef629dc518b937d9194c"},
> +    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:bd8f7df7d12c2db9fab40bdd87a7c09b1530128315d047a086fa3ae3435cb3a8"},
> +    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:c180f51afb394e165eafe4ac2936a14bee3eb10debc9d9e4db8958fe36afe711"},
> +    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:8c622a5fe39a48f78944a87d4fb8a53ee07344641b0562c540d840748571b811"},
> +    {file = "charset_normalizer-3.3.2-cp37-cp37m-win32.whl", hash = "sha256:db364eca23f876da6f9e16c9da0df51aa4f104a972735574842618b8c6d999d4"},
> +    {file = "charset_normalizer-3.3.2-cp37-cp37m-win_amd64.whl", hash = "sha256:86216b5cee4b06df986d214f664305142d9c76df9b6512be2738aa72a2048f99"},
> +    {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:6463effa3186ea09411d50efc7d85360b38d5f09b870c48e4600f63af490e56a"},
> +    {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:6c4caeef8fa63d06bd437cd4bdcf3ffefe6738fb1b25951440d80dc7df8c03ac"},
> +    {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:37e55c8e51c236f95b033f6fb391d7d7970ba5fe7ff453dad675e88cf303377a"},
> +    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fb69256e180cb6c8a894fee62b3afebae785babc1ee98b81cdf68bbca1987f33"},
> +    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ae5f4161f18c61806f411a13b0310bea87f987c7d2ecdbdaad0e94eb2e404238"},
> +    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b2b0a0c0517616b6869869f8c581d4eb2dd83a4d79e0ebcb7d373ef9956aeb0a"},
> +    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:45485e01ff4d3630ec0d9617310448a8702f70e9c01906b0d0118bdf9d124cf2"},
> +    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:eb00ed941194665c332bf8e078baf037d6c35d7c4f3102ea2d4f16ca94a26dc8"},
> +    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:2127566c664442652f024c837091890cb1942c30937add288223dc895793f898"},
> +    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:a50aebfa173e157099939b17f18600f72f84eed3049e743b68ad15bd69b6bf99"},
> +    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:4d0d1650369165a14e14e1e47b372cfcb31d6ab44e6e33cb2d4e57265290044d"},
> +    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:923c0c831b7cfcb071580d3f46c4baf50f174be571576556269530f4bbd79d04"},
> +    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:06a81e93cd441c56a9b65d8e1d043daeb97a3d0856d177d5c90ba85acb3db087"},
> +    {file = "charset_normalizer-3.3.2-cp38-cp38-win32.whl", hash = "sha256:6ef1d82a3af9d3eecdba2321dc1b3c238245d890843e040e41e470ffa64c3e25"},
> +    {file = "charset_normalizer-3.3.2-cp38-cp38-win_amd64.whl", hash = "sha256:eb8821e09e916165e160797a6c17edda0679379a4be5c716c260e836e122f54b"},
> +    {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:c235ebd9baae02f1b77bcea61bce332cb4331dc3617d254df3323aa01ab47bd4"},
> +    {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:5b4c145409bef602a690e7cfad0a15a55c13320ff7a3ad7ca59c13bb8ba4d45d"},
> +    {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:68d1f8a9e9e37c1223b656399be5d6b448dea850bed7d0f87a8311f1ff3dabb0"},
> +    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:22afcb9f253dac0696b5a4be4a1c0f8762f8239e21b99680099abd9b2b1b2269"},
> +    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e27ad930a842b4c5eb8ac0016b0a54f5aebbe679340c26101df33424142c143c"},
> +    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1f79682fbe303db92bc2b1136016a38a42e835d932bab5b3b1bfcfbf0640e519"},
> +    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b261ccdec7821281dade748d088bb6e9b69e6d15b30652b74cbbac25e280b796"},
> +    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:122c7fa62b130ed55f8f285bfd56d5f4b4a5b503609d181f9ad85e55c89f4185"},
> +    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:d0eccceffcb53201b5bfebb52600a5fb483a20b61da9dbc885f8b103cbe7598c"},
> +    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:9f96df6923e21816da7e0ad3fd47dd8f94b2a5ce594e00677c0013018b813458"},
> +    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:7f04c839ed0b6b98b1a7501a002144b76c18fb1c1850c8b98d458ac269e26ed2"},
> +    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:34d1c8da1e78d2e001f363791c98a272bb734000fcef47a491c1e3b0505657a8"},
> +    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:ff8fa367d09b717b2a17a052544193ad76cd49979c805768879cb63d9ca50561"},
> +    {file = "charset_normalizer-3.3.2-cp39-cp39-win32.whl", hash = "sha256:aed38f6e4fb3f5d6bf81bfa990a07806be9d83cf7bacef998ab1a9bd660a581f"},
> +    {file = "charset_normalizer-3.3.2-cp39-cp39-win_amd64.whl", hash = "sha256:b01b88d45a6fcb69667cd6d2f7a9aeb4bf53760d7fc536bf679ec94fe9f3ff3d"},
> +    {file = "charset_normalizer-3.3.2-py3-none-any.whl", hash = "sha256:3e4d1f6587322d2788836a99c69062fbb091331ec940e02d12d179c1d53e25fc"},
> +]
> +
>   [[package]]
>   name = "click"
>   version = "8.1.6"
> @@ -232,6 +370,17 @@ ssh = ["bcrypt (>=3.1.5)"]
>   test = ["pretend", "pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-xdist"]
>   test-randomorder = ["pytest-randomly"]
>   
> +[[package]]
> +name = "docutils"
> +version = "0.18.1"
> +description = "Docutils -- Python Documentation Utilities"
> +optional = false
> +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
> +files = [
> +    {file = "docutils-0.18.1-py2.py3-none-any.whl", hash = "sha256:23010f129180089fbcd3bc08cfefccb3b890b0050e1ca00c867036e9d161b98c"},
> +    {file = "docutils-0.18.1.tar.gz", hash = "sha256:679987caf361a7539d76e584cbeddc311e3aee937877c87346f31debc63e9d06"},
> +]
> +
>   [[package]]
>   name = "fabric"
>   version = "2.7.1"
> @@ -252,6 +401,28 @@ pathlib2 = "*"
>   pytest = ["mock (>=2.0.0,<3.0)", "pytest (>=3.2.5,<4.0)"]
>   testing = ["mock (>=2.0.0,<3.0)"]
>   
> +[[package]]
> +name = "idna"
> +version = "3.4"
> +description = "Internationalized Domain Names in Applications (IDNA)"
> +optional = false
> +python-versions = ">=3.5"
> +files = [
> +    {file = "idna-3.4-py3-none-any.whl", hash = "sha256:90b77e79eaa3eba6de819a0c442c0b4ceefc341a7a2ab77d7562bf49f425c5c2"},
> +    {file = "idna-3.4.tar.gz", hash = "sha256:814f528e8dead7d329833b91c5faa87d60bf71824cd12a7530b5526063d02cb4"},
> +]
> +
> +[[package]]
> +name = "imagesize"
> +version = "1.4.1"
> +description = "Getting image size from png/jpeg/jpeg2000/gif file"
> +optional = false
> +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
> +files = [
> +    {file = "imagesize-1.4.1-py2.py3-none-any.whl", hash = "sha256:0d8d18d08f840c19d0ee7ca1fd82490fdc3729b7ac93f49870406ddde8ef8d8b"},
> +    {file = "imagesize-1.4.1.tar.gz", hash = "sha256:69150444affb9cb0d5cc5a92b3676f0b2fb7cd9ae39e947a5e11a36b4497cd4a"},
> +]
> +
>   [[package]]
>   name = "invoke"
>   version = "1.7.3"
> @@ -280,6 +451,23 @@ pipfile-deprecated-finder = ["pip-shims (>=0.5.2)", "pipreqs", "requirementslib"
>   plugins = ["setuptools"]
>   requirements-deprecated-finder = ["pip-api", "pipreqs"]
>   
> +[[package]]
> +name = "jinja2"
> +version = "3.1.2"
> +description = "A very fast and expressive template engine."
> +optional = false
> +python-versions = ">=3.7"
> +files = [
> +    {file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"},
> +    {file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"},
> +]
> +
> +[package.dependencies]
> +MarkupSafe = ">=2.0"
> +
> +[package.extras]
> +i18n = ["Babel (>=2.7)"]
> +
>   [[package]]
>   name = "jsonpatch"
>   version = "1.33"
> @@ -340,6 +528,65 @@ files = [
>   [package.dependencies]
>   referencing = ">=0.28.0"
>   
> +[[package]]
> +name = "markupsafe"
> +version = "2.1.3"
> +description = "Safely add untrusted strings to HTML/XML markup."
> +optional = false
> +python-versions = ">=3.7"
> +files = [
> +    {file = "MarkupSafe-2.1.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:cd0f502fe016460680cd20aaa5a76d241d6f35a1c3350c474bac1273803893fa"},
> +    {file = "MarkupSafe-2.1.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e09031c87a1e51556fdcb46e5bd4f59dfb743061cf93c4d6831bf894f125eb57"},
> +    {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:68e78619a61ecf91e76aa3e6e8e33fc4894a2bebe93410754bd28fce0a8a4f9f"},
> +    {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:65c1a9bcdadc6c28eecee2c119465aebff8f7a584dd719facdd9e825ec61ab52"},
> +    {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:525808b8019e36eb524b8c68acdd63a37e75714eac50e988180b169d64480a00"},
> +    {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:962f82a3086483f5e5f64dbad880d31038b698494799b097bc59c2edf392fce6"},
> +    {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:aa7bd130efab1c280bed0f45501b7c8795f9fdbeb02e965371bbef3523627779"},
> +    {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:c9c804664ebe8f83a211cace637506669e7890fec1b4195b505c214e50dd4eb7"},
> +    {file = "MarkupSafe-2.1.3-cp310-cp310-win32.whl", hash = "sha256:10bbfe99883db80bdbaff2dcf681dfc6533a614f700da1287707e8a5d78a8431"},
> +    {file = "MarkupSafe-2.1.3-cp310-cp310-win_amd64.whl", hash = "sha256:1577735524cdad32f9f694208aa75e422adba74f1baee7551620e43a3141f559"},
> +    {file = "MarkupSafe-2.1.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:ad9e82fb8f09ade1c3e1b996a6337afac2b8b9e365f926f5a61aacc71adc5b3c"},
> +    {file = "MarkupSafe-2.1.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3c0fae6c3be832a0a0473ac912810b2877c8cb9d76ca48de1ed31e1c68386575"},
> +    {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b076b6226fb84157e3f7c971a47ff3a679d837cf338547532ab866c57930dbee"},
> +    {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bfce63a9e7834b12b87c64d6b155fdd9b3b96191b6bd334bf37db7ff1fe457f2"},
> +    {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:338ae27d6b8745585f87218a3f23f1512dbf52c26c28e322dbe54bcede54ccb9"},
> +    {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e4dd52d80b8c83fdce44e12478ad2e85c64ea965e75d66dbeafb0a3e77308fcc"},
> +    {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:df0be2b576a7abbf737b1575f048c23fb1d769f267ec4358296f31c2479db8f9"},
> +    {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:5bbe06f8eeafd38e5d0a4894ffec89378b6c6a625ff57e3028921f8ff59318ac"},
> +    {file = "MarkupSafe-2.1.3-cp311-cp311-win32.whl", hash = "sha256:dd15ff04ffd7e05ffcb7fe79f1b98041b8ea30ae9234aed2a9168b5797c3effb"},
> +    {file = "MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl", hash = "sha256:134da1eca9ec0ae528110ccc9e48041e0828d79f24121a1a146161103c76e686"},
> +    {file = "MarkupSafe-2.1.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:8e254ae696c88d98da6555f5ace2279cf7cd5b3f52be2b5cf97feafe883b58d2"},
> +    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cb0932dc158471523c9637e807d9bfb93e06a95cbf010f1a38b98623b929ef2b"},
> +    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9402b03f1a1b4dc4c19845e5c749e3ab82d5078d16a2a4c2cd2df62d57bb0707"},
> +    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ca379055a47383d02a5400cb0d110cef0a776fc644cda797db0c5696cfd7e18e"},
> +    {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:b7ff0f54cb4ff66dd38bebd335a38e2c22c41a8ee45aa608efc890ac3e3931bc"},
> +    {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:c011a4149cfbcf9f03994ec2edffcb8b1dc2d2aede7ca243746df97a5d41ce48"},
> +    {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:56d9f2ecac662ca1611d183feb03a3fa4406469dafe241673d521dd5ae92a155"},
> +    {file = "MarkupSafe-2.1.3-cp37-cp37m-win32.whl", hash = "sha256:8758846a7e80910096950b67071243da3e5a20ed2546e6392603c096778d48e0"},
> +    {file = "MarkupSafe-2.1.3-cp37-cp37m-win_amd64.whl", hash = "sha256:787003c0ddb00500e49a10f2844fac87aa6ce977b90b0feaaf9de23c22508b24"},
> +    {file = "MarkupSafe-2.1.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:2ef12179d3a291be237280175b542c07a36e7f60718296278d8593d21ca937d4"},
> +    {file = "MarkupSafe-2.1.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:2c1b19b3aaacc6e57b7e25710ff571c24d6c3613a45e905b1fde04d691b98ee0"},
> +    {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8afafd99945ead6e075b973fefa56379c5b5c53fd8937dad92c662da5d8fd5ee"},
> +    {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8c41976a29d078bb235fea9b2ecd3da465df42a562910f9022f1a03107bd02be"},
> +    {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d080e0a5eb2529460b30190fcfcc4199bd7f827663f858a226a81bc27beaa97e"},
> +    {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:69c0f17e9f5a7afdf2cc9fb2d1ce6aabdb3bafb7f38017c0b77862bcec2bbad8"},
> +    {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:504b320cd4b7eff6f968eddf81127112db685e81f7e36e75f9f84f0df46041c3"},
> +    {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:42de32b22b6b804f42c5d98be4f7e5e977ecdd9ee9b660fda1a3edf03b11792d"},
> +    {file = "MarkupSafe-2.1.3-cp38-cp38-win32.whl", hash = "sha256:ceb01949af7121f9fc39f7d27f91be8546f3fb112c608bc4029aef0bab86a2a5"},
> +    {file = "MarkupSafe-2.1.3-cp38-cp38-win_amd64.whl", hash = "sha256:1b40069d487e7edb2676d3fbdb2b0829ffa2cd63a2ec26c4938b2d34391b4ecc"},
> +    {file = "MarkupSafe-2.1.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8023faf4e01efadfa183e863fefde0046de576c6f14659e8782065bcece22198"},
> +    {file = "MarkupSafe-2.1.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6b2b56950d93e41f33b4223ead100ea0fe11f8e6ee5f641eb753ce4b77a7042b"},
> +    {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9dcdfd0eaf283af041973bff14a2e143b8bd64e069f4c383416ecd79a81aab58"},
> +    {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:05fb21170423db021895e1ea1e1f3ab3adb85d1c2333cbc2310f2a26bc77272e"},
> +    {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:282c2cb35b5b673bbcadb33a585408104df04f14b2d9b01d4c345a3b92861c2c"},
> +    {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:ab4a0df41e7c16a1392727727e7998a467472d0ad65f3ad5e6e765015df08636"},
> +    {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:7ef3cb2ebbf91e330e3bb937efada0edd9003683db6b57bb108c4001f37a02ea"},
> +    {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:0a4e4a1aff6c7ac4cd55792abf96c915634c2b97e3cc1c7129578aa68ebd754e"},
> +    {file = "MarkupSafe-2.1.3-cp39-cp39-win32.whl", hash = "sha256:fec21693218efe39aa7f8599346e90c705afa52c5b31ae019b2e57e8f6542bb2"},
> +    {file = "MarkupSafe-2.1.3-cp39-cp39-win_amd64.whl", hash = "sha256:3fd4abcb888d15a94f32b75d8fd18ee162ca0c064f35b11134be77050296d6ba"},
> +    {file = "MarkupSafe-2.1.3.tar.gz", hash = "sha256:af598ed32d6ae86f1b747b82783958b1a4ab8f617b06fe68795c7f026abbdcad"},
> +]
> +
>   [[package]]
>   name = "mccabe"
>   version = "0.7.0"
> @@ -404,6 +651,17 @@ files = [
>       {file = "mypy_extensions-1.0.0.tar.gz", hash = "sha256:75dbf8955dc00442a438fc4d0666508a9a97b6bd41aa2f0ffe9d2f2725af0782"},
>   ]
>   
> +[[package]]
> +name = "packaging"
> +version = "23.2"
> +description = "Core utilities for Python packages"
> +optional = false
> +python-versions = ">=3.7"
> +files = [
> +    {file = "packaging-23.2-py3-none-any.whl", hash = "sha256:8c491190033a9af7e1d931d0b5dacc2ef47509b34dd0de67ed209b5203fc88c7"},
> +    {file = "packaging-23.2.tar.gz", hash = "sha256:048fb0e9405036518eaaf48a55953c750c11e1a1b68e0dd1a9d62ed0c092cfc5"},
> +]
> +
>   [[package]]
>   name = "paramiko"
>   version = "3.2.0"
> @@ -515,6 +773,20 @@ files = [
>       {file = "pyflakes-2.5.0.tar.gz", hash = "sha256:491feb020dca48ccc562a8c0cbe8df07ee13078df59813b83959cbdada312ea3"},
>   ]
>   
> +[[package]]
> +name = "pygments"
> +version = "2.16.1"
> +description = "Pygments is a syntax highlighting package written in Python."
> +optional = false
> +python-versions = ">=3.7"
> +files = [
> +    {file = "Pygments-2.16.1-py3-none-any.whl", hash = "sha256:13fc09fa63bc8d8671a6d247e1eb303c4b343eaee81d861f3404db2935653692"},
> +    {file = "Pygments-2.16.1.tar.gz", hash = "sha256:1daff0494820c69bc8941e407aa20f577374ee88364ee10a98fdbe0aece96e29"},
> +]
> +
> +[package.extras]
> +plugins = ["importlib-metadata"]
> +
>   [[package]]
>   name = "pylama"
>   version = "8.4.1"
> @@ -632,6 +904,27 @@ files = [
>   attrs = ">=22.2.0"
>   rpds-py = ">=0.7.0"
>   
> +[[package]]
> +name = "requests"
> +version = "2.31.0"
> +description = "Python HTTP for Humans."
> +optional = false
> +python-versions = ">=3.7"
> +files = [
> +    {file = "requests-2.31.0-py3-none-any.whl", hash = "sha256:58cd2187c01e70e6e26505bca751777aa9f2ee0b7f4300988b709f44e013003f"},
> +    {file = "requests-2.31.0.tar.gz", hash = "sha256:942c5a758f98d790eaed1a29cb6eefc7ffb0d1cf7af05c3d2791656dbd6ad1e1"},
> +]
> +
> +[package.dependencies]
> +certifi = ">=2017.4.17"
> +charset-normalizer = ">=2,<4"
> +idna = ">=2.5,<4"
> +urllib3 = ">=1.21.1,<3"
> +
> +[package.extras]
> +socks = ["PySocks (>=1.5.6,!=1.5.7)"]
> +use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"]
> +
>   [[package]]
>   name = "rpds-py"
>   version = "0.9.2"
> @@ -753,6 +1046,22 @@ basic = ["ipython"]
>   complete = ["cryptography (>=2.0)", "ipython", "matplotlib", "pyx"]
>   docs = ["sphinx (>=3.0.0)", "sphinx_rtd_theme (>=0.4.3)", "tox (>=3.0.0)"]
>   
> +[[package]]
> +name = "setuptools"
> +version = "68.2.2"
> +description = "Easily download, build, install, upgrade, and uninstall Python packages"
> +optional = false
> +python-versions = ">=3.8"
> +files = [
> +    {file = "setuptools-68.2.2-py3-none-any.whl", hash = "sha256:b454a35605876da60632df1a60f736524eb73cc47bbc9f3f1ef1b644de74fd2a"},
> +    {file = "setuptools-68.2.2.tar.gz", hash = "sha256:4ac1475276d2f1c48684874089fefcd83bd7162ddaafb81fac866ba0db282a87"},
> +]
> +
> +[package.extras]
> +docs = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "pygments-github-lexers (==0.0.5)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-favicon", "sphinx-hoverxref (<2)", "sphinx-inline-tabs", "sphinx-lint", "sphinx-notfound-page (>=1,<2)", "sphinx-reredirects", "sphinxcontrib-towncrier"]
> +testing = ["build[virtualenv]", "filelock (>=3.4.0)", "flake8-2020", "ini2toml[lite] (>=0.9)", "jaraco.develop (>=7.21)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "pip (>=19.1)", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-mypy (>=0.9.1)", "pytest-perf", "pytest-ruff", "pytest-timeout", "pytest-xdist", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"]
> +testing-integration = ["build[virtualenv] (>=1.0.3)", "filelock (>=3.4.0)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "packaging (>=23.1)", "pytest", "pytest-enabler", "pytest-xdist", "tomli", "virtualenv (>=13.0.0)", "wheel"]
> +
>   [[package]]
>   name = "six"
>   version = "1.16.0"
> @@ -775,6 +1084,177 @@ files = [
>       {file = "snowballstemmer-2.2.0.tar.gz", hash = "sha256:09b16deb8547d3412ad7b590689584cd0fe25ec8db3be37788be3810cbf19cb1"},
>   ]
>   
> +[[package]]
> +name = "sphinx"
> +version = "6.2.1"
> +description = "Python documentation generator"
> +optional = false
> +python-versions = ">=3.8"
> +files = [
> +    {file = "Sphinx-6.2.1.tar.gz", hash = "sha256:6d56a34697bb749ffa0152feafc4b19836c755d90a7c59b72bc7dfd371b9cc6b"},
> +    {file = "sphinx-6.2.1-py3-none-any.whl", hash = "sha256:97787ff1fa3256a3eef9eda523a63dbf299f7b47e053cfcf684a1c2a8380c912"},
> +]
> +
> +[package.dependencies]
> +alabaster = ">=0.7,<0.8"
> +babel = ">=2.9"
> +colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""}
> +docutils = ">=0.18.1,<0.20"
> +imagesize = ">=1.3"
> +Jinja2 = ">=3.0"
> +packaging = ">=21.0"
> +Pygments = ">=2.13"
> +requests = ">=2.25.0"
> +snowballstemmer = ">=2.0"
> +sphinxcontrib-applehelp = "*"
> +sphinxcontrib-devhelp = "*"
> +sphinxcontrib-htmlhelp = ">=2.0.0"
> +sphinxcontrib-jsmath = "*"
> +sphinxcontrib-qthelp = "*"
> +sphinxcontrib-serializinghtml = ">=1.1.5"
> +
> +[package.extras]
> +docs = ["sphinxcontrib-websupport"]
> +lint = ["docutils-stubs", "flake8 (>=3.5.0)", "flake8-simplify", "isort", "mypy (>=0.990)", "ruff", "sphinx-lint", "types-requests"]
> +test = ["cython", "filelock", "html5lib", "pytest (>=4.6)"]
> +
> +[[package]]
> +name = "sphinx-rtd-theme"
> +version = "1.2.2"
> +description = "Read the Docs theme for Sphinx"
> +optional = false
> +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7"
> +files = [
> +    {file = "sphinx_rtd_theme-1.2.2-py2.py3-none-any.whl", hash = "sha256:6a7e7d8af34eb8fc57d52a09c6b6b9c46ff44aea5951bc831eeb9245378f3689"},
> +    {file = "sphinx_rtd_theme-1.2.2.tar.gz", hash = "sha256:01c5c5a72e2d025bd23d1f06c59a4831b06e6ce6c01fdd5ebfe9986c0a880fc7"},
> +]
> +
> +[package.dependencies]
> +docutils = "<0.19"
> +sphinx = ">=1.6,<7"
> +sphinxcontrib-jquery = ">=4,<5"
> +
> +[package.extras]
> +dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client", "wheel"]
> +
> +[[package]]
> +name = "sphinxcontrib-applehelp"
> +version = "1.0.7"
> +description = "sphinxcontrib-applehelp is a Sphinx extension which outputs Apple help books"
> +optional = false
> +python-versions = ">=3.9"
> +files = [
> +    {file = "sphinxcontrib_applehelp-1.0.7-py3-none-any.whl", hash = "sha256:094c4d56209d1734e7d252f6e0b3ccc090bd52ee56807a5d9315b19c122ab15d"},
> +    {file = "sphinxcontrib_applehelp-1.0.7.tar.gz", hash = "sha256:39fdc8d762d33b01a7d8f026a3b7d71563ea3b72787d5f00ad8465bd9d6dfbfa"},
> +]
> +
> +[package.dependencies]
> +Sphinx = ">=5"
> +
> +[package.extras]
> +lint = ["docutils-stubs", "flake8", "mypy"]
> +test = ["pytest"]
> +
> +[[package]]
> +name = "sphinxcontrib-devhelp"
> +version = "1.0.5"
> +description = "sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp documents"
> +optional = false
> +python-versions = ">=3.9"
> +files = [
> +    {file = "sphinxcontrib_devhelp-1.0.5-py3-none-any.whl", hash = "sha256:fe8009aed765188f08fcaadbb3ea0d90ce8ae2d76710b7e29ea7d047177dae2f"},
> +    {file = "sphinxcontrib_devhelp-1.0.5.tar.gz", hash = "sha256:63b41e0d38207ca40ebbeabcf4d8e51f76c03e78cd61abe118cf4435c73d4212"},
> +]
> +
> +[package.dependencies]
> +Sphinx = ">=5"
> +
> +[package.extras]
> +lint = ["docutils-stubs", "flake8", "mypy"]
> +test = ["pytest"]
> +
> +[[package]]
> +name = "sphinxcontrib-htmlhelp"
> +version = "2.0.4"
> +description = "sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files"
> +optional = false
> +python-versions = ">=3.9"
> +files = [
> +    {file = "sphinxcontrib_htmlhelp-2.0.4-py3-none-any.whl", hash = "sha256:8001661c077a73c29beaf4a79968d0726103c5605e27db92b9ebed8bab1359e9"},
> +    {file = "sphinxcontrib_htmlhelp-2.0.4.tar.gz", hash = "sha256:6c26a118a05b76000738429b724a0568dbde5b72391a688577da08f11891092a"},
> +]
> +
> +[package.dependencies]
> +Sphinx = ">=5"
> +
> +[package.extras]
> +lint = ["docutils-stubs", "flake8", "mypy"]
> +test = ["html5lib", "pytest"]
> +
> +[[package]]
> +name = "sphinxcontrib-jquery"
> +version = "4.1"
> +description = "Extension to include jQuery on newer Sphinx releases"
> +optional = false
> +python-versions = ">=2.7"
> +files = [
> +    {file = "sphinxcontrib-jquery-4.1.tar.gz", hash = "sha256:1620739f04e36a2c779f1a131a2dfd49b2fd07351bf1968ced074365933abc7a"},
> +    {file = "sphinxcontrib_jquery-4.1-py2.py3-none-any.whl", hash = "sha256:f936030d7d0147dd026a4f2b5a57343d233f1fc7b363f68b3d4f1cb0993878ae"},
> +]
> +
> +[package.dependencies]
> +Sphinx = ">=1.8"
> +
> +[[package]]
> +name = "sphinxcontrib-jsmath"
> +version = "1.0.1"
> +description = "A sphinx extension which renders display math in HTML via JavaScript"
> +optional = false
> +python-versions = ">=3.5"
> +files = [
> +    {file = "sphinxcontrib-jsmath-1.0.1.tar.gz", hash = "sha256:a9925e4a4587247ed2191a22df5f6970656cb8ca2bd6284309578f2153e0c4b8"},
> +    {file = "sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl", hash = "sha256:2ec2eaebfb78f3f2078e73666b1415417a116cc848b72e5172e596c871103178"},
> +]
> +
> +[package.extras]
> +test = ["flake8", "mypy", "pytest"]
> +
> +[[package]]
> +name = "sphinxcontrib-qthelp"
> +version = "1.0.6"
> +description = "sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp documents"
> +optional = false
> +python-versions = ">=3.9"
> +files = [
> +    {file = "sphinxcontrib_qthelp-1.0.6-py3-none-any.whl", hash = "sha256:bf76886ee7470b934e363da7a954ea2825650013d367728588732c7350f49ea4"},
> +    {file = "sphinxcontrib_qthelp-1.0.6.tar.gz", hash = "sha256:62b9d1a186ab7f5ee3356d906f648cacb7a6bdb94d201ee7adf26db55092982d"},
> +]
> +
> +[package.dependencies]
> +Sphinx = ">=5"
> +
> +[package.extras]
> +lint = ["docutils-stubs", "flake8", "mypy"]
> +test = ["pytest"]
> +
> +[[package]]
> +name = "sphinxcontrib-serializinghtml"
> +version = "1.1.9"
> +description = "sphinxcontrib-serializinghtml is a sphinx extension which outputs \"serialized\" HTML files (json and pickle)"
> +optional = false
> +python-versions = ">=3.9"
> +files = [
> +    {file = "sphinxcontrib_serializinghtml-1.1.9-py3-none-any.whl", hash = "sha256:9b36e503703ff04f20e9675771df105e58aa029cfcbc23b8ed716019b7416ae1"},
> +    {file = "sphinxcontrib_serializinghtml-1.1.9.tar.gz", hash = "sha256:0c64ff898339e1fac29abd2bf5f11078f3ec413cfe9c046d3120d7ca65530b54"},
> +]
> +
> +[package.dependencies]
> +Sphinx = ">=5"
> +
> +[package.extras]
> +lint = ["docutils-stubs", "flake8", "mypy"]
> +test = ["pytest"]
> +
>   [[package]]
>   name = "toml"
>   version = "0.10.2"
> @@ -819,6 +1299,23 @@ files = [
>       {file = "typing_extensions-4.7.1.tar.gz", hash = "sha256:b75ddc264f0ba5615db7ba217daeb99701ad295353c45f9e95963337ceeeffb2"},
>   ]
>   
> +[[package]]
> +name = "urllib3"
> +version = "2.0.7"
> +description = "HTTP library with thread-safe connection pooling, file post, and more."
> +optional = false
> +python-versions = ">=3.7"
> +files = [
> +    {file = "urllib3-2.0.7-py3-none-any.whl", hash = "sha256:fdb6d215c776278489906c2f8916e6e7d4f5a9b602ccbcfdf7f016fc8da0596e"},
> +    {file = "urllib3-2.0.7.tar.gz", hash = "sha256:c97dfde1f7bd43a71c8d2a58e369e9b2bf692d1334ea9f9cae55add7d0dd0f84"},
> +]
> +
> +[package.extras]
> +brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)"]
> +secure = ["certifi", "cryptography (>=1.9)", "idna (>=2.0.0)", "pyopenssl (>=17.1.0)", "urllib3-secure-extra"]
> +socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"]
> +zstd = ["zstandard (>=0.18.0)"]
> +
>   [[package]]
>   name = "warlock"
>   version = "2.0.1"
> @@ -837,4 +1334,4 @@ jsonschema = ">=4,<5"
>   [metadata]
>   lock-version = "2.0"
>   python-versions = "^3.10"
> -content-hash = "3501e97b3dadc19fe8ae179fe21b1edd2488001da9a8e86ff2bca0b86b99b89b"
> +content-hash = "5faad2e53833e9b8a353ad3554c58de991801a9ebe8f9712fc9c839b35e7a789"
> diff --git a/dts/pyproject.toml b/dts/pyproject.toml
> index 3943c87c87..98df431b3b 100644
> --- a/dts/pyproject.toml
> +++ b/dts/pyproject.toml
> @@ -35,6 +35,13 @@ pylama = "^8.4.1"
>   pyflakes = "^2.5.0"
>   toml = "^0.10.2"
>   
> +[tool.poetry.group.docs]
> +optional = true
> +
> +[tool.poetry.group.docs.dependencies]
> +sphinx = "<7"
> +sphinx-rtd-theme = "^1.2.2"
> +
>   [build-system]
>   requires = ["poetry-core>=1.0.0"]
>   build-backend = "poetry.core.masonry.api"

I do get a warning while building the doc:

$ poetry install --with docs

[...]

Installing dependencies from lock file
Warning: poetry.lock is not consistent with pyproject.toml. You may be 
getting improper dependencies. Run `poetry lock [--no-update]` to fix it.

The doc seems to build fine, though.
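
The warning itself suggests the fix; refreshing the lock metadata
without bumping any pinned versions should clear it:

poetry lock --no-update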

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v6 05/23] dts: settings docstring update
  2023-11-08 12:53           ` [PATCH v6 05/23] dts: settings " Juraj Linkeš
@ 2023-11-08 16:17             ` Yoan Picchi
  2023-11-15 10:09               ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Yoan Picchi @ 2023-11-08 16:17 UTC (permalink / raw)
  To: Juraj Linkeš,
	thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek
  Cc: dev

On 11/8/23 12:53, Juraj Linkeš wrote:
> Format according to the Google format and PEP257, with slight
> deviations.
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>   dts/framework/settings.py | 101 +++++++++++++++++++++++++++++++++++++-
>   1 file changed, 100 insertions(+), 1 deletion(-)
> 
> diff --git a/dts/framework/settings.py b/dts/framework/settings.py
> index 7f5841d073..787db7c198 100644
> --- a/dts/framework/settings.py
> +++ b/dts/framework/settings.py
> @@ -3,6 +3,70 @@
>   # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
>   # Copyright(c) 2022 University of New Hampshire
>   
> +"""Environment variables and command line arguments parsing.
> +
> +This is a simple module utilizing the built-in argparse module to parse command line arguments,
> +augment them with values from environment variables and make them available across the framework.
> +
> +The command line value takes precedence, followed by the environment variable value,
> +followed by the default value defined in this module.
> +
> +The command line arguments along with the supported environment variables are:
> +
> +.. option:: --config-file
> +.. envvar:: DTS_CFG_FILE
> +
> +    The path to the YAML test run configuration file.
> +
> +.. option:: --output-dir, --output
> +.. envvar:: DTS_OUTPUT_DIR
> +
> +    The directory where DTS logs and results are saved.
> +
> +.. option:: --compile-timeout
> +.. envvar:: DTS_COMPILE_TIMEOUT
> +
> +    The timeout for compiling DPDK.
> +
> +.. option:: -t, --timeout
> +.. envvar:: DTS_TIMEOUT
> +
> +    The timeout for all DTS operations except for compiling DPDK.
> +
> +.. option:: -v, --verbose
> +.. envvar:: DTS_VERBOSE
> +
> +    Set to any value to enable logging everything to the console.
> +
> +.. option:: -s, --skip-setup
> +.. envvar:: DTS_SKIP_SETUP
> +
> +    Set to any value to skip building DPDK.
> +
> +.. option:: --tarball, --snapshot, --git-ref
> +.. envvar:: DTS_DPDK_TARBALL
> +
> +    The path to a DPDK tarball, git commit ID, tag ID or tree ID to test.
> +
> +.. option:: --test-cases
> +.. envvar:: DTS_TESTCASES
> +
> +    A comma-separated list of test cases to execute. Unknown test cases will be silently ignored.
> +
> +.. option:: --re-run, --re_run
> +.. envvar:: DTS_RERUN
> +
> +    Re-run each test case this many times in case of a failure.
> +
> +Attributes:
> +    SETTINGS: The module level variable storing framework-wide DTS settings.

In the generated doc, "Attributes" doesn't appear. It ends up looking 
like SETTINGS is just another environment variable, with no separation 
with the above list.

> +
> +Typical usage example::
> +
> +  from framework.settings import SETTINGS
> +  foo = SETTINGS.foo
> +"""
> +
>   import argparse
>   import os
>   from collections.abc import Callable, Iterable, Sequence
> @@ -16,6 +80,23 @@
>   
>   
>   def _env_arg(env_var: str) -> Any:
> +    """A helper method augmenting the argparse Action with environment variable > +
> +    If the supplied environment variable is defined, then the default value
> +    of the argument is modified. This satisfies the priority order of
> +    command line argument > environment variable > default value.
> +
> +    Arguments with no values (flags) should be defined using the const keyword argument
> +    (True or False). When the argument is specified, it will be set to const; if not specified,
> +    the default will be stored (possibly modified by the corresponding environment variable).
> +
> +    Other arguments work the same as default argparse arguments, that is using
> +    the default 'store' action.
> +
> +    Returns:
> +          The modified argparse.Action.
> +    """
> +
>       class _EnvironmentArgument(argparse.Action):
>           def __init__(
>               self,
> @@ -68,14 +149,28 @@ def __call__(
>   
>   @dataclass(slots=True)
>   class Settings:
> +    """Default framework-wide user settings.
> +
> +    The defaults may be modified at the start of the run.
> +    """
> +
> +    #:
>       config_file_path: Path = Path(__file__).parent.parent.joinpath("conf.yaml")
> +    #:
>       output_dir: str = "output"
> +    #:
>       timeout: float = 15
> +    #:
>       verbose: bool = False
> +    #:
>       skip_setup: bool = False
> +    #:
>       dpdk_tarball_path: Path | str = "dpdk.tar.xz"
> +    #:
>       compile_timeout: float = 1200
> +    #:
>       test_cases: list[str] = field(default_factory=list)
> +    #:
>       re_run: int = 0

For some reason, __init__ also appears in the doc:
__init__(config_file_path: ~pathlib.Path = PosixPath('/ho...
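
If that's autodoc picking up the generated dataclass constructor,
something like this in the docs conf.py might suppress it (untested
sketch; exclude-members is a standard autodoc option):

    autodoc_default_options = {
        "exclude-members": "__init__",
    }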

>   
>   
> @@ -169,7 +264,7 @@ def _get_parser() -> argparse.ArgumentParser:
>           action=_env_arg("DTS_RERUN"),
>           default=SETTINGS.re_run,
>           type=int,
> -        help="[DTS_RERUN] Re-run each test case the specified amount of times "
> +        help="[DTS_RERUN] Re-run each test case the specified number of times "
>           "if a test failure occurs",
>       )
>   
> @@ -177,6 +272,10 @@ def _get_parser() -> argparse.ArgumentParser:
>   
>   
>   def get_settings() -> Settings:
> +    """Create new settings with inputs from the user.
> +
> +    The inputs are taken from the command line and from environment variables.
> +    """
>       parsed_args = _get_parser().parse_args()
>       return Settings(
>           config_file_path=parsed_args.config_file,


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v6 06/23] dts: logger and settings docstring update
  2023-11-08 12:53           ` [PATCH v6 06/23] dts: logger and " Juraj Linkeš
@ 2023-11-08 17:14             ` Yoan Picchi
  2023-11-15 10:11               ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Yoan Picchi @ 2023-11-08 17:14 UTC (permalink / raw)
  To: Juraj Linkeš,
	thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek
  Cc: dev

On 11/8/23 12:53, Juraj Linkeš wrote:
> Format according to the Google format and PEP257, with slight
> deviations.
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>   dts/framework/logger.py | 72 +++++++++++++++++++++----------
>   dts/framework/utils.py  | 96 ++++++++++++++++++++++++++++++-----------
>   2 files changed, 121 insertions(+), 47 deletions(-)
> 
> diff --git a/dts/framework/logger.py b/dts/framework/logger.py
> index bb2991e994..d3eb75a4e4 100644
> --- a/dts/framework/logger.py
> +++ b/dts/framework/logger.py
> @@ -3,9 +3,9 @@
>   # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
>   # Copyright(c) 2022-2023 University of New Hampshire
>   
> -"""
> -DTS logger module with several log level. DTS framework and TestSuite logs
> -are saved in different log files.
> +"""DTS logger module.
> +
> +DTS framework and TestSuite logs are saved in different log files.
>   """
>   
>   import logging
> @@ -18,19 +18,21 @@
>   stream_fmt = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
>   
>   
> -class LoggerDictType(TypedDict):
> -    logger: "DTSLOG"
> -    name: str
> -    node: str
> -
> +class DTSLOG(logging.LoggerAdapter):
> +    """DTS logger adapter class for framework and testsuites.
>   
> -# List for saving all using loggers
> -Loggers: list[LoggerDictType] = []
> +    The :option:`--verbose` command line argument and the :envvar:`DTS_VERBOSE` environment
> +    variable control the verbosity of output. If enabled, all messages will be emitted to the
> +    console.
>   
> +    The :option:`--output` command line argument and the :envvar:`DTS_OUTPUT_DIR` environment
> +    variable modify the directory where the logs will be stored.
>   
> -class DTSLOG(logging.LoggerAdapter):
> -    """
> -    DTS log class for framework and testsuite.
> +    Attributes:
> +        node: The additional identifier. Currently unused.
> +        sh: The handler which emits logs to console.
> +        fh: The handler which emits logs to a file.
> +        verbose_fh: Just as fh, but logs with a different, more verbose, format.
>       """
>   
>       _logger: logging.Logger
> @@ -40,6 +42,15 @@ class DTSLOG(logging.LoggerAdapter):
>       verbose_fh: logging.FileHandler
>   
>       def __init__(self, logger: logging.Logger, node: str = "suite"):
> +        """Extend the constructor with additional handlers.
> +
> +        One handler logs to the console, the other one to a file, with either a regular or verbose
> +        format.
> +
> +        Args:
> +            logger: The logger from which to create the logger adapter.
> +            node: An additional identifier. Currently unused.
> +        """
>           self._logger = logger
>           # 1 means log everything, this will be used by file handlers if their level
>           # is not set
> @@ -92,26 +103,43 @@ def __init__(self, logger: logging.Logger, node: str = "suite"):
>           super(DTSLOG, self).__init__(self._logger, dict(node=self.node))
>   
>       def logger_exit(self) -> None:
> -        """
> -        Remove stream handler and logfile handler.
> -        """
> +        """Remove the stream handler and the logfile handler."""
>           for handler in (self.sh, self.fh, self.verbose_fh):
>               handler.flush()
>               self._logger.removeHandler(handler)
>   
>   
> +class _LoggerDictType(TypedDict):
> +    logger: DTSLOG
> +    name: str
> +    node: str
> +
> +
> +# List for saving all loggers in use
> +_Loggers: list[_LoggerDictType] = []
> +
> +
>   def getLogger(name: str, node: str = "suite") -> DTSLOG:
> +    """Get DTS logger adapter identified by name and node.
> +
> +    An existing logger will be returned if one with the exact name and node already exists.
> +    A new one will be created and stored otherwise.
> +
> +    Args:
> +        name: The name of the logger.
> +        node: An additional identifier for the logger.
> +
> +    Returns:
> +        A logger uniquely identified by both name and node.
>       """
> -    Get logger handler and if there's no handler for specified Node will create one.
> -    """
> -    global Loggers
> +    global _Loggers
>       # return saved logger
> -    logger: LoggerDictType
> -    for logger in Loggers:
> +    logger: _LoggerDictType
> +    for logger in _Loggers:
>           if logger["name"] == name and logger["node"] == node:
>               return logger["logger"]
>   
>       # return new logger
>       dts_logger: DTSLOG = DTSLOG(logging.getLogger(name), node)
> -    Loggers.append({"logger": dts_logger, "name": name, "node": node})
> +    _Loggers.append({"logger": dts_logger, "name": name, "node": node})
>       return dts_logger
> diff --git a/dts/framework/utils.py b/dts/framework/utils.py
> index f0c916471c..0613adf7ad 100644
> --- a/dts/framework/utils.py
> +++ b/dts/framework/utils.py
> @@ -3,6 +3,16 @@
>   # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
>   # Copyright(c) 2022-2023 University of New Hampshire
>   
> +"""Various utility classes and functions.
> +
> +These are used in multiple modules across the framework. They're here because
> +they provide some non-specific functionality, greatly simplify imports or just don't
> +fit elsewhere.
> +
> +Attributes:
> +    REGEX_FOR_PCI_ADDRESS: The regex representing a PCI address, e.g. ``0000:00:08.0``.
> +"""
> +
>   import atexit
>   import json
>   import os
> @@ -19,12 +29,20 @@
>   
>   
>   def expand_range(range_str: str) -> list[int]:
> -    """
> -    Process range string into a list of integers. There are two possible formats:
> -    n - a single integer
> -    n-m - a range of integers
> +    """Process `range_str` into a list of integers.
> +
> +    There are two possible formats of `range_str`:
> +
> +        * ``n`` - a single integer,
> +        * ``n-m`` - a range of integers.
>   
> -    The returned range includes both n and m. Empty string returns an empty list.
> +    The returned range includes both ``n`` and ``m``. Empty string returns an empty list.
> +
> +    Args:
> +        range_str: The range to expand.
> +
> +    Returns:
> +        All the numbers from the range.
>       """
>       expanded_range: list[int] = []
>       if range_str:
> @@ -39,6 +57,14 @@ def expand_range(range_str: str) -> list[int]:
>   
>   
>   def get_packet_summaries(packets: list[Packet]) -> str:
> +    """Format a string summary from `packets`.
> +
> +    Args:
> +        packets: The packets to format.
> +
> +    Returns:
> +        The summary of `packets`.
> +    """
>       if len(packets) == 1:
>           packet_summaries = packets[0].summary()
>       else:
> @@ -49,6 +75,8 @@ def get_packet_summaries(packets: list[Packet]) -> str:
>   
>   
>   class StrEnum(Enum):
> +    """Enum with members stored as strings."""
> +
>       @staticmethod
>       def _generate_next_value_(
>           name: str, start: int, count: int, last_values: object
> @@ -56,22 +84,29 @@ def _generate_next_value_(
>           return name
>   
>       def __str__(self) -> str:
> +        """The string representation is the name of the member."""
>           return self.name
>   
>   
>   class MesonArgs(object):
> -    """
> -    Aggregate the arguments needed to build DPDK:
> -    default_library: Default library type, Meson allows "shared", "static" and "both".
> -               Defaults to None, in which case the argument won't be used.
> -    Keyword arguments: The arguments found in meson_options.txt in root DPDK directory.
> -               Do not use -D with them, for example:
> -               meson_args = MesonArgs(enable_kmods=True).
> -    """
> +    """Aggregate the arguments needed to build DPDK."""
>   
>       _default_library: str
>   
>       def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
> +        """Initialize the meson arguments.
> +
> +        Args:
> +            default_library: The default library type, Meson supports ``shared``, ``static`` and
> +                ``both``. Defaults to :data:`None`, in which case the argument won't be used.
> +            dpdk_args: The arguments found in ``meson_options.txt`` in root DPDK directory.
> +                Do not use ``-D`` with them.
> +
> +        Example:
> +            ::
> +
> +                meson_args = MesonArgs(enable_kmods=True)
> +        """
>           self._default_library = (
>               f"--default-library={default_library}" if default_library else ""
>           )
> @@ -83,6 +118,7 @@ def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
>           )
>   
>       def __str__(self) -> str:
> +        """The actual args."""
>           return " ".join(f"{self._default_library} {self._dpdk_args}".split())
>   
>   
> @@ -93,35 +129,33 @@ class _TarCompressionFormat(StrEnum):
>       and Enum values are the associated file extensions.
>       """
>   
> +    #:
>       gzip = "gz"
> +    #:
>       compress = "Z"
> +    #:
>       bzip2 = "bz2"
> +    #:
>       lzip = "lz"
> +    #:
>       lzma = "lzma"
> +    #:
>       lzop = "lzo"
> +    #:
>       xz = "xz"
> +    #:
>       zstd = "zst"

Just to be sure, _TarCompressionFormat doesn't appear in the doc
(framework.utils.html). I believe that's intended (because of the _), but
then I don't think the #: comments are used for anything.

>   
>   
>   class DPDKGitTarball(object):
> -    """Create a compressed tarball of DPDK from the repository.
> -
> -    The DPDK version is specified with git object git_ref.
> -    The tarball will be compressed with _TarCompressionFormat,
> -    which must be supported by the DTS execution environment.
> -    The resulting tarball will be put into output_dir.
> +    """Compressed tarball of DPDK from the repository.
>   
> -    The class supports the os.PathLike protocol,
> +    The class supports the :class:`os.PathLike` protocol,
>       which is used to get the Path of the tarball::
>   
>           from pathlib import Path
>           tarball = DPDKGitTarball("HEAD", "output")
>           tarball_path = Path(tarball)
> -
> -    Arguments:
> -        git_ref: A git commit ID, tag ID or tree ID.
> -        output_dir: The directory where to put the resulting tarball.
> -        tar_compression_format: The compression format to use.
>       """
>   
>       _git_ref: str
> @@ -136,6 +170,17 @@ def __init__(
>           output_dir: str,
>           tar_compression_format: _TarCompressionFormat = _TarCompressionFormat.xz,
>       ):
> +        """Create the tarball during initialization.
> +
> +        The DPDK version is specified with `git_ref`. The tarball will be compressed with
> +        `tar_compression_format`, which must be supported by the DTS execution environment.
> +        The resulting tarball will be put into `output_dir`.
> +
> +        Args:
> +            git_ref: A git commit ID, tag ID or tree ID.
> +            output_dir: The directory where to put the resulting tarball.
> +            tar_compression_format: The compression format to use.
> +        """
>           self._git_ref = git_ref
>           self._tar_compression_format = tar_compression_format
>   
> @@ -204,4 +249,5 @@ def _delete_tarball(self) -> None:
>               os.remove(self._tarball_path)
>   
>       def __fspath__(self) -> str:
> +        """The os.PathLike protocol implementation."""
>           return str(self._tarball_path)


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v5 01/23] dts: code adjustments for doc generation
  2023-11-08 13:35           ` Yoan Picchi
@ 2023-11-15  7:46             ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15  7:46 UTC (permalink / raw)
  To: Yoan Picchi
  Cc: Thomas Monjalon, Honnappa Nagarahalli, Bruce Richardson,
	Jeremy Spewock, Patrick Robb, Paul Szczepanek, dev

On Wed, Nov 8, 2023 at 2:35 PM Yoan Picchi <yoan.picchi@foss.arm.com> wrote:
>
> On 11/6/23 17:15, Juraj Linkeš wrote:
> > The standard Python tool for generating API documentation, Sphinx,
> > imports modules one-by-one when generating the documentation. This
> > requires code changes:
> > * properly guarding argument parsing in the if __name__ == '__main__'
> >    block,
> > * the logger used by DTS runner underwent the same treatment so that it
> >    doesn't create log files outside of a DTS run,
> > * however, DTS uses the arguments to construct an object holding global
> >    variables. The defaults for the global variables needed to be moved
> >    from argument parsing elsewhere,
> > * importing the remote_session module from framework resulted in
> >    circular imports because of one module trying to import another
> >    module. This is fixed by reorganizing the code,
> > * some code reorganization was done because the resulting structure
> >    makes more sense, improving documentation clarity.
> >
> > There are some other changes which are documentation related:
> > * added missing type annotations so they appear in the generated docs,
> > * reordered arguments in some methods,
> > * removed superfluous arguments and attributes,
> > * changed public functions/methods/attributes to private and vice-versa.
> >
> > The above all appear in the generated documentation and, with them,
> > the documentation is improved.
> >
> > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > ---
> >   dts/framework/config/__init__.py              | 10 ++-
> >   dts/framework/dts.py                          | 33 +++++--
> >   dts/framework/exception.py                    | 54 +++++-------
> >   dts/framework/remote_session/__init__.py      | 41 ++++-----
> >   .../interactive_remote_session.py             |  0
> >   .../{remote => }/interactive_shell.py         |  0
> >   .../{remote => }/python_shell.py              |  0
> >   .../remote_session/remote/__init__.py         | 27 ------
> >   .../{remote => }/remote_session.py            |  0
> >   .../{remote => }/ssh_session.py               | 12 +--
> >   .../{remote => }/testpmd_shell.py             |  0
> >   dts/framework/settings.py                     | 87 +++++++++++--------
> >   dts/framework/test_result.py                  |  4 +-
> >   dts/framework/test_suite.py                   |  7 +-
> >   dts/framework/testbed_model/__init__.py       | 12 +--
> >   dts/framework/testbed_model/{hw => }/cpu.py   | 13 +++
> >   dts/framework/testbed_model/hw/__init__.py    | 27 ------
> >   .../linux_session.py                          |  6 +-
> >   dts/framework/testbed_model/node.py           | 26 ++++--
> >   .../os_session.py                             | 22 ++---
> >   dts/framework/testbed_model/{hw => }/port.py  |  0
> >   .../posix_session.py                          |  4 +-
> >   dts/framework/testbed_model/sut_node.py       |  8 +-
> >   dts/framework/testbed_model/tg_node.py        | 30 +------
> >   .../traffic_generator/__init__.py             | 24 +++++
> >   .../capturing_traffic_generator.py            |  6 +-
> >   .../{ => traffic_generator}/scapy.py          | 23 ++---
> >   .../traffic_generator.py                      | 16 +++-
> >   .../testbed_model/{hw => }/virtual_device.py  |  0
> >   dts/framework/utils.py                        | 46 +++-------
> >   dts/main.py                                   |  9 +-
> >   31 files changed, 259 insertions(+), 288 deletions(-)
> >   rename dts/framework/remote_session/{remote => }/interactive_remote_session.py (100%)
> >   rename dts/framework/remote_session/{remote => }/interactive_shell.py (100%)
> >   rename dts/framework/remote_session/{remote => }/python_shell.py (100%)
> >   delete mode 100644 dts/framework/remote_session/remote/__init__.py
> >   rename dts/framework/remote_session/{remote => }/remote_session.py (100%)
> >   rename dts/framework/remote_session/{remote => }/ssh_session.py (91%)
> >   rename dts/framework/remote_session/{remote => }/testpmd_shell.py (100%)
> >   rename dts/framework/testbed_model/{hw => }/cpu.py (95%)
> >   delete mode 100644 dts/framework/testbed_model/hw/__init__.py
> >   rename dts/framework/{remote_session => testbed_model}/linux_session.py (97%)
> >   rename dts/framework/{remote_session => testbed_model}/os_session.py (95%)
> >   rename dts/framework/testbed_model/{hw => }/port.py (100%)
> >   rename dts/framework/{remote_session => testbed_model}/posix_session.py (98%)
> >   create mode 100644 dts/framework/testbed_model/traffic_generator/__init__.py
> >   rename dts/framework/testbed_model/{ => traffic_generator}/capturing_traffic_generator.py (96%)
> >   rename dts/framework/testbed_model/{ => traffic_generator}/scapy.py (95%)
> >   rename dts/framework/testbed_model/{ => traffic_generator}/traffic_generator.py (80%)
> >   rename dts/framework/testbed_model/{hw => }/virtual_device.py (100%)
> >
> > diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
> > index cb7e00ba34..2044c82611 100644
> > --- a/dts/framework/config/__init__.py
> > +++ b/dts/framework/config/__init__.py
> > @@ -17,6 +17,7 @@
> >   import warlock  # type: ignore[import]
> >   import yaml
> >
> > +from framework.exception import ConfigurationError
> >   from framework.settings import SETTINGS
> >   from framework.utils import StrEnum
> >
> > @@ -89,7 +90,7 @@ class TrafficGeneratorConfig:
> >       traffic_generator_type: TrafficGeneratorType
> >
> >       @staticmethod
> > -    def from_dict(d: dict):
> > +    def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig":
>
> This function looks to be designed to support more traffic generators than
> just scapy, so setting its return type to scapy specifically looks
> wrong. Shouldn't it be a more generic traffic generator type? Like you
> did in create_traffic_generator()
>

The reason is the type in the constructor of the scapy traffic
generator - the type there should be ScapyTrafficGeneratorConfig and
if I change it anywhere in the chain, mypy reports an error. I don't
want to do any extra refactoring in this patch if we don't have to, so
we need to rethink this when adding a new traffic generator.
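
To illustrate, here's a minimal standalone sketch of the mypy constraint
(simplified names, not the actual framework code):

    from dataclasses import dataclass


    @dataclass
    class TrafficGeneratorConfig:
        traffic_generator_type: str


    @dataclass
    class ScapyTrafficGeneratorConfig(TrafficGeneratorConfig):
        pass


    class ScapyTrafficGenerator:
        # The constructor is typed with the concrete config class.
        def __init__(self, config: ScapyTrafficGeneratorConfig) -> None:
            self._config = config


    def from_dict(d: dict) -> ScapyTrafficGeneratorConfig:
        return ScapyTrafficGeneratorConfig(traffic_generator_type=d["type"])


    # Annotating from_dict with the base TrafficGeneratorConfig instead
    # makes mypy flag this call, as the constructor expects the subclass.
    tg = ScapyTrafficGenerator(from_dict({"type": "SCAPY"}))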

> >           # This looks useless now, but is designed to allow expansion to traffic
> >           # generators that require more configuration later.
> >           match TrafficGeneratorType(d["type"]):
> > @@ -97,6 +98,10 @@ def from_dict(d: dict):
> >                   return ScapyTrafficGeneratorConfig(
> >                       traffic_generator_type=TrafficGeneratorType.SCAPY
> >                   )
> > +            case _:
> > +                raise ConfigurationError(
> > +                    f'Unknown traffic generator type "{d["type"]}".'
> > +                )
> >
> >
> >   @dataclass(slots=True, frozen=True)

<snip>

> > --- a/dts/framework/settings.py
> > +++ b/dts/framework/settings.py
<small snip>
> > @@ -162,23 +176,22 @@ def _get_parser() -> argparse.ArgumentParser:
> >       return parser
> >
> >
> > -def _get_settings() -> _Settings:
> > +def get_settings() -> Settings:
> >       parsed_args = _get_parser().parse_args()
> > -    return _Settings(
> > +    return Settings(
>
> That means we're parsing and creating a new settings object every time
> we're trying to read the settings? Shouldn't we just save it and return a
> copy? That seems to be the old behavior, any reason to change it?
>

By old behavior, do you mean the behavior from the previous version?

I want the Settings object to be immutable, as much as it can be in
Python (that's why the dataclass is frozen), so that it's clear it
shouldn't be changed during runtime, as the object represents user
choices (any modifications would violate that). More below.

> Related to this, this does mean that the previously created setting
> variable is only used to set up the parser, so it might need to be
> renamed to default_setting if it doesn't get reused.
>

It is used. The reason the SETTINGS variable is implemented this way
is mostly because of Sphinx. Sphinx imports everything file by file:
when it imports a module that uses the SETTINGS variable (such as
node.py), the variable needs to be defined. On top of that, when
Sphinx accesses command line arguments, it sees its own command line
arguments (which are incompatible with DTS), so we need to guard the
command line parsing against imports (we have it in if __name__ ==
"__main__" in main.py). This is why the defaults are split from the
command line parsing - when Sphinx imports the module, it uses the
object with defaults and during runtime we replace the object with
user-defined values.

There are other ways to do this, but I didn't find a better one with
all the constraints and requirements outlined above.
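
A minimal sketch of the pattern (simplified, not the full module): the
module-level object holds import-safe defaults, and get_settings() is
only ever called from the __main__ guard, so Sphinx never triggers
argument parsing:

    # settings.py (sketch)
    from dataclasses import dataclass


    @dataclass(slots=True, frozen=True)
    class Settings:
        output_dir: str = "output"
        timeout: float = 15


    # Defaults, available whenever the module is merely imported.
    SETTINGS: Settings = Settings()


    def get_settings() -> Settings:
        # The real implementation builds this from argparse and
        # environment variables; fixed values stand in here.
        return Settings(output_dir="custom-output", timeout=30)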

> >           config_file_path=parsed_args.config_file,
> >           output_dir=parsed_args.output_dir,
> >           timeout=parsed_args.timeout,
> > -        verbose=(parsed_args.verbose == "Y"),
> > -        skip_setup=(parsed_args.skip_setup == "Y"),
> > +        verbose=parsed_args.verbose,
> > +        skip_setup=parsed_args.skip_setup,
> >           dpdk_tarball_path=Path(
> > -            DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir)
> > -        )
> > -        if not os.path.exists(parsed_args.tarball)
> > -        else Path(parsed_args.tarball),
> > +            Path(DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir))
> > +            if not os.path.exists(parsed_args.tarball)
> > +            else Path(parsed_args.tarball)
> > +        ),
> >           compile_timeout=parsed_args.compile_timeout,
> > -        test_cases=parsed_args.test_cases.split(",") if parsed_args.test_cases else [],
> > +        test_cases=(
> > +            parsed_args.test_cases.split(",") if parsed_args.test_cases else []
> > +        ),
> >           re_run=parsed_args.re_run,
> >       )
> > -
> > -
> > -SETTINGS: _Settings = _get_settings()

<snip>

> > diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
> > index fc01e0bf8e..7571e7b98d 100644
> > --- a/dts/framework/testbed_model/node.py
> > +++ b/dts/framework/testbed_model/node.py
> > @@ -12,23 +12,26 @@
> >   from typing import Any, Callable, Type, Union
> >
> >   from framework.config import (
> > +    OS,
> >       BuildTargetConfiguration,
> >       ExecutionConfiguration,
> >       NodeConfiguration,
> >   )
> > +from framework.exception import ConfigurationError
> >   from framework.logger import DTSLOG, getLogger
> > -from framework.remote_session import InteractiveShellType, OSSession, create_session
> >   from framework.settings import SETTINGS
> >
> > -from .hw import (
> > +from .cpu import (
> >       LogicalCore,
> >       LogicalCoreCount,
> >       LogicalCoreList,
> >       LogicalCoreListFilter,
> > -    VirtualDevice,
> >       lcore_filter,
> >   )
> > -from .hw.port import Port
> > +from .linux_session import LinuxSession
> > +from .os_session import InteractiveShellType, OSSession
> > +from .port import Port
> > +from .virtual_device import VirtualDevice
> >
> >
> >   class Node(ABC):
> > @@ -69,6 +72,7 @@ def __init__(self, node_config: NodeConfiguration):
> >       def _init_ports(self) -> None:
> >           self.ports = [Port(self.name, port_config) for port_config in self.config.ports]
> >           self.main_session.update_ports(self.ports)
> > +
>
> Is the newline intended?
>

Hm, I don't really remember or see a reason for it. I can remove it.

> >           for port in self.ports:
> >               self.configure_port_state(port)
> >
> > @@ -172,9 +176,9 @@ def create_interactive_shell(
> >
> >           return self.main_session.create_interactive_shell(
> >               shell_cls,
> > -            app_args,
> >               timeout,
> >               privileged,
> > +            app_args,
> >           )
> >
> >       def filter_lcores(
> > @@ -205,7 +209,7 @@ def _get_remote_cpus(self) -> None:
> >           self._logger.info("Getting CPU information.")
> >           self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
> >
> > -    def _setup_hugepages(self):
> > +    def _setup_hugepages(self) -> None:
> >           """
> >           Setup hugepages on the Node. Different architectures can supply different
> >           amounts of memory for hugepages and numa-based hugepage allocation may need
> > @@ -249,3 +253,13 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
> >               return lambda *args: None
> >           else:
> >               return func
> > +
> > +
> > +def create_session(
> > +    node_config: NodeConfiguration, name: str, logger: DTSLOG
> > +) -> OSSession:
> > +    match node_config.os:
> > +        case OS.linux:
> > +            return LinuxSession(node_config, name, logger)
> > +        case _:
> > +            raise ConfigurationError(f"Unsupported OS {node_config.os}")
> > diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/testbed_model/os_session.py
> > similarity index 95%
> > rename from dts/framework/remote_session/os_session.py
> > rename to dts/framework/testbed_model/os_session.py
> > index 8a709eac1c..76e595a518 100644
> > --- a/dts/framework/remote_session/os_session.py
> > +++ b/dts/framework/testbed_model/os_session.py
> > @@ -10,19 +10,19 @@
> >
> >   from framework.config import Architecture, NodeConfiguration, NodeInfo
> >   from framework.logger import DTSLOG
> > -from framework.remote_session.remote import InteractiveShell
> > -from framework.settings import SETTINGS
> > -from framework.testbed_model import LogicalCore
> > -from framework.testbed_model.hw.port import Port
> > -from framework.utils import MesonArgs
> > -
> > -from .remote import (
> > +from framework.remote_session import (
> >       CommandResult,
> >       InteractiveRemoteSession,
> > +    InteractiveShell,
> >       RemoteSession,
> >       create_interactive_session,
> >       create_remote_session,
> >   )
> > +from framework.settings import SETTINGS
> > +from framework.utils import MesonArgs
> > +
> > +from .cpu import LogicalCore
> > +from .port import Port
> >
> >   InteractiveShellType = TypeVar("InteractiveShellType", bound=InteractiveShell)
> >
> > @@ -85,9 +85,9 @@ def send_command(
> >       def create_interactive_shell(
> >           self,
> >           shell_cls: Type[InteractiveShellType],
> > -        eal_parameters: str,
> >           timeout: float,
> >           privileged: bool,
> > +        app_args: str,
>
> Is there a reason why the argument position got changed? I'd guess
> because it's more idiomatic to have the extra arg at the end, but I just
> want to make sure it's intended.
>

Yes, this is very much intended. It's here to align the method
signature with the signatures of the rest of the methods called down
the line.
I made this API change during API documentation as the different
signatures of basically the same methods would look terrible in the
docs.

> >       ) -> InteractiveShellType:
> >           """
> >           See "create_interactive_shell" in SutNode

<snip>

> > diff --git a/dts/framework/utils.py b/dts/framework/utils.py
> > index d27c2c5b5f..f0c916471c 100644
> > --- a/dts/framework/utils.py
> > +++ b/dts/framework/utils.py
> > @@ -7,7 +7,6 @@
> >   import json
> >   import os
> >   import subprocess
> > -import sys
> >   from enum import Enum
> >   from pathlib import Path
> >   from subprocess import SubprocessError
> > @@ -16,35 +15,7 @@
> >
> >   from .exception import ConfigurationError
> >
> > -
> > -class StrEnum(Enum):
> > -    @staticmethod
> > -    def _generate_next_value_(
> > -        name: str, start: int, count: int, last_values: object
> > -    ) -> str:
> > -        return name
> > -
> > -    def __str__(self) -> str:
> > -        return self.name
> > -
> > -
> > -REGEX_FOR_PCI_ADDRESS = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/"
> > -
> > -
> > -def check_dts_python_version() -> None:
> > -    if sys.version_info.major < 3 or (
> > -        sys.version_info.major == 3 and sys.version_info.minor < 10
> > -    ):
> > -        print(
> > -            RED(
> > -                (
> > -                    "WARNING: DTS execution node's python version is lower than"
> > -                    "python 3.10, is deprecated and will not work in future releases."
> > -                )
> > -            ),
> > -            file=sys.stderr,
> > -        )
> > -        print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
> > +REGEX_FOR_PCI_ADDRESS: str = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/"
> >
> >
> >   def expand_range(range_str: str) -> list[int]:
> > @@ -67,7 +38,7 @@ def expand_range(range_str: str) -> list[int]:
> >       return expanded_range
> >
> >
> > -def get_packet_summaries(packets: list[Packet]):
> > +def get_packet_summaries(packets: list[Packet]) -> str:
> >       if len(packets) == 1:
> >           packet_summaries = packets[0].summary()
> >       else:
> > @@ -77,8 +48,15 @@ def get_packet_summaries(packets: list[Packet]):
> >       return f"Packet contents: \n{packet_summaries}"
> >
> >
> > -def RED(text: str) -> str:
> > -    return f"\u001B[31;1m{str(text)}\u001B[0m"
> > +class StrEnum(Enum):
> > +    @staticmethod
> > +    def _generate_next_value_(
> > +        name: str, start: int, count: int, last_values: object
> > +    ) -> str:
> > +        return name
>
> I don't understand this function. I don't see it used anywhere. And the
> parameters are unused?
>

This is an internal method of Enum that defines what happens when
auto() is called (which is used plenty).
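
For reference, a small standalone example of the mechanism:

    from enum import Enum, auto


    class StrEnum(Enum):
        # Called by Enum for each member whose value is auto(); returning
        # the member's name makes the value equal to the name.
        @staticmethod
        def _generate_next_value_(
            name: str, start: int, count: int, last_values: object
        ) -> str:
            return name


    class Color(StrEnum):
        red = auto()
        green = auto()


    assert Color.red.value == "red"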

> > +
> > +    def __str__(self) -> str:
> > +        return self.name
> >
> >
> >   class MesonArgs(object):
> > @@ -225,5 +203,5 @@ def _delete_tarball(self) -> None:
> >           if self._tarball_path and os.path.exists(self._tarball_path):
> >               os.remove(self._tarball_path)
> >
> > -    def __fspath__(self):
> > +    def __fspath__(self) -> str:
> >           return str(self._tarball_path)
> > diff --git a/dts/main.py b/dts/main.py
> > index 43311fa847..5d4714b0c3 100755
> > --- a/dts/main.py
> > +++ b/dts/main.py
> > @@ -10,10 +10,17 @@
> >
> >   import logging
> >
> > -from framework import dts
> > +from framework import settings
> >
> >
> >   def main() -> None:
> > +    """Set DTS settings, then run DTS.
> > +
> > +    The DTS settings are taken from the command line arguments and the environment variables.
> > +    """
> > +    settings.SETTINGS = settings.get_settings()
> > +    from framework import dts
>
> Why the import *inside* the main ?
>

This is actually explained in the docstring added in one of the later
patches, so let me copy paste it here:

The DTS settings are taken from the command line arguments and the
environment variables. The settings object is stored in the module-level
variable settings.SETTINGS which the entire framework uses. After
importing the module (or the variable), any changes to the variable are
not going to be reflected without a re-import. This means that the
SETTINGS variable must be modified before the settings module is
imported anywhere else in the framework.
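
The gotcha is standard Python name binding; a minimal illustration
using the DTS module names:

    # main.py (sketch)
    from framework import settings

    # Replace the defaults before anything else imports the variable.
    settings.SETTINGS = settings.get_settings()

    # Deferred import: modules imported from here on read the updated
    # settings.SETTINGS. A top-level "from framework.settings import
    # SETTINGS" elsewhere would have bound that name to the defaults
    # object, and the reassignment above wouldn't rebind it.
    from framework import dts

    dts.run_all()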

> > +
> >       dts.run_all()
> >
> >
>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v6 22/23] dts: add doc generation dependencies
  2023-11-08 16:00             ` Yoan Picchi
@ 2023-11-15 10:00               ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15 10:00 UTC (permalink / raw)
  To: Yoan Picchi
  Cc: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, dev

> I do get a warning while building the doc:
>
> $ poetry install --with docs
>
> [...]
>
> Installing dependencies from lock file
> Warning: poetry.lock is not consistent with pyproject.toml. You may be
> getting improper dependencies. Run `poetry lock [--no-update]` to fix it.

Looks like my version had an improper content-hash. I'll fix it in the
next version.

>
> The doc seems to build fine though

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v6 05/23] dts: settings docstring update
  2023-11-08 16:17             ` Yoan Picchi
@ 2023-11-15 10:09               ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15 10:09 UTC (permalink / raw)
  To: Yoan Picchi
  Cc: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, dev

On Wed, Nov 8, 2023 at 5:17 PM Yoan Picchi <yoan.picchi@foss.arm.com> wrote:
>
> On 11/8/23 12:53, Juraj Linkeš wrote:
> > Format according to the Google format and PEP257, with slight
> > deviations.
> >
> > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > ---
> >   dts/framework/settings.py | 101 +++++++++++++++++++++++++++++++++++++-
> >   1 file changed, 100 insertions(+), 1 deletion(-)
> >
> > diff --git a/dts/framework/settings.py b/dts/framework/settings.py
> > index 7f5841d073..787db7c198 100644
> > --- a/dts/framework/settings.py
> > +++ b/dts/framework/settings.py
> > @@ -3,6 +3,70 @@
> >   # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
> >   # Copyright(c) 2022 University of New Hampshire
> >
> > +"""Environment variables and command line arguments parsing.
> > +
> > +This is a simple module utilizing the built-in argparse module to parse command line arguments,
> > +augment them with values from environment variables and make them available across the framework.
> > +
> > +The command line value takes precedence, followed by the environment variable value,
> > +followed by the default value defined in this module.
> > +
> > +The command line arguments along with the supported environment variables are:
> > +
> > +.. option:: --config-file
> > +.. envvar:: DTS_CFG_FILE
> > +
> > +    The path to the YAML test run configuration file.
> > +
> > +.. option:: --output-dir, --output
> > +.. envvar:: DTS_OUTPUT_DIR
> > +
> > +    The directory where DTS logs and results are saved.
> > +
> > +.. option:: --compile-timeout
> > +.. envvar:: DTS_COMPILE_TIMEOUT
> > +
> > +    The timeout for compiling DPDK.
> > +
> > +.. option:: -t, --timeout
> > +.. envvar:: DTS_TIMEOUT
> > +
> > +    The timeout for all DTS operations except for compiling DPDK.
> > +
> > +.. option:: -v, --verbose
> > +.. envvar:: DTS_VERBOSE
> > +
> > +    Set to any value to enable logging everything to the console.
> > +
> > +.. option:: -s, --skip-setup
> > +.. envvar:: DTS_SKIP_SETUP
> > +
> > +    Set to any value to skip building DPDK.
> > +
> > +.. option:: --tarball, --snapshot, --git-ref
> > +.. envvar:: DTS_DPDK_TARBALL
> > +
> > +    The path to a DPDK tarball, git commit ID, tag ID or tree ID to test.
> > +
> > +.. option:: --test-cases
> > +.. envvar:: DTS_TESTCASES
> > +
> > +    A comma-separated list of test cases to execute. Unknown test cases will be silently ignored.
> > +
> > +.. option:: --re-run, --re_run
> > +.. envvar:: DTS_RERUN
> > +
> > +    Re-run each test case this many times in case of a failure.
> > +
> > +Attributes:
> > +    SETTINGS: The module level variable storing framework-wide DTS settings.
>
> In the generated doc, "Attributes" doesn't appear. It ends up looking
> like SETTINGS is just another environment variable, with no separation
> from the above list.
>

Yes, the Attributes: section is just a syntactical way to tell the
parser to render the attributes in a certain way.
We could add some delimiter or an extra paragraph explaining that what
comes next are module attributes. I'll try to add something.
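
Perhaps something along these lines (a hypothetical sketch of the extra
paragraph, not the final wording):

    .. option:: --re-run, --re_run
    .. envvar:: DTS_RERUN

        Re-run each test case this many times in case of a failure.

    The following is a module-level attribute, not an environment variable:

    Attributes:
        SETTINGS: The module level variable storing framework-wide DTS settings.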

> > +
> > +Typical usage example::
> > +
> > +  from framework.settings import SETTINGS
> > +  foo = SETTINGS.foo
> > +"""
> > +
> >   import argparse
> >   import os
> >   from collections.abc import Callable, Iterable, Sequence
> > @@ -16,6 +80,23 @@
> >
> >
> >   def _env_arg(env_var: str) -> Any:
> > +    """A helper method augmenting the argparse Action with environment variable
> > +
> > +    If the supplied environment variable is defined, then the default value
> > +    of the argument is modified. This satisfies the priority order of
> > +    command line argument > environment variable > default value.
> > +
> > +    Arguments with no values (flags) should be defined using the const keyword argument
> > +    (True or False). When the argument is specified, it will be set to const; if not specified,
> > +    the default will be stored (possibly modified by the corresponding environment variable).
> > +
> > +    Other arguments work the same as default argparse arguments, that is using
> > +    the default 'store' action.
> > +
> > +    Returns:
> > +          The modified argparse.Action.
> > +    """
> > +
> >       class _EnvironmentArgument(argparse.Action):
> >           def __init__(
> >               self,
> > @@ -68,14 +149,28 @@ def __call__(
> >
> >   @dataclass(slots=True)
> >   class Settings:
> > +    """Default framework-wide user settings.
> > +
> > +    The defaults may be modified at the start of the run.
> > +    """
> > +
> > +    #:
> >       config_file_path: Path = Path(__file__).parent.parent.joinpath("conf.yaml")
> > +    #:
> >       output_dir: str = "output"
> > +    #:
> >       timeout: float = 15
> > +    #:
> >       verbose: bool = False
> > +    #:
> >       skip_setup: bool = False
> > +    #:
> >       dpdk_tarball_path: Path | str = "dpdk.tar.xz"
> > +    #:
> >       compile_timeout: float = 1200
> > +    #:
> >       test_cases: list[str] = field(default_factory=list)
> > +    #:
> >       re_run: int = 0
>
> For some reason in the doc, __init__ also appears:
> __init__(config_file_path: ~pathlib.Path = PosixPath('/ho...
>

Yes, the @dataclass decorator adds the constructor so it gets
documented. This is useful so that we see the default values.
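
A quick standalone demonstration of why it shows up:

    from dataclasses import dataclass, field
    from pathlib import Path


    @dataclass(slots=True)
    class Settings:
        config_file_path: Path = Path("conf.yaml")
        test_cases: list[str] = field(default_factory=list)


    # @dataclass synthesizes __init__ from the annotated fields, defaults
    # included, so Sphinx documents it like any hand-written method:
    # __init__(config_file_path: Path = PosixPath('conf.yaml'), ...)
    print(Settings())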

> >
> >
> > @@ -169,7 +264,7 @@ def _get_parser() -> argparse.ArgumentParser:
> >           action=_env_arg("DTS_RERUN"),
> >           default=SETTINGS.re_run,
> >           type=int,
> > -        help="[DTS_RERUN] Re-run each test case the specified amount of times "
> > +        help="[DTS_RERUN] Re-run each test case the specified number of times "
> >           "if a test failure occurs",
> >       )
> >
> > @@ -177,6 +272,10 @@ def _get_parser() -> argparse.ArgumentParser:
> >
> >
> >   def get_settings() -> Settings:
> > +    """Create new settings with inputs from the user.
> > +
> > +    The inputs are taken from the command line and from environment variables.
> > +    """
> >       parsed_args = _get_parser().parse_args()
> >       return Settings(
> >           config_file_path=parsed_args.config_file,
>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v6 06/23] dts: logger and settings docstring update
  2023-11-08 17:14             ` Yoan Picchi
@ 2023-11-15 10:11               ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15 10:11 UTC (permalink / raw)
  To: Yoan Picchi
  Cc: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, dev

On Wed, Nov 8, 2023 at 6:14 PM Yoan Picchi <yoan.picchi@foss.arm.com> wrote:
>
> On 11/8/23 12:53, Juraj Linkeš wrote:
> > Format according to the Google format and PEP257, with slight
> > deviations.
> >
> > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > ---
> >   dts/framework/logger.py | 72 +++++++++++++++++++++----------
> >   dts/framework/utils.py  | 96 ++++++++++++++++++++++++++++++-----------
> >   2 files changed, 121 insertions(+), 47 deletions(-)
> >

<snip>

> > diff --git a/dts/framework/utils.py b/dts/framework/utils.py
> > index f0c916471c..0613adf7ad 100644
> > --- a/dts/framework/utils.py
> > +++ b/dts/framework/utils.py
<snip>
> > @@ -93,35 +129,33 @@ class _TarCompressionFormat(StrEnum):
> >       and Enum values are the associated file extensions.
> >       """
> >
> > +    #:
> >       gzip = "gz"
> > +    #:
> >       compress = "Z"
> > +    #:
> >       bzip2 = "bz2"
> > +    #:
> >       lzip = "lz"
> > +    #:
> >       lzma = "lzma"
> > +    #:
> >       lzop = "lzo"
> > +    #:
> >       xz = "xz"
> > +    #:
> >       zstd = "zst"
>
> Just to be sure, _TarCompressionFormat doesn't appear in the doc
> (framework.utils.html). I believe that's intended (because of the _), but
> then I don't think the #: comments are used for anything.
>

Good point, I'll remove the comments as they're just clutter when not
used for doc generation.
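
For contrast, on a public class the #: comments do show up - Sphinx
autodoc attaches each one to the member that follows it (an
illustrative snippet, not the actual module):

    from enum import Enum


    class TarCompressionFormat(Enum):
        #: Gzip compression, ".gz" file extension.
        gzip = "gz"
        #: XZ compression, ".xz" file extension.
        xz = "xz"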

^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v7 00/21] dts: docstrings update
  2023-11-08 12:53           ` [PATCH v6 23/23] dts: add doc generation Juraj Linkeš
@ 2023-11-15 13:09             ` Juraj Linkeš
  2023-11-15 13:09               ` [PATCH v7 01/21] dts: code adjustments for doc generation Juraj Linkeš
                                 ` (21 more replies)
  2023-11-15 13:36             ` [PATCH v1 0/2] dts: api docs generation Juraj Linkeš
  1 sibling, 22 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15 13:09 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi
  Cc: dev, Juraj Linkeš

The first commit makes changes to the code. These code changes mainly
change the structure of the code so that the actual API docs generation
works. There are also some code changes which get reflected in the
documentation, such as making functions/methods/attributes private or
public.

The rest of the commits deal with the actual docstring documentation
(from which the API docs are generated). The format of the docstrings
is the Google format [0] with PEP257 [1] and some guidelines captured
in the last commit of this group covering what the Google format
doesn't.
The docstring updates are split into many commits to make review
possible. When accepted, they may be squashed.
The docstrings have been composed in anticipation of [2], adhering to
maximum line length of 100. We don't have a tool for automatic docstring
formatting, hence the usage of 100 right away to save time.

NOTE: The logger.py module is not fully documented, as it's being
refactored and the refactor will be submitted in the near future.
Documenting it now seems unnecessary.

[0] https://google.github.io/styleguide/pyguide.html#s3.8.4-comments-in-classes
[1] https://peps.python.org/pep-0257/
[2] https://patches.dpdk.org/project/dpdk/list/?series=29844

v7:
Split the series into docstrings and api docs generation and addressed
comments.

Juraj Linkeš (21):
  dts: code adjustments for doc generation
  dts: add docstring checker
  dts: add basic developer docs
  dts: exceptions docstring update
  dts: settings docstring update
  dts: logger and utils docstring update
  dts: dts runner and main docstring update
  dts: test suite docstring update
  dts: test result docstring update
  dts: config docstring update
  dts: remote session docstring update
  dts: interactive remote session docstring update
  dts: port and virtual device docstring update
  dts: cpu docstring update
  dts: os session docstring update
  dts: posix and linux sessions docstring update
  dts: node docstring update
  dts: sut and tg nodes docstring update
  dts: base traffic generators docstring update
  dts: scapy tg docstring update
  dts: test suites docstring update

 doc/guides/tools/dts.rst                      |  73 +++
 dts/framework/__init__.py                     |  12 +-
 dts/framework/config/__init__.py              | 379 +++++++++++++---
 dts/framework/config/types.py                 | 132 ++++++
 dts/framework/dts.py                          | 161 +++++--
 dts/framework/exception.py                    | 156 ++++---
 dts/framework/logger.py                       |  72 ++-
 dts/framework/remote_session/__init__.py      |  80 ++--
 .../interactive_remote_session.py             |  36 +-
 .../remote_session/interactive_shell.py       | 152 +++++++
 dts/framework/remote_session/os_session.py    | 284 ------------
 dts/framework/remote_session/python_shell.py  |  32 ++
 .../remote_session/remote/__init__.py         |  27 --
 .../remote/interactive_shell.py               | 133 ------
 .../remote_session/remote/python_shell.py     |  12 -
 .../remote_session/remote/remote_session.py   | 172 -------
 .../remote_session/remote/testpmd_shell.py    |  49 --
 .../remote_session/remote_session.py          | 232 ++++++++++
 .../{remote => }/ssh_session.py               |  28 +-
 dts/framework/remote_session/testpmd_shell.py |  86 ++++
 dts/framework/settings.py                     | 190 ++++++--
 dts/framework/test_result.py                  | 296 +++++++++---
 dts/framework/test_suite.py                   | 230 +++++++---
 dts/framework/testbed_model/__init__.py       |  28 +-
 dts/framework/testbed_model/{hw => }/cpu.py   | 209 ++++++---
 dts/framework/testbed_model/hw/__init__.py    |  27 --
 dts/framework/testbed_model/hw/port.py        |  60 ---
 .../testbed_model/hw/virtual_device.py        |  16 -
 .../linux_session.py                          |  69 ++-
 dts/framework/testbed_model/node.py           | 216 ++++++---
 dts/framework/testbed_model/os_session.py     | 425 ++++++++++++++++++
 dts/framework/testbed_model/port.py           |  93 ++++
 .../posix_session.py                          |  85 +++-
 dts/framework/testbed_model/sut_node.py       | 232 ++++++----
 dts/framework/testbed_model/tg_node.py        |  70 ++-
 .../testbed_model/traffic_generator.py        |  72 ---
 .../traffic_generator/__init__.py             |  44 ++
 .../capturing_traffic_generator.py            |  52 ++-
 .../{ => traffic_generator}/scapy.py          | 114 ++---
 .../traffic_generator/traffic_generator.py    |  87 ++++
 dts/framework/testbed_model/virtual_device.py |  29 ++
 dts/framework/utils.py                        | 128 +++---
 dts/main.py                                   |  17 +-
 dts/poetry.lock                               |  12 +-
 dts/pyproject.toml                            |   6 +-
 dts/tests/TestSuite_hello_world.py            |  16 +-
 dts/tests/TestSuite_os_udp.py                 |  19 +-
 dts/tests/TestSuite_smoke_tests.py            |  53 ++-
 48 files changed, 3511 insertions(+), 1692 deletions(-)
 create mode 100644 dts/framework/config/types.py
 rename dts/framework/remote_session/{remote => }/interactive_remote_session.py (76%)
 create mode 100644 dts/framework/remote_session/interactive_shell.py
 delete mode 100644 dts/framework/remote_session/os_session.py
 create mode 100644 dts/framework/remote_session/python_shell.py
 delete mode 100644 dts/framework/remote_session/remote/__init__.py
 delete mode 100644 dts/framework/remote_session/remote/interactive_shell.py
 delete mode 100644 dts/framework/remote_session/remote/python_shell.py
 delete mode 100644 dts/framework/remote_session/remote/remote_session.py
 delete mode 100644 dts/framework/remote_session/remote/testpmd_shell.py
 create mode 100644 dts/framework/remote_session/remote_session.py
 rename dts/framework/remote_session/{remote => }/ssh_session.py (83%)
 create mode 100644 dts/framework/remote_session/testpmd_shell.py
 rename dts/framework/testbed_model/{hw => }/cpu.py (50%)
 delete mode 100644 dts/framework/testbed_model/hw/__init__.py
 delete mode 100644 dts/framework/testbed_model/hw/port.py
 delete mode 100644 dts/framework/testbed_model/hw/virtual_device.py
 rename dts/framework/{remote_session => testbed_model}/linux_session.py (79%)
 create mode 100644 dts/framework/testbed_model/os_session.py
 create mode 100644 dts/framework/testbed_model/port.py
 rename dts/framework/{remote_session => testbed_model}/posix_session.py (74%)
 delete mode 100644 dts/framework/testbed_model/traffic_generator.py
 create mode 100644 dts/framework/testbed_model/traffic_generator/__init__.py
 rename dts/framework/testbed_model/{ => traffic_generator}/capturing_traffic_generator.py (66%)
 rename dts/framework/testbed_model/{ => traffic_generator}/scapy.py (71%)
 create mode 100644 dts/framework/testbed_model/traffic_generator/traffic_generator.py
 create mode 100644 dts/framework/testbed_model/virtual_device.py

-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v7 01/21] dts: code adjustments for doc generation
  2023-11-15 13:09             ` [PATCH v7 00/21] dts: docstrings update Juraj Linkeš
@ 2023-11-15 13:09               ` Juraj Linkeš
  2023-11-16 21:04                 ` Jeremy Spewock
  2023-11-20 16:02                 ` Yoan Picchi
  2023-11-15 13:09               ` [PATCH v7 02/21] dts: add docstring checker Juraj Linkeš
                                 ` (20 subsequent siblings)
  21 siblings, 2 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15 13:09 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi
  Cc: dev, Juraj Linkeš

The standard Python tool for generating API documentation, Sphinx,
imports modules one-by-one when generating the documentation. This
requires code changes:
* properly guarding argument parsing in the if __name__ == '__main__'
  block,
* the logger used by DTS runner underwent the same treatment so that it
  doesn't create log files outside of a DTS run,
* however, DTS uses the arguments to construct an object holding global
  variables. The defaults for the global variables needed to be moved
  from argument parsing elsewhere,
* importing the remote_session module from framework resulted in
  circular imports because of one module trying to import another
  module. This is fixed by reorganizing the code,
* some code reorganization was done because the resulting structure
  makes more sense, improving documentation clarity.

There are some other changes which are documentation related:
* added missing type annotations so they appear in the generated docs,
* reordered arguments in some methods,
* removed superfluous arguments and attributes,
* changed public functions/methods/attributes to private and vice-versa.

The above all appear in the generated documentation and, with them,
the documentation is improved.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/config/__init__.py              | 10 ++-
 dts/framework/dts.py                          | 33 +++++--
 dts/framework/exception.py                    | 54 +++++-------
 dts/framework/remote_session/__init__.py      | 41 ++++-----
 .../interactive_remote_session.py             |  0
 .../{remote => }/interactive_shell.py         |  0
 .../{remote => }/python_shell.py              |  0
 .../remote_session/remote/__init__.py         | 27 ------
 .../{remote => }/remote_session.py            |  0
 .../{remote => }/ssh_session.py               | 12 +--
 .../{remote => }/testpmd_shell.py             |  0
 dts/framework/settings.py                     | 87 +++++++++++--------
 dts/framework/test_result.py                  |  4 +-
 dts/framework/test_suite.py                   |  7 +-
 dts/framework/testbed_model/__init__.py       | 12 +--
 dts/framework/testbed_model/{hw => }/cpu.py   | 13 +++
 dts/framework/testbed_model/hw/__init__.py    | 27 ------
 .../linux_session.py                          |  6 +-
 dts/framework/testbed_model/node.py           | 25 ++++--
 .../os_session.py                             | 22 ++---
 dts/framework/testbed_model/{hw => }/port.py  |  0
 .../posix_session.py                          |  4 +-
 dts/framework/testbed_model/sut_node.py       |  8 +-
 dts/framework/testbed_model/tg_node.py        | 30 +------
 .../traffic_generator/__init__.py             | 24 +++++
 .../capturing_traffic_generator.py            |  6 +-
 .../{ => traffic_generator}/scapy.py          | 23 ++---
 .../traffic_generator.py                      | 16 +++-
 .../testbed_model/{hw => }/virtual_device.py  |  0
 dts/framework/utils.py                        | 46 +++-------
 dts/main.py                                   |  9 +-
 31 files changed, 258 insertions(+), 288 deletions(-)
 rename dts/framework/remote_session/{remote => }/interactive_remote_session.py (100%)
 rename dts/framework/remote_session/{remote => }/interactive_shell.py (100%)
 rename dts/framework/remote_session/{remote => }/python_shell.py (100%)
 delete mode 100644 dts/framework/remote_session/remote/__init__.py
 rename dts/framework/remote_session/{remote => }/remote_session.py (100%)
 rename dts/framework/remote_session/{remote => }/ssh_session.py (91%)
 rename dts/framework/remote_session/{remote => }/testpmd_shell.py (100%)
 rename dts/framework/testbed_model/{hw => }/cpu.py (95%)
 delete mode 100644 dts/framework/testbed_model/hw/__init__.py
 rename dts/framework/{remote_session => testbed_model}/linux_session.py (97%)
 rename dts/framework/{remote_session => testbed_model}/os_session.py (95%)
 rename dts/framework/testbed_model/{hw => }/port.py (100%)
 rename dts/framework/{remote_session => testbed_model}/posix_session.py (98%)
 create mode 100644 dts/framework/testbed_model/traffic_generator/__init__.py
 rename dts/framework/testbed_model/{ => traffic_generator}/capturing_traffic_generator.py (96%)
 rename dts/framework/testbed_model/{ => traffic_generator}/scapy.py (95%)
 rename dts/framework/testbed_model/{ => traffic_generator}/traffic_generator.py (80%)
 rename dts/framework/testbed_model/{hw => }/virtual_device.py (100%)

diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index cb7e00ba34..2044c82611 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -17,6 +17,7 @@
 import warlock  # type: ignore[import]
 import yaml
 
+from framework.exception import ConfigurationError
 from framework.settings import SETTINGS
 from framework.utils import StrEnum
 
@@ -89,7 +90,7 @@ class TrafficGeneratorConfig:
     traffic_generator_type: TrafficGeneratorType
 
     @staticmethod
-    def from_dict(d: dict):
+    def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig":
         # This looks useless now, but is designed to allow expansion to traffic
         # generators that require more configuration later.
         match TrafficGeneratorType(d["type"]):
@@ -97,6 +98,10 @@ def from_dict(d: dict):
                 return ScapyTrafficGeneratorConfig(
                     traffic_generator_type=TrafficGeneratorType.SCAPY
                 )
+            case _:
+                raise ConfigurationError(
+                    f'Unknown traffic generator type "{d["type"]}".'
+                )
 
 
 @dataclass(slots=True, frozen=True)
@@ -324,6 +329,3 @@ def load_config() -> Configuration:
     config: dict[str, Any] = warlock.model_factory(schema, name="_Config")(config_data)
     config_obj: Configuration = Configuration.from_dict(dict(config))
     return config_obj
-
-
-CONFIGURATION = load_config()
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index f773f0c38d..4c7fb0c40a 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -6,19 +6,19 @@
 import sys
 
 from .config import (
-    CONFIGURATION,
     BuildTargetConfiguration,
     ExecutionConfiguration,
     TestSuiteConfig,
+    load_config,
 )
 from .exception import BlockingTestSuiteError
 from .logger import DTSLOG, getLogger
 from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result
 from .test_suite import get_test_suites
 from .testbed_model import SutNode, TGNode
-from .utils import check_dts_python_version
 
-dts_logger: DTSLOG = getLogger("DTSRunner")
+# dummy defaults to satisfy linters
+dts_logger: DTSLOG = None  # type: ignore[assignment]
 result: DTSResult = DTSResult(dts_logger)
 
 
@@ -30,14 +30,18 @@ def run_all() -> None:
     global dts_logger
     global result
 
+    # create a regular DTS logger and create a new result with it
+    dts_logger = getLogger("DTSRunner")
+    result = DTSResult(dts_logger)
+
     # check the python version of the server that run dts
-    check_dts_python_version()
+    _check_dts_python_version()
 
     sut_nodes: dict[str, SutNode] = {}
     tg_nodes: dict[str, TGNode] = {}
     try:
         # for all Execution sections
-        for execution in CONFIGURATION.executions:
+        for execution in load_config().executions:
             sut_node = sut_nodes.get(execution.system_under_test_node.name)
             tg_node = tg_nodes.get(execution.traffic_generator_node.name)
 
@@ -82,6 +86,25 @@ def run_all() -> None:
     _exit_dts()
 
 
+def _check_dts_python_version() -> None:
+    def RED(text: str) -> str:
+        return f"\u001B[31;1m{str(text)}\u001B[0m"
+
+    if sys.version_info.major < 3 or (
+        sys.version_info.major == 3 and sys.version_info.minor < 10
+    ):
+        print(
+            RED(
+                (
+                    "WARNING: DTS execution node's python version is lower than"
+                    "python 3.10, is deprecated and will not work in future releases."
+                )
+            ),
+            file=sys.stderr,
+        )
+        print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
+
+
 def _run_execution(
     sut_node: SutNode,
     tg_node: TGNode,
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index 001a5a5496..7489c03570 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -42,19 +42,14 @@ class SSHTimeoutError(DTSError):
     Command execution timeout.
     """
 
-    command: str
-    output: str
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
+    _command: str
 
-    def __init__(self, command: str, output: str):
-        self.command = command
-        self.output = output
+    def __init__(self, command: str):
+        self._command = command
 
     def __str__(self) -> str:
-        return f"TIMEOUT on {self.command}"
-
-    def get_output(self) -> str:
-        return self.output
+        return f"TIMEOUT on {self._command}"
 
 
 class SSHConnectionError(DTSError):
@@ -62,18 +57,18 @@ class SSHConnectionError(DTSError):
     SSH connection error.
     """
 
-    host: str
-    errors: list[str]
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
+    _host: str
+    _errors: list[str]
 
     def __init__(self, host: str, errors: list[str] | None = None):
-        self.host = host
-        self.errors = [] if errors is None else errors
+        self._host = host
+        self._errors = [] if errors is None else errors
 
     def __str__(self) -> str:
-        message = f"Error trying to connect with {self.host}."
-        if self.errors:
-            message += f" Errors encountered while retrying: {', '.join(self.errors)}"
+        message = f"Error trying to connect with {self._host}."
+        if self._errors:
+            message += f" Errors encountered while retrying: {', '.join(self._errors)}"
 
         return message
 
@@ -84,14 +79,14 @@ class SSHSessionDeadError(DTSError):
     It can no longer be used.
     """
 
-    host: str
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
+    _host: str
 
     def __init__(self, host: str):
-        self.host = host
+        self._host = host
 
     def __str__(self) -> str:
-        return f"SSH session with {self.host} has died"
+        return f"SSH session with {self._host} has died"
 
 
 class ConfigurationError(DTSError):
@@ -107,18 +102,18 @@ class RemoteCommandExecutionError(DTSError):
     Raised when a command executed on a Node returns a non-zero exit status.
     """
 
-    command: str
-    command_return_code: int
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
+    command: str
+    _command_return_code: int
 
     def __init__(self, command: str, command_return_code: int):
         self.command = command
-        self.command_return_code = command_return_code
+        self._command_return_code = command_return_code
 
     def __str__(self) -> str:
         return (
             f"Command {self.command} returned a non-zero exit code: "
-            f"{self.command_return_code}"
+            f"{self._command_return_code}"
         )
 
 
@@ -143,22 +138,15 @@ class TestCaseVerifyError(DTSError):
     Used in test cases to verify the expected behavior.
     """
 
-    value: str
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR
 
-    def __init__(self, value: str):
-        self.value = value
-
-    def __str__(self) -> str:
-        return repr(self.value)
-
 
 class BlockingTestSuiteError(DTSError):
-    suite_name: str
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR
+    _suite_name: str
 
     def __init__(self, suite_name: str) -> None:
-        self.suite_name = suite_name
+        self._suite_name = suite_name
 
     def __str__(self) -> str:
-        return f"Blocking suite {self.suite_name} failed."
+        return f"Blocking suite {self._suite_name} failed."
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index 00b6d1f03a..5e7ddb2b05 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -12,29 +12,24 @@
 
 # pylama:ignore=W0611
 
-from framework.config import OS, NodeConfiguration
-from framework.exception import ConfigurationError
+from framework.config import NodeConfiguration
 from framework.logger import DTSLOG
 
-from .linux_session import LinuxSession
-from .os_session import InteractiveShellType, OSSession
-from .remote import (
-    CommandResult,
-    InteractiveRemoteSession,
-    InteractiveShell,
-    PythonShell,
-    RemoteSession,
-    SSHSession,
-    TestPmdDevice,
-    TestPmdShell,
-)
-
-
-def create_session(
+from .interactive_remote_session import InteractiveRemoteSession
+from .interactive_shell import InteractiveShell
+from .python_shell import PythonShell
+from .remote_session import CommandResult, RemoteSession
+from .ssh_session import SSHSession
+from .testpmd_shell import TestPmdShell
+
+
+def create_remote_session(
     node_config: NodeConfiguration, name: str, logger: DTSLOG
-) -> OSSession:
-    match node_config.os:
-        case OS.linux:
-            return LinuxSession(node_config, name, logger)
-        case _:
-            raise ConfigurationError(f"Unsupported OS {node_config.os}")
+) -> RemoteSession:
+    return SSHSession(node_config, name, logger)
+
+
+def create_interactive_session(
+    node_config: NodeConfiguration, logger: DTSLOG
+) -> InteractiveRemoteSession:
+    return InteractiveRemoteSession(node_config, logger)
diff --git a/dts/framework/remote_session/remote/interactive_remote_session.py b/dts/framework/remote_session/interactive_remote_session.py
similarity index 100%
rename from dts/framework/remote_session/remote/interactive_remote_session.py
rename to dts/framework/remote_session/interactive_remote_session.py
diff --git a/dts/framework/remote_session/remote/interactive_shell.py b/dts/framework/remote_session/interactive_shell.py
similarity index 100%
rename from dts/framework/remote_session/remote/interactive_shell.py
rename to dts/framework/remote_session/interactive_shell.py
diff --git a/dts/framework/remote_session/remote/python_shell.py b/dts/framework/remote_session/python_shell.py
similarity index 100%
rename from dts/framework/remote_session/remote/python_shell.py
rename to dts/framework/remote_session/python_shell.py
diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
deleted file mode 100644
index 06403691a5..0000000000
--- a/dts/framework/remote_session/remote/__init__.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2023 PANTHEON.tech s.r.o.
-# Copyright(c) 2023 University of New Hampshire
-
-# pylama:ignore=W0611
-
-from framework.config import NodeConfiguration
-from framework.logger import DTSLOG
-
-from .interactive_remote_session import InteractiveRemoteSession
-from .interactive_shell import InteractiveShell
-from .python_shell import PythonShell
-from .remote_session import CommandResult, RemoteSession
-from .ssh_session import SSHSession
-from .testpmd_shell import TestPmdDevice, TestPmdShell
-
-
-def create_remote_session(
-    node_config: NodeConfiguration, name: str, logger: DTSLOG
-) -> RemoteSession:
-    return SSHSession(node_config, name, logger)
-
-
-def create_interactive_session(
-    node_config: NodeConfiguration, logger: DTSLOG
-) -> InteractiveRemoteSession:
-    return InteractiveRemoteSession(node_config, logger)
diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote_session.py
similarity index 100%
rename from dts/framework/remote_session/remote/remote_session.py
rename to dts/framework/remote_session/remote_session.py
diff --git a/dts/framework/remote_session/remote/ssh_session.py b/dts/framework/remote_session/ssh_session.py
similarity index 91%
rename from dts/framework/remote_session/remote/ssh_session.py
rename to dts/framework/remote_session/ssh_session.py
index 8d127f1601..cee11d14d6 100644
--- a/dts/framework/remote_session/remote/ssh_session.py
+++ b/dts/framework/remote_session/ssh_session.py
@@ -18,9 +18,7 @@
     SSHException,
 )
 
-from framework.config import NodeConfiguration
 from framework.exception import SSHConnectionError, SSHSessionDeadError, SSHTimeoutError
-from framework.logger import DTSLOG
 
 from .remote_session import CommandResult, RemoteSession
 
@@ -45,14 +43,6 @@ class SSHSession(RemoteSession):
 
     session: Connection
 
-    def __init__(
-        self,
-        node_config: NodeConfiguration,
-        session_name: str,
-        logger: DTSLOG,
-    ):
-        super(SSHSession, self).__init__(node_config, session_name, logger)
-
     def _connect(self) -> None:
         errors = []
         retry_attempts = 10
@@ -117,7 +107,7 @@ def _send_command(
 
         except CommandTimedOut as e:
             self._logger.exception(e)
-            raise SSHTimeoutError(command, e.result.stderr) from e
+            raise SSHTimeoutError(command) from e
 
         return CommandResult(
             self.name, command, output.stdout, output.stderr, output.return_code
diff --git a/dts/framework/remote_session/remote/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py
similarity index 100%
rename from dts/framework/remote_session/remote/testpmd_shell.py
rename to dts/framework/remote_session/testpmd_shell.py
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index cfa39d011b..7f5841d073 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -6,7 +6,7 @@
 import argparse
 import os
 from collections.abc import Callable, Iterable, Sequence
-from dataclasses import dataclass
+from dataclasses import dataclass, field
 from pathlib import Path
 from typing import Any, TypeVar
 
@@ -22,8 +22,8 @@ def __init__(
             option_strings: Sequence[str],
             dest: str,
             nargs: str | int | None = None,
-            const: str | None = None,
-            default: str = None,
+            const: bool | None = None,
+            default: Any = None,
             type: Callable[[str], _T | argparse.FileType | None] = None,
             choices: Iterable[_T] | None = None,
             required: bool = False,
@@ -32,6 +32,12 @@ def __init__(
         ) -> None:
             env_var_value = os.environ.get(env_var)
             default = env_var_value or default
+            if const is not None:
+                nargs = 0
+                default = const if env_var_value else default
+                type = None
+                choices = None
+                metavar = None
             super(_EnvironmentArgument, self).__init__(
                 option_strings,
                 dest,
@@ -52,22 +58,28 @@ def __call__(
             values: Any,
             option_string: str = None,
         ) -> None:
-            setattr(namespace, self.dest, values)
+            if self.const is not None:
+                setattr(namespace, self.dest, self.const)
+            else:
+                setattr(namespace, self.dest, values)
 
     return _EnvironmentArgument
 
 
-@dataclass(slots=True, frozen=True)
-class _Settings:
-    config_file_path: str
-    output_dir: str
-    timeout: float
-    verbose: bool
-    skip_setup: bool
-    dpdk_tarball_path: Path
-    compile_timeout: float
-    test_cases: list
-    re_run: int
+@dataclass(slots=True)
+class Settings:
+    config_file_path: Path = Path(__file__).parent.parent.joinpath("conf.yaml")
+    output_dir: str = "output"
+    timeout: float = 15
+    verbose: bool = False
+    skip_setup: bool = False
+    dpdk_tarball_path: Path | str = "dpdk.tar.xz"
+    compile_timeout: float = 1200
+    test_cases: list[str] = field(default_factory=list)
+    re_run: int = 0
+
+
+SETTINGS: Settings = Settings()
 
 
 def _get_parser() -> argparse.ArgumentParser:
@@ -81,7 +93,8 @@ def _get_parser() -> argparse.ArgumentParser:
     parser.add_argument(
         "--config-file",
         action=_env_arg("DTS_CFG_FILE"),
-        default="conf.yaml",
+        default=SETTINGS.config_file_path,
+        type=Path,
         help="[DTS_CFG_FILE] configuration file that describes the test cases, SUTs "
         "and targets.",
     )
@@ -90,7 +103,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--output-dir",
         "--output",
         action=_env_arg("DTS_OUTPUT_DIR"),
-        default="output",
+        default=SETTINGS.output_dir,
         help="[DTS_OUTPUT_DIR] Output directory where dts logs and results are saved.",
     )
 
@@ -98,7 +111,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "-t",
         "--timeout",
         action=_env_arg("DTS_TIMEOUT"),
-        default=15,
+        default=SETTINGS.timeout,
         type=float,
         help="[DTS_TIMEOUT] The default timeout for all DTS operations except for "
         "compiling DPDK.",
@@ -108,8 +121,9 @@ def _get_parser() -> argparse.ArgumentParser:
         "-v",
         "--verbose",
         action=_env_arg("DTS_VERBOSE"),
-        default="N",
-        help="[DTS_VERBOSE] Set to 'Y' to enable verbose output, logging all messages "
+        default=SETTINGS.verbose,
+        const=True,
+        help="[DTS_VERBOSE] Specify to enable verbose output, logging all messages "
         "to the console.",
     )
 
@@ -117,8 +131,8 @@ def _get_parser() -> argparse.ArgumentParser:
         "-s",
         "--skip-setup",
         action=_env_arg("DTS_SKIP_SETUP"),
-        default="N",
-        help="[DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT and TG nodes.",
+        const=True,
+        help="[DTS_SKIP_SETUP] Specify to skip all setup steps on SUT and TG nodes.",
     )
 
     parser.add_argument(
@@ -126,7 +140,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--snapshot",
         "--git-ref",
         action=_env_arg("DTS_DPDK_TARBALL"),
-        default="dpdk.tar.xz",
+        default=SETTINGS.dpdk_tarball_path,
         type=Path,
         help="[DTS_DPDK_TARBALL] Path to DPDK source code tarball or a git commit ID, "
         "tag ID or tree ID to test. To test local changes, first commit them, "
@@ -136,7 +150,7 @@ def _get_parser() -> argparse.ArgumentParser:
     parser.add_argument(
         "--compile-timeout",
         action=_env_arg("DTS_COMPILE_TIMEOUT"),
-        default=1200,
+        default=SETTINGS.compile_timeout,
         type=float,
         help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
     )
@@ -153,7 +167,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--re-run",
         "--re_run",
         action=_env_arg("DTS_RERUN"),
-        default=0,
+        default=SETTINGS.re_run,
         type=int,
         help="[DTS_RERUN] Re-run each test case the specified amount of times "
         "if a test failure occurs",
@@ -162,23 +176,22 @@ def _get_parser() -> argparse.ArgumentParser:
     return parser
 
 
-def _get_settings() -> _Settings:
+def get_settings() -> Settings:
     parsed_args = _get_parser().parse_args()
-    return _Settings(
+    return Settings(
         config_file_path=parsed_args.config_file,
         output_dir=parsed_args.output_dir,
         timeout=parsed_args.timeout,
-        verbose=(parsed_args.verbose == "Y"),
-        skip_setup=(parsed_args.skip_setup == "Y"),
+        verbose=parsed_args.verbose,
+        skip_setup=parsed_args.skip_setup,
         dpdk_tarball_path=Path(
-            DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir)
-        )
-        if not os.path.exists(parsed_args.tarball)
-        else Path(parsed_args.tarball),
+            Path(DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir))
+            if not os.path.exists(parsed_args.tarball)
+            else Path(parsed_args.tarball)
+        ),
         compile_timeout=parsed_args.compile_timeout,
-        test_cases=parsed_args.test_cases.split(",") if parsed_args.test_cases else [],
+        test_cases=(
+            parsed_args.test_cases.split(",") if parsed_args.test_cases else []
+        ),
         re_run=parsed_args.re_run,
     )
-
-
-SETTINGS: _Settings = _get_settings()
diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
index f0fbe80f6f..603e18872c 100644
--- a/dts/framework/test_result.py
+++ b/dts/framework/test_result.py
@@ -254,7 +254,7 @@ def add_build_target(
         self._inner_results.append(build_target_result)
         return build_target_result
 
-    def add_sut_info(self, sut_info: NodeInfo):
+    def add_sut_info(self, sut_info: NodeInfo) -> None:
         self.sut_os_name = sut_info.os_name
         self.sut_os_version = sut_info.os_version
         self.sut_kernel_version = sut_info.kernel_version
@@ -297,7 +297,7 @@ def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult:
         self._inner_results.append(execution_result)
         return execution_result
 
-    def add_error(self, error) -> None:
+    def add_error(self, error: Exception) -> None:
         self._errors.append(error)
 
     def process(self) -> None:
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index 3b890c0451..d53553bf34 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -11,7 +11,7 @@
 import re
 from ipaddress import IPv4Interface, IPv6Interface, ip_interface
 from types import MethodType
-from typing import Union
+from typing import Any, Union
 
 from scapy.layers.inet import IP  # type: ignore[import]
 from scapy.layers.l2 import Ether  # type: ignore[import]
@@ -26,8 +26,7 @@
 from .logger import DTSLOG, getLogger
 from .settings import SETTINGS
 from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult
-from .testbed_model import SutNode, TGNode
-from .testbed_model.hw.port import Port, PortLink
+from .testbed_model import Port, PortLink, SutNode, TGNode
 from .utils import get_packet_summaries
 
 
@@ -453,7 +452,7 @@ def _execute_test_case(
 
 
 def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
-    def is_test_suite(object) -> bool:
+    def is_test_suite(object: Any) -> bool:
         try:
             if issubclass(object, TestSuite) and object is not TestSuite:
                 return True
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index 5cbb859e47..8ced05653b 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -9,15 +9,9 @@
 
 # pylama:ignore=W0611
 
-from .hw import (
-    LogicalCore,
-    LogicalCoreCount,
-    LogicalCoreCountFilter,
-    LogicalCoreList,
-    LogicalCoreListFilter,
-    VirtualDevice,
-    lcore_filter,
-)
+from .cpu import LogicalCoreCount, LogicalCoreCountFilter, LogicalCoreList
 from .node import Node
+from .port import Port, PortLink
 from .sut_node import SutNode
 from .tg_node import TGNode
+from .virtual_device import VirtualDevice
diff --git a/dts/framework/testbed_model/hw/cpu.py b/dts/framework/testbed_model/cpu.py
similarity index 95%
rename from dts/framework/testbed_model/hw/cpu.py
rename to dts/framework/testbed_model/cpu.py
index d1918a12dc..8fe785dfe4 100644
--- a/dts/framework/testbed_model/hw/cpu.py
+++ b/dts/framework/testbed_model/cpu.py
@@ -272,3 +272,16 @@ def filter(self) -> list[LogicalCore]:
             )
 
         return filtered_lcores
+
+
+def lcore_filter(
+    core_list: list[LogicalCore],
+    filter_specifier: LogicalCoreCount | LogicalCoreList,
+    ascending: bool,
+) -> LogicalCoreFilter:
+    if isinstance(filter_specifier, LogicalCoreList):
+        return LogicalCoreListFilter(core_list, filter_specifier, ascending)
+    elif isinstance(filter_specifier, LogicalCoreCount):
+        return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
+    else:
+        raise ValueError(f"Unsupported filter {filter_specifier}")
diff --git a/dts/framework/testbed_model/hw/__init__.py b/dts/framework/testbed_model/hw/__init__.py
deleted file mode 100644
index 88ccac0b0e..0000000000
--- a/dts/framework/testbed_model/hw/__init__.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2023 PANTHEON.tech s.r.o.
-
-# pylama:ignore=W0611
-
-from .cpu import (
-    LogicalCore,
-    LogicalCoreCount,
-    LogicalCoreCountFilter,
-    LogicalCoreFilter,
-    LogicalCoreList,
-    LogicalCoreListFilter,
-)
-from .virtual_device import VirtualDevice
-
-
-def lcore_filter(
-    core_list: list[LogicalCore],
-    filter_specifier: LogicalCoreCount | LogicalCoreList,
-    ascending: bool,
-) -> LogicalCoreFilter:
-    if isinstance(filter_specifier, LogicalCoreList):
-        return LogicalCoreListFilter(core_list, filter_specifier, ascending)
-    elif isinstance(filter_specifier, LogicalCoreCount):
-        return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
-    else:
-        raise ValueError(f"Unsupported filter r{filter_specifier}")
diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/testbed_model/linux_session.py
similarity index 97%
rename from dts/framework/remote_session/linux_session.py
rename to dts/framework/testbed_model/linux_session.py
index a3f1a6bf3b..f472bb8f0f 100644
--- a/dts/framework/remote_session/linux_session.py
+++ b/dts/framework/testbed_model/linux_session.py
@@ -9,10 +9,10 @@
 from typing_extensions import NotRequired
 
 from framework.exception import RemoteCommandExecutionError
-from framework.testbed_model import LogicalCore
-from framework.testbed_model.hw.port import Port
 from framework.utils import expand_range
 
+from .cpu import LogicalCore
+from .port import Port
 from .posix_session import PosixSession
 
 
@@ -64,7 +64,7 @@ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
             lcores.append(LogicalCore(lcore, core, socket, node))
         return lcores
 
-    def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
+    def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         return dpdk_prefix
 
     def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index fc01e0bf8e..fa5b143cdd 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -12,23 +12,26 @@
 from typing import Any, Callable, Type, Union
 
 from framework.config import (
+    OS,
     BuildTargetConfiguration,
     ExecutionConfiguration,
     NodeConfiguration,
 )
+from framework.exception import ConfigurationError
 from framework.logger import DTSLOG, getLogger
-from framework.remote_session import InteractiveShellType, OSSession, create_session
 from framework.settings import SETTINGS
 
-from .hw import (
+from .cpu import (
     LogicalCore,
     LogicalCoreCount,
     LogicalCoreList,
     LogicalCoreListFilter,
-    VirtualDevice,
     lcore_filter,
 )
-from .hw.port import Port
+from .linux_session import LinuxSession
+from .os_session import InteractiveShellType, OSSession
+from .port import Port
+from .virtual_device import VirtualDevice
 
 
 class Node(ABC):
@@ -172,9 +175,9 @@ def create_interactive_shell(
 
         return self.main_session.create_interactive_shell(
             shell_cls,
-            app_args,
             timeout,
             privileged,
+            app_args,
         )
 
     def filter_lcores(
@@ -205,7 +208,7 @@ def _get_remote_cpus(self) -> None:
         self._logger.info("Getting CPU information.")
         self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
 
-    def _setup_hugepages(self):
+    def _setup_hugepages(self) -> None:
         """
         Setup hugepages on the Node. Different architectures can supply different
         amounts of memory for hugepages and numa-based hugepage allocation may need
@@ -249,3 +252,13 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
             return lambda *args: None
         else:
             return func
+
+
+def create_session(
+    node_config: NodeConfiguration, name: str, logger: DTSLOG
+) -> OSSession:
+    match node_config.os:
+        case OS.linux:
+            return LinuxSession(node_config, name, logger)
+        case _:
+            raise ConfigurationError(f"Unsupported OS {node_config.os}")
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/testbed_model/os_session.py
similarity index 95%
rename from dts/framework/remote_session/os_session.py
rename to dts/framework/testbed_model/os_session.py
index 8a709eac1c..76e595a518 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/testbed_model/os_session.py
@@ -10,19 +10,19 @@
 
 from framework.config import Architecture, NodeConfiguration, NodeInfo
 from framework.logger import DTSLOG
-from framework.remote_session.remote import InteractiveShell
-from framework.settings import SETTINGS
-from framework.testbed_model import LogicalCore
-from framework.testbed_model.hw.port import Port
-from framework.utils import MesonArgs
-
-from .remote import (
+from framework.remote_session import (
     CommandResult,
     InteractiveRemoteSession,
+    InteractiveShell,
     RemoteSession,
     create_interactive_session,
     create_remote_session,
 )
+from framework.settings import SETTINGS
+from framework.utils import MesonArgs
+
+from .cpu import LogicalCore
+from .port import Port
 
 InteractiveShellType = TypeVar("InteractiveShellType", bound=InteractiveShell)
 
@@ -85,9 +85,9 @@ def send_command(
     def create_interactive_shell(
         self,
         shell_cls: Type[InteractiveShellType],
-        eal_parameters: str,
         timeout: float,
         privileged: bool,
+        app_args: str,
     ) -> InteractiveShellType:
         """
         See "create_interactive_shell" in SutNode
@@ -96,7 +96,7 @@ def create_interactive_shell(
             self.interactive_session.session,
             self._logger,
             self._get_privileged_command if privileged else None,
-            eal_parameters,
+            app_args,
             timeout,
         )
 
@@ -113,7 +113,7 @@ def _get_privileged_command(command: str) -> str:
         """
 
     @abstractmethod
-    def guess_dpdk_remote_dir(self, remote_dir) -> PurePath:
+    def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePath:
         """
         Try to find DPDK remote dir in remote_dir.
         """
@@ -227,7 +227,7 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
         """
 
     @abstractmethod
-    def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
+    def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         """
         Get the DPDK file prefix that will be used when running DPDK apps.
         """
diff --git a/dts/framework/testbed_model/hw/port.py b/dts/framework/testbed_model/port.py
similarity index 100%
rename from dts/framework/testbed_model/hw/port.py
rename to dts/framework/testbed_model/port.py
diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/testbed_model/posix_session.py
similarity index 98%
rename from dts/framework/remote_session/posix_session.py
rename to dts/framework/testbed_model/posix_session.py
index 5da0516e05..1d1d5b1b26 100644
--- a/dts/framework/remote_session/posix_session.py
+++ b/dts/framework/testbed_model/posix_session.py
@@ -32,7 +32,7 @@ def combine_short_options(**opts: bool) -> str:
 
         return ret_opts
 
-    def guess_dpdk_remote_dir(self, remote_dir) -> PurePosixPath:
+    def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePosixPath:
         remote_guess = self.join_remote_path(remote_dir, "dpdk-*")
         result = self.send_command(f"ls -d {remote_guess} | tail -1")
         return PurePosixPath(result.stdout)
@@ -219,7 +219,7 @@ def _remove_dpdk_runtime_dirs(
         for dpdk_runtime_dir in dpdk_runtime_dirs:
             self.remove_remote_dir(dpdk_runtime_dir)
 
-    def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
+    def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         return ""
 
     def get_compiler_version(self, compiler_name: str) -> str:
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 4161d3a4d5..17deea06e2 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -15,12 +15,14 @@
     NodeInfo,
     SutNodeConfiguration,
 )
-from framework.remote_session import CommandResult, InteractiveShellType, OSSession
+from framework.remote_session import CommandResult
 from framework.settings import SETTINGS
 from framework.utils import MesonArgs
 
-from .hw import LogicalCoreCount, LogicalCoreList, VirtualDevice
+from .cpu import LogicalCoreCount, LogicalCoreList
 from .node import Node
+from .os_session import InteractiveShellType, OSSession
+from .virtual_device import VirtualDevice
 
 
 class EalParameters(object):
@@ -307,7 +309,7 @@ def create_eal_parameters(
         prefix: str = "dpdk",
         append_prefix_timestamp: bool = True,
         no_pci: bool = False,
-        vdevs: list[VirtualDevice] = None,
+        vdevs: list[VirtualDevice] | None = None,
         other_eal_param: str = "",
     ) -> "EalParameters":
         """
diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
index 27025cfa31..166eb8430e 100644
--- a/dts/framework/testbed_model/tg_node.py
+++ b/dts/framework/testbed_model/tg_node.py
@@ -16,16 +16,11 @@
 
 from scapy.packet import Packet  # type: ignore[import]
 
-from framework.config import (
-    ScapyTrafficGeneratorConfig,
-    TGNodeConfiguration,
-    TrafficGeneratorType,
-)
-from framework.exception import ConfigurationError
-
-from .capturing_traffic_generator import CapturingTrafficGenerator
-from .hw.port import Port
+from framework.config import TGNodeConfiguration
+
 from .node import Node
+from .port import Port
+from .traffic_generator import CapturingTrafficGenerator, create_traffic_generator
 
 
 class TGNode(Node):
@@ -80,20 +75,3 @@ def close(self) -> None:
         """Free all resources used by the node"""
         self.traffic_generator.close()
         super(TGNode, self).close()
-
-
-def create_traffic_generator(
-    tg_node: TGNode, traffic_generator_config: ScapyTrafficGeneratorConfig
-) -> CapturingTrafficGenerator:
-    """A factory function for creating traffic generator object from user config."""
-
-    from .scapy import ScapyTrafficGenerator
-
-    match traffic_generator_config.traffic_generator_type:
-        case TrafficGeneratorType.SCAPY:
-            return ScapyTrafficGenerator(tg_node, traffic_generator_config)
-        case _:
-            raise ConfigurationError(
-                "Unknown traffic generator: "
-                f"{traffic_generator_config.traffic_generator_type}"
-            )
diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
new file mode 100644
index 0000000000..11bfa1ee0f
--- /dev/null
+++ b/dts/framework/testbed_model/traffic_generator/__init__.py
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType
+from framework.exception import ConfigurationError
+from framework.testbed_model.node import Node
+
+from .capturing_traffic_generator import CapturingTrafficGenerator
+from .scapy import ScapyTrafficGenerator
+
+
+def create_traffic_generator(
+    tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig
+) -> CapturingTrafficGenerator:
+    """A factory function for creating traffic generator object from user config."""
+
+    match traffic_generator_config.traffic_generator_type:
+        case TrafficGeneratorType.SCAPY:
+            return ScapyTrafficGenerator(tg_node, traffic_generator_config)
+        case _:
+            raise ConfigurationError(
+                "Unknown traffic generator: "
+                f"{traffic_generator_config.traffic_generator_type}"
+            )
diff --git a/dts/framework/testbed_model/capturing_traffic_generator.py b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
similarity index 96%
rename from dts/framework/testbed_model/capturing_traffic_generator.py
rename to dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
index ab98987f8e..e521211ef0 100644
--- a/dts/framework/testbed_model/capturing_traffic_generator.py
+++ b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
@@ -16,9 +16,9 @@
 from scapy.packet import Packet  # type: ignore[import]
 
 from framework.settings import SETTINGS
+from framework.testbed_model.port import Port
 from framework.utils import get_packet_summaries
 
-from .hw.port import Port
 from .traffic_generator import TrafficGenerator
 
 
@@ -130,7 +130,9 @@ def _send_packets_and_capture(
         for the specified duration. It must be able to handle no received packets.
         """
 
-    def _write_capture_from_packets(self, capture_name: str, packets: list[Packet]):
+    def _write_capture_from_packets(
+        self, capture_name: str, packets: list[Packet]
+    ) -> None:
         file_name = f"{SETTINGS.output_dir}/{capture_name}.pcap"
         self._logger.debug(f"Writing packets to {file_name}.")
         scapy.utils.wrpcap(file_name, packets)
diff --git a/dts/framework/testbed_model/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py
similarity index 95%
rename from dts/framework/testbed_model/scapy.py
rename to dts/framework/testbed_model/traffic_generator/scapy.py
index af0d4dbb25..51864b6e6b 100644
--- a/dts/framework/testbed_model/scapy.py
+++ b/dts/framework/testbed_model/traffic_generator/scapy.py
@@ -24,16 +24,15 @@
 from scapy.packet import Packet  # type: ignore[import]
 
 from framework.config import OS, ScapyTrafficGeneratorConfig
-from framework.logger import DTSLOG, getLogger
 from framework.remote_session import PythonShell
 from framework.settings import SETTINGS
+from framework.testbed_model.node import Node
+from framework.testbed_model.port import Port
 
 from .capturing_traffic_generator import (
     CapturingTrafficGenerator,
     _get_default_capture_name,
 )
-from .hw.port import Port
-from .tg_node import TGNode
 
 """
 ========= BEGIN RPC FUNCTIONS =========
@@ -146,7 +145,7 @@ def quit(self) -> None:
         self._BaseServer__shutdown_request = True
         return None
 
-    def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary):
+    def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> None:
         """Add a function to the server.
 
         This is meant to be executed remotely.
@@ -191,15 +190,9 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator):
     session: PythonShell
     rpc_server_proxy: xmlrpc.client.ServerProxy
     _config: ScapyTrafficGeneratorConfig
-    _tg_node: TGNode
-    _logger: DTSLOG
-
-    def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig):
-        self._config = config
-        self._tg_node = tg_node
-        self._logger = getLogger(
-            f"{self._tg_node.name} {self._config.traffic_generator_type}"
-        )
+
+    def __init__(self, tg_node: Node, config: ScapyTrafficGeneratorConfig):
+        super().__init__(tg_node, config)
 
         assert (
             self._tg_node.config.os == OS.linux
@@ -235,7 +228,7 @@ def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig):
             function_bytes = marshal.dumps(function.__code__)
             self.rpc_server_proxy.add_rpc_function(function.__name__, function_bytes)
 
-    def _start_xmlrpc_server_in_remote_python(self, listen_port: int):
+    def _start_xmlrpc_server_in_remote_python(self, listen_port: int) -> None:
         # load the source of the function
         src = inspect.getsource(QuittableXMLRPCServer)
         # Lines with only whitespace break the repl if in the middle of a function
@@ -280,7 +273,7 @@ def _send_packets_and_capture(
         scapy_packets = [Ether(packet.data) for packet in xmlrpc_packets]
         return scapy_packets
 
-    def close(self):
+    def close(self) -> None:
         try:
             self.rpc_server_proxy.quit()
         except ConnectionRefusedError:
diff --git a/dts/framework/testbed_model/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
similarity index 80%
rename from dts/framework/testbed_model/traffic_generator.py
rename to dts/framework/testbed_model/traffic_generator/traffic_generator.py
index 28c35d3ce4..ea7c3963da 100644
--- a/dts/framework/testbed_model/traffic_generator.py
+++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
@@ -12,11 +12,12 @@
 
 from scapy.packet import Packet  # type: ignore[import]
 
-from framework.logger import DTSLOG
+from framework.config import TrafficGeneratorConfig
+from framework.logger import DTSLOG, getLogger
+from framework.testbed_model.node import Node
+from framework.testbed_model.port import Port
 from framework.utils import get_packet_summaries
 
-from .hw.port import Port
-
 
 class TrafficGenerator(ABC):
     """The base traffic generator.
@@ -24,8 +25,17 @@ class TrafficGenerator(ABC):
     Defines the few basic methods that each traffic generator must implement.
     """
 
+    _config: TrafficGeneratorConfig
+    _tg_node: Node
     _logger: DTSLOG
 
+    def __init__(self, tg_node: Node, config: TrafficGeneratorConfig):
+        self._config = config
+        self._tg_node = tg_node
+        self._logger = getLogger(
+            f"{self._tg_node.name} {self._config.traffic_generator_type}"
+        )
+
     def send_packet(self, packet: Packet, port: Port) -> None:
         """Send a packet and block until it is fully sent.
 
diff --git a/dts/framework/testbed_model/hw/virtual_device.py b/dts/framework/testbed_model/virtual_device.py
similarity index 100%
rename from dts/framework/testbed_model/hw/virtual_device.py
rename to dts/framework/testbed_model/virtual_device.py
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index d27c2c5b5f..f0c916471c 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -7,7 +7,6 @@
 import json
 import os
 import subprocess
-import sys
 from enum import Enum
 from pathlib import Path
 from subprocess import SubprocessError
@@ -16,35 +15,7 @@
 
 from .exception import ConfigurationError
 
-
-class StrEnum(Enum):
-    @staticmethod
-    def _generate_next_value_(
-        name: str, start: int, count: int, last_values: object
-    ) -> str:
-        return name
-
-    def __str__(self) -> str:
-        return self.name
-
-
-REGEX_FOR_PCI_ADDRESS = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/"
-
-
-def check_dts_python_version() -> None:
-    if sys.version_info.major < 3 or (
-        sys.version_info.major == 3 and sys.version_info.minor < 10
-    ):
-        print(
-            RED(
-                (
-                    "WARNING: DTS execution node's python version is lower than"
-                    "python 3.10, is deprecated and will not work in future releases."
-                )
-            ),
-            file=sys.stderr,
-        )
-        print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
+REGEX_FOR_PCI_ADDRESS: str = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/"
 
 
 def expand_range(range_str: str) -> list[int]:
@@ -67,7 +38,7 @@ def expand_range(range_str: str) -> list[int]:
     return expanded_range
 
 
-def get_packet_summaries(packets: list[Packet]):
+def get_packet_summaries(packets: list[Packet]) -> str:
     if len(packets) == 1:
         packet_summaries = packets[0].summary()
     else:
@@ -77,8 +48,15 @@ def get_packet_summaries(packets: list[Packet]):
     return f"Packet contents: \n{packet_summaries}"
 
 
-def RED(text: str) -> str:
-    return f"\u001B[31;1m{str(text)}\u001B[0m"
+class StrEnum(Enum):
+    @staticmethod
+    def _generate_next_value_(
+        name: str, start: int, count: int, last_values: object
+    ) -> str:
+        return name
+
+    def __str__(self) -> str:
+        return self.name
 
 
 class MesonArgs(object):
@@ -225,5 +203,5 @@ def _delete_tarball(self) -> None:
         if self._tarball_path and os.path.exists(self._tarball_path):
             os.remove(self._tarball_path)
 
-    def __fspath__(self):
+    def __fspath__(self) -> str:
         return str(self._tarball_path)
diff --git a/dts/main.py b/dts/main.py
index 43311fa847..5d4714b0c3 100755
--- a/dts/main.py
+++ b/dts/main.py
@@ -10,10 +10,17 @@
 
 import logging
 
-from framework import dts
+from framework import settings
 
 
 def main() -> None:
+    """Set DTS settings, then run DTS.
+
+    The DTS settings are taken from the command line arguments and the environment variables.
+    """
+    settings.SETTINGS = settings.get_settings()
+    from framework import dts
+
     dts.run_all()
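
A note on the ordering above: modules such as framework.dts bind
settings.SETTINGS when they are first imported, so the assignment must
happen before the import. A minimal sketch of the pattern (module names
shortened for illustration):

    # settings.py
    SETTINGS: Settings = Settings()     # module-level default

    # dts.py
    from .settings import SETTINGS      # binds whatever object exists now

    # main.py
    settings.SETTINGS = settings.get_settings()  # replace the default first
    from framework import dts  # imported only now, so it sees the parsed settings
    dts.run_all()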
 
 
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v7 02/21] dts: add docstring checker
  2023-11-15 13:09             ` [PATCH v7 00/21] dts: docstrings update Juraj Linkeš
  2023-11-15 13:09               ` [PATCH v7 01/21] dts: code adjustments for doc generation Juraj Linkeš
@ 2023-11-15 13:09               ` Juraj Linkeš
  2023-11-20 16:03                 ` Yoan Picchi
  2023-11-15 13:09               ` [PATCH v7 03/21] dts: add basic developer docs Juraj Linkeš
                                 ` (19 subsequent siblings)
  21 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15 13:09 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi
  Cc: dev, Juraj Linkeš

Python docstrings are the in-code way to document the code. The
docstring checker of choice is pydocstyle, which we're executing from
Pylama, but the current latest versions are not compatible due to [0],

[0] https://github.com/klen/pylama/issues/232
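
For illustration, a minimal sketch (not taken from the patch) of a
docstring that passes pydocstyle's google convention; the function body
is a simplified stand-in, not the framework's implementation:

    def expand_range(range_str: str) -> list[int]:
        """Expand a range string into a list of integers.

        Args:
            range_str: The range to expand, e.g. "0-2".

        Returns:
            The expanded range, e.g. [0, 1, 2].
        """
        lower, _, upper = range_str.partition("-")
        return list(range(int(lower), int(upper or lower) + 1))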

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/poetry.lock    | 12 ++++++------
 dts/pyproject.toml |  6 +++++-
 2 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/dts/poetry.lock b/dts/poetry.lock
index f7b3b6d602..a734fa71f0 100644
--- a/dts/poetry.lock
+++ b/dts/poetry.lock
@@ -489,20 +489,20 @@ files = [
 
 [[package]]
 name = "pydocstyle"
-version = "6.3.0"
+version = "6.1.1"
 description = "Python docstring style checker"
 optional = false
 python-versions = ">=3.6"
 files = [
-    {file = "pydocstyle-6.3.0-py3-none-any.whl", hash = "sha256:118762d452a49d6b05e194ef344a55822987a462831ade91ec5c06fd2169d019"},
-    {file = "pydocstyle-6.3.0.tar.gz", hash = "sha256:7ce43f0c0ac87b07494eb9c0b462c0b73e6ff276807f204d6b53edc72b7e44e1"},
+    {file = "pydocstyle-6.1.1-py3-none-any.whl", hash = "sha256:6987826d6775056839940041beef5c08cc7e3d71d63149b48e36727f70144dc4"},
+    {file = "pydocstyle-6.1.1.tar.gz", hash = "sha256:1d41b7c459ba0ee6c345f2eb9ae827cab14a7533a88c5c6f7e94923f72df92dc"},
 ]
 
 [package.dependencies]
-snowballstemmer = ">=2.2.0"
+snowballstemmer = "*"
 
 [package.extras]
-toml = ["tomli (>=1.2.3)"]
+toml = ["toml"]
 
 [[package]]
 name = "pyflakes"
@@ -837,4 +837,4 @@ jsonschema = ">=4,<5"
 [metadata]
 lock-version = "2.0"
 python-versions = "^3.10"
-content-hash = "0b1e4a1cb8323e17e5ee5951c97e74bde6e60d0413d7b25b1803d5b2bab39639"
+content-hash = "3501e97b3dadc19fe8ae179fe21b1edd2488001da9a8e86ff2bca0b86b99b89b"
diff --git a/dts/pyproject.toml b/dts/pyproject.toml
index 6762edfa6b..3943c87c87 100644
--- a/dts/pyproject.toml
+++ b/dts/pyproject.toml
@@ -25,6 +25,7 @@ PyYAML = "^6.0"
 types-PyYAML = "^6.0.8"
 fabric = "^2.7.1"
 scapy = "^2.5.0"
+pydocstyle = "6.1.1"
 
 [tool.poetry.group.dev.dependencies]
 mypy = "^0.961"
@@ -39,10 +40,13 @@ requires = ["poetry-core>=1.0.0"]
 build-backend = "poetry.core.masonry.api"
 
 [tool.pylama]
-linters = "mccabe,pycodestyle,pyflakes"
+linters = "mccabe,pycodestyle,pydocstyle,pyflakes"
 format = "pylint"
 max_line_length = 88 # https://black.readthedocs.io/en/stable/the_black_code_style/current_style.html#line-length
 
+[tool.pylama.linter.pydocstyle]
+convention = "google"
+
 [tool.mypy]
 python_version = "3.10"
 enable_error_code = ["ignore-without-code"]
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v7 03/21] dts: add basic developer docs
  2023-11-15 13:09             ` [PATCH v7 00/21] dts: docstrings update Juraj Linkeš
  2023-11-15 13:09               ` [PATCH v7 01/21] dts: code adjustments for doc generation Juraj Linkeš
  2023-11-15 13:09               ` [PATCH v7 02/21] dts: add docstring checker Juraj Linkeš
@ 2023-11-15 13:09               ` Juraj Linkeš
  2023-11-20 16:03                 ` Yoan Picchi
  2023-11-15 13:09               ` [PATCH v7 04/21] dts: exceptions docstring update Juraj Linkeš
                                 ` (18 subsequent siblings)
  21 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15 13:09 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi
  Cc: dev, Juraj Linkeš

Expand the framework contribution guidelines and add how to document the
code with Python docstrings.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 doc/guides/tools/dts.rst | 73 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 73 insertions(+)

diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
index 32c18ee472..cd771a428c 100644
--- a/doc/guides/tools/dts.rst
+++ b/doc/guides/tools/dts.rst
@@ -264,6 +264,65 @@ which be changed with the ``--output-dir`` command line argument.
 The results contain basic statistics of passed/failed test cases and DPDK version.
 
 
+Contributing to DTS
+-------------------
+
+There are two areas of contribution: The DTS framework and DTS test suites.
+
+The framework contains the logic needed to run test cases, such as connecting to nodes,
+running DPDK apps and collecting results.
+
+The test cases call APIs from the framework to test their scenarios. Adding test cases may
+require adding code to the framework as well.
+
+
+Framework Coding Guidelines
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+When adding code to the DTS framework, pay attention to the rest of the code
+and try not to divert much from it. The :ref:`DTS developer tools <dts_dev_tools>` will issue
+warnings when some of the basics are not met.
+
+The code must be properly documented with docstrings. The style must conform to
+the `Google style <https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings>`_.
+See an example of the style
+`here <https://www.sphinx-doc.org/en/master/usage/extensions/example_google.html>`_.
+For cases which are not covered by the Google style, refer
+to `PEP 257 <https://peps.python.org/pep-0257/>`_. There are some cases which are not covered by
+the two style guides, where we deviate or where some additional clarification is helpful:
+
+   * The __init__() methods of classes are documented separately from the docstring of the class
+     itself.
+   * The docstrings of implemented abstract methods should refer to the superclass's definition
+     if there's no deviation.
+   * Instance variables/attributes should be documented in the docstring of the class
+     in the ``Attributes:`` section.
+   * The dataclass.dataclass decorator changes how the attributes are processed. The dataclass
+     attributes which result in instance variables/attributes should also be recorded
+     in the ``Attributes:`` section.
+   * Class variables/attributes, on the other hand, should be documented with ``#:`` above
+     the type annotated line. The description may be omitted if the meaning is obvious.
+   * The Enum and TypedDict also process the attributes in particular ways and should be documented
+     with ``#:`` as well. This is mainly so that the autogenerated docs contain the assigned value.
+   * When referencing a parameter of a function or a method in their docstring, don't use
+     any articles and put the parameter into single backticks. This mimics the style of
+     `Python's documentation <https://docs.python.org/3/index.html>`_.
+   * When specifying a value, use double backticks::
+
+        def foo(greet: bool) -> None:
+            """Demonstration of single and double backticks.
+
+            `greet` controls whether ``Hello World`` is printed.
+
+            Args:
+               greet: Whether to print the ``Hello World`` message.
+            """
+            if greet:
+               print(f"Hello World")
+
+   * The docstring maximum line length is the same as the code maximum line length.
+
+
 How To Write a Test Suite
 -------------------------
 
@@ -293,6 +352,18 @@ There are four types of methods that comprise a test suite:
    | These methods don't need to be implemented if there's no need for them in a test suite.
     In that case, nothing will happen when they're executed.
 
+#. **Configuration, traffic and other logic**
+
+   The ``TestSuite`` class contains a variety of methods for anything that
+   a test suite setup, a teardown, or a test case may need to do.
+
+   The test suites also frequently use a DPDK app, such as testpmd, in interactive mode
+   and use the interactive shell instances directly.
+
+   These are the two main ways to call the framework logic in test suites. If there's any
+   functionality or logic missing from the framework, it should be implemented so that
+   the test suites can use one of these two ways.
+
 #. **Test case verification**
 
    Test case verification should be done with the ``verify`` method, which records the result.
@@ -308,6 +379,8 @@ There are four types of methods that comprise a test suite:
    and used by the test suite via the ``sut_node`` field.
 
 
+.. _dts_dev_tools:
+
 DTS Developer Tools
 -------------------
 
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v7 04/21] dts: exceptions docstring update
  2023-11-15 13:09             ` [PATCH v7 00/21] dts: docstrings update Juraj Linkeš
                                 ` (2 preceding siblings ...)
  2023-11-15 13:09               ` [PATCH v7 03/21] dts: add basic developer docs Juraj Linkeš
@ 2023-11-15 13:09               ` Juraj Linkeš
  2023-11-20 16:22                 ` Yoan Picchi
  2023-11-15 13:09               ` [PATCH v7 05/21] dts: settings " Juraj Linkeš
                                 ` (17 subsequent siblings)
  21 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15 13:09 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.
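
The main deviations, as applied in the diff below, are a separate
docstring for __init__() and the "#:" marker for class variables; a
condensed, hypothetical example of the shape:

    class ExampleError(DTSError):
        """One-line summary of the failure."""

        #: Class variables are documented with the "#:" marker above them.
        severity: ClassVar[ErrorSeverity] = ErrorSeverity.GENERIC_ERR
        _detail: str

        def __init__(self, detail: str):
            """Define the meaning of the first argument.

            Args:
                detail: Context describing the failure.
            """
            self._detail = detail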

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/__init__.py  |  12 ++++-
 dts/framework/exception.py | 106 +++++++++++++++++++++++++------------
 2 files changed, 83 insertions(+), 35 deletions(-)

diff --git a/dts/framework/__init__.py b/dts/framework/__init__.py
index d551ad4bf0..662e6ccad2 100644
--- a/dts/framework/__init__.py
+++ b/dts/framework/__init__.py
@@ -1,3 +1,13 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022 University of New Hampshire
+
+"""Libraries and utilities for running DPDK Test Suite (DTS).
+
+The various modules in the DTS framework offer:
+
+* Connections to nodes, both interactive and non-interactive,
+* A straightforward way to add support for different operating systems of remote nodes,
+* Test suite setup, execution and teardown, along with test case setup, execution and teardown,
+* Pre-test suite setup and post-test suite teardown.
+"""
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index 7489c03570..ee1562c672 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -3,8 +3,10 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
-"""
-User-defined exceptions used across the framework.
+"""DTS exceptions.
+
+The exceptions all have different severities expressed as an integer.
+The highest severity of all raised exceptions is used as the exit code of DTS.
 """
 
 from enum import IntEnum, unique
@@ -13,59 +15,79 @@
 
 @unique
 class ErrorSeverity(IntEnum):
-    """
-    The severity of errors that occur during DTS execution.
+    """The severity of errors that occur during DTS execution.
+
     All exceptions are caught and the most severe error is used as return code.
     """
 
+    #:
     NO_ERR = 0
+    #:
     GENERIC_ERR = 1
+    #:
     CONFIG_ERR = 2
+    #:
     REMOTE_CMD_EXEC_ERR = 3
+    #:
     SSH_ERR = 4
+    #:
     DPDK_BUILD_ERR = 10
+    #:
     TESTCASE_VERIFY_ERR = 20
+    #:
     BLOCKING_TESTSUITE_ERR = 25
 
 
 class DTSError(Exception):
-    """
-    The base exception from which all DTS exceptions are derived.
-    Stores error severity.
+    """The base exception from which all DTS exceptions are subclassed.
+
+    Do not use this exception, only use subclassed exceptions.
     """
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.GENERIC_ERR
 
 
 class SSHTimeoutError(DTSError):
-    """
-    Command execution timeout.
-    """
+    """The SSH execution of a command timed out."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
     _command: str
 
     def __init__(self, command: str):
+        """Define the meaning of the first argument.
+
+        Args:
+            command: The executed command.
+        """
         self._command = command
 
     def __str__(self) -> str:
-        return f"TIMEOUT on {self._command}"
+        """Add some context to the string representation."""
+        return f"{self._command} execution timed out."
 
 
 class SSHConnectionError(DTSError):
-    """
-    SSH connection error.
-    """
+    """An unsuccessful SSH connection."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
     _host: str
     _errors: list[str]
 
     def __init__(self, host: str, errors: list[str] | None = None):
+        """Define the meaning of the first two arguments.
+
+        Args:
+            host: The hostname to which we're trying to connect.
+            errors: Any errors that occurred during the connection attempt.
+        """
         self._host = host
         self._errors = [] if errors is None else errors
 
     def __str__(self) -> str:
+        """Include the errors in the string representation."""
         message = f"Error trying to connect with {self._host}."
         if self._errors:
             message += f" Errors encountered while retrying: {', '.join(self._errors)}"
@@ -74,43 +96,53 @@ def __str__(self) -> str:
 
 
 class SSHSessionDeadError(DTSError):
-    """
-    SSH session is not alive.
-    It can no longer be used.
-    """
+    """The SSH session is no longer alive."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
     _host: str
 
     def __init__(self, host: str):
+        """Define the meaning of the first argument.
+
+        Args:
+            host: The hostname of the disconnected node.
+        """
         self._host = host
 
     def __str__(self) -> str:
-        return f"SSH session with {self._host} has died"
+        """Add some context to the string representation."""
+        return f"SSH session with {self._host} has died."
 
 
 class ConfigurationError(DTSError):
-    """
-    Raised when an invalid configuration is encountered.
-    """
+    """An invalid configuration."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.CONFIG_ERR
 
 
 class RemoteCommandExecutionError(DTSError):
-    """
-    Raised when a command executed on a Node returns a non-zero exit status.
-    """
+    """An unsuccessful execution of a remote command."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
+    #: The executed command.
     command: str
     _command_return_code: int
 
     def __init__(self, command: str, command_return_code: int):
+        """Define the meaning of the first two arguments.
+
+        Args:
+            command: The executed command.
+            command_return_code: The return code of the executed command.
+        """
         self.command = command
         self._command_return_code = command_return_code
 
     def __str__(self) -> str:
+        """Include both the command and return code in the string representation."""
         return (
             f"Command {self.command} returned a non-zero exit code: "
             f"{self._command_return_code}"
@@ -118,35 +150,41 @@ def __str__(self) -> str:
 
 
 class RemoteDirectoryExistsError(DTSError):
-    """
-    Raised when a remote directory to be created already exists.
-    """
+    """A directory that exists on a remote node."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
 
 
 class DPDKBuildError(DTSError):
-    """
-    Raised when DPDK build fails for any reason.
-    """
+    """A DPDK build failure."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.DPDK_BUILD_ERR
 
 
 class TestCaseVerifyError(DTSError):
-    """
-    Used in test cases to verify the expected behavior.
-    """
+    """A test case failure."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR
 
 
 class BlockingTestSuiteError(DTSError):
+    """A failure in a blocking test suite."""
+
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR
     _suite_name: str
 
     def __init__(self, suite_name: str) -> None:
+        """Define the meaning of the first argument.
+
+        Args:
+            suite_name: The name of the blocking test suite.
+        """
         self._suite_name = suite_name
 
     def __str__(self) -> str:
+        """Add some context to the string representation."""
         return f"Blocking suite {self._suite_name} failed."
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
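
The severity mechanism these exceptions document can be summarized in a short,
self-contained sketch. The class and attribute names below mirror the patch, but
the severity values and the final reduction to an exit code are simplified
assumptions for illustration, not the framework's exact code:

    from enum import IntEnum
    from typing import ClassVar

    class ErrorSeverity(IntEnum):
        """Simplified stand-in for the framework's severity enum; values assumed."""
        NO_ERR = 0
        CONFIG_ERR = 2
        SSH_ERR = 4

    class DTSError(Exception):
        """Base class; each subclass carries a class-level severity."""
        severity: ClassVar[ErrorSeverity] = ErrorSeverity.NO_ERR

    class SSHSessionDeadError(DTSError):
        """The SSH session is no longer alive."""
        severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR

        def __init__(self, host: str):
            self._host = host

        def __str__(self) -> str:
            return f"SSH session with {self._host} has died."

    # The run's return code is the highest severity among all recorded errors:
    errors: list[DTSError] = [SSHSessionDeadError("sut1")]
    exit_code = max((e.severity for e in errors), default=ErrorSeverity.NO_ERR)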

* [PATCH v7 05/21] dts: settings docstring update
  2023-11-15 13:09             ` [PATCH v7 00/21] dts: docstrings update Juraj Linkeš
                                 ` (3 preceding siblings ...)
  2023-11-15 13:09               ` [PATCH v7 04/21] dts: exceptions docstring update Juraj Linkeš
@ 2023-11-15 13:09               ` Juraj Linkeš
  2023-11-15 13:09               ` [PATCH v7 06/21] dts: logger and utils " Juraj Linkeš
                                 ` (16 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15 13:09 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/settings.py | 103 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 102 insertions(+), 1 deletion(-)

diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 7f5841d073..fc7c4e00e8 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -3,6 +3,72 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022 University of New Hampshire
 
+"""Environment variables and command line arguments parsing.
+
+This is a simple module utilizing the built-in argparse module to parse command line arguments,
+augment them with values from environment variables and make them available across the framework.
+
+The command line value takes precedence, followed by the environment variable value,
+followed by the default value defined in this module.
+
+The command line arguments along with the supported environment variables are:
+
+.. option:: --config-file
+.. envvar:: DTS_CFG_FILE
+
+    The path to the YAML test run configuration file.
+
+.. option:: --output-dir, --output
+.. envvar:: DTS_OUTPUT_DIR
+
+    The directory where DTS logs and results are saved.
+
+.. option:: --compile-timeout
+.. envvar:: DTS_COMPILE_TIMEOUT
+
+    The timeout for compiling DPDK.
+
+.. option:: -t, --timeout
+.. envvar:: DTS_TIMEOUT
+
+    The timeout for all DTS operations except for compiling DPDK.
+
+.. option:: -v, --verbose
+.. envvar:: DTS_VERBOSE
+
+    Set to any value to enable logging everything to the console.
+
+.. option:: -s, --skip-setup
+.. envvar:: DTS_SKIP_SETUP
+
+    Set to any value to skip building DPDK.
+
+.. option:: --tarball, --snapshot, --git-ref
+.. envvar:: DTS_DPDK_TARBALL
+
+    The path to a DPDK tarball, git commit ID, tag ID or tree ID to test.
+
+.. option:: --test-cases
+.. envvar:: DTS_TESTCASES
+
+    A comma-separated list of test cases to execute. Unknown test cases will be silently ignored.
+
+.. option:: --re-run, --re_run
+.. envvar:: DTS_RERUN
+
+    Re-run each test case this many times in case of a failure.
+
+The module provides one key module-level variable:
+
+Attributes:
+    SETTINGS: The module-level variable storing framework-wide DTS settings.
+
+Typical usage example::
+
+  from framework.settings import SETTINGS
+  foo = SETTINGS.foo
+"""
+
 import argparse
 import os
 from collections.abc import Callable, Iterable, Sequence
@@ -16,6 +82,23 @@
 
 
 def _env_arg(env_var: str) -> Any:
+    """A helper method augmenting the argparse Action with environment variables.
+
+    If the supplied environment variable is defined, then the default value
+    of the argument is modified. This satisfies the priority order of
+    command line argument > environment variable > default value.
+
+    Arguments with no values (flags) should be defined using the ``const`` keyword argument
+    (True or False). When the argument is specified, it will be set to ``const``; if not,
+    the default will be stored (possibly modified by the corresponding environment variable).
+
+    Other arguments work the same as default argparse arguments, that is using
+    the default 'store' action.
+
+    Returns:
+        The modified argparse.Action.
+    """
+
     class _EnvironmentArgument(argparse.Action):
         def __init__(
             self,
@@ -68,14 +151,28 @@ def __call__(
 
 @dataclass(slots=True)
 class Settings:
+    """Default framework-wide user settings.
+
+    The defaults may be modified at the start of the run.
+    """
+
+    #:
     config_file_path: Path = Path(__file__).parent.parent.joinpath("conf.yaml")
+    #:
     output_dir: str = "output"
+    #:
     timeout: float = 15
+    #:
     verbose: bool = False
+    #:
     skip_setup: bool = False
+    #:
     dpdk_tarball_path: Path | str = "dpdk.tar.xz"
+    #:
     compile_timeout: float = 1200
+    #:
     test_cases: list[str] = field(default_factory=list)
+    #:
     re_run: int = 0
 
 
@@ -169,7 +266,7 @@ def _get_parser() -> argparse.ArgumentParser:
         action=_env_arg("DTS_RERUN"),
         default=SETTINGS.re_run,
         type=int,
-        help="[DTS_RERUN] Re-run each test case the specified amount of times "
+        help="[DTS_RERUN] Re-run each test case the specified number of times "
         "if a test failure occurs",
     )
 
@@ -177,6 +274,10 @@ def _get_parser() -> argparse.ArgumentParser:
 
 
 def get_settings() -> Settings:
+    """Create new settings with inputs from the user.
+
+    The inputs are taken from the command line and from environment variables.
+    """
     parsed_args = _get_parser().parse_args()
     return Settings(
         config_file_path=parsed_args.config_file,
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
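
The precedence rule this module documents (command line > environment variable >
default) boils down to overriding the argparse default at action construction
time. A condensed, runnable sketch of that pattern follows; it omits the flag
handling via ``const`` that the real `_env_arg` supports:

    import argparse
    import os

    def _env_arg(env_var: str):
        class _EnvironmentArgument(argparse.Action):
            def __init__(self, *args, **kwargs):
                env_value = os.environ.get(env_var)
                if env_value is not None:
                    # The environment variable overrides the default, preserving
                    # command line > environment variable > default.
                    kwargs["default"] = env_value
                super().__init__(*args, **kwargs)

            def __call__(self, parser, namespace, values, option_string=None):
                setattr(namespace, self.dest, values)

        return _EnvironmentArgument

    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--output-dir",
        action=_env_arg("DTS_OUTPUT_DIR"),
        default="output",
        help="[DTS_OUTPUT_DIR] Output directory where DTS logs and results are saved.",
    )
    print(parser.parse_args([]).output_dir)  # "output" unless DTS_OUTPUT_DIR is set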

* [PATCH v7 06/21] dts: logger and utils docstring update
  2023-11-15 13:09             ` [PATCH v7 00/21] dts: docstrings update Juraj Linkeš
                                 ` (4 preceding siblings ...)
  2023-11-15 13:09               ` [PATCH v7 05/21] dts: settings " Juraj Linkeš
@ 2023-11-15 13:09               ` Juraj Linkeš
  2023-11-20 16:23                 ` Yoan Picchi
  2023-11-15 13:09               ` [PATCH v7 07/21] dts: dts runner and main " Juraj Linkeš
                                 ` (15 subsequent siblings)
  21 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15 13:09 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/logger.py | 72 ++++++++++++++++++++++-----------
 dts/framework/utils.py  | 88 +++++++++++++++++++++++++++++------------
 2 files changed, 113 insertions(+), 47 deletions(-)

diff --git a/dts/framework/logger.py b/dts/framework/logger.py
index bb2991e994..d3eb75a4e4 100644
--- a/dts/framework/logger.py
+++ b/dts/framework/logger.py
@@ -3,9 +3,9 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
-"""
-DTS logger module with several log level. DTS framework and TestSuite logs
-are saved in different log files.
+"""DTS logger module.
+
+DTS framework and TestSuite logs are saved in different log files.
 """
 
 import logging
@@ -18,19 +18,21 @@
 stream_fmt = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
 
 
-class LoggerDictType(TypedDict):
-    logger: "DTSLOG"
-    name: str
-    node: str
-
+class DTSLOG(logging.LoggerAdapter):
+    """DTS logger adapter class for framework and testsuites.
 
-# List for saving all using loggers
-Loggers: list[LoggerDictType] = []
+    The :option:`--verbose` command line argument and the :envvar:`DTS_VERBOSE` environment
+    variable control the verbosity of output. If enabled, all messages will be emitted to the
+    console.
 
+    The :option:`--output` command line argument and the :envvar:`DTS_OUTPUT_DIR` environment
+    variable modify the directory where the logs will be stored.
 
-class DTSLOG(logging.LoggerAdapter):
-    """
-    DTS log class for framework and testsuite.
+    Attributes:
+        node: The additional identifier. Currently unused.
+        sh: The handler which emits logs to console.
+        fh: The handler which emits logs to a file.
+        verbose_fh: Just as fh, but logs with a different, more verbose format.
     """
 
     _logger: logging.Logger
@@ -40,6 +42,15 @@ class DTSLOG(logging.LoggerAdapter):
     verbose_fh: logging.FileHandler
 
     def __init__(self, logger: logging.Logger, node: str = "suite"):
+        """Extend the constructor with additional handlers.
+
+        One handler logs to the console, the other two to a file, with either a regular
+        or a verbose format.
+
+        Args:
+            logger: The logger from which to create the logger adapter.
+            node: An additional identifier. Currently unused.
+        """
         self._logger = logger
         # 1 means log everything, this will be used by file handlers if their level
         # is not set
@@ -92,26 +103,43 @@ def __init__(self, logger: logging.Logger, node: str = "suite"):
         super(DTSLOG, self).__init__(self._logger, dict(node=self.node))
 
     def logger_exit(self) -> None:
-        """
-        Remove stream handler and logfile handler.
-        """
+        """Remove the stream handler and the logfile handler."""
         for handler in (self.sh, self.fh, self.verbose_fh):
             handler.flush()
             self._logger.removeHandler(handler)
 
 
+class _LoggerDictType(TypedDict):
+    logger: DTSLOG
+    name: str
+    node: str
+
+
+# List for saving all loggers in use
+_Loggers: list[_LoggerDictType] = []
+
+
 def getLogger(name: str, node: str = "suite") -> DTSLOG:
+    """Get DTS logger adapter identified by name and node.
+
+    An existing logger will be returned if one with the exact name and node already exists.
+    A new one will be created and stored otherwise.
+
+    Args:
+        name: The name of the logger.
+        node: An additional identifier for the logger.
+
+    Returns:
+        A logger uniquely identified by both name and node.
     """
-    Get logger handler and if there's no handler for specified Node will create one.
-    """
-    global Loggers
+    global _Loggers
     # return saved logger
-    logger: LoggerDictType
-    for logger in Loggers:
+    logger: _LoggerDictType
+    for logger in _Loggers:
         if logger["name"] == name and logger["node"] == node:
             return logger["logger"]
 
     # return new logger
     dts_logger: DTSLOG = DTSLOG(logging.getLogger(name), node)
-    Loggers.append({"logger": dts_logger, "name": name, "node": node})
+    _Loggers.append({"logger": dts_logger, "name": name, "node": node})
     return dts_logger
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index f0c916471c..5016e3be10 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -3,6 +3,16 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
+"""Various utility classes and functions.
+
+These are used in multiple modules across the framework. They're here because
+they provide some non-specific functionality, greatly simplify imports or just don't
+fit elsewhere.
+
+Attributes:
+    REGEX_FOR_PCI_ADDRESS: The regex representing a PCI address, e.g. ``0000:00:08.0``.
+"""
+
 import atexit
 import json
 import os
@@ -19,12 +29,20 @@
 
 
 def expand_range(range_str: str) -> list[int]:
-    """
-    Process range string into a list of integers. There are two possible formats:
-    n - a single integer
-    n-m - a range of integers
+    """Process `range_str` into a list of integers.
+
+    There are two possible formats of `range_str`:
+
+        * ``n`` - a single integer,
+        * ``n-m`` - a range of integers.
 
-    The returned range includes both n and m. Empty string returns an empty list.
+    The returned range includes both ``n`` and ``m``. An empty string returns an empty list.
+
+    Args:
+        range_str: The range to expand.
+
+    Returns:
+        All the numbers from the range.
     """
     expanded_range: list[int] = []
     if range_str:
@@ -39,6 +57,14 @@ def expand_range(range_str: str) -> list[int]:
 
 
 def get_packet_summaries(packets: list[Packet]) -> str:
+    """Format a string summary from `packets`.
+
+    Args:
+        packets: The packets to format.
+
+    Returns:
+        The summary of `packets`.
+    """
     if len(packets) == 1:
         packet_summaries = packets[0].summary()
     else:
@@ -49,6 +75,8 @@ def get_packet_summaries(packets: list[Packet]) -> str:
 
 
 class StrEnum(Enum):
+    """Enum with members stored as strings."""
+
     @staticmethod
     def _generate_next_value_(
         name: str, start: int, count: int, last_values: object
@@ -56,22 +84,29 @@ def _generate_next_value_(
         return name
 
     def __str__(self) -> str:
+        """The string representation is the name of the member."""
         return self.name
 
 
 class MesonArgs(object):
-    """
-    Aggregate the arguments needed to build DPDK:
-    default_library: Default library type, Meson allows "shared", "static" and "both".
-               Defaults to None, in which case the argument won't be used.
-    Keyword arguments: The arguments found in meson_options.txt in root DPDK directory.
-               Do not use -D with them, for example:
-               meson_args = MesonArgs(enable_kmods=True).
-    """
+    """Aggregate the arguments needed to build DPDK."""
 
     _default_library: str
 
     def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
+        """Initialize the meson arguments.
+
+        Args:
+            default_library: The default library type, Meson supports ``shared``, ``static`` and
+                ``both``. Defaults to :data:`None`, in which case the argument won't be used.
+            dpdk_args: The arguments found in ``meson_options.txt`` in root DPDK directory.
+                Do not use ``-D`` with them.
+
+        Example:
+            ::
+
+                meson_args = MesonArgs(enable_kmods=True)
+        """
         self._default_library = (
             f"--default-library={default_library}" if default_library else ""
         )
@@ -83,6 +118,7 @@ def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
         )
 
     def __str__(self) -> str:
+        """The actual args."""
         return " ".join(f"{self._default_library} {self._dpdk_args}".split())
 
 
@@ -104,24 +140,14 @@ class _TarCompressionFormat(StrEnum):
 
 
 class DPDKGitTarball(object):
-    """Create a compressed tarball of DPDK from the repository.
-
-    The DPDK version is specified with git object git_ref.
-    The tarball will be compressed with _TarCompressionFormat,
-    which must be supported by the DTS execution environment.
-    The resulting tarball will be put into output_dir.
+    """Compressed tarball of DPDK from the repository.
 
-    The class supports the os.PathLike protocol,
+    The class supports the :class:`os.PathLike` protocol,
     which is used to get the Path of the tarball::
 
         from pathlib import Path
         tarball = DPDKGitTarball("HEAD", "output")
         tarball_path = Path(tarball)
-
-    Arguments:
-        git_ref: A git commit ID, tag ID or tree ID.
-        output_dir: The directory where to put the resulting tarball.
-        tar_compression_format: The compression format to use.
     """
 
     _git_ref: str
@@ -136,6 +162,17 @@ def __init__(
         output_dir: str,
         tar_compression_format: _TarCompressionFormat = _TarCompressionFormat.xz,
     ):
+        """Create the tarball during initialization.
+
+        The DPDK version is specified with `git_ref`. The tarball will be compressed with
+        `tar_compression_format`, which must be supported by the DTS execution environment.
+        The resulting tarball will be put into `output_dir`.
+
+        Args:
+            git_ref: A git commit ID, tag ID or tree ID.
+            output_dir: The directory where to put the resulting tarball.
+            tar_compression_format: The compression format to use.
+        """
         self._git_ref = git_ref
         self._tar_compression_format = tar_compression_format
 
@@ -204,4 +241,5 @@ def _delete_tarball(self) -> None:
             os.remove(self._tarball_path)
 
     def __fspath__(self) -> str:
+        """The os.PathLike protocol implementation."""
         return str(self._tarball_path)
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
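
The two formats `expand_range` accepts are easy to pin down with a behavior-level
sketch; input validation and the framework's error handling are deliberately left
out here:

    def expand_range(range_str: str) -> list[int]:
        # "n" -> [n]; "n-m" -> [n, ..., m] (inclusive); "" -> []
        expanded: list[int] = []
        if range_str:
            bounds = range_str.split("-")
            expanded.extend(range(int(bounds[0]), int(bounds[-1]) + 1))
        return expanded

    assert expand_range("2") == [2]
    assert expand_range("0-3") == [0, 1, 2, 3]
    assert expand_range("") == []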

* [PATCH v7 07/21] dts: dts runner and main docstring update
  2023-11-15 13:09             ` [PATCH v7 00/21] dts: docstrings update Juraj Linkeš
                                 ` (5 preceding siblings ...)
  2023-11-15 13:09               ` [PATCH v7 06/21] dts: logger and utils " Juraj Linkeš
@ 2023-11-15 13:09               ` Juraj Linkeš
  2023-11-16 21:51                 ` Jeremy Spewock
  2023-11-20 17:43                 ` Yoan Picchi
  2023-11-15 13:09               ` [PATCH v7 08/21] dts: test suite " Juraj Linkeš
                                 ` (14 subsequent siblings)
  21 siblings, 2 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15 13:09 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/dts.py | 128 ++++++++++++++++++++++++++++++++++++-------
 dts/main.py          |   8 ++-
 2 files changed, 112 insertions(+), 24 deletions(-)

diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index 4c7fb0c40a..331fed7dc4 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -3,6 +3,33 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
+r"""Test suite runner module.
+
+A DTS run is split into stages:
+
+    #. Execution stage,
+    #. Build target stage,
+    #. Test suite stage,
+    #. Test case stage.
+
+The module is responsible for running tests on testbeds defined in the test run configuration.
+Each setup or teardown of each stage is recorded in a :class:`~framework.test_result.DTSResult` or
+one of its subclasses. The test case results are also recorded.
+
+If an error occurs, the current stage is aborted, the error is recorded and the run continues in
+the next iteration of the same stage. The return code is the highest `severity` of all
+:class:`~framework.exception.DTSError`\s.
+
+Example:
+    An error occurs in a build target setup. The current build target is aborted and the run
+    continues with the next build target. If the errored build target was the last one in the given
+    execution, the next execution begins.
+
+Attributes:
+    dts_logger: The logger instance used in this module.
+    result: The top level result used in the module.
+"""
+
 import sys
 
 from .config import (
@@ -23,9 +50,38 @@
 
 
 def run_all() -> None:
-    """
-    The main process of DTS. Runs all build targets in all executions from the main
-    config file.
+    """Run all build targets in all executions from the test run configuration.
+
+    Before running test suites, executions and build targets are first set up.
+    The executions and build targets defined in the test run configuration are iterated over.
+    The executions define which tests to run and where to run them and build targets define
+    the DPDK build setup.
+
+    The test suites are set up for each execution/build target tuple and each scheduled
+    test case within the test suite is set up, executed and torn down. After all test cases
+    have been executed, the test suite is torn down and the next build target will be tested.
+
+    All the nested steps look like this:
+
+        #. Execution setup
+
+            #. Build target setup
+
+                #. Test suite setup
+
+                    #. Test case setup
+                    #. Test case logic
+                    #. Test case teardown
+
+                #. Test suite teardown
+
+            #. Build target teardown
+
+        #. Execution teardown
+
+    The test cases are filtered according to the specification in the test run configuration and
+    the :option:`--test-cases` command line argument or
+    the :envvar:`DTS_TESTCASES` environment variable.
     """
     global dts_logger
     global result
@@ -87,6 +143,8 @@ def run_all() -> None:
 
 
 def _check_dts_python_version() -> None:
+    """Check the required Python version - v3.10."""
+
     def RED(text: str) -> str:
         return f"\u001B[31;1m{str(text)}\u001B[0m"
 
@@ -111,9 +169,16 @@ def _run_execution(
     execution: ExecutionConfiguration,
     result: DTSResult,
 ) -> None:
-    """
-    Run the given execution. This involves running the execution setup as well as
-    running all build targets in the given execution.
+    """Run the given execution.
+
+    This involves running the execution setup as well as running all build targets
+    in the given execution. After that, execution teardown is run.
+
+    Args:
+        sut_node: The execution's SUT node.
+        tg_node: The execution's TG node.
+        execution: An execution's test run configuration.
+        result: The top level result object.
     """
     dts_logger.info(
         f"Running execution with SUT '{execution.system_under_test_node.name}'."
@@ -150,8 +215,18 @@ def _run_build_target(
     execution: ExecutionConfiguration,
     execution_result: ExecutionResult,
 ) -> None:
-    """
-    Run the given build target.
+    """Run the given build target.
+
+    This involves running the build target setup as well as running all test suites
+    in the given execution the build target is defined in.
+    After that, build target teardown is run.
+
+    Args:
+        sut_node: The execution's SUT node.
+        tg_node: The execution's TG node.
+        build_target: A build target's test run configuration.
+        execution: The build target's execution's test run configuration.
+        execution_result: The execution level result object associated with the execution.
     """
     dts_logger.info(f"Running build target '{build_target.name}'.")
     build_target_result = execution_result.add_build_target(build_target)
@@ -183,10 +258,17 @@ def _run_all_suites(
     execution: ExecutionConfiguration,
     build_target_result: BuildTargetResult,
 ) -> None:
-    """
-    Use the given build_target to run execution's test suites
-    with possibly only a subset of test cases.
-    If no subset is specified, run all test cases.
+    """Run the execution's (possibly a subset) test suites using the current build_target.
+
+    The function assumes the build target we're testing has already been built on the SUT node.
+    The current build target thus corresponds to the current DPDK build present on the SUT node.
+
+    Args:
+        sut_node: The execution's SUT node.
+        tg_node: The execution's TG node.
+        execution: The execution's test run configuration associated with the current build target.
+        build_target_result: The build target level result object associated
+            with the current build target.
     """
     end_build_target = False
     if not execution.skip_smoke_tests:
@@ -215,16 +297,22 @@ def _run_single_suite(
     build_target_result: BuildTargetResult,
     test_suite_config: TestSuiteConfig,
 ) -> None:
-    """Runs a single test suite.
+    """Run all test suite in a single test suite module.
+
+    The function assumes the build target we're testing has already been built on the SUT node.
+    The current build target thus corresponds to the current DPDK build present on the SUT node.
 
     Args:
-        sut_node: Node to run tests on.
-        execution: Execution the test case belongs to.
-        build_target_result: Build target configuration test case is run on
-        test_suite_config: Test suite configuration
+        sut_node: The execution's SUT node.
+        tg_node: The execution's TG node.
+        execution: The execution's test run configuration associated with the current build target.
+        build_target_result: The build target level result object associated
+            with the current build target.
+        test_suite_config: Test suite test run configuration specifying the test suite module
+            and possibly a subset of test cases of the test suites in that module.
 
     Raises:
-        BlockingTestSuiteError: If a test suite that was marked as blocking fails.
+        BlockingTestSuiteError: If a blocking test suite fails.
     """
     try:
         full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}"
@@ -248,9 +336,7 @@ def _run_single_suite(
 
 
 def _exit_dts() -> None:
-    """
-    Process all errors and exit with the proper exit code.
-    """
+    """Process all errors and exit with the proper exit code."""
     result.process()
 
     if dts_logger:
diff --git a/dts/main.py b/dts/main.py
index 5d4714b0c3..f703615d11 100755
--- a/dts/main.py
+++ b/dts/main.py
@@ -4,9 +4,7 @@
 # Copyright(c) 2022 PANTHEON.tech s.r.o.
 # Copyright(c) 2022 University of New Hampshire
 
-"""
-A test framework for testing DPDK.
-"""
+"""The DTS executable."""
 
 import logging
 
@@ -17,6 +15,10 @@ def main() -> None:
     """Set DTS settings, then run DTS.
 
     The DTS settings are taken from the command line arguments and the environment variables.
+    The settings object is stored in the module-level variable ``settings.SETTINGS``, which the
+    entire framework uses. After importing the module (or the variable), any changes to the
+    variable won't be reflected without a re-import. This means that ``SETTINGS`` must be
+    modified before the settings module is imported anywhere else in the framework.
     """
     settings.SETTINGS = settings.get_settings()
     from framework import dts
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
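
The stages `run_all` documents nest as four loops. The sketch below is purely
structural; the data layout and the `stage` helper are illustrative stand-ins,
not the framework's API:

    def stage(name: str) -> None:
        # Stand-in for the real setup/run/teardown logic and result recording.
        print(name)

    executions = [{"build_targets": ["x86_64-linux-gcc"], "test_suites": ["hello_world"]}]

    for execution in executions:                          # 1. execution stage
        stage("execution setup")
        for build_target in execution["build_targets"]:   # 2. build target stage
            stage(f"build target setup: {build_target}")
            for suite in execution["test_suites"]:        # 3. test suite stage
                stage(f"suite setup: {suite}")
                for case in ("test_hello_world",):        # 4. test case stage
                    stage(f"{case}: setup, logic, teardown")
                stage(f"suite teardown: {suite}")
            stage("build target teardown")
        stage("execution teardown")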

* [PATCH v7 08/21] dts: test suite docstring update
  2023-11-15 13:09             ` [PATCH v7 00/21] dts: docstrings update Juraj Linkeš
                                 ` (6 preceding siblings ...)
  2023-11-15 13:09               ` [PATCH v7 07/21] dts: dts runner and main " Juraj Linkeš
@ 2023-11-15 13:09               ` Juraj Linkeš
  2023-11-16 22:16                 ` Jeremy Spewock
  2023-11-15 13:09               ` [PATCH v7 09/21] dts: test result " Juraj Linkeš
                                 ` (13 subsequent siblings)
  21 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15 13:09 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/test_suite.py | 223 +++++++++++++++++++++++++++---------
 1 file changed, 168 insertions(+), 55 deletions(-)

diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index d53553bf34..9e5251ffc6 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -2,8 +2,19 @@
 # Copyright(c) 2010-2014 Intel Corporation
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
-Base class for creating DTS test cases.
+"""Features common to all test suites.
+
+The module defines the :class:`TestSuite` class which doesn't contain any test cases, and as such
+must be extended by subclasses which add test cases. The :class:`TestSuite` contains the basics
+needed by subclasses:
+
+    * Test suite and test case execution flow,
+    * Testbed (SUT, TG) configuration,
+    * Packet sending and verification,
+    * Test case verification.
+
+The module also defines a function, :func:`get_test_suites`,
+for gathering test suites from a Python module.
 """
 
 import importlib
@@ -31,25 +42,44 @@
 
 
 class TestSuite(object):
-    """
-    The base TestSuite class provides methods for handling basic flow of a test suite:
-    * test case filtering and collection
-    * test suite setup/cleanup
-    * test setup/cleanup
-    * test case execution
-    * error handling and results storage
-    Test cases are implemented by derived classes. Test cases are all methods
-    starting with test_, further divided into performance test cases
-    (starting with test_perf_) and functional test cases (all other test cases).
-    By default, all test cases will be executed. A list of testcase str names
-    may be specified in conf.yaml or on the command line
-    to filter which test cases to run.
-    The methods named [set_up|tear_down]_[suite|test_case] should be overridden
-    in derived classes if the appropriate suite/test case fixtures are needed.
+    """The base class with methods for handling the basic flow of a test suite.
+
+        * Test case filtering and collection,
+        * Test suite setup/cleanup,
+        * Test setup/cleanup,
+        * Test case execution,
+        * Error handling and results storage.
+
+    Test cases are implemented by subclasses. Test cases are all methods starting with ``test_``,
+    further divided into performance test cases (starting with ``test_perf_``)
+    and functional test cases (all other test cases).
+
+    By default, all test cases will be executed. A list of test case names may be specified
+    in the YAML test run configuration file and in the :option:`--test-cases` command line argument
+    or in the :envvar:`DTS_TESTCASES` environment variable to filter which test cases to run.
+    The union of both lists will be used. Any unknown test cases from the latter lists
+    will be silently ignored.
+
+    If the :option:`--re-run` command line argument or the :envvar:`DTS_RERUN` environment variable
+    is set, a failed test case will be executed again until it passes or fails that many times
+    in addition to the first failure.
+
+    The methods named ``[set_up|tear_down]_[suite|test_case]`` should be overridden in subclasses
+    if the appropriate test suite/test case fixtures are needed.
+
+    The test suite is aware of the testbed (the SUT and TG) it's running on. From this, it can
+    properly choose the IP addresses and other configuration that must be tailored to the testbed.
+
+    Attributes:
+        sut_node: The SUT node where the test suite is running.
+        tg_node: The TG node where the test suite is running.
+        is_blocking: Whether the test suite is blocking. A failure of a blocking test suite
+            will block the execution of all subsequent test suites in the current build target.
     """
 
     sut_node: SutNode
-    is_blocking = False
+    tg_node: TGNode
+    is_blocking: bool = False
     _logger: DTSLOG
     _test_cases_to_run: list[str]
     _func: bool
@@ -72,6 +102,19 @@ def __init__(
         func: bool,
         build_target_result: BuildTargetResult,
     ):
+        """Initialize the test suite testbed information and basic configuration.
+
+        Process what test cases to run, create the associated :class:`TestSuiteResult`,
+        find links between ports and set up default IP addresses to be used when configuring them.
+
+        Args:
+            sut_node: The SUT node where the test suite will run.
+            tg_node: The TG node where the test suite will run.
+            test_cases: The list of test cases to execute.
+                If empty, all test cases will be executed.
+            func: Whether to run functional tests.
+            build_target_result: The build target result this test suite is run in.
+        """
         self.sut_node = sut_node
         self.tg_node = tg_node
         self._logger = getLogger(self.__class__.__name__)
@@ -95,6 +138,7 @@ def __init__(
         self._tg_ip_address_ingress = ip_interface("192.168.101.3/24")
 
     def _process_links(self) -> None:
+        """Construct links between SUT and TG ports."""
         for sut_port in self.sut_node.ports:
             for tg_port in self.tg_node.ports:
                 if (sut_port.identifier, sut_port.peer) == (
@@ -106,27 +150,42 @@ def _process_links(self) -> None:
                     )
 
     def set_up_suite(self) -> None:
-        """
-        Set up test fixtures common to all test cases; this is done before
-        any test case is run.
+        """Set up test fixtures common to all test cases.
+
+        This is done before any test case has been run.
         """
 
     def tear_down_suite(self) -> None:
-        """
-        Tear down the previously created test fixtures common to all test cases.
+        """Tear down the previously created test fixtures common to all test cases.
+
+        This is done after all tests have been run.
         """
 
     def set_up_test_case(self) -> None:
-        """
-        Set up test fixtures before each test case.
+        """Set up test fixtures before each test case.
+
+        This is done before *each* test case.
         """
 
     def tear_down_test_case(self) -> None:
-        """
-        Tear down the previously created test fixtures after each test case.
+        """Tear down the previously created test fixtures after each test case.
+
+        This is done after *each* test case.
         """
 
     def configure_testbed_ipv4(self, restore: bool = False) -> None:
+        """Configure IPv4 addresses on all testbed ports.
+
+        The configured ports are:
+
+        * SUT ingress port,
+        * SUT egress port,
+        * TG ingress port,
+        * TG egress port.
+
+        Args:
+            restore: If :data:`True`, will remove the configuration instead.
+        """
         delete = True if restore else False
         enable = False if restore else True
         self._configure_ipv4_forwarding(enable)
@@ -153,11 +212,13 @@ def _configure_ipv4_forwarding(self, enable: bool) -> None:
     def send_packet_and_capture(
         self, packet: Packet, duration: float = 1
     ) -> list[Packet]:
-        """
-        Send a packet through the appropriate interface and
-        receive on the appropriate interface.
-        Modify the packet with l3/l2 addresses corresponding
-        to the testbed and desired traffic.
+        """Send and receive `packet` using the associated TG.
+
+        Send `packet` through the appropriate interface and receive on the appropriate interface.
+        Modify the packet with l3/l2 addresses corresponding to the testbed and desired traffic.
+
+        Returns:
+            A list of received packets.
         """
         packet = self._adjust_addresses(packet)
         return self.tg_node.send_packet_and_capture(
@@ -165,13 +226,25 @@ def send_packet_and_capture(
         )
 
     def get_expected_packet(self, packet: Packet) -> Packet:
+        """Inject the proper L2/L3 addresses into `packet`.
+
+        Args:
+            packet: The packet to modify.
+
+        Returns:
+            `packet` with injected L2/L3 addresses.
+        """
         return self._adjust_addresses(packet, expected=True)
 
     def _adjust_addresses(self, packet: Packet, expected: bool = False) -> Packet:
-        """
+        """L2 and L3 address additions in both directions.
+
         Assumptions:
-            Two links between SUT and TG, one link is TG -> SUT,
-            the other SUT -> TG.
+            Two links between SUT and TG, one link is TG -> SUT, the other SUT -> TG.
+
+        Args:
+            packet: The packet to modify.
+            expected: If :data:`True`, the direction is SUT -> TG, otherwise the direction is TG -> SUT.
         """
         if expected:
             # The packet enters the TG from SUT
@@ -197,6 +270,19 @@ def _adjust_addresses(self, packet: Packet, expected: bool = False) -> Packet:
         return Ether(packet.build())
 
     def verify(self, condition: bool, failure_description: str) -> None:
+        """Verify `condition` and handle failures.
+
+        When `condition` is :data:`False`, raise an exception and log the last 10 commands
+        executed on both the SUT and TG.
+
+        Args:
+            condition: The condition to check.
+            failure_description: A short description of the failure
+                that will be stored in the raised exception.
+
+        Raises:
+            TestCaseVerifyError: `condition` is :data:`False`.
+        """
         if not condition:
             self._fail_test_case_verify(failure_description)
 
@@ -216,6 +302,19 @@ def _fail_test_case_verify(self, failure_description: str) -> None:
     def verify_packets(
         self, expected_packet: Packet, received_packets: list[Packet]
     ) -> None:
+        """Verify that `expected_packet` has been received.
+
+        Go through `received_packets` and check that `expected_packet` is among them.
+        If not, raise an exception and log the last 10 commands
+        executed on both the SUT and TG.
+
+        Args:
+            expected_packet: The packet we're expecting to receive.
+            received_packets: The packets where we're looking for `expected_packet`.
+
+        Raises:
+            TestCaseVerifyError: `expected_packet` is not among `received_packets`.
+        """
         for received_packet in received_packets:
             if self._compare_packets(expected_packet, received_packet):
                 break
@@ -303,10 +402,14 @@ def _verify_l3_packet(self, received_packet: IP, expected_packet: IP) -> bool:
         return True
 
     def run(self) -> None:
-        """
-        Setup, execute and teardown the whole suite.
-        Suite execution consists of running all test cases scheduled to be executed.
-        A test cast run consists of setup, execution and teardown of said test case.
+        """Set up, execute and tear down the whole suite.
+
+        Test suite execution consists of running all test cases scheduled to be executed.
+        A test case run consists of setup, execution and teardown of said test case.
+
+        Record the setup and the teardown and handle failures.
+
+        The list of scheduled test cases is constructed when creating the :class:`TestSuite` object.
         """
         test_suite_name = self.__class__.__name__
 
@@ -338,9 +441,7 @@ def run(self) -> None:
                 raise BlockingTestSuiteError(test_suite_name)
 
     def _execute_test_suite(self) -> None:
-        """
-        Execute all test cases scheduled to be executed in this suite.
-        """
+        """Execute all test cases scheduled to be executed in this suite."""
         if self._func:
             for test_case_method in self._get_functional_test_cases():
                 test_case_name = test_case_method.__name__
@@ -357,14 +458,18 @@ def _execute_test_suite(self) -> None:
                     self._run_test_case(test_case_method, test_case_result)
 
     def _get_functional_test_cases(self) -> list[MethodType]:
-        """
-        Get all functional test cases.
+        """Get all functional test cases defined in this TestSuite.
+
+        Returns:
+            The list of functional test cases of this TestSuite.
         """
         return self._get_test_cases(r"test_(?!perf_)")
 
     def _get_test_cases(self, test_case_regex: str) -> list[MethodType]:
-        """
-        Return a list of test cases matching test_case_regex.
+        """Return a list of test cases matching test_case_regex.
+
+        Returns:
+            The list of test cases matching test_case_regex of this TestSuite.
         """
         self._logger.debug(f"Searching for test cases in {self.__class__.__name__}.")
         filtered_test_cases = []
@@ -378,9 +483,7 @@ def _get_test_cases(self, test_case_regex: str) -> list[MethodType]:
         return filtered_test_cases
 
     def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool:
-        """
-        Check whether the test case should be executed.
-        """
+        """Check whether the test case should be scheduled to be executed."""
         match = bool(re.match(test_case_regex, test_case_name))
         if self._test_cases_to_run:
             return match and test_case_name in self._test_cases_to_run
@@ -390,9 +493,9 @@ def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool
     def _run_test_case(
         self, test_case_method: MethodType, test_case_result: TestCaseResult
     ) -> None:
-        """
-        Setup, execute and teardown a test case in this suite.
-        Exceptions are caught and recorded in logs and results.
+        """Setup, execute and teardown a test case in this suite.
+
+        Record the result of the setup and the teardown and handle failures.
         """
         test_case_name = test_case_method.__name__
 
@@ -427,9 +530,7 @@ def _run_test_case(
     def _execute_test_case(
         self, test_case_method: MethodType, test_case_result: TestCaseResult
     ) -> None:
-        """
-        Execute one test case and handle failures.
-        """
+        """Execute one test case, record the result and handle failures."""
         test_case_name = test_case_method.__name__
         try:
             self._logger.info(f"Starting test case execution: {test_case_name}")
@@ -452,6 +553,18 @@ def _execute_test_case(
 
 
 def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
+    r"""Find all :class:`TestSuite`\s in a Python module.
+
+    Args:
+        testsuite_module_path: The path to the Python module.
+
+    Returns:
+        The list of :class:`TestSuite`\s found within the Python module.
+
+    Raises:
+        ConfigurationError: The test suite module was not found.
+    """
+
     def is_test_suite(object: Any) -> bool:
         try:
             if issubclass(object, TestSuite) and object is not TestSuite:
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
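
The conventions spelled out in the new :class:`TestSuite` docstring (the
``test_`` and ``test_perf_`` prefixes, the overridable fixtures, `verify`) can
be seen in a minimal hypothetical suite. The import path follows the patch,
while the suite and its test logic are made up for illustration:

    from framework.test_suite import TestSuite

    class TestSmoke(TestSuite):
        def set_up_suite(self) -> None:
            # Fixture common to all test cases; runs once, before any test case.
            self.expected = "hello"

        def test_greeting(self) -> None:
            # A functional test case: any method whose name starts with ``test_``.
            self.verify(self.expected == "hello", "unexpected greeting")

        def test_perf_noop(self) -> None:
            # A performance test case: the name starts with ``test_perf_``.
            pass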

* [PATCH v7 09/21] dts: test result docstring update
  2023-11-15 13:09             ` [PATCH v7 00/21] dts: docstrings update Juraj Linkeš
                                 ` (7 preceding siblings ...)
  2023-11-15 13:09               ` [PATCH v7 08/21] dts: test suite " Juraj Linkeš
@ 2023-11-15 13:09               ` Juraj Linkeš
  2023-11-16 22:47                 ` Jeremy Spewock
  2023-11-15 13:09               ` [PATCH v7 10/21] dts: config " Juraj Linkeš
                                 ` (12 subsequent siblings)
  21 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15 13:09 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/test_result.py | 292 ++++++++++++++++++++++++++++-------
 1 file changed, 234 insertions(+), 58 deletions(-)

diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
index 603e18872c..05e210f6e7 100644
--- a/dts/framework/test_result.py
+++ b/dts/framework/test_result.py
@@ -2,8 +2,25 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
-"""
-Generic result container and reporters
+r"""Record and process DTS results.
+
+The results are recorded in a hierarchical manner:
+
+    * :class:`DTSResult` contains
+    * :class:`ExecutionResult` contains
+    * :class:`BuildTargetResult` contains
+    * :class:`TestSuiteResult` contains
+    * :class:`TestCaseResult`
+
+Each result may contain multiple lower level results, e.g. there are multiple
+:class:`TestSuiteResult`\s in a :class:`BuildTargetResult`.
+The results have common parts, such as setup and teardown results, captured in :class:`BaseResult`,
+which also defines some common behaviors in its methods.
+
+Each result class has its own idiosyncrasies which they implement in overridden methods.
+
+The :option:`--output` command line argument and the :envvar:`DTS_OUTPUT_DIR` environment
+variable modify the directory where the files with results will be stored.
 """
 
 import os.path
@@ -26,26 +43,34 @@
 
 
 class Result(Enum):
-    """
-    An Enum defining the possible states that
-    a setup, a teardown or a test case may end up in.
-    """
+    """The possible states that a setup, a teardown or a test case may end up in."""
 
+    #:
     PASS = auto()
+    #:
     FAIL = auto()
+    #:
     ERROR = auto()
+    #:
     SKIP = auto()
 
     def __bool__(self) -> bool:
+        """Only PASS is True."""
         return self is self.PASS
 
 
 class FixtureResult(object):
-    """
-    A record that stored the result of a setup or a teardown.
-    The default is FAIL because immediately after creating the object
-    the setup of the corresponding stage will be executed, which also guarantees
-    the execution of teardown.
+    """A record that stores the result of a setup or a teardown.
+
+    FAIL is a sensible default since it prevents false positives
+    (which could happen if the default was PASS).
+
+    Preventing false positives or other false results is preferable since a failure
+    is most likely to be investigated (the other false results may not be investigated at all).
+
+    Attributes:
+        result: The associated result.
+        error: The error in case of a failure.
     """
 
     result: Result
@@ -56,21 +81,32 @@ def __init__(
         result: Result = Result.FAIL,
         error: Exception | None = None,
     ):
+        """Initialize the constructor with the fixture result and store a possible error.
+
+        Args:
+            result: The result to store.
+            error: The error which happened when a failure occurred.
+        """
         self.result = result
         self.error = error
 
     def __bool__(self) -> bool:
+        """A wrapper around the stored :class:`Result`."""
         return bool(self.result)
 
 
 class Statistics(dict):
-    """
-    A helper class used to store the number of test cases by its result
-    along a few other basic information.
-    Using a dict provides a convenient way to format the data.
+    """How many test cases ended in which result state along some other basic information.
+
+    Subclassing :class:`dict` provides a convenient way to format the data.
     """
 
     def __init__(self, dpdk_version: str | None):
+        """Extend the constructor with relevant keys.
+
+        Args:
+            dpdk_version: The version of tested DPDK.
+        """
         super(Statistics, self).__init__()
         for result in Result:
             self[result.name] = 0
@@ -78,8 +114,17 @@ def __init__(self, dpdk_version: str | None):
         self["DPDK VERSION"] = dpdk_version
 
     def __iadd__(self, other: Result) -> "Statistics":
-        """
-        Add a Result to the final count.
+        """Add a Result to the final count.
+
+        Example:
+            ::
+
+                stats = Statistics(None)  # empty Statistics with no DPDK version
+                stats += Result.PASS  # add a Result to `stats`
+
+        Args:
+            other: The Result to add to this statistics object.
+
+        Returns:
+            The modified statistics object.
         """
         self[other.name] += 1
         self["PASS RATE"] = (
@@ -90,9 +135,7 @@ def __iadd__(self, other: Result) -> "Statistics":
         return self
 
     def __str__(self) -> str:
-        """
-        Provide a string representation of the data.
-        """
+        """Each line contains the formatted key = value pair."""
         stats_str = ""
         for key, value in self.items():
             stats_str += f"{key:<12} = {value}\n"
@@ -102,10 +145,16 @@ def __str__(self) -> str:
 
 
 class BaseResult(object):
-    """
-    The Base class for all results. Stores the results of
-    the setup and teardown portions of the corresponding stage
-    and a list of results from each inner stage in _inner_results.
+    """Common data and behavior of DTS results.
+
+    Stores the results of the setup and teardown portions of the corresponding stage.
+    The hierarchical nature of DTS results is captured recursively in an internal list.
+    Each level in this hierarchy is a stage (pre-execution or the top-most level,
+    execution, build target, test suite and test case).
+
+    Attributes:
+        setup_result: The result of the setup of the particular stage.
+        teardown_result: The result of the teardown of the particular stage.
     """
 
     setup_result: FixtureResult
@@ -113,15 +162,28 @@ class BaseResult(object):
     _inner_results: MutableSequence["BaseResult"]
 
     def __init__(self):
+        """Initialize the constructor."""
         self.setup_result = FixtureResult()
         self.teardown_result = FixtureResult()
         self._inner_results = []
 
     def update_setup(self, result: Result, error: Exception | None = None) -> None:
+        """Store the setup result.
+
+        Args:
+            result: The result of the setup.
+            error: The error that occurred in case of a failure.
+        """
         self.setup_result.result = result
         self.setup_result.error = error
 
     def update_teardown(self, result: Result, error: Exception | None = None) -> None:
+        """Store the teardown result.
+
+        Args:
+            result: The result of the teardown.
+            error: The error that occurred in case of a failure.
+        """
         self.teardown_result.result = result
         self.teardown_result.error = error
 
@@ -141,27 +203,55 @@ def _get_inner_errors(self) -> list[Exception]:
         ]
 
     def get_errors(self) -> list[Exception]:
+        """Compile errors from the whole result hierarchy.
+
+        Returns:
+            The errors from setup, teardown and all errors found in the whole result hierarchy.
+        """
         return self._get_setup_teardown_errors() + self._get_inner_errors()
 
     def add_stats(self, statistics: Statistics) -> None:
+        """Collate stats from the whole result hierarchy.
+
+        Args:
+            statistics: The :class:`Statistics` object where the stats will be collated.
+        """
         for inner_result in self._inner_results:
             inner_result.add_stats(statistics)
 
 
 class TestCaseResult(BaseResult, FixtureResult):
-    """
-    The test case specific result.
-    Stores the result of the actual test case.
-    Also stores the test case name.
+    r"""The test case specific result.
+
+    Stores the result of the actual test case. This is done by adding an extra superclass
+    in :class:`FixtureResult`. The setup and teardown results are :class:`FixtureResult`\s and
+    the class is itself a record of the test case.
+
+    Attributes:
+        test_case_name: The test case name.
     """
 
     test_case_name: str
 
     def __init__(self, test_case_name: str):
+        """Extend the constructor with `test_case_name`.
+
+        Args:
+            test_case_name: The test case's name.
+        """
         super(TestCaseResult, self).__init__()
         self.test_case_name = test_case_name
 
     def update(self, result: Result, error: Exception | None = None) -> None:
+        """Update the test case result.
+
+        This updates the result of the test case itself and doesn't affect
+        the results of the setup and teardown steps in any way.
+
+        Args:
+            result: The result of the test case.
+            error: The error that occurred in case of a failure.
+        """
         self.result = result
         self.error = error
 
@@ -171,38 +261,66 @@ def _get_inner_errors(self) -> list[Exception]:
         return []
 
     def add_stats(self, statistics: Statistics) -> None:
+        r"""Add the test case result to statistics.
+
+        The base method goes through the hierarchy recursively and this method is here to stop
+        the recursion, as the :class:`TestCaseResult`\s are the leaves of the hierarchy tree.
+
+        Args:
+            statistics: The :class:`Statistics` object where the stats will be added.
+        """
         statistics += self.result
 
     def __bool__(self) -> bool:
+        """The test case passed only if setup, teardown and the test case itself passed."""
         return (
             bool(self.setup_result) and bool(self.teardown_result) and bool(self.result)
         )
 
 
 class TestSuiteResult(BaseResult):
-    """
-    The test suite specific result.
-    The _inner_results list stores results of test cases in a given test suite.
-    Also stores the test suite name.
+    """The test suite specific result.
+
+    The internal list stores the results of all test cases in a given test suite.
+
+    Attributes:
+        suite_name: The test suite name.
     """
 
     suite_name: str
 
     def __init__(self, suite_name: str):
+        """Extend the constructor with `suite_name`.
+
+        Args:
+            suite_name: The test suite's name.
+        """
         super(TestSuiteResult, self).__init__()
         self.suite_name = suite_name
 
     def add_test_case(self, test_case_name: str) -> TestCaseResult:
+        """Add and return the inner result (test case).
+
+        Returns:
+            The test case's result.
+        """
         test_case_result = TestCaseResult(test_case_name)
         self._inner_results.append(test_case_result)
         return test_case_result
 
 
 class BuildTargetResult(BaseResult):
-    """
-    The build target specific result.
-    The _inner_results list stores results of test suites in a given build target.
-    Also stores build target specifics, such as compiler used to build DPDK.
+    """The build target specific result.
+
+    The internal list stores the results of all test suites in a given build target.
+
+    Attributes:
+        arch: The DPDK build target architecture.
+        os: The DPDK build target operating system.
+        cpu: The DPDK build target CPU.
+        compiler: The DPDK build target compiler.
+        compiler_version: The DPDK build target compiler version.
+        dpdk_version: The built DPDK version.
     """
 
     arch: Architecture
@@ -213,6 +331,11 @@ class BuildTargetResult(BaseResult):
     dpdk_version: str | None
 
     def __init__(self, build_target: BuildTargetConfiguration):
+        """Extend the constructor with the `build_target`'s build target config.
+
+        Args:
+            build_target: The build target's test run configuration.
+        """
         super(BuildTargetResult, self).__init__()
         self.arch = build_target.arch
         self.os = build_target.os
@@ -222,20 +345,35 @@ def __init__(self, build_target: BuildTargetConfiguration):
         self.dpdk_version = None
 
     def add_build_target_info(self, versions: BuildTargetInfo) -> None:
+        """Add information about the build target gathered at runtime.
+
+        Args:
+            versions: The additional information (compiler and DPDK versions).
+        """
         self.compiler_version = versions.compiler_version
         self.dpdk_version = versions.dpdk_version
 
     def add_test_suite(self, test_suite_name: str) -> TestSuiteResult:
+        """Add and return the inner result (test suite).
+
+        Returns:
+            The test suite's result.
+        """
         test_suite_result = TestSuiteResult(test_suite_name)
         self._inner_results.append(test_suite_result)
         return test_suite_result
 
 
 class ExecutionResult(BaseResult):
-    """
-    The execution specific result.
-    The _inner_results list stores results of build targets in a given execution.
-    Also stores the SUT node configuration.
+    """The execution specific result.
+
+    The internal list stores the results of all build targets in a given execution.
+
+    Attributes:
+        sut_node: The SUT node used in the execution.
+        sut_os_name: The operating system of the SUT node.
+        sut_os_version: The operating system version of the SUT node.
+        sut_kernel_version: The operating system kernel version of the SUT node.
     """
 
     sut_node: NodeConfiguration
@@ -244,36 +382,55 @@ class ExecutionResult(BaseResult):
     sut_kernel_version: str
 
     def __init__(self, sut_node: NodeConfiguration):
+        """Extend the constructor with the `sut_node`'s config.
+
+        Args:
+            sut_node: The SUT node's test run configuration used in the execution.
+        """
         super(ExecutionResult, self).__init__()
         self.sut_node = sut_node
 
     def add_build_target(
         self, build_target: BuildTargetConfiguration
     ) -> BuildTargetResult:
+        """Add and return the inner result (build target).
+
+        Args:
+            build_target: The build target's test run configuration.
+
+        Returns:
+            The build target's result.
+        """
         build_target_result = BuildTargetResult(build_target)
         self._inner_results.append(build_target_result)
         return build_target_result
 
     def add_sut_info(self, sut_info: NodeInfo) -> None:
+        """Add SUT information gathered at runtime.
+
+        Args:
+            sut_info: The additional SUT node information.
+        """
         self.sut_os_name = sut_info.os_name
         self.sut_os_version = sut_info.os_version
         self.sut_kernel_version = sut_info.kernel_version
 
 
 class DTSResult(BaseResult):
-    """
-    Stores environment information and test results from a DTS run, which are:
-    * Execution level information, such as SUT and TG hardware.
-    * Build target level information, such as compiler, target OS and cpu.
-    * Test suite results.
-    * All errors that are caught and recorded during DTS execution.
+    """Stores environment information and test results from a DTS run.
 
-    The information is stored in nested objects.
+        * Execution level information, such as testbed and the test suite list,
+        * Build target level information, such as compiler, target OS and cpu,
+        * Test suite and test case results,
+        * All errors that are caught and recorded during DTS execution.
 
-    The class is capable of computing the return code used to exit DTS with
-    from the stored error.
+    The information is stored hierarchically. This is the first level of the hierarchy
+    and as such is where the data from the whole hierarchy is collated or processed.
 
-    It also provides a brief statistical summary of passed/failed test cases.
+    The internal list stores the results of all executions.
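+
+    An illustrative sketch of how the hierarchy is built up during a run (the config
+    objects are assumed to come from the test run configuration)::
+
+        result = DTSResult(logger)
+        execution_result = result.add_execution(sut_node_config)
+        build_target_result = execution_result.add_build_target(build_target_config)
+        test_suite_result = build_target_result.add_test_suite("hello_world")
+        test_case_result = test_suite_result.add_test_case("test_single_core")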
+
+    Attributes:
+        dpdk_version: The DPDK version to record.
     """
 
     dpdk_version: str | None
@@ -284,6 +441,11 @@ class DTSResult(BaseResult):
     _stats_filename: str
 
     def __init__(self, logger: DTSLOG):
+        """Extend the constructor with top-level specifics.
+
+        Args:
+            logger: The logger instance the whole result will use.
+        """
         super(DTSResult, self).__init__()
         self.dpdk_version = None
         self._logger = logger
@@ -293,21 +455,33 @@ def __init__(self, logger: DTSLOG):
         self._stats_filename = os.path.join(SETTINGS.output_dir, "statistics.txt")
 
     def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult:
+        """Add and return the inner result (execution).
+
+        Args:
+            sut_node: The SUT node's test run configuration.
+
+        Returns:
+            The execution's result.
+        """
         execution_result = ExecutionResult(sut_node)
         self._inner_results.append(execution_result)
         return execution_result
 
     def add_error(self, error: Exception) -> None:
+        """Record an error that occurred outside any execution.
+
+        Args:
+            error: The exception to record.
+        """
         self._errors.append(error)
 
     def process(self) -> None:
-        """
-        Process the data after a DTS run.
-        The data is added to nested objects during runtime and this parent object
-        is not updated at that time. This requires us to process the nested data
-        after it's all been gathered.
+        """Process the data after a whole DTS run.
+
+        The data is added to inner objects during runtime and this object is not updated
+        at that time. This requires us to process the inner data after it's all been gathered.
 
-        The processing gathers all errors and the result statistics of test cases.
+        The processing gathers all errors and the statistics of test case results.
         """
         self._errors += self.get_errors()
         if self._errors and self._logger:
@@ -321,8 +495,10 @@ def process(self) -> None:
             stats_file.write(str(self._stats_result))
 
     def get_return_code(self) -> int:
-        """
-        Go through all stored Exceptions and return the highest error code found.
+        """Go through all stored Exceptions and return the final DTS error code.
+
+        Returns:
+            The highest error code found.
         """
         for error in self._errors:
             error_return_code = ErrorSeverity.GENERIC_ERR
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v7 10/21] dts: config docstring update
  2023-11-15 13:09             ` [PATCH v7 00/21] dts: docstrings update Juraj Linkeš
                                 ` (8 preceding siblings ...)
  2023-11-15 13:09               ` [PATCH v7 09/21] dts: test result " Juraj Linkeš
@ 2023-11-15 13:09               ` Juraj Linkeš
  2023-11-21 15:08                 ` Yoan Picchi
  2023-11-15 13:09               ` [PATCH v7 11/21] dts: remote session " Juraj Linkeš
                                 ` (11 subsequent siblings)
  21 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15 13:09 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.
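
For reference, a minimal sketch of the docstring shape this series targets
(an invented function, not taken from the patch)::

    def add(a: int, b: int) -> int:
        """Return the sum of two numbers.

        Args:
            a: The first addend.
            b: The second addend.

        Returns:
            The sum of `a` and `b`.
        """
        return a + b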

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/config/__init__.py | 371 ++++++++++++++++++++++++++-----
 dts/framework/config/types.py    | 132 +++++++++++
 2 files changed, 446 insertions(+), 57 deletions(-)
 create mode 100644 dts/framework/config/types.py

diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 2044c82611..0aa149a53d 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -3,8 +3,34 @@
 # Copyright(c) 2022-2023 University of New Hampshire
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
-Yaml config parsing methods
+"""Testbed configuration and test suite specification.
+
+This package offers classes that hold real-time information about the testbed and the test run
+configuration describing the tested testbed, as well as a loader function, :func:`load_config`,
+which loads the YAML test run configuration file and validates it
+against :download:`the schema <conf_yaml_schema.json>`.
+
+The YAML test run configuration file is parsed into a dictionary, parts of which are used throughout
+this package. The allowed keys and types inside this dictionary are defined in
+the :doc:`types <framework.config.types>` module.
+
+The test run configuration has two main sections:
+
+    * The :class:`ExecutionConfiguration` which defines what tests are going to be run
+      and how DPDK will be built. It also references the testbed where these tests and DPDK
+      are going to be run,
+    * The nodes of the testbed are defined in the other section,
+      a :class:`list` of :class:`NodeConfiguration` objects.
+
+The real-time information about the testbed is supposed to be gathered at runtime.
+
+The classes defined in this package make heavy use of :mod:`dataclasses`.
+All of them use slots and are frozen:
+
+    * Slots enable some optimizations by pre-allocating space for the defined
+      attributes in the underlying data structure,
+    * Frozen makes the object immutable, which enables further optimizations
+      and makes it thread safe should we ever want to move in that direction.
 """
 
 import json
@@ -12,11 +38,20 @@
 import pathlib
 from dataclasses import dataclass
 from enum import auto, unique
-from typing import Any, TypedDict, Union
+from typing import Union
 
 import warlock  # type: ignore[import]
 import yaml
 
+from framework.config.types import (
+    BuildTargetConfigDict,
+    ConfigurationDict,
+    ExecutionConfigDict,
+    NodeConfigDict,
+    PortConfigDict,
+    TestSuiteConfigDict,
+    TrafficGeneratorConfigDict,
+)
 from framework.exception import ConfigurationError
 from framework.settings import SETTINGS
 from framework.utils import StrEnum
@@ -24,55 +59,97 @@
 
 @unique
 class Architecture(StrEnum):
+    r"""The supported architectures of :class:`~framework.testbed_model.node.Node`\s."""
+
+    #:
     i686 = auto()
+    #:
     x86_64 = auto()
+    #:
     x86_32 = auto()
+    #:
     arm64 = auto()
+    #:
     ppc64le = auto()
 
 
 @unique
 class OS(StrEnum):
+    r"""The supported operating systems of :class:`~framework.testbed_model.node.Node`\s."""
+
+    #:
     linux = auto()
+    #:
     freebsd = auto()
+    #:
     windows = auto()
 
 
 @unique
 class CPUType(StrEnum):
+    r"""The supported CPUs of :class:`~framework.testbed_model.node.Node`\s."""
+
+    #:
     native = auto()
+    #:
     armv8a = auto()
+    #:
     dpaa2 = auto()
+    #:
     thunderx = auto()
+    #:
     xgene1 = auto()
 
 
 @unique
 class Compiler(StrEnum):
+    r"""The supported compilers of :class:`~framework.testbed_model.node.Node`\s."""
+
+    #:
     gcc = auto()
+    #:
     clang = auto()
+    #:
     icc = auto()
+    #:
     msvc = auto()
 
 
 @unique
 class TrafficGeneratorType(StrEnum):
+    """The supported traffic generators."""
+
+    #:
     SCAPY = auto()
 
 
-# Slots enables some optimizations, by pre-allocating space for the defined
-# attributes in the underlying data structure.
-#
-# Frozen makes the object immutable. This enables further optimizations,
-# and makes it thread safe should we every want to move in that direction.
 @dataclass(slots=True, frozen=True)
 class HugepageConfiguration:
+    r"""The hugepage configuration of :class:`~framework.testbed_model.node.Node`\s.
+
+    Attributes:
+        amount: The number of hugepages.
+        force_first_numa: If :data:`True`, the hugepages will be configured on the first NUMA node.
+    """
+
     amount: int
     force_first_numa: bool
 
 
 @dataclass(slots=True, frozen=True)
 class PortConfig:
+    r"""The port configuration of :class:`~framework.testbed_model.node.Node`\s.
+
+    Attributes:
+        node: The :class:`~framework.testbed_model.node.Node` where this port exists.
+        pci: The PCI address of the port.
+        os_driver_for_dpdk: The operating system driver name for use with DPDK.
+        os_driver: The operating system driver name when the operating system controls the port.
+        peer_node: The :class:`~framework.testbed_model.node.Node` of the port
+            connected to this port.
+        peer_pci: The PCI address of the port connected to this port.
+    """
+
     node: str
     pci: str
     os_driver_for_dpdk: str
@@ -81,18 +158,44 @@ class PortConfig:
     peer_pci: str
 
     @staticmethod
-    def from_dict(node: str, d: dict) -> "PortConfig":
+    def from_dict(node: str, d: PortConfigDict) -> "PortConfig":
+        """A convenience method that creates the object from fewer inputs.
+
+        Args:
+            node: The node where this port exists.
+            d: The configuration dictionary.
+
+        Returns:
+            The port configuration instance.
+        """
         return PortConfig(node=node, **d)
 
 
 @dataclass(slots=True, frozen=True)
 class TrafficGeneratorConfig:
+    """The configuration of traffic generators.
+
+    The class will be expanded when more configuration is needed.
+
+    Attributes:
+        traffic_generator_type: The type of the traffic generator.
+    """
+
     traffic_generator_type: TrafficGeneratorType
 
     @staticmethod
-    def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig":
-        # This looks useless now, but is designed to allow expansion to traffic
-        # generators that require more configuration later.
+    def from_dict(d: TrafficGeneratorConfigDict) -> "ScapyTrafficGeneratorConfig":
+        """A convenience method that produces traffic generator config of the proper type.
+
+        Args:
+            d: The configuration dictionary.
+
+        Returns:
+            The traffic generator configuration instance.
+
+        Raises:
+            ConfigurationError: An unknown traffic generator type was encountered.
+        """
         match TrafficGeneratorType(d["type"]):
             case TrafficGeneratorType.SCAPY:
                 return ScapyTrafficGeneratorConfig(
@@ -106,11 +209,31 @@ def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig":
 
 @dataclass(slots=True, frozen=True)
 class ScapyTrafficGeneratorConfig(TrafficGeneratorConfig):
+    """Scapy traffic generator specific configuration."""
+
     pass
 
 
 @dataclass(slots=True, frozen=True)
 class NodeConfiguration:
+    r"""The configuration of :class:`~framework.testbed_model.node.Node`\s.
+
+    Attributes:
+        name: The name of the :class:`~framework.testbed_model.node.Node`.
+        hostname: The hostname of the :class:`~framework.testbed_model.node.Node`.
+            Can be an IP or a domain name.
+        user: The name of the user used to connect to
+            the :class:`~framework.testbed_model.node.Node`.
+        password: The password of the user. The use of passwords is heavily discouraged.
+            Please use keys instead.
+        arch: The architecture of the :class:`~framework.testbed_model.node.Node`.
+        os: The operating system of the :class:`~framework.testbed_model.node.Node`.
+        lcores: A comma delimited list of logical cores to use when running DPDK.
+        use_first_core: If :data:`True`, the first logical core won't be used.
+        hugepages: An optional hugepage configuration.
+        ports: The ports that can be used in testing.
+    """
+
     name: str
     hostname: str
     user: str
@@ -123,57 +246,91 @@ class NodeConfiguration:
     ports: list[PortConfig]
 
     @staticmethod
-    def from_dict(d: dict) -> Union["SutNodeConfiguration", "TGNodeConfiguration"]:
-        hugepage_config = d.get("hugepages")
-        if hugepage_config:
-            if "force_first_numa" not in hugepage_config:
-                hugepage_config["force_first_numa"] = False
-            hugepage_config = HugepageConfiguration(**hugepage_config)
-
-        common_config = {
-            "name": d["name"],
-            "hostname": d["hostname"],
-            "user": d["user"],
-            "password": d.get("password"),
-            "arch": Architecture(d["arch"]),
-            "os": OS(d["os"]),
-            "lcores": d.get("lcores", "1"),
-            "use_first_core": d.get("use_first_core", False),
-            "hugepages": hugepage_config,
-            "ports": [PortConfig.from_dict(d["name"], port) for port in d["ports"]],
-        }
-
+    def from_dict(
+        d: NodeConfigDict,
+    ) -> Union["SutNodeConfiguration", "TGNodeConfiguration"]:
+        """A convenience method that processes the inputs before creating a specialized instance.
+
+        Args:
+            d: The configuration dictionary.
+
+        Returns:
+            Either an SUT or TG configuration instance.
+        """
+        hugepage_config = None
+        if "hugepages" in d:
+            hugepage_config_dict = d["hugepages"]
+            if "force_first_numa" not in hugepage_config_dict:
+                hugepage_config_dict["force_first_numa"] = False
+            hugepage_config = HugepageConfiguration(**hugepage_config_dict)
+
+        # The calls here contain duplicated code, which is needed because Mypy
+        # doesn't properly support dictionary unpacking with TypedDicts.
         if "traffic_generator" in d:
             return TGNodeConfiguration(
+                name=d["name"],
+                hostname=d["hostname"],
+                user=d["user"],
+                password=d.get("password"),
+                arch=Architecture(d["arch"]),
+                os=OS(d["os"]),
+                lcores=d.get("lcores", "1"),
+                use_first_core=d.get("use_first_core", False),
+                hugepages=hugepage_config,
+                ports=[PortConfig.from_dict(d["name"], port) for port in d["ports"]],
                 traffic_generator=TrafficGeneratorConfig.from_dict(
                     d["traffic_generator"]
                 ),
-                **common_config,
             )
         else:
             return SutNodeConfiguration(
-                memory_channels=d.get("memory_channels", 1), **common_config
+                name=d["name"],
+                hostname=d["hostname"],
+                user=d["user"],
+                password=d.get("password"),
+                arch=Architecture(d["arch"]),
+                os=OS(d["os"]),
+                lcores=d.get("lcores", "1"),
+                use_first_core=d.get("use_first_core", False),
+                hugepages=hugepage_config,
+                ports=[PortConfig.from_dict(d["name"], port) for port in d["ports"]],
+                memory_channels=d.get("memory_channels", 1),
             )
 
 
 @dataclass(slots=True, frozen=True)
 class SutNodeConfiguration(NodeConfiguration):
+    """:class:`~framework.testbed_model.sut_node.SutNode` specific configuration.
+
+    Attributes:
+        memory_channels: The number of memory channels to use when running DPDK.
+    """
+
     memory_channels: int
 
 
 @dataclass(slots=True, frozen=True)
 class TGNodeConfiguration(NodeConfiguration):
+    """:class:`~framework.testbed_model.tg_node.TGNode` specific configuration.
+
+    Attributes:
+        traffic_generator: The configuration of the traffic generator present on the TG node.
+    """
+
     traffic_generator: ScapyTrafficGeneratorConfig
 
 
 @dataclass(slots=True, frozen=True)
 class NodeInfo:
-    """Class to hold important versions within the node.
-
-    This class, unlike the NodeConfiguration class, cannot be generated at the start.
-    This is because we need to initialize a connection with the node before we can
-    collect the information needed in this class. Therefore, it cannot be a part of
-    the configuration class above.
+    """Supplemental node information.
+
+    Attributes:
+        os_name: The name of the running operating system of
+            the :class:`~framework.testbed_model.node.Node`.
+        os_version: The version of the running operating system of
+            the :class:`~framework.testbed_model.node.Node`.
+        kernel_version: The kernel version of the running operating system of
+            the :class:`~framework.testbed_model.node.Node`.
     """
 
     os_name: str
@@ -183,6 +340,20 @@ class NodeInfo:
 
 @dataclass(slots=True, frozen=True)
 class BuildTargetConfiguration:
+    """DPDK build configuration.
+
+    The configuration used for building DPDK.
+
+    Attributes:
+        arch: The target architecture to build for.
+        os: The target OS to build for.
+        cpu: The target CPU to build for.
+        compiler: The compiler executable to use.
+        compiler_wrapper: This string will be put in front of the compiler when
+            executing the build. Useful for adding wrapper commands, such as ``ccache``.
+        name: The name of the build target, combining `arch`, `os`, `cpu` and `compiler`.
+    """
+
     arch: Architecture
     os: OS
     cpu: CPUType
@@ -191,7 +362,18 @@ class BuildTargetConfiguration:
     name: str
 
     @staticmethod
-    def from_dict(d: dict) -> "BuildTargetConfiguration":
+    def from_dict(d: BuildTargetConfigDict) -> "BuildTargetConfiguration":
+        r"""A convenience method that processes the inputs before creating an instance.
+
+        `arch`, `os`, `cpu` and `compiler` are converted to :class:`Enum`\s and
+        `name` is constructed from `arch`, `os`, `cpu` and `compiler`.
+
+        Args:
+            d: The configuration dictionary.
+
+        Returns:
+            The build target configuration instance.
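+
+        Example:
+            An illustrative input and the `name` it produces (values invented)::
+
+                d = {"arch": "x86_64", "os": "linux", "cpu": "native", "compiler": "gcc"}
+                BuildTargetConfiguration.from_dict(d).name  # e.g. "x86_64-linux-native-gcc"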
+        """
         return BuildTargetConfiguration(
             arch=Architecture(d["arch"]),
             os=OS(d["os"]),
@@ -204,23 +386,29 @@ def from_dict(d: dict) -> "BuildTargetConfiguration":
 
 @dataclass(slots=True, frozen=True)
 class BuildTargetInfo:
-    """Class to hold important versions within the build target.
+    """Various versions and other information about a build target.
 
-    This is very similar to the NodeInfo class, it just instead holds information
-    for the build target.
+    Attributes:
+        dpdk_version: The DPDK version that was built.
+        compiler_version: The version of the compiler used to build DPDK.
     """
 
     dpdk_version: str
     compiler_version: str
 
 
-class TestSuiteConfigDict(TypedDict):
-    suite: str
-    cases: list[str]
-
-
 @dataclass(slots=True, frozen=True)
 class TestSuiteConfig:
+    """Test suite configuration.
+
+    Information about a single test suite to be executed.
+
+    Attributes:
+        test_suite: The name of the test suite module without the starting ``TestSuite_``.
+        test_cases: The names of test cases from this test suite to execute.
+            If empty, all test cases will be executed.
+    """
+
     test_suite: str
     test_cases: list[str]
 
@@ -228,6 +416,14 @@ class TestSuiteConfig:
     def from_dict(
         entry: str | TestSuiteConfigDict,
     ) -> "TestSuiteConfig":
+        """Create an instance from two different types.
+
+        Args:
+            entry: Either a suite name or a dictionary containing the config.
+
+        Returns:
+            The test suite configuration instance.
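+
+        Example:
+            An illustrative ``test_suites`` section of the YAML test run configuration
+            (the test case name is invented); the first entry runs all test cases
+            of the suite, the second only the listed one::
+
+                test_suites:
+                  - hello_world
+                  - suite: hello_world
+                    cases: [hello_world_single_core]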
+        """
         if isinstance(entry, str):
             return TestSuiteConfig(test_suite=entry, test_cases=[])
         elif isinstance(entry, dict):
@@ -238,19 +434,49 @@ def from_dict(
 
 @dataclass(slots=True, frozen=True)
 class ExecutionConfiguration:
+    """The configuration of an execution.
+
+    The configuration contains testbed information, what tests to execute
+    and with what DPDK build.
+
+    Attributes:
+        build_targets: A list of DPDK builds to test.
+        perf: Whether to run performance tests.
+        func: Whether to run functional tests.
+        skip_smoke_tests: Whether to skip smoke tests.
+        test_suites: The names of test suites and/or test cases to execute.
+        system_under_test_node: The SUT node to use in this execution.
+        traffic_generator_node: The TG node to use in this execution.
+        vdevs: The names of virtual devices to test.
+    """
+
     build_targets: list[BuildTargetConfiguration]
     perf: bool
     func: bool
+    skip_smoke_tests: bool
     test_suites: list[TestSuiteConfig]
     system_under_test_node: SutNodeConfiguration
     traffic_generator_node: TGNodeConfiguration
     vdevs: list[str]
-    skip_smoke_tests: bool
 
     @staticmethod
     def from_dict(
-        d: dict, node_map: dict[str, Union[SutNodeConfiguration | TGNodeConfiguration]]
+        d: ExecutionConfigDict,
+        node_map: dict[str, Union[SutNodeConfiguration | TGNodeConfiguration]],
     ) -> "ExecutionConfiguration":
+        """A convenience method that processes the inputs before creating an instance.
+
+        The build target and the test suite configs are transformed into their respective objects.
+        SUT and TG configuration are taken from `node_map`. The other (:class:`bool`) attributes are
+        just stored.
+
+        Args:
+            d: The configuration dictionary.
+            node_map: A dictionary mapping node names to their config objects.
+
+        Returns:
+            The execution configuration instance.
+        """
         build_targets: list[BuildTargetConfiguration] = list(
             map(BuildTargetConfiguration.from_dict, d["build_targets"])
         )
@@ -291,10 +517,31 @@ def from_dict(
 
 @dataclass(slots=True, frozen=True)
 class Configuration:
+    """DTS testbed and test configuration.
+
+    The node configuration is not stored in this object. Rather, all used node configurations
+    are stored inside the execution configurations where the nodes are actually used.
+
+    Attributes:
+        executions: Execution configurations.
+    """
+
     executions: list[ExecutionConfiguration]
 
     @staticmethod
-    def from_dict(d: dict) -> "Configuration":
+    def from_dict(d: ConfigurationDict) -> "Configuration":
+        """A convenience method that processes the inputs before creating an instance.
+
+        The node configurations are processed first and mapped by name. The execution
+        configurations are then created from the rest of the dictionary, with node
+        references resolved against that map.
+
+        Args:
+            d: The configuration dictionary.
+
+        Returns:
+            The whole configuration instance.
+        """
         nodes: list[Union[SutNodeConfiguration | TGNodeConfiguration]] = list(
             map(NodeConfiguration.from_dict, d["nodes"])
         )
@@ -313,9 +560,17 @@ def from_dict(d: dict) -> "Configuration":
 
 
 def load_config() -> Configuration:
-    """
-    Loads the configuration file and the configuration file schema,
-    validates the configuration file, and creates a configuration object.
+    """Load DTS test run configuration from a file.
+
+    Load the YAML test run configuration file
+    and :download:`the configuration file schema <conf_yaml_schema.json>`,
+    validate the test run configuration file, and create a test run configuration object.
+
+    The YAML test run configuration file is specified in the :option:`--config-file` command line
+    argument or the :envvar:`DTS_CFG_FILE` environment variable.
+
+    Returns:
+        The parsed test run configuration.
     """
     with open(SETTINGS.config_file_path, "r") as f:
         config_data = yaml.safe_load(f)
@@ -326,6 +581,8 @@ def load_config() -> Configuration:
 
     with open(schema_path, "r") as f:
         schema = json.load(f)
-    config: dict[str, Any] = warlock.model_factory(schema, name="_Config")(config_data)
-    config_obj: Configuration = Configuration.from_dict(dict(config))
+    config = warlock.model_factory(schema, name="_Config")(config_data)
+    config_obj: Configuration = Configuration.from_dict(
+        dict(config)  # type: ignore[arg-type]
+    )
     return config_obj
diff --git a/dts/framework/config/types.py b/dts/framework/config/types.py
new file mode 100644
index 0000000000..1927910d88
--- /dev/null
+++ b/dts/framework/config/types.py
@@ -0,0 +1,132 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+"""Configuration dictionary contents specification.
+
+These type definitions serve as documentation of the configuration dictionary contents.
+
+The definitions use the built-in :class:`~typing.TypedDict` construct.
+"""
+
+from typing import TypedDict
+
+
+class PortConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    pci: str
+    #:
+    os_driver_for_dpdk: str
+    #:
+    os_driver: str
+    #:
+    peer_node: str
+    #:
+    peer_pci: str
+
+
+class TrafficGeneratorConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    type: str
+
+
+class HugepageConfigurationDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    amount: int
+    #:
+    force_first_numa: bool
+
+
+class NodeConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    hugepages: HugepageConfigurationDict
+    #:
+    name: str
+    #:
+    hostname: str
+    #:
+    user: str
+    #:
+    password: str
+    #:
+    arch: str
+    #:
+    os: str
+    #:
+    lcores: str
+    #:
+    use_first_core: bool
+    #:
+    ports: list[PortConfigDict]
+    #:
+    memory_channels: int
+    #:
+    traffic_generator: TrafficGeneratorConfigDict
+
+
+class BuildTargetConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    arch: str
+    #:
+    os: str
+    #:
+    cpu: str
+    #:
+    compiler: str
+    #:
+    compiler_wrapper: str
+
+
+class TestSuiteConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    suite: str
+    #:
+    cases: list[str]
+
+
+class ExecutionSUTConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    node_name: str
+    #:
+    vdevs: list[str]
+
+
+class ExecutionConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    build_targets: list[BuildTargetConfigDict]
+    #:
+    perf: bool
+    #:
+    func: bool
+    #:
+    skip_smoke_tests: bool
+    #:
+    test_suites: list[str | TestSuiteConfigDict]
+    #:
+    system_under_test_node: ExecutionSUTConfigDict
+    #:
+    traffic_generator_node: str
+
+
+class ConfigurationDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    nodes: list[NodeConfigDict]
+    #:
+    executions: list[ExecutionConfigDict]
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v7 11/21] dts: remote session docstring update
  2023-11-15 13:09             ` [PATCH v7 00/21] dts: docstrings update Juraj Linkeš
                                 ` (9 preceding siblings ...)
  2023-11-15 13:09               ` [PATCH v7 10/21] dts: config " Juraj Linkeš
@ 2023-11-15 13:09               ` Juraj Linkeš
  2023-11-21 15:36                 ` Yoan Picchi
  2023-11-15 13:09               ` [PATCH v7 12/21] dts: interactive " Juraj Linkeš
                                 ` (10 subsequent siblings)
  21 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15 13:09 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/remote_session/__init__.py      |  39 +++++-
 .../remote_session/remote_session.py          | 128 +++++++++++++-----
 dts/framework/remote_session/ssh_session.py   |  16 +--
 3 files changed, 135 insertions(+), 48 deletions(-)

diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index 5e7ddb2b05..51a01d6b5e 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -2,12 +2,14 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
-"""
-The package provides modules for managing remote connections to a remote host (node),
-differentiated by OS.
-The package provides a factory function, create_session, that returns the appropriate
-remote connection based on the passed configuration. The differences are in the
-underlying transport protocol (e.g. SSH) and remote OS (e.g. Linux).
+"""Remote interactive and non-interactive sessions.
+
+This package provides modules for managing remote connections to a remote host (node).
+
+The non-interactive sessions send commands and return their output and exit code.
+
+The interactive sessions open a shell which is kept open for the session's lifetime,
+allowing the framework to send and receive data within that particular shell.
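+
+An illustrative usage sketch (the node config and logger are assumed to come
+from the framework)::
+
+    session = create_remote_session(node_config, "main_session", logger)
+    result = session.send_command("echo hello", verify=True)
+    session.close()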
 """
 
 # pylama:ignore=W0611
@@ -26,10 +28,35 @@
 def create_remote_session(
     node_config: NodeConfiguration, name: str, logger: DTSLOG
 ) -> RemoteSession:
+    """Factory for non-interactive remote sessions.
+
+    The function returns an SSH session, but will be extended if support
+    for other protocols is added.
+
+    Args:
+        node_config: The test run configuration of the node to connect to.
+        name: The name of the session.
+        logger: The logger instance this session will use.
+
+    Returns:
+        The SSH remote session.
+    """
     return SSHSession(node_config, name, logger)
 
 
 def create_interactive_session(
     node_config: NodeConfiguration, logger: DTSLOG
 ) -> InteractiveRemoteSession:
+    """Factory for interactive remote sessions.
+
+    The function returns an interactive SSH session, but will be extended if support
+    for other protocols is added.
+
+    Args:
+        node_config: The test run configuration of the node to connect to.
+        logger: The logger instance this session will use.
+
+    Returns:
+        The interactive SSH remote session.
+    """
     return InteractiveRemoteSession(node_config, logger)
diff --git a/dts/framework/remote_session/remote_session.py b/dts/framework/remote_session/remote_session.py
index 0647d93de4..629c2d7b9c 100644
--- a/dts/framework/remote_session/remote_session.py
+++ b/dts/framework/remote_session/remote_session.py
@@ -3,6 +3,13 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
+"""Base remote session.
+
+This module contains the abstract base class for remote sessions and defines
+the structure of the result of a command execution.
+"""
+
+
 import dataclasses
 from abc import ABC, abstractmethod
 from pathlib import PurePath
@@ -15,8 +22,14 @@
 
 @dataclasses.dataclass(slots=True, frozen=True)
 class CommandResult:
-    """
-    The result of remote execution of a command.
+    """The result of remote execution of a command.
+
+    Attributes:
+        name: The name of the session that executed the command.
+        command: The executed command.
+        stdout: The standard output the command produced.
+        stderr: The standard error output the command produced.
+        return_code: The return code the command exited with.
     """
 
     name: str
@@ -26,6 +39,7 @@ class CommandResult:
     return_code: int
 
     def __str__(self) -> str:
+        """Format the command outputs."""
         return (
             f"stdout: '{self.stdout}'\n"
             f"stderr: '{self.stderr}'\n"
@@ -34,13 +48,24 @@ def __str__(self) -> str:
 
 
 class RemoteSession(ABC):
-    """
-    The base class for defining which methods must be implemented in order to connect
-    to a remote host (node) and maintain a remote session. The derived classes are
-    supposed to implement/use some underlying transport protocol (e.g. SSH) to
-    implement the methods. On top of that, it provides some basic services common to
-    all derived classes, such as keeping history and logging what's being executed
-    on the remote node.
+    """Non-interactive remote session.
+
+    The abstract methods must be implemented in order to connect to a remote host (node)
+    and maintain a remote session.
+    The subclasses must use (or implement) some underlying transport protocol (e.g. SSH)
+    to implement the methods. On top of that, the base class provides some basic services
+    common to all subclasses, such as keeping history and logging what's being executed
+    on the remote node.
+
+    Attributes:
+        name: The name of the session.
+        hostname: The node's hostname. Could be an IP (possibly with port, separated by a colon)
+            or a domain name.
+        ip: The IP address of the node or a domain name, whichever was used in `hostname`.
+        port: The port of the node, if given in `hostname`.
+        username: The username used in the connection.
+        password: The password used in the connection. Most frequently empty,
+            as the use of passwords is discouraged.
+        history: The executed commands during this session.
     """
 
     name: str
@@ -59,6 +84,16 @@ def __init__(
         session_name: str,
         logger: DTSLOG,
     ):
+        """Connect to the node during initialization.
+
+        Args:
+            node_config: The test run configuration of the node to connect to.
+            session_name: The name of the session.
+            logger: The logger instance this session will use.
+
+        Raises:
+            SSHConnectionError: If the connection to the node was not successful.
+        """
         self._node_config = node_config
 
         self.name = session_name
@@ -79,8 +114,13 @@ def __init__(
 
     @abstractmethod
     def _connect(self) -> None:
-        """
-        Create connection to assigned node.
+        """Create a connection to the node.
+
+        The implementation must assign the established session to ``self.session``.
+
+        The implementation must catch all exceptions and convert them to an SSHConnectionError.
+
+        The implementation may optionally implement retry attempts.
         """
 
     def send_command(
@@ -90,11 +130,24 @@ def send_command(
         verify: bool = False,
         env: dict | None = None,
     ) -> CommandResult:
-        """
-        Send a command to the connected node using optional env vars
-        and return CommandResult.
-        If verify is True, check the return code of the executed command
-        and raise a RemoteCommandExecutionError if the command failed.
+        """Send `command` to the connected node.
+
+        The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT`
+        environment variable configure the timeout of command execution.
+
+        Args:
+            command: The command to execute.
+            timeout: Wait at most this long in seconds to execute `command`.
+            verify: If :data:`True`, will check the exit code of `command`.
+            env: A dictionary with environment variables to be used with `command` execution.
+
+        Raises:
+            SSHSessionDeadError: If the session isn't alive when sending `command`.
+            SSHTimeoutError: If `command` execution timed out.
+            RemoteCommandExecutionError: If `verify` is :data:`True` and `command` execution failed.
+
+        Returns:
+            The output of the command along with the return code.
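+
+        Example:
+            An illustrative call (the command and variable are arbitrary)::
+
+                result = session.send_command("echo $FOO", env={"FOO": "bar"}, verify=True)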
         """
         self._logger.info(
             f"Sending: '{command}'" + (f" with env vars: '{env}'" if env else "")
@@ -115,29 +168,36 @@ def send_command(
     def _send_command(
         self, command: str, timeout: float, env: dict | None
     ) -> CommandResult:
-        """
-        Use the underlying protocol to execute the command using optional env vars
-        and return CommandResult.
+        """Send a command to the connected node.
+
+        The implementation must execute the command remotely with `env` environment variables
+        and return the result.
+
+        The implementation must catch all exceptions, raising an SSHSessionDeadError if
+        the session is not alive and an SSHTimeoutError if the command execution times out.
         """
 
     def close(self, force: bool = False) -> None:
-        """
-        Close the remote session and free all used resources.
+        """Close the remote session and free all used resources.
+
+        Args:
+            force: Force the closure of the connection. This may not clean up all resources.
         """
         self._logger.logger_exit()
         self._close(force)
 
     @abstractmethod
     def _close(self, force: bool = False) -> None:
-        """
-        Execute protocol specific steps needed to close the session properly.
+        """Protocol specific steps needed to close the session properly.
+
+        Args:
+            force: Force the closure of the connection. This may not clean up all resources.
+                This doesn't have to be implemented in the overriding method.
         """
 
     @abstractmethod
     def is_alive(self) -> bool:
-        """
-        Check whether the remote session is still responding.
-        """
+        """Check whether the remote session is still responding."""
 
     @abstractmethod
     def copy_from(
@@ -147,12 +207,12 @@ def copy_from(
     ) -> None:
         """Copy a file from the remote Node to the local filesystem.
 
-        Copy source_file from the remote Node associated with this remote
-        session to destination_file on the local filesystem.
+        Copy `source_file` from the remote Node associated with this remote session
+        to `destination_file` on the local filesystem.
 
         Args:
-            source_file: the file on the remote Node.
-            destination_file: a file or directory path on the local filesystem.
+            source_file: The file on the remote Node.
+            destination_file: A file or directory path on the local filesystem.
         """
 
     @abstractmethod
@@ -163,10 +223,10 @@ def copy_to(
     ) -> None:
         """Copy a file from local filesystem to the remote Node.
 
-        Copy source_file from local filesystem to destination_file
-        on the remote Node associated with this remote session.
+        Copy `source_file` from local filesystem to `destination_file` on the remote Node
+        associated with this remote session.
 
         Args:
-            source_file: the file on the local filesystem.
-            destination_file: a file or directory path on the remote Node.
+            source_file: The file on the local filesystem.
+            destination_file: A file or directory path on the remote Node.
         """
diff --git a/dts/framework/remote_session/ssh_session.py b/dts/framework/remote_session/ssh_session.py
index cee11d14d6..7186490a9a 100644
--- a/dts/framework/remote_session/ssh_session.py
+++ b/dts/framework/remote_session/ssh_session.py
@@ -1,6 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""SSH session remote session."""
+
 import socket
 import traceback
 from pathlib import PurePath
@@ -26,13 +28,8 @@
 class SSHSession(RemoteSession):
     """A persistent SSH connection to a remote Node.
 
-    The connection is implemented with the Fabric Python library.
-
-    Args:
-        node_config: The configuration of the Node to connect to.
-        session_name: The name of the session.
-        logger: The logger used for logging.
-            This should be passed from the parent OSSession.
+    The connection is implemented with
+    `the Fabric Python library <https://docs.fabfile.org/en/latest/>`_.
 
     Attributes:
         session: The underlying Fabric SSH connection.
@@ -80,6 +77,7 @@ def _connect(self) -> None:
             raise SSHConnectionError(self.hostname, errors)
 
     def is_alive(self) -> bool:
+        """Overrides :meth:`~.remote_session.RemoteSession.is_alive`."""
         return self.session.is_connected
 
     def _send_command(
@@ -89,7 +87,7 @@ def _send_command(
 
         Args:
             command: The command to execute.
-            timeout: Wait at most this many seconds for the execution to complete.
+            timeout: Wait at most this long in seconds to execute the command.
             env: Extra environment variables that will be used in command execution.
 
         Raises:
@@ -118,6 +116,7 @@ def copy_from(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
+        """Overrides :meth:`~.remote_session.RemoteSession.copy_from`."""
         self.session.get(str(destination_file), str(source_file))
 
     def copy_to(
@@ -125,6 +124,7 @@ def copy_to(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
+        """Overrides :meth:`~.remote_session.RemoteSession.copy_to`."""
         self.session.put(str(source_file), str(destination_file))
 
     def _close(self, force: bool = False) -> None:
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v7 12/21] dts: interactive remote session docstring update
  2023-11-15 13:09             ` [PATCH v7 00/21] dts: docstrings update Juraj Linkeš
                                 ` (10 preceding siblings ...)
  2023-11-15 13:09               ` [PATCH v7 11/21] dts: remote session " Juraj Linkeš
@ 2023-11-15 13:09               ` Juraj Linkeš
  2023-11-15 13:09               ` [PATCH v7 13/21] dts: port and virtual device " Juraj Linkeš
                                 ` (9 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15 13:09 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.
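
As context for the ClassVar changes below, a minimal sketch of a hypothetical
subclass (the shell, its prompt and its path are invented for illustration)::

    from pathlib import PurePath
    from typing import ClassVar

    class HypotheticalShell(InteractiveShell):
        """A made-up shell illustrating the class-level overrides."""

        #: The prompt the made-up binary prints when ready for input.
        _default_prompt: ClassVar[str] = "shell> "
        #: Appended to every command to force the prompt into the output buffer.
        _command_extra_chars: ClassVar[str] = "\n"
        #: The made-up executable.
        path: ClassVar[PurePath] = PurePath("hypothetical-shell")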

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 .../interactive_remote_session.py             | 36 +++----
 .../remote_session/interactive_shell.py       | 99 +++++++++++--------
 dts/framework/remote_session/python_shell.py  | 26 ++++-
 dts/framework/remote_session/testpmd_shell.py | 61 +++++++++---
 4 files changed, 150 insertions(+), 72 deletions(-)

diff --git a/dts/framework/remote_session/interactive_remote_session.py b/dts/framework/remote_session/interactive_remote_session.py
index 9085a668e8..c1bf30ac61 100644
--- a/dts/framework/remote_session/interactive_remote_session.py
+++ b/dts/framework/remote_session/interactive_remote_session.py
@@ -22,27 +22,23 @@
 class InteractiveRemoteSession:
     """SSH connection dedicated to interactive applications.
 
-    This connection is created using paramiko and is a persistent connection to the
-    host. This class defines methods for connecting to the node and configures this
-    connection to send "keep alive" packets every 30 seconds. Because paramiko attempts
-    to use SSH keys to establish a connection first, providing a password is optional.
-    This session is utilized by InteractiveShells and cannot be interacted with
-    directly.
-
-    Arguments:
-        node_config: Configuration class for the node you are connecting to.
-        _logger: Desired logger for this session to use.
+    The connection is created using `paramiko <https://docs.paramiko.org/en/latest/>`_
+    and is a persistent connection to the host. This class defines the methods for connecting
+    to the node and configures the connection to send "keep alive" packets every 30 seconds.
+    Because paramiko attempts to use SSH keys to establish a connection first, providing
+    a password is optional. This session is utilized by InteractiveShells
+    and cannot be interacted with directly.
 
     Attributes:
-        hostname: Hostname that will be used to initialize a connection to the node.
-        ip: A subsection of hostname that removes the port for the connection if there
+        hostname: The hostname that will be used to initialize a connection to the node.
+        ip: A subsection of `hostname` that removes the port for the connection if there
             is one. If there is no port, this will be the same as hostname.
-        port: Port to use for the ssh connection. This will be extracted from the
-            hostname if there is a port included, otherwise it will default to 22.
+        port: Port to use for the SSH connection. This will be extracted from `hostname`
+            if there is a port included, otherwise it will default to ``22``.
         username: User to connect to the node with.
         password: Password of the user connecting to the host. This will default to an
             empty string if a password is not provided.
-        session: Underlying paramiko connection.
+        session: The underlying paramiko connection.
 
     Raises:
         SSHConnectionError: There is an error creating the SSH connection.
@@ -58,9 +54,15 @@ class InteractiveRemoteSession:
     _node_config: NodeConfiguration
     _transport: Transport | None
 
-    def __init__(self, node_config: NodeConfiguration, _logger: DTSLOG) -> None:
+    def __init__(self, node_config: NodeConfiguration, logger: DTSLOG) -> None:
+        """Connect to the node during initialization.
+
+        Args:
+            node_config: The test run configuration of the node to connect to.
+            logger: The logger instance this session will use.
+        """
         self._node_config = node_config
-        self._logger = _logger
+        self._logger = logger
         self.hostname = node_config.hostname
         self.username = node_config.user
         self.password = node_config.password if node_config.password else ""
diff --git a/dts/framework/remote_session/interactive_shell.py b/dts/framework/remote_session/interactive_shell.py
index c24376b2a8..a98a822e91 100644
--- a/dts/framework/remote_session/interactive_shell.py
+++ b/dts/framework/remote_session/interactive_shell.py
@@ -3,18 +3,20 @@
 
 """Common functionality for interactive shell handling.
 
-This base class, InteractiveShell, is meant to be extended by other classes that
-contain functionality specific to that shell type. These derived classes will often
-modify things like the prompt to expect or the arguments to pass into the application,
-but still utilize the same method for sending a command and collecting output. How
-this output is handled however is often application specific. If an application needs
-elevated privileges to start it is expected that the method for gaining those
-privileges is provided when initializing the class.
+The base class, :class:`InteractiveShell`, is meant to be extended by subclasses that contain
+functionality specific to that shell type. These subclasses will often modify things like
+the prompt to expect or the arguments to pass into the application, but still utilize
+the same method for sending a command and collecting output. How this output is handled however
+is often application specific. If an application needs elevated privileges to start it is expected
+that the method for gaining those privileges is provided when initializing the class.
+
+The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT`
+environment variable configure the timeout of getting the output from command execution.
 """
 
 from abc import ABC
 from pathlib import PurePath
-from typing import Callable
+from typing import Callable, ClassVar
 
 from paramiko import Channel, SSHClient, channel  # type: ignore[import]
 
@@ -30,28 +32,6 @@ class InteractiveShell(ABC):
     and collecting input until reaching a certain prompt. All interactive applications
     will use the same SSH connection, but each will create their own channel on that
     session.
-
-    Arguments:
-        interactive_session: The SSH session dedicated to interactive shells.
-        logger: Logger used for displaying information in the console.
-        get_privileged_command: Method for modifying a command to allow it to use
-            elevated privileges. If this is None, the application will not be started
-            with elevated privileges.
-        app_args: Command line arguments to be passed to the application on startup.
-        timeout: Timeout used for the SSH channel that is dedicated to this interactive
-            shell. This timeout is for collecting output, so if reading from the buffer
-            and no output is gathered within the timeout, an exception is thrown.
-
-    Attributes
-        _default_prompt: Prompt to expect at the end of output when sending a command.
-            This is often overridden by derived classes.
-        _command_extra_chars: Extra characters to add to the end of every command
-            before sending them. This is often overridden by derived classes and is
-            most commonly an additional newline character.
-        path: Path to the executable to start the interactive application.
-        dpdk_app: Whether this application is a DPDK app. If it is, the build
-            directory for DPDK on the node will be prepended to the path to the
-            executable.
     """
 
     _interactive_session: SSHClient
@@ -61,10 +41,22 @@ class InteractiveShell(ABC):
     _logger: DTSLOG
     _timeout: float
     _app_args: str
-    _default_prompt: str = ""
-    _command_extra_chars: str = ""
-    path: PurePath
-    dpdk_app: bool = False
+
+    #: Prompt to expect at the end of output when sending a command.
+    #: This is often overridden by subclasses.
+    _default_prompt: ClassVar[str] = ""
+
+    #: Extra characters to add to the end of every command
+    #: before sending them. This is often overridden by subclasses and is
+    #: most commonly an additional newline character.
+    _command_extra_chars: ClassVar[str] = ""
+
+    #: Path to the executable to start the interactive application.
+    path: ClassVar[PurePath]
+
+    #: Whether this application is a DPDK app. If it is, the build directory
+    #: for DPDK on the node will be prepended to the path to the executable.
+    dpdk_app: ClassVar[bool] = False
 
     def __init__(
         self,
@@ -74,6 +66,19 @@ def __init__(
         app_args: str = "",
         timeout: float = SETTINGS.timeout,
     ) -> None:
+        """Create an SSH channel during initialization.
+
+        Args:
+            interactive_session: The SSH session dedicated to interactive shells.
+            logger: The logger instance this session will use.
+            get_privileged_command: A method for modifying a command to allow it to use
+                elevated privileges. If :data:`None`, the application will not be started
+                with elevated privileges.
+            app_args: The command line arguments to be passed to the application on startup.
+            timeout: The timeout used for the SSH channel that is dedicated to this interactive
+                shell. This timeout is for collecting output, so if no output is gathered
+                from the buffer within the timeout, an exception is thrown.
+        """
         self._interactive_session = interactive_session
         self._ssh_channel = self._interactive_session.invoke_shell()
         self._stdin = self._ssh_channel.makefile_stdin("w")
@@ -92,6 +97,10 @@ def _start_application(
 
         This method is often overridden by subclasses as their process for
         starting may look different.
+
+        Args:
+            get_privileged_command: A function (but could be any callable) that produces
+                the version of the command with elevated privileges.
         """
         start_command = f"{self.path} {self._app_args}"
         if get_privileged_command is not None:
@@ -99,16 +108,24 @@ def _start_application(
         self.send_command(start_command)
 
     def send_command(self, command: str, prompt: str | None = None) -> str:
-        """Send a command and get all output before the expected ending string.
+        """Send `command` and get all output before the expected ending string.
 
         Lines that expect input are not included in the stdout buffer, so they cannot
-        be used for expect. For example, if you were prompted to log into something
-        with a username and password, you cannot expect "username:" because it won't
-        yet be in the stdout buffer. A workaround for this could be consuming an
-        extra newline character to force the current prompt into the stdout buffer.
+        be used for expect.
+
+        Example:
+            If you were prompted to log into something with a username and password,
+            you cannot expect ``username:`` because it won't yet be in the stdout buffer.
+            A workaround for this could be consuming an extra newline character to force
+            the current `prompt` into the stdout buffer.
+
+        Args:
+            command: The command to send.
+            prompt: After sending the command, `send_command` will be expecting this string.
+                If :data:`None`, will use the class's default prompt.
 
         Returns:
-            All output in the buffer before expected string
+            All output in the buffer before the expected string.
         """
         self._logger.info(f"Sending: '{command}'")
         if prompt is None:
@@ -126,8 +143,10 @@ def send_command(self, command: str, prompt: str | None = None) -> str:
         return out
 
     def close(self) -> None:
+        """Properly free all resources."""
         self._stdin.close()
         self._ssh_channel.close()
 
     def __del__(self) -> None:
+        """Make sure the session is properly closed before deleting the object."""
         self.close()
diff --git a/dts/framework/remote_session/python_shell.py b/dts/framework/remote_session/python_shell.py
index cc3ad48a68..ccfd3783e8 100644
--- a/dts/framework/remote_session/python_shell.py
+++ b/dts/framework/remote_session/python_shell.py
@@ -1,12 +1,32 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""Python interactive shell.
+
+Typical usage example in a TestSuite::
+
+    from framework.remote_session import PythonShell
+    python_shell = self.tg_node.create_interactive_shell(
+        PythonShell, timeout=5, privileged=True
+    )
+    python_shell.send_command("print('Hello World')")
+    python_shell.close()
+"""
+
 from pathlib import PurePath
+from typing import ClassVar
 
 from .interactive_shell import InteractiveShell
 
 
 class PythonShell(InteractiveShell):
-    _default_prompt: str = ">>>"
-    _command_extra_chars: str = "\n"
-    path: PurePath = PurePath("python3")
+    """Python interactive shell."""
+
+    #: Python's prompt.
+    _default_prompt: ClassVar[str] = ">>>"
+
+    #: This forces the prompt to appear after sending a command.
+    _command_extra_chars: ClassVar[str] = "\n"
+
+    #: The Python executable.
+    path: ClassVar[PurePath] = PurePath("python3")
diff --git a/dts/framework/remote_session/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py
index 1455b5a199..2632515d74 100644
--- a/dts/framework/remote_session/testpmd_shell.py
+++ b/dts/framework/remote_session/testpmd_shell.py
@@ -1,45 +1,82 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 University of New Hampshire
 
+"""Testpmd interactive shell.
+
+Typical usage example in a TestSuite::
+
+    testpmd_shell = self.sut_node.create_interactive_shell(
+        TestPmdShell, privileged=True
+    )
+    devices = testpmd_shell.get_devices()
+    for device in devices:
+        print(device)
+    testpmd_shell.close()
+"""
+
 from pathlib import PurePath
-from typing import Callable
+from typing import Callable, ClassVar
 
 from .interactive_shell import InteractiveShell
 
 
 class TestPmdDevice(object):
+    """The data of a device that testpmd can recognize.
+
+    Attributes:
+        pci_address: The PCI address of the device.
+    """
+
     pci_address: str
 
     def __init__(self, pci_address_line: str):
+        """Initialize the device from the testpmd output line string.
+
+        Args:
+            pci_address_line: A line of testpmd output that contains a device.
+        """
         self.pci_address = pci_address_line.strip().split(": ")[1].strip()
 
     def __str__(self) -> str:
+        """The PCI address captures what the device is."""
         return self.pci_address
 
 
 class TestPmdShell(InteractiveShell):
-    path: PurePath = PurePath("app", "dpdk-testpmd")
-    dpdk_app: bool = True
-    _default_prompt: str = "testpmd>"
-    _command_extra_chars: str = (
-        "\n"  # We want to append an extra newline to every command
-    )
+    """Testpmd interactive shell.
+
+    Users of the testpmd shell should never use
+    the :meth:`~framework.remote_session.interactive_shell.InteractiveShell.send_command` method
+    directly, but rather call specialized methods. If there isn't one that satisfies a need,
+    it should be added.
+    """
+
+    #: The path to the testpmd executable.
+    path: ClassVar[PurePath] = PurePath("app", "dpdk-testpmd")
+
+    #: Flag this as a DPDK app so that it's clear this is not a system app and
+    #: needs to be looked for in a specific path.
+    dpdk_app: ClassVar[bool] = True
+
+    #: The testpmd's prompt.
+    _default_prompt: ClassVar[str] = "testpmd>"
+
+    #: This forces the prompt to appear after sending a command.
+    _command_extra_chars: ClassVar[str] = "\n"
 
     def _start_application(
         self, get_privileged_command: Callable[[str], str] | None
     ) -> None:
-        """See "_start_application" in InteractiveShell."""
         self._app_args += " -- -i"
         super()._start_application(get_privileged_command)
 
     def get_devices(self) -> list[TestPmdDevice]:
-        """Get a list of device names that are known to testpmd
+        """Get a list of device names that are known to testpmd.
 
-        Uses the device info listed in testpmd and then parses the output to
-        return only the names of the devices.
+        Uses the device info listed in testpmd and then parses the output.
 
         Returns:
-            A list of strings representing device names (e.g. 0000:14:00.1)
+            A list of devices.
         """
         dev_info: str = self.send_command("show device info all")
         dev_list: list[TestPmdDevice] = []
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
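
A quick way to sanity-check the parsing that ``TestPmdDevice.__init__`` performs is to run
it on a single line in isolation. A minimal standalone sketch (the sample line is an
assumption shaped like testpmd's ``show device info all`` output, not a capture from a
real run):

    class TestPmdDeviceSketch:
        """Stand-in mirroring the parsing in TestPmdDevice, for illustration only."""

        def __init__(self, pci_address_line: str):
            # e.g. "Bus info: 0000:14:00.1" -> "0000:14:00.1"
            self.pci_address = pci_address_line.strip().split(": ")[1].strip()

        def __str__(self) -> str:
            return self.pci_address

    print(TestPmdDeviceSketch("Bus info: 0000:14:00.1"))  # 0000:14:00.1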

* [PATCH v7 13/21] dts: port and virtual device docstring update
  2023-11-15 13:09             ` [PATCH v7 00/21] dts: docstrings update Juraj Linkeš
                                 ` (11 preceding siblings ...)
  2023-11-15 13:09               ` [PATCH v7 12/21] dts: interactive " Juraj Linkeš
@ 2023-11-15 13:09               ` Juraj Linkeš
  2023-11-15 13:09               ` [PATCH v7 14/21] dts: cpu " Juraj Linkeš
                                 ` (8 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15 13:09 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/__init__.py       | 16 ++++--
 dts/framework/testbed_model/port.py           | 53 +++++++++++++++----
 dts/framework/testbed_model/virtual_device.py | 17 +++++-
 3 files changed, 71 insertions(+), 15 deletions(-)

diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index 8ced05653b..a02be1f2d9 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -2,9 +2,19 @@
 # Copyright(c) 2022-2023 University of New Hampshire
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
-This package contains the classes used to model the physical traffic generator,
-system under test and any other components that need to be interacted with.
+"""Testbed modelling.
+
+This package defines the testbed elements DTS works with:
+
+    * A system under test node: :class:`SutNode`,
+    * A traffic generator node: :class:`TGNode`,
+    * The ports of network interface cards (NICs) present on nodes: :class:`Port`,
+    * The logical cores of CPUs present on nodes: :class:`LogicalCore`,
+    * The virtual devices that can be created on nodes: :class:`VirtualDevice`,
+    * The operating systems running on nodes: :class:`LinuxSession` and :class:`PosixSession`.
+
+DTS needs to be able to connect to nodes and understand some of the hardware present on these nodes
+to properly build and test DPDK.
 """
 
 # pylama:ignore=W0611
diff --git a/dts/framework/testbed_model/port.py b/dts/framework/testbed_model/port.py
index 680c29bfe3..817405bea4 100644
--- a/dts/framework/testbed_model/port.py
+++ b/dts/framework/testbed_model/port.py
@@ -2,6 +2,13 @@
 # Copyright(c) 2022 University of New Hampshire
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""NIC port model.
+
+Basic port information, such as location (ports are identified by their PCI address on a node),
+drivers and addresses.
+"""
+
+
 from dataclasses import dataclass
 
 from framework.config import PortConfig
@@ -9,24 +16,35 @@
 
 @dataclass(slots=True, frozen=True)
 class PortIdentifier:
+    """The port identifier.
+
+    Attributes:
+        node: The node where the port resides.
+        pci: The PCI address of the port on `node`.
+    """
+
     node: str
     pci: str
 
 
 @dataclass(slots=True)
 class Port:
-    """
-    identifier: The PCI address of the port on a node.
-
-    os_driver: The driver used by this port when the OS is controlling it.
-        Example: i40e
-    os_driver_for_dpdk: The driver the device must be bound to for DPDK to use it,
-        Example: vfio-pci.
+    """Physical port on a node.
 
-    Note: os_driver and os_driver_for_dpdk may be the same thing.
-        Example: mlx5_core
+    The ports are identified by the node they're on and their PCI addresses. The port on the other
+    side of the connection is also captured here.
+    Each port is serviced by a driver, which may be different for the operating system (`os_driver`)
+    and for DPDK (`os_driver_for_dpdk`). For some devices, they are the same, e.g.: ``mlx5_core``.
 
-    peer: The identifier of a port this port is connected with.
+    Attributes:
+        identifier: The node and PCI address identifying the port.
+        os_driver: The operating system driver name when the operating system controls the port,
+            e.g.: ``i40e``.
+        os_driver_for_dpdk: The operating system driver name for use with DPDK, e.g.: ``vfio-pci``.
+        peer: The identifier of a port this port is connected with.
+            The `peer` is on a different node.
+        mac_address: The MAC address of the port.
+        logical_name: The logical name of the port. Must be discovered.
     """
 
     identifier: PortIdentifier
@@ -37,6 +55,12 @@ class Port:
     logical_name: str = ""
 
     def __init__(self, node_name: str, config: PortConfig):
+        """Initialize the port from `node_name` and `config`.
+
+        Args:
+            node_name: The name of the port's node.
+            config: The test run configuration of the port.
+        """
         self.identifier = PortIdentifier(
             node=node_name,
             pci=config.pci,
@@ -47,14 +71,23 @@ def __init__(self, node_name: str, config: PortConfig):
 
     @property
     def node(self) -> str:
+        """The node where the port resides."""
         return self.identifier.node
 
     @property
     def pci(self) -> str:
+        """The PCI address of the port."""
         return self.identifier.pci
 
 
 @dataclass(slots=True, frozen=True)
 class PortLink:
+    """The physical, cabled connection between the ports.
+
+    Attributes:
+        sut_port: The port on the SUT node connected to `tg_port`.
+        tg_port: The port on the TG node connected to `sut_port`.
+    """
+
     sut_port: Port
     tg_port: Port
diff --git a/dts/framework/testbed_model/virtual_device.py b/dts/framework/testbed_model/virtual_device.py
index eb664d9f17..e9b5e9c3be 100644
--- a/dts/framework/testbed_model/virtual_device.py
+++ b/dts/framework/testbed_model/virtual_device.py
@@ -1,16 +1,29 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""Virtual devices model.
+
+Alongside support for physical hardware, DPDK can create various virtual devices.
+"""
+
 
 class VirtualDevice(object):
-    """
-    Base class for virtual devices used by DPDK.
+    """Base class for virtual devices used by DPDK.
+
+    Attributes:
+        name: The name of the virtual device.
     """
 
     name: str
 
     def __init__(self, name: str):
+        """Initialize the virtual device.
+
+        Args:
+            name: The name of the virtual device.
+        """
         self.name = name
 
     def __str__(self) -> str:
+        """This corresponds to the name used for DPDK devices."""
         return self.name
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
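
Since ``VirtualDevice.__str__`` returns the device name, instances can be dropped straight
into EAL arguments. A minimal sketch (the ``--vdev`` assembly and the device names are
illustrative assumptions, not taken from the patch):

    class VirtualDevice:
        """Stand-in for the class above, trimmed to what the example needs."""

        def __init__(self, name: str):
            self.name = name

        def __str__(self) -> str:
            return self.name

    vdevs = [VirtualDevice("net_ring0"), VirtualDevice("net_null0")]
    # Each device's string form is its DPDK name, so it slots into --vdev directly.
    eal_args = " ".join(f"--vdev {vdev}" for vdev in vdevs)
    print(eal_args)  # --vdev net_ring0 --vdev net_null0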

* [PATCH v7 14/21] dts: cpu docstring update
  2023-11-15 13:09             ` [PATCH v7 00/21] dts: docstrings update Juraj Linkeš
                                 ` (12 preceding siblings ...)
  2023-11-15 13:09               ` [PATCH v7 13/21] dts: port and virtual device " Juraj Linkeš
@ 2023-11-15 13:09               ` Juraj Linkeš
  2023-11-21 17:45                 ` Yoan Picchi
  2023-11-15 13:09               ` [PATCH v7 15/21] dts: os session " Juraj Linkeš
                                 ` (7 subsequent siblings)
  21 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15 13:09 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/cpu.py | 196 +++++++++++++++++++++--------
 1 file changed, 144 insertions(+), 52 deletions(-)

diff --git a/dts/framework/testbed_model/cpu.py b/dts/framework/testbed_model/cpu.py
index 8fe785dfe4..4edeb4a7c2 100644
--- a/dts/framework/testbed_model/cpu.py
+++ b/dts/framework/testbed_model/cpu.py
@@ -1,6 +1,22 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""CPU core representation and filtering.
+
+This module provides a unified representation of logical CPU cores along
+with filtering capabilities.
+
+When symmetric multiprocessing (SMP or multithreading) is enabled on a server,
+the physical CPU cores are split into logical CPU cores with different IDs.
+
+:class:`LogicalCoreCountFilter` filters by the number of logical cores. It's possible to specify
+the socket from which to filter the number of logical cores. It's also possible to not use all
+logical CPU cores from each physical core (e.g. only the first logical core of each physical core).
+
+:class:`LogicalCoreListFilter` filters by logical core IDs. This mostly checks that
+the logical cores are actually present on the server.
+"""
+
 import dataclasses
 from abc import ABC, abstractmethod
 from collections.abc import Iterable, ValuesView
@@ -11,9 +27,17 @@
 
 @dataclass(slots=True, frozen=True)
 class LogicalCore(object):
-    """
-    Representation of a CPU core. A physical core is represented in OS
-    by multiple logical cores (lcores) if CPU multithreading is enabled.
+    """Representation of a logical CPU core.
+
+    A physical core is represented in OS by multiple logical cores (lcores)
+    if CPU multithreading is enabled. When multithreading is disabled, their IDs are the same.
+
+    Attributes:
+        lcore: The logical core ID of a CPU core. It's the same as `core` with
+            disabled multithreading.
+        core: The physical core ID of a CPU core.
+        socket: The physical socket ID where the CPU resides.
+        node: The NUMA node ID where the CPU resides.
     """
 
     lcore: int
@@ -22,27 +46,36 @@ class LogicalCore(object):
     node: int
 
     def __int__(self) -> int:
+        """The CPU is best represented by the logical core, as that's what we configure in EAL."""
         return self.lcore
 
 
 class LogicalCoreList(object):
-    """
-    Convert these options into a list of logical core ids.
-    lcore_list=[LogicalCore1, LogicalCore2] - a list of LogicalCores
-    lcore_list=[0,1,2,3] - a list of int indices
-    lcore_list=['0','1','2-3'] - a list of str indices; ranges are supported
-    lcore_list='0,1,2-3' - a comma delimited str of indices; ranges are supported
-
-    The class creates a unified format used across the framework and allows
-    the user to use either a str representation (using str(instance) or directly
-    in f-strings) or a list representation (by accessing instance.lcore_list).
-    Empty lcore_list is allowed.
+    r"""A unified way to store :class:`LogicalCore`\s.
+
+    Create a unified format used across the framework and allow the user to use
+    either a :class:`str` representation (using ``str(instance)`` or directly in f-strings)
+    or a :class:`list` representation (by accessing the `lcore_list` property,
+    which stores logical core IDs).
     """
 
     _lcore_list: list[int]
     _lcore_str: str
 
     def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str):
+        """Process `lcore_list`, then sort.
+
+        There are four supported logical core list formats::
+
+            lcore_list=[LogicalCore1, LogicalCore2]  # a list of LogicalCores
+            lcore_list=[0,1,2,3]        # a list of int indices
+            lcore_list=['0','1','2-3']  # a list of str indices; ranges are supported
+            lcore_list='0,1,2-3'        # a comma delimited str of indices; ranges are supported
+
+        Args:
+            lcore_list: Various ways to represent multiple logical cores.
+                Empty `lcore_list` is allowed.
+        """
         self._lcore_list = []
         if isinstance(lcore_list, str):
             lcore_list = lcore_list.split(",")
@@ -60,6 +93,7 @@ def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str):
 
     @property
     def lcore_list(self) -> list[int]:
+        """The logical core IDs."""
         return self._lcore_list
 
     def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]:
@@ -89,28 +123,30 @@ def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]:
         return formatted_core_list
 
     def __str__(self) -> str:
+        """The consecutive ranges of logical core IDs."""
         return self._lcore_str
 
 
 @dataclasses.dataclass(slots=True, frozen=True)
 class LogicalCoreCount(object):
-    """
-    Define the number of logical cores to use.
-    If sockets is not None, socket_count is ignored.
-    """
+    """Define the number of logical cores per physical cores per sockets."""
 
+    #: Use this many logical cores per each physical core.
     lcores_per_core: int = 1
+    #: Use this many physical cores per each socket.
     cores_per_socket: int = 2
+    #: Use this many sockets.
     socket_count: int = 1
+    #: Use exactly these sockets. This takes precedence over `socket_count`,
+    #: so when `sockets` is not :data:`None`, `socket_count` is ignored.
     sockets: list[int] | None = None
 
 
 class LogicalCoreFilter(ABC):
-    """
-    Filter according to the input filter specifier. Each filter needs to be
-    implemented in a derived class.
-    This class only implements operations common to all filters, such as sorting
-    the list to be filtered beforehand.
+    """Common filtering class.
+
+    Each filter needs to be implemented in a subclass. This base class sorts the list of cores
+    and defines the filtering method, which must be implemented by subclasses.
     """
 
     _filter_specifier: LogicalCoreCount | LogicalCoreList
@@ -122,6 +158,17 @@ def __init__(
         filter_specifier: LogicalCoreCount | LogicalCoreList,
         ascending: bool = True,
     ):
+        """Filter according to the input filter specifier.
+
+        The input `lcore_list` is copied and sorted by physical core before filtering.
+        The list is copied so that the original is left intact.
+
+        Args:
+            lcore_list: The logical CPU cores to filter.
+            filter_specifier: Filter cores from `lcore_list` according to this filter.
+            ascending: Sort cores in ascending order (lowest to highest IDs). If :data:`False`,
+                sort in descending order.
+        """
         self._filter_specifier = filter_specifier
 
         # sorting by core is needed in case hyperthreading is enabled
@@ -132,31 +179,45 @@ def __init__(
 
     @abstractmethod
     def filter(self) -> list[LogicalCore]:
-        """
-        Use self._filter_specifier to filter self._lcores_to_filter
-        and return the list of filtered LogicalCores.
-        self._lcores_to_filter is a sorted copy of the original list,
-        so it may be modified.
+        r"""Filter the cores.
+
+        Use `self._filter_specifier` to filter `self._lcores_to_filter` and return
+        the filtered :class:`LogicalCore`\s.
+        `self._lcores_to_filter` is a sorted copy of the original list, so it may be modified.
+
+        Returns:
+            The filtered cores.
         """
 
 
 class LogicalCoreCountFilter(LogicalCoreFilter):
-    """
+    """Filter cores by specified counts.
+
     Filter the input list of LogicalCores according to specified rules:
-    Use cores from the specified number of sockets or from the specified socket ids.
-    If sockets is specified, it takes precedence over socket_count.
-    From each of those sockets, use only cores_per_socket of cores.
-    And for each core, use lcores_per_core of logical cores. Hypertheading
-    must be enabled for this to take effect.
-    If ascending is True, use cores with the lowest numerical id first
-    and continue in ascending order. If False, start with the highest
-    id and continue in descending order. This ordering affects which
-    sockets to consider first as well.
+
+        * The input `filter_specifier` is :class:`LogicalCoreCount`,
+        * Use cores from the specified number of sockets or from the specified socket ids,
+        * If `sockets` is specified, it takes precedence over `socket_count`,
+        * From each of those sockets, use only `cores_per_socket` of cores,
+        * And for each core, use `lcores_per_core` of logical cores. Hyperthreading
+          must be enabled for this to take effect.
     """
 
     _filter_specifier: LogicalCoreCount
 
     def filter(self) -> list[LogicalCore]:
+        """Filter the cores according to :class:`LogicalCoreCount`.
+
+        Start by filtering the allowed sockets. The cores matching the allowed sockets are returned.
+        The cores of each socket are stored in separate lists.
+
+        Then filter the allowed physical cores from those lists of cores per socket. When filtering
+        physical cores, store the desired number of logical cores per physical core which then
+        together constitute the final filtered list.
+
+        Returns:
+            The filtered cores.
+        """
         sockets_to_filter = self._filter_sockets(self._lcores_to_filter)
         filtered_lcores = []
         for socket_to_filter in sockets_to_filter:
@@ -166,24 +227,37 @@ def filter(self) -> list[LogicalCore]:
     def _filter_sockets(
         self, lcores_to_filter: Iterable[LogicalCore]
     ) -> ValuesView[list[LogicalCore]]:
-        """
-        Remove all lcores that don't match the specified socket(s).
-        If self._filter_specifier.sockets is not None, keep lcores from those sockets,
-        otherwise keep lcores from the first
-        self._filter_specifier.socket_count sockets.
+        """Filter a list of cores per each allowed socket.
+
+        The sockets may be specified in two ways, either a number or a specific list of sockets.
+        In case of a specific list, we just need to return the cores from those sockets.
+        If filtering by a number of sockets, we need to go through all cores, note which sockets
+        appear and only filter from the first n that appear.
+
+        Args:
+            lcores_to_filter: The cores to filter. These must be sorted by the physical core.
+
+        Returns:
+            A list of lists of logical CPU cores. Each list contains cores from one socket.
         """
         allowed_sockets: set[int] = set()
         socket_count = self._filter_specifier.socket_count
         if self._filter_specifier.sockets:
+            # when sockets in filter is specified, the sockets are already set
             socket_count = len(self._filter_specifier.sockets)
             allowed_sockets = set(self._filter_specifier.sockets)
 
+        # filter socket_count sockets from all sockets by checking the socket of each CPU
         filtered_lcores: dict[int, list[LogicalCore]] = {}
         for lcore in lcores_to_filter:
             if not self._filter_specifier.sockets:
+                # this is when sockets is not set, so we do the actual filtering
+                # when it is set, allowed_sockets is already defined and can't be changed
                 if len(allowed_sockets) < socket_count:
+                    # allowed_sockets is a set, so adding an existing socket won't re-add it
                     allowed_sockets.add(lcore.socket)
             if lcore.socket in allowed_sockets:
+                # group the cores by socket; this makes further processing easier
                 if lcore.socket in filtered_lcores:
                     filtered_lcores[lcore.socket].append(lcore)
                 else:
@@ -200,12 +274,13 @@ def _filter_sockets(
     def _filter_cores_from_socket(
         self, lcores_to_filter: Iterable[LogicalCore]
     ) -> list[LogicalCore]:
-        """
-        Keep only the first self._filter_specifier.cores_per_socket cores.
-        In multithreaded environments, keep only
-        the first self._filter_specifier.lcores_per_core lcores of those cores.
-        """
+        """Filter a list of cores from the given socket.
+
+        Go through the cores and note how many logical cores per physical core have been filtered.
 
+        Returns:
+            The filtered logical CPU cores.
+        """
         # no need to use ordered dict, from Python3.7 the dict
         # insertion order is preserved (LIFO).
         lcore_count_per_core_map: dict[int, int] = {}
@@ -248,15 +323,21 @@ def _filter_cores_from_socket(
 
 
 class LogicalCoreListFilter(LogicalCoreFilter):
-    """
-    Filter the input list of Logical Cores according to the input list of
-    lcore indices.
-    An empty LogicalCoreList won't filter anything.
+    """Filter the logical CPU cores by logical CPU core IDs.
+
+    This is a simple filter that looks at logical CPU IDs and only keeps those that match.
+
+    The input filter is :class:`LogicalCoreList`. An empty LogicalCoreList won't filter anything.
     """
 
     _filter_specifier: LogicalCoreList
 
     def filter(self) -> list[LogicalCore]:
+        """Filter based on logical CPU core ID.
+
+        Returns:
+            The filtered logical CPU cores.
+        """
         if not len(self._filter_specifier.lcore_list):
             return self._lcores_to_filter
 
@@ -279,6 +360,17 @@ def lcore_filter(
     filter_specifier: LogicalCoreCount | LogicalCoreList,
     ascending: bool,
 ) -> LogicalCoreFilter:
+    """Factory for using the right filter with `filter_specifier`.
+
+    Args:
+        core_list: The logical CPU cores to filter.
+        filter_specifier: The filter to use.
+        ascending: Sort cores in ascending order (lowest to highest IDs). If :data:`False`,
+            sort in descending order.
+
+    Returns:
+        The filter matching `filter_specifier`.
+    """
     if isinstance(filter_specifier, LogicalCoreList):
         return LogicalCoreListFilter(core_list, filter_specifier, ascending)
     elif isinstance(filter_specifier, LogicalCoreCount):
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
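
The four ``lcore_list`` formats accepted by ``LogicalCoreList.__init__`` are easiest to see
with the expansion written out. A self-contained sketch of just the parsing step (the real
class also accepts ``LogicalCore`` objects and re-renders consecutive IDs as ranges; ranges
are assumed inclusive, matching the ``'2-3'`` example above):

    def expand_lcore_list(lcore_list: list[int] | list[str] | str) -> list[int]:
        """Expand int, str and comma-delimited representations into a sorted list of IDs."""
        if isinstance(lcore_list, str):
            lcore_list = lcore_list.split(",")
        lcores: list[int] = []
        for item in lcore_list:
            if isinstance(item, str) and "-" in item:
                low, high = map(int, item.split("-"))
                lcores.extend(range(low, high + 1))  # '2-3' covers both 2 and 3
            else:
                lcores.append(int(item))
        return sorted(lcores)

    print(expand_lcore_list("0,1,2-3"))          # [0, 1, 2, 3]
    print(expand_lcore_list(["0", "1", "2-3"]))  # [0, 1, 2, 3]
    print(expand_lcore_list([0, 1, 2, 3]))       # [0, 1, 2, 3]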

* [PATCH v7 15/21] dts: os session docstring update
  2023-11-15 13:09             ` [PATCH v7 00/21] dts: docstrings update Juraj Linkeš
                                 ` (13 preceding siblings ...)
  2023-11-15 13:09               ` [PATCH v7 14/21] dts: cpu " Juraj Linkeš
@ 2023-11-15 13:09               ` Juraj Linkeš
  2023-11-22 11:50                 ` Yoan Picchi
  2023-11-15 13:09               ` [PATCH v7 16/21] dts: posix and linux sessions " Juraj Linkeš
                                 ` (6 subsequent siblings)
  21 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15 13:09 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/os_session.py | 275 ++++++++++++++++------
 1 file changed, 208 insertions(+), 67 deletions(-)

diff --git a/dts/framework/testbed_model/os_session.py b/dts/framework/testbed_model/os_session.py
index 76e595a518..72b9193a61 100644
--- a/dts/framework/testbed_model/os_session.py
+++ b/dts/framework/testbed_model/os_session.py
@@ -2,6 +2,29 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+"""OS-aware remote session.
+
+DPDK supports multiple different operating systems and can run on any of them.
+This module defines the common API that OS-unaware layers use and translates the API into
+OS-aware calls/utility usage.
+
+Note:
+    Running commands with administrative privileges requires OS awareness. This is the only layer
+    that's aware of OS differences, so this is where non-privileged commands get converted
+    to privileged commands.
+
+Example:
+    A user wishes to remove a directory on
+    a remote :class:`~framework.testbed_model.sut_node.SutNode`.
+    The :class:`~framework.testbed_model.sut_node.SutNode` object isn't aware what OS the node
+    is running - it delegates the OS translation logic
+    to :attr:`~framework.testbed_model.node.Node.main_session`. The SUT node calls
+    :meth:`~OSSession.remove_remote_dir` with a generic, OS-unaware path and
+    the :attr:`~framework.testbed_model.node.Node.main_session` translates that
+    to ``rm -rf`` if the node's OS is Linux and other commands for other OSs.
+    It also translates the path to match the underlying OS.
+"""
+
 from abc import ABC, abstractmethod
 from collections.abc import Iterable
 from ipaddress import IPv4Interface, IPv6Interface
@@ -28,10 +51,16 @@
 
 
 class OSSession(ABC):
-    """
-    The OS classes create a DTS node remote session and implement OS specific
+    """OS-unaware to OS-aware translation API definition.
+
+    The OSSession classes create a remote session to a DTS node and implement OS specific
     behavior. There are a few control methods implemented by the base class; the rest need
-    to be implemented by derived classes.
+    to be implemented by subclasses.
+
+    Attributes:
+        name: The name of the session.
+        remote_session: The remote session maintaining the connection to the node.
+        interactive_session: The interactive remote session maintaining the connection to the node.
     """
 
     _config: NodeConfiguration
@@ -46,6 +75,15 @@ def __init__(
         name: str,
         logger: DTSLOG,
     ):
+        """Initialize the OS-aware session.
+
+        Connect to the node right away and also create an interactive remote session.
+
+        Args:
+            node_config: The test run configuration of the node to connect to.
+            name: The name of the session.
+            logger: The logger instance this session will use.
+        """
         self._config = node_config
         self.name = name
         self._logger = logger
@@ -53,15 +91,15 @@ def __init__(
         self.interactive_session = create_interactive_session(node_config, logger)
 
     def close(self, force: bool = False) -> None:
-        """
-        Close the remote session.
+        """Close the underlying remote session.
+
+        Args:
+            force: Force the closure of the connection.
         """
         self.remote_session.close(force)
 
     def is_alive(self) -> bool:
-        """
-        Check whether the remote session is still responding.
-        """
+        """Check whether the underlying remote session is still responding."""
         return self.remote_session.is_alive()
 
     def send_command(
@@ -72,10 +110,23 @@ def send_command(
         verify: bool = False,
         env: dict | None = None,
     ) -> CommandResult:
-        """
-        An all-purpose API in case the command to be executed is already
-        OS-agnostic, such as when the path to the executed command has been
-        constructed beforehand.
+        """An all-purpose API for OS-agnostic commands.
+
+        This can be used to execute a portable command that runs the same way
+        on all operating systems, such as a Python invocation.
+
+        The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT`
+        environment variable configure the timeout of command execution.
+
+        Args:
+            command: The command to execute.
+            timeout: Wait at most this long in seconds to execute the command.
+            privileged: Whether to run the command with administrative privileges.
+            verify: If :data:`True`, will check the exit code of the command.
+            env: A dictionary with environment variables to be used with the command execution.
+
+        Raises:
+            RemoteCommandExecutionError: If `verify` is :data:`True` and the command failed.
         """
         if privileged:
             command = self._get_privileged_command(command)
@@ -89,8 +140,20 @@ def create_interactive_shell(
         privileged: bool,
         app_args: str,
     ) -> InteractiveShellType:
-        """
-        See "create_interactive_shell" in SutNode
+        """Factory for interactive session handlers.
+
+        Instantiate `shell_cls` according to the remote OS specifics.
+
+        Args:
+            shell_cls: The class of the shell.
+            timeout: Timeout for reading output from the SSH channel. If you are
+                reading from the buffer and don't receive any data within the timeout,
+                an error is raised.
+            privileged: Whether to run the shell with administrative privileges.
+            app_args: The arguments to be passed to the application.
+
+        Returns:
+            An instance of the desired interactive application shell.
         """
         return shell_cls(
             self.interactive_session.session,
@@ -114,27 +177,42 @@ def _get_privileged_command(command: str) -> str:
 
     @abstractmethod
     def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePath:
-        """
-        Try to find DPDK remote dir in remote_dir.
+        """Try to find DPDK directory in `remote_dir`.
+
+        The directory is the one which is created after the extraction of the tarball. The files
+        are usually extracted into a directory starting with ``dpdk-``.
+
+        Returns:
+            The absolute path of the DPDK remote directory, empty path if not found.
         """
 
     @abstractmethod
     def get_remote_tmp_dir(self) -> PurePath:
-        """
-        Get the path of the temporary directory of the remote OS.
+        """Get the path of the temporary directory of the remote OS.
+
+        Returns:
+            The absolute path of the temporary directory.
         """
 
     @abstractmethod
     def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
-        """
-        Create extra environment variables needed for the target architecture. Get
-        information from the node if needed.
+        """Create extra environment variables needed for the target architecture.
+
+        Different architectures may require different configuration, such as setting 32-bit CFLAGS.
+
+        Returns:
+            A dictionary mapping environment variable names to their values.
         """
 
     @abstractmethod
     def join_remote_path(self, *args: str | PurePath) -> PurePath:
-        """
-        Join path parts using the path separator that fits the remote OS.
+        """Join path parts using the path separator that fits the remote OS.
+
+        Args:
+            args: Any number of paths to join.
+
+        Returns:
+            The resulting joined path.
         """
 
     @abstractmethod
@@ -143,13 +221,13 @@ def copy_from(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
-        """Copy a file from the remote Node to the local filesystem.
+        """Copy a file from the remote node to the local filesystem.
 
-        Copy source_file from the remote Node associated with this remote
-        session to destination_file on the local filesystem.
+        Copy `source_file` from the remote node associated with this remote
+        session to `destination_file` on the local filesystem.
 
         Args:
-            source_file: the file on the remote Node.
+            source_file: the file on the remote node.
             destination_file: a file or directory path on the local filesystem.
         """
 
@@ -159,14 +237,14 @@ def copy_to(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
-        """Copy a file from local filesystem to the remote Node.
+        """Copy a file from local filesystem to the remote node.
 
-        Copy source_file from local filesystem to destination_file
-        on the remote Node associated with this remote session.
+        Copy `source_file` from local filesystem to `destination_file`
+        on the remote node associated with this remote session.
 
         Args:
             source_file: the file on the local filesystem.
-            destination_file: a file or directory path on the remote Node.
+            destination_file: a file or directory path on the remote node.
         """
 
     @abstractmethod
@@ -176,8 +254,12 @@ def remove_remote_dir(
         recursive: bool = True,
         force: bool = True,
     ) -> None:
-        """
-        Remove remote directory, by default remove recursively and forcefully.
+        """Remove remote directory, by default remove recursively and forcefully.
+
+        Args:
+            remote_dir_path: The path of the directory to remove.
+            recursive: If :data:`True`, also remove all contents inside the directory.
+            force: If :data:`True`, ignore all warnings and try to remove at all costs.
         """
 
     @abstractmethod
@@ -186,9 +268,12 @@ def extract_remote_tarball(
         remote_tarball_path: str | PurePath,
         expected_dir: str | PurePath | None = None,
     ) -> None:
-        """
-        Extract remote tarball in place. If expected_dir is a non-empty string, check
-        whether the dir exists after extracting the archive.
+        """Extract remote tarball in its remote directory.
+
+        Args:
+            remote_tarball_path: The path of the tarball on the remote node.
+            expected_dir: If non-empty, check whether `expected_dir` exists after extracting
+                the archive.
         """
 
     @abstractmethod
@@ -201,69 +286,119 @@ def build_dpdk(
         rebuild: bool = False,
         timeout: float = SETTINGS.compile_timeout,
     ) -> None:
-        """
-        Build DPDK in the input dir with specified environment variables and meson
-        arguments.
+        """Build DPDK on the remote node.
+
+        An extracted DPDK tarball must be present on the node. The build consists of two steps::
+
+            meson setup <meson args> remote_dpdk_dir remote_dpdk_build_dir
+            ninja -C remote_dpdk_build_dir
+
+        The :option:`--compile-timeout` command line argument and the :envvar:`DTS_COMPILE_TIMEOUT`
+        environment variable configure the timeout of DPDK build.
+
+        Args:
+            env_vars: Use these environment variables when building DPDK.
+            meson_args: Use these meson arguments when building DPDK.
+            remote_dpdk_dir: The directory on the remote node where DPDK will be built.
+            remote_dpdk_build_dir: The target build directory on the remote node.
+            rebuild: If :data:`True`, do a subsequent build with ``meson configure`` instead
+                of ``meson setup``.
+            timeout: Wait at most this long in seconds for the build to execute.
         """
 
     @abstractmethod
     def get_dpdk_version(self, version_path: str | PurePath) -> str:
-        """
-        Inspect DPDK version on the remote node from version_path.
+        """Inspect the DPDK version on the remote node.
+
+        Args:
+            version_path: The path to the VERSION file containing the DPDK version.
+
+        Returns:
+            The DPDK version.
         """
 
     @abstractmethod
     def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
-        """
-        Compose a list of LogicalCores present on the remote node.
-        If use_first_core is False, the first physical core won't be used.
+        r"""Get the list of :class:`~framework.testbed_model.cpu.LogicalCore`\s on the remote node.
+
+        Args:
+            use_first_core: If :data:`False`, the first physical core won't be used.
+
+        Returns:
+            The logical cores present on the node.
         """
 
     @abstractmethod
     def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
-        """
-        Kill and cleanup all DPDK apps identified by dpdk_prefix_list. If
-        dpdk_prefix_list is empty, attempt to find running DPDK apps to kill and clean.
+        """Kill and cleanup all DPDK apps.
+
+        Args:
+            dpdk_prefix_list: Kill all apps identified by `dpdk_prefix_list`.
+                If `dpdk_prefix_list` is empty, attempt to find running DPDK apps to kill and clean.
         """
 
     @abstractmethod
     def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
-        """
-        Get the DPDK file prefix that will be used when running DPDK apps.
+        """Make OS-specific modification to the DPDK file prefix.
+
+        Args:
+            dpdk_prefix: The OS-unaware file prefix.
+
+        Returns:
+            The OS-specific file prefix.
         """
 
     @abstractmethod
-    def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
-        """
-        Get the node's Hugepage Size, configure the specified amount of hugepages
+    def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
+        """Configure hugepages on the node.
+
+        Get the node's Hugepage Size, configure the specified count of hugepages
         if needed and mount the hugepages if needed.
-        If force_first_numa is True, configure hugepages just on the first socket.
+
+        Args:
+            hugepage_count: Configure this many hugepages.
+            force_first_numa: If :data:`True`, configure hugepages just on the first socket.
         """
 
     @abstractmethod
     def get_compiler_version(self, compiler_name: str) -> str:
-        """
-        Get installed version of compiler used for DPDK
+        """Get installed version of compiler used for DPDK.
+
+        Args:
+            compiler_name: The name of the compiler executable.
+
+        Returns:
+            The compiler's version.
         """
 
     @abstractmethod
     def get_node_info(self) -> NodeInfo:
-        """
-        Collect information about the node
+        """Collect additional information about the node.
+
+        Returns:
+            Node information.
         """
 
     @abstractmethod
     def update_ports(self, ports: list[Port]) -> None:
-        """
-        Get additional information about ports:
-            Logical name (e.g. enp7s0) if applicable
-            Mac address
+        """Get additional information about ports from the operating system and update them.
+
+        The additional information is:
+
+            * Logical name (e.g. ``enp7s0``) if applicable,
+            * MAC address.
+
+        Args:
+            ports: The ports to update.
         """
 
     @abstractmethod
     def configure_port_state(self, port: Port, enable: bool) -> None:
-        """
-        Enable/disable port.
+        """Enable/disable `port` in the operating system.
+
+        Args:
+            port: The port to configure.
+            enable: If :data:`True`, enable the port, otherwise shut it down.
         """
 
     @abstractmethod
@@ -273,12 +408,18 @@ def configure_port_ip_address(
         port: Port,
         delete: bool,
     ) -> None:
-        """
-        Configure (add or delete) an IP address of the input port.
+        """Configure an IP address on `port` in the operating system.
+
+        Args:
+            address: The address to configure.
+            port: The port to configure.
+            delete: If :data:`True`, remove the IP address, otherwise configure it.
         """
 
     @abstractmethod
     def configure_ipv4_forwarding(self, enable: bool) -> None:
-        """
-        Enable IPv4 forwarding in the underlying OS.
+        """Enable IPv4 forwarding in the operating system.
+
+        Args:
+            enable: If :data:`True`, enable the forwarding, otherwise disable it.
         """
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
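
The module docstring's ``remove_remote_dir`` example is the pattern in miniature: the caller
stays OS-unaware and the session subclass supplies the OS-specific command. A minimal sketch
with a stubbed-out ``send_command`` (the real classes also carry remote sessions, loggers and
node configuration):

    from abc import ABC, abstractmethod

    class OSSessionSketch(ABC):
        """Stripped-down stand-in for OSSession, for illustration only."""

        def send_command(self, command: str) -> None:
            print(f"would run: {command}")  # stub instead of a real remote session

        @abstractmethod
        def remove_remote_dir(self, remote_dir_path: str) -> None:
            """Remove a remote directory in an OS-appropriate way."""

    class LinuxSessionSketch(OSSessionSketch):
        def remove_remote_dir(self, remote_dir_path: str) -> None:
            # On Linux, the OS-unaware request translates to rm -rf.
            self.send_command(f"rm -rf {remote_dir_path}")

    LinuxSessionSketch().remove_remote_dir("/tmp/dpdk-test")  # would run: rm -rf /tmp/dpdk-test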

* [PATCH v7 16/21] dts: posix and linux sessions docstring update
  2023-11-15 13:09             ` [PATCH v7 00/21] dts: docstrings update Juraj Linkeš
                                 ` (14 preceding siblings ...)
  2023-11-15 13:09               ` [PATCH v7 15/21] dts: os session " Juraj Linkeš
@ 2023-11-15 13:09               ` Juraj Linkeš
  2023-11-22 13:24                 ` Yoan Picchi
  2023-11-15 13:09               ` [PATCH v7 17/21] dts: node " Juraj Linkeš
                                 ` (5 subsequent siblings)
  21 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15 13:09 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/linux_session.py | 63 ++++++++++-----
 dts/framework/testbed_model/posix_session.py | 81 +++++++++++++++++---
 2 files changed, 113 insertions(+), 31 deletions(-)

diff --git a/dts/framework/testbed_model/linux_session.py b/dts/framework/testbed_model/linux_session.py
index f472bb8f0f..279954ff63 100644
--- a/dts/framework/testbed_model/linux_session.py
+++ b/dts/framework/testbed_model/linux_session.py
@@ -2,6 +2,13 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+"""Linux OS translator.
+
+Translate OS-unaware calls into Linux calls/utilities. Most Linux distributions are largely
+compliant with POSIX standards, so this module only implements the parts that aren't,
+building on the POSIX functionality implemented in :class:`PosixSession`.
+"""
+
 import json
 from ipaddress import IPv4Interface, IPv6Interface
 from typing import TypedDict, Union
@@ -17,43 +24,51 @@
 
 
 class LshwConfigurationOutput(TypedDict):
+    """The relevant parts of ``lshw``'s ``configuration`` section."""
+
+    #:
     link: str
 
 
 class LshwOutput(TypedDict):
-    """
-    A model of the relevant information from json lshw output, e.g.:
-    {
-    ...
-    "businfo" : "pci@0000:08:00.0",
-    "logicalname" : "enp8s0",
-    "version" : "00",
-    "serial" : "52:54:00:59:e1:ac",
-    ...
-    "configuration" : {
-      ...
-      "link" : "yes",
-      ...
-    },
-    ...
+    """A model of the relevant information from ``lshw``'s json output.
+
+    e.g.::
+
+        {
+        ...
+        "businfo" : "pci@0000:08:00.0",
+        "logicalname" : "enp8s0",
+        "version" : "00",
+        "serial" : "52:54:00:59:e1:ac",
+        ...
+        "configuration" : {
+          ...
+          "link" : "yes",
+          ...
+        },
+        ...
     """
 
+    #:
     businfo: str
+    #:
     logicalname: NotRequired[str]
+    #:
     serial: NotRequired[str]
+    #:
     configuration: LshwConfigurationOutput
 
 
 class LinuxSession(PosixSession):
-    """
-    The implementation of non-Posix compliant parts of Linux remote sessions.
-    """
+    """The implementation of non-Posix compliant parts of Linux."""
 
     @staticmethod
     def _get_privileged_command(command: str) -> str:
         return f"sudo -- sh -c '{command}'"
 
     def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
+        """Overrides :meth:`~.os_session.OSSession.get_remote_cpus`."""
         cpu_info = self.send_command("lscpu -p=CPU,CORE,SOCKET,NODE|grep -v \\#").stdout
         lcores = []
         for cpu_line in cpu_info.splitlines():
@@ -65,18 +80,20 @@ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
         return lcores
 
     def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
+        """Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`."""
         return dpdk_prefix
 
-    def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
+    def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
+        """Overrides :meth:`~.os_session.OSSession.setup_hugepages`."""
         self._logger.info("Getting Hugepage information.")
         hugepage_size = self._get_hugepage_size()
         hugepages_total = self._get_hugepages_total()
         self._numa_nodes = self._get_numa_nodes()
 
-        if force_first_numa or hugepages_total != hugepage_amount:
+        if force_first_numa or hugepages_total != hugepage_count:
             # when forcing numa, we need to clear existing hugepages regardless
             # of size, so they can be moved to the first numa node
-            self._configure_huge_pages(hugepage_amount, hugepage_size, force_first_numa)
+            self._configure_huge_pages(hugepage_count, hugepage_size, force_first_numa)
         else:
             self._logger.info("Hugepages already configured.")
         self._mount_huge_pages()
@@ -140,6 +157,7 @@ def _configure_huge_pages(
         )
 
     def update_ports(self, ports: list[Port]) -> None:
+        """Overrides :meth:`~.os_session.OSSession.update_ports`."""
         self._logger.debug("Gathering port info.")
         for port in ports:
             assert (
@@ -178,6 +196,7 @@ def _update_port_attr(
             )
 
     def configure_port_state(self, port: Port, enable: bool) -> None:
+        """Overrides :meth:`~.os_session.OSSession.configure_port_state`."""
         state = "up" if enable else "down"
         self.send_command(
             f"ip link set dev {port.logical_name} {state}", privileged=True
@@ -189,6 +208,7 @@ def configure_port_ip_address(
         port: Port,
         delete: bool,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.configure_port_ip_address`."""
         command = "del" if delete else "add"
         self.send_command(
             f"ip address {command} {address} dev {port.logical_name}",
@@ -197,5 +217,6 @@ def configure_port_ip_address(
         )
 
     def configure_ipv4_forwarding(self, enable: bool) -> None:
+        """Overrides :meth:`~.os_session.OSSession.configure_ipv4_forwarding`."""
         state = 1 if enable else 0
         self.send_command(f"sysctl -w net.ipv4.ip_forward={state}", privileged=True)
diff --git a/dts/framework/testbed_model/posix_session.py b/dts/framework/testbed_model/posix_session.py
index 1d1d5b1b26..a4824aa274 100644
--- a/dts/framework/testbed_model/posix_session.py
+++ b/dts/framework/testbed_model/posix_session.py
@@ -2,6 +2,15 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+"""POSIX compliant OS translator.
+
+Translates OS-unaware calls into POSIX compliant calls/utilities. POSIX is a set of standards
+for portability between Unix operating systems which not all Linux distributions
+(or the tools most frequently bundled with said distributions) adhere to. Most Linux
+distributions are largely compliant, though.
+This intermediate module implements the parts common to such mostly POSIX compliant distributions.
+"""
+
 import re
 from collections.abc import Iterable
 from pathlib import PurePath, PurePosixPath
@@ -15,13 +24,21 @@
 
 
 class PosixSession(OSSession):
-    """
-    An intermediary class implementing the Posix compliant parts of
-    Linux and other OS remote sessions.
-    """
+    """An intermediary class implementing the POSIX standard."""
 
     @staticmethod
     def combine_short_options(**opts: bool) -> str:
+        """Combine shell options into one argument.
+
+        These are options such as ``-x``, ``-v``, ``-f`` which are combined into ``-xvf``.
+
+        Args:
+            opts: The keys are option names (usually one letter) and the bool values indicate
+                whether to include the option in the resulting argument.
+
+        Returns:
+            The options combined into one argument.
+        """
         ret_opts = ""
         for opt, include in opts.items():
             if include:
@@ -33,17 +50,19 @@ def combine_short_options(**opts: bool) -> str:
         return ret_opts
 
     def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePosixPath:
+        """Overrides :meth:`~.os_session.OSSession.guess_dpdk_remote_dir`."""
         remote_guess = self.join_remote_path(remote_dir, "dpdk-*")
         result = self.send_command(f"ls -d {remote_guess} | tail -1")
         return PurePosixPath(result.stdout)
 
     def get_remote_tmp_dir(self) -> PurePosixPath:
+        """Overrides :meth:`~.os_session.OSSession.get_remote_tmp_dir`."""
         return PurePosixPath("/tmp")
 
     def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
-        """
-        Create extra environment variables needed for i686 arch build. Get information
-        from the node if needed.
+        """Overrides :meth:`~.os_session.OSSession.get_dpdk_build_env_vars`.
+
+        Supported architecture: ``i686``.
         """
         env_vars = {}
         if arch == Architecture.i686:
@@ -63,6 +82,7 @@ def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
         return env_vars
 
     def join_remote_path(self, *args: str | PurePath) -> PurePosixPath:
+        """Overrides :meth:`~.os_session.OSSession.join_remote_path`."""
         return PurePosixPath(*args)
 
     def copy_from(
@@ -70,6 +90,7 @@ def copy_from(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.copy_from`."""
         self.remote_session.copy_from(source_file, destination_file)
 
     def copy_to(
@@ -77,6 +98,7 @@ def copy_to(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.copy_to`."""
         self.remote_session.copy_to(source_file, destination_file)
 
     def remove_remote_dir(
@@ -85,6 +107,7 @@ def remove_remote_dir(
         recursive: bool = True,
         force: bool = True,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.remove_remote_dir`."""
         opts = PosixSession.combine_short_options(r=recursive, f=force)
         self.send_command(f"rm{opts} {remote_dir_path}")
 
@@ -93,6 +116,7 @@ def extract_remote_tarball(
         remote_tarball_path: str | PurePath,
         expected_dir: str | PurePath | None = None,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.extract_remote_tarball`."""
         self.send_command(
             f"tar xfm {remote_tarball_path} "
             f"-C {PurePosixPath(remote_tarball_path).parent}",
@@ -110,6 +134,7 @@ def build_dpdk(
         rebuild: bool = False,
         timeout: float = SETTINGS.compile_timeout,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.build_dpdk`."""
         try:
             if rebuild:
                 # reconfigure, then build
@@ -140,12 +165,14 @@ def build_dpdk(
             raise DPDKBuildError(f"DPDK build failed when doing '{e.command}'.")
 
     def get_dpdk_version(self, build_dir: str | PurePath) -> str:
+        """Overrides :meth:`~.os_session.OSSession.get_dpdk_version`."""
         out = self.send_command(
             f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True
         )
         return out.stdout
 
     def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
+        """Overrides :meth:`~.os_session.OSSession.kill_cleanup_dpdk_apps`."""
         self._logger.info("Cleaning up DPDK apps.")
         dpdk_runtime_dirs = self._get_dpdk_runtime_dirs(dpdk_prefix_list)
         if dpdk_runtime_dirs:
@@ -159,6 +186,14 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
     def _get_dpdk_runtime_dirs(
         self, dpdk_prefix_list: Iterable[str]
     ) -> list[PurePosixPath]:
+        """Find runtime directories DPDK apps are currently using.
+
+        Args:
+            dpdk_prefix_list: The prefixes DPDK apps were started with.
+
+        Returns:
+            The paths of DPDK apps' runtime dirs.
+        """
         prefix = PurePosixPath("/var", "run", "dpdk")
         if not dpdk_prefix_list:
             remote_prefixes = self._list_remote_dirs(prefix)
@@ -170,9 +205,13 @@ def _get_dpdk_runtime_dirs(
         return [PurePosixPath(prefix, dpdk_prefix) for dpdk_prefix in dpdk_prefix_list]
 
     def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None:
-        """
-        Return a list of directories of the remote_dir.
-        If remote_path doesn't exist, return None.
+        """Contents of remote_path.
+
+        Args:
+            remote_path: List the contents of this path.
+
+        Returns:
+            The directories in `remote_path`. If `remote_path` doesn't exist, return :data:`None`.
         """
         out = self.send_command(
             f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'"
@@ -183,6 +222,17 @@ def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None:
             return out.splitlines()
 
     def _get_dpdk_pids(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> list[int]:
+        """Find PIDs of running DPDK apps.
+
+        Look at each "config" file found in `dpdk_runtime_dirs` and find the PIDs of processes
+        that opened those files.
+
+        Args:
+            dpdk_runtime_dirs: The paths of DPDK apps' runtime dirs.
+
+        Returns:
+            The PIDs of running DPDK apps.
+        """
         pids = []
         pid_regex = r"p(\d+)"
         for dpdk_runtime_dir in dpdk_runtime_dirs:
@@ -203,6 +253,14 @@ def _remote_files_exists(self, remote_path: PurePath) -> bool:
     def _check_dpdk_hugepages(
         self, dpdk_runtime_dirs: Iterable[str | PurePath]
     ) -> None:
+        """Check there aren't any leftover hugepages.
+
+        If any hugepages are found, emit a warning. The hugepages are checked in the
+        "hugepage_info" file of each directory in `dpdk_runtime_dirs`.
+
+        Args:
+            dpdk_runtime_dirs: The paths of DPDK apps' runtime dirs.
+        """
         for dpdk_runtime_dir in dpdk_runtime_dirs:
             hugepage_info = PurePosixPath(dpdk_runtime_dir, "hugepage_info")
             if self._remote_files_exists(hugepage_info):
@@ -220,9 +278,11 @@ def _remove_dpdk_runtime_dirs(
             self.remove_remote_dir(dpdk_runtime_dir)
 
     def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
+        """Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`."""
         return ""
 
     def get_compiler_version(self, compiler_name: str) -> str:
+        """Overrides :meth:`~.os_session.OSSession.get_compiler_version`."""
         match compiler_name:
             case "gcc":
                 return self.send_command(
@@ -240,6 +300,7 @@ def get_compiler_version(self, compiler_name: str) -> str:
                 raise ValueError(f"Unknown compiler {compiler_name}")
 
     def get_node_info(self) -> NodeInfo:
+        """Overrides :meth:`~.os_session.OSSession.get_node_info`."""
         os_release_info = self.send_command(
             "awk -F= '$1 ~ /^NAME$|^VERSION$/ {print $2}' /etc/os-release",
             SETTINGS.timeout,
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
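
The `_get_dpdk_pids` docstring above describes finding the PIDs of processes that
hold a runtime "config" file open. A minimal standalone sketch of that approach,
assuming `lsof` is available and using its `-Fp` field output (one `p<PID>` line
per process); the helper name and the example path are illustrative only:

    import re
    import subprocess
    from pathlib import PurePosixPath

    def find_pids_using_file(config_file: PurePosixPath) -> list[int]:
        # lsof -Fp prints one "p<PID>" line for each process with the file open.
        out = subprocess.run(
            ["lsof", "-Fp", str(config_file)], capture_output=True, text=True
        ).stdout
        return [int(match.group(1)) for match in re.finditer(r"p(\d+)", out)]

    print(find_pids_using_file(PurePosixPath("/var/run/dpdk/rte/config")))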

* [PATCH v7 17/21] dts: node docstring update
  2023-11-15 13:09             ` [PATCH v7 00/21] dts: docstrings update Juraj Linkeš
                                 ` (15 preceding siblings ...)
  2023-11-15 13:09               ` [PATCH v7 16/21] dts: posix and linux sessions " Juraj Linkeš
@ 2023-11-15 13:09               ` Juraj Linkeš
  2023-11-22 12:18                 ` Yoan Picchi
  2023-11-15 13:09               ` [PATCH v7 18/21] dts: sut and tg nodes " Juraj Linkeš
                                 ` (4 subsequent siblings)
  21 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15 13:09 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/node.py | 191 +++++++++++++++++++---------
 1 file changed, 131 insertions(+), 60 deletions(-)

diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index fa5b143cdd..f93b4acecd 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -3,8 +3,13 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
-"""
-A node is a generic host that DTS connects to and manages.
+"""Common functionality for node management.
+
+A node is any host/server DTS connects to.
+
+The base class, :class:`Node`, provides functionality common to all nodes and is supposed
+to be extended by subclasses with functionality specific to each node type.
+The decorator :func:`Node.skip_setup` can be used without subclassing.
 """
 
 from abc import ABC
@@ -35,10 +40,22 @@
 
 
 class Node(ABC):
-    """
-    Basic class for node management. This class implements methods that
-    manage a node, such as information gathering (of CPU/PCI/NIC) and
-    environment setup.
+    """The base class for node management.
+
+    It shouldn't be instantiated, but rather subclassed.
+    It implements common methods to manage any node:
+
+        * Connection to the node,
+        * Hugepages setup.
+
+    Attributes:
+        main_session: The primary OS-aware remote session used to communicate with the node.
+        config: The node configuration.
+        name: The name of the node.
+        lcores: The list of logical cores that DTS can use on the node.
+            It's derived from logical cores present on the node and the test run configuration.
+        ports: The ports of this node specified in the test run configuration.
+        virtual_devices: The virtual devices used on the node.
     """
 
     main_session: OSSession
@@ -52,6 +69,17 @@ class Node(ABC):
     virtual_devices: list[VirtualDevice]
 
     def __init__(self, node_config: NodeConfiguration):
+        """Connect to the node and gather info during initialization.
+
+        Extra gathered information:
+
+        * The list of available logical CPUs. This is then filtered by
+          the ``lcores`` configuration in the YAML test run configuration file,
+        * Information about ports from the YAML test run configuration file.
+
+        Args:
+            node_config: The node's test run configuration.
+        """
         self.config = node_config
         self.name = node_config.name
         self._logger = getLogger(self.name)
@@ -60,7 +88,7 @@ def __init__(self, node_config: NodeConfiguration):
         self._logger.info(f"Connected to node: {self.name}")
 
         self._get_remote_cpus()
-        # filter the node lcores according to user config
+        # filter the node lcores according to the test run configuration
         self.lcores = LogicalCoreListFilter(
             self.lcores, LogicalCoreList(self.config.lcores)
         ).filter()
@@ -76,9 +104,14 @@ def _init_ports(self) -> None:
             self.configure_port_state(port)
 
     def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
-        """
-        Perform the execution setup that will be done for each execution
-        this node is part of.
+        """Execution setup steps.
+
+        Configure hugepages and call :meth:`_set_up_execution` where
+        the rest of the configuration steps (if any) are implemented.
+
+        Args:
+            execution_config: The execution test run configuration according to which
+                the setup steps will be taken.
         """
         self._setup_hugepages()
         self._set_up_execution(execution_config)
@@ -87,58 +120,74 @@ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
             self.virtual_devices.append(VirtualDevice(vdev))
 
     def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional execution setup steps for subclasses.
+
+        Subclasses should override this if they need to add additional execution setup steps.
         """
 
     def tear_down_execution(self) -> None:
-        """
-        Perform the execution teardown that will be done after each execution
-        this node is part of concludes.
+        """Execution teardown steps.
+
+        There are currently no execution teardown steps common to all DTS node types.
         """
         self.virtual_devices = []
         self._tear_down_execution()
 
     def _tear_down_execution(self) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional execution teardown steps for subclasses.
+
+        Subclasses should override this if they need to add additional execution teardown steps.
         """
 
     def set_up_build_target(
         self, build_target_config: BuildTargetConfiguration
     ) -> None:
-        """
-        Perform the build target setup that will be done for each build target
-        tested on this node.
+        """Build target setup steps.
+
+        There are currently no build target setup steps common to all DTS node types.
+
+        Args:
+            build_target_config: The build target test run configuration according to which
+                the setup steps will be taken.
         """
         self._set_up_build_target(build_target_config)
 
     def _set_up_build_target(
         self, build_target_config: BuildTargetConfiguration
     ) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional build target setup steps for subclasses.
+
+        Subclasses should override this if they need to add additional build target setup steps.
         """
 
     def tear_down_build_target(self) -> None:
-        """
-        Perform the build target teardown that will be done after each build target
-        tested on this node.
+        """Build target teardown steps.
+
+        There are currently no build target teardown steps common to all DTS node types.
         """
         self._tear_down_build_target()
 
     def _tear_down_build_target(self) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional build target teardown steps for subclasses.
+
+        Subclasses should override this if they need to add additional build target teardown steps.
         """
 
     def create_session(self, name: str) -> OSSession:
-        """
-        Create and return a new OSSession tailored to the remote OS.
+        """Create and return a new OS-aware remote session.
+
+        The node will not use the session it creates; the session is meant for the caller.
+        The session will be maintained for the entire lifecycle of the node object,
+        at the end of which the session will be cleaned up automatically.
+
+        Note:
+            Any number of these supplementary sessions may be created.
+
+        Args:
+            name: The name of the session.
+
+        Returns:
+            A new OS-aware remote session.
         """
         session_name = f"{self.name} {name}"
         connection = create_session(
@@ -156,19 +205,19 @@ def create_interactive_shell(
         privileged: bool = False,
         app_args: str = "",
     ) -> InteractiveShellType:
-        """Create a handler for an interactive session.
+        """Factory for interactive session handlers.
 
-        Instantiate shell_cls according to the remote OS specifics.
+        Instantiate `shell_cls` according to the remote OS specifics.
 
         Args:
             shell_cls: The class of the shell.
-            timeout: Timeout for reading output from the SSH channel. If you are
-                reading from the buffer and don't receive any data within the timeout
-                it will throw an error.
+            timeout: Timeout for reading output from the SSH channel. If no data is received
+                from the buffer within the timeout, an error will be raised.
             privileged: Whether to run the shell with administrative privileges.
             app_args: The arguments to be passed to the application.
+
         Returns:
-            Instance of the desired interactive application.
+            An instance of the desired interactive application shell.
         """
         if not shell_cls.dpdk_app:
             shell_cls.path = self.main_session.join_remote_path(shell_cls.path)
@@ -185,14 +234,22 @@ def filter_lcores(
         filter_specifier: LogicalCoreCount | LogicalCoreList,
         ascending: bool = True,
     ) -> list[LogicalCore]:
-        """
-        Filter the LogicalCores found on the Node according to
-        a LogicalCoreCount or a LogicalCoreList.
+        """Filter the node's logical cores that DTS can use.
+
+        Logical cores that DTS can use are the ones that are present on the node, but filtered
+        according to the test run configuration. The `filter_specifier` will filter cores from
+        those logical cores.
+
+        Args:
+            filter_specifier: Two different filters can be used, one that specifies the number
+                of logical cores per core, cores per socket and the number of sockets,
+                and another one that specifies a logical core list.
+            ascending: If :data:`True`, use cores with the lowest numerical id first and continue
+                in ascending order. If :data:`False`, start with the highest id and continue
+                in descending order. This ordering affects which sockets to consider first as well.
 
-        If ascending is True, use cores with the lowest numerical id first
-        and continue in ascending order. If False, start with the highest
-        id and continue in descending order. This ordering affects which
-        sockets to consider first as well.
+        Returns:
+            The filtered logical cores.
         """
         self._logger.debug(f"Filtering {filter_specifier} from {self.lcores}.")
         return lcore_filter(
@@ -202,17 +259,14 @@ def filter_lcores(
         ).filter()
 
     def _get_remote_cpus(self) -> None:
-        """
-        Scan CPUs in the remote OS and store a list of LogicalCores.
-        """
+        """Scan CPUs in the remote OS and store a list of LogicalCores."""
         self._logger.info("Getting CPU information.")
         self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
 
     def _setup_hugepages(self) -> None:
-        """
-        Setup hugepages on the Node. Different architectures can supply different
-        amounts of memory for hugepages and numa-based hugepage allocation may need
-        to be considered.
+        """Setup hugepages on the node.
+
+        Configure the hugepages only if they're specified in the node's test run configuration.
         """
         if self.config.hugepages:
             self.main_session.setup_hugepages(
@@ -220,8 +274,11 @@ def _setup_hugepages(self) -> None:
             )
 
     def configure_port_state(self, port: Port, enable: bool = True) -> None:
-        """
-        Enable/disable port.
+        """Enable/disable `port`.
+
+        Args:
+            port: The port to enable/disable.
+            enable: :data:`True` to enable, :data:`False` to disable.
         """
         self.main_session.configure_port_state(port, enable)
 
@@ -231,15 +288,17 @@ def configure_port_ip_address(
         port: Port,
         delete: bool = False,
     ) -> None:
-        """
-        Configure the IP address of a port on this node.
+        """Add an IP address to `port` on this node.
+
+        Args:
+            address: The IP address with mask in CIDR format. Can be either IPv4 or IPv6.
+            port: The port to which to add the address.
+            delete: If :data:`True`, will delete the address from the port instead of adding it.
         """
         self.main_session.configure_port_ip_address(address, port, delete)
 
     def close(self) -> None:
-        """
-        Close all connections and free other resources.
-        """
+        """Close all connections and free other resources."""
         if self.main_session:
             self.main_session.close()
         for session in self._other_sessions:
@@ -248,6 +307,11 @@ def close(self) -> None:
 
     @staticmethod
     def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
+        """Skip the decorated function.
+
+        The :option:`--skip-setup` command line argument and the :envvar:`DTS_SKIP_SETUP`
+        environment variable enable the decorator.
+        """
         if SETTINGS.skip_setup:
             return lambda *args: None
         else:
@@ -257,6 +321,13 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
 def create_session(
     node_config: NodeConfiguration, name: str, logger: DTSLOG
 ) -> OSSession:
+    """Factory for OS-aware sessions.
+
+    Args:
+        node_config: The test run configuration of the node to connect to.
+        name: The name of the session.
+        logger: The logger instance this session will use.
+    """
     match node_config.os:
         case OS.linux:
             return LinuxSession(node_config, name, logger)
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
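
The `Node.skip_setup` docstring above documents a conditional no-op decorator.
A minimal self-contained sketch of the same pattern, with a module-level flag
standing in for `SETTINGS.skip_setup`:

    from typing import Any, Callable

    SKIP_SETUP = True  # stands in for SETTINGS.skip_setup

    def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
        # Replace the decorated function with a no-op when setup is skipped.
        if SKIP_SETUP:
            return lambda *args: None
        return func

    @skip_setup
    def copy_tarball(path: str) -> None:
        print(f"copying {path}")

    copy_tarball("dpdk.tar.xz")  # prints nothing while SKIP_SETUP is True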

* [PATCH v7 18/21] dts: sut and tg nodes docstring update
  2023-11-15 13:09             ` [PATCH v7 00/21] dts: docstrings update Juraj Linkeš
                                 ` (16 preceding siblings ...)
  2023-11-15 13:09               ` [PATCH v7 17/21] dts: node " Juraj Linkeš
@ 2023-11-15 13:09               ` Juraj Linkeš
  2023-11-22 13:12                 ` Yoan Picchi
  2023-11-15 13:09               ` [PATCH v7 19/21] dts: base traffic generators " Juraj Linkeš
                                 ` (3 subsequent siblings)
  21 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15 13:09 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/sut_node.py | 224 ++++++++++++++++--------
 dts/framework/testbed_model/tg_node.py  |  42 +++--
 2 files changed, 173 insertions(+), 93 deletions(-)

diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 17deea06e2..123b16fee0 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -3,6 +3,14 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+"""System under test (DPDK + hardware) node.
+
+A system under test (SUT) is the combination of DPDK
+and the hardware we're testing with DPDK (NICs, crypto and other devices).
+An SUT node is where this SUT runs.
+"""
+
+
 import os
 import tarfile
 import time
@@ -26,6 +34,11 @@
 
 
 class EalParameters(object):
+    """The environment abstraction layer parameters.
+
+    Converting an instance to a string produces the EAL command-line parameter string.
+    """
+
     def __init__(
         self,
         lcore_list: LogicalCoreList,
@@ -35,21 +48,23 @@ def __init__(
         vdevs: list[VirtualDevice],
         other_eal_param: str,
     ):
-        """
-        Generate eal parameters character string;
-        :param lcore_list: the list of logical cores to use.
-        :param memory_channels: the number of memory channels to use.
-        :param prefix: set file prefix string, eg:
-                        prefix='vf'
-        :param no_pci: switch of disable PCI bus eg:
-                        no_pci=True
-        :param vdevs: virtual device list, eg:
-                        vdevs=[
-                            VirtualDevice('net_ring0'),
-                            VirtualDevice('net_ring1')
-                        ]
-        :param other_eal_param: user defined DPDK eal parameters, eg:
-                        other_eal_param='--single-file-segments'
+        """Initialize the parameters according to inputs.
+
+        Process the parameters into the format used on the command line.
+
+        Args:
+            lcore_list: The list of logical cores to use.
+            memory_channels: The number of memory channels to use.
+            prefix: Set the file prefix string with which to start DPDK, e.g.: ``prefix='vf'``.
+            no_pci: Switch to disable the PCI bus, e.g.: ``no_pci=True``.
+            vdevs: Virtual devices, e.g.::
+
+                vdevs=[
+                    VirtualDevice('net_ring0'),
+                    VirtualDevice('net_ring1')
+                ]
+            other_eal_param: User-defined DPDK EAL parameters, e.g.:
+                ``other_eal_param='--single-file-segments'``.
         """
         self._lcore_list = f"-l {lcore_list}"
         self._memory_channels = f"-n {memory_channels}"
@@ -61,6 +76,7 @@ def __init__(
         self._other_eal_param = other_eal_param
 
     def __str__(self) -> str:
+        """Create the EAL string."""
         return (
             f"{self._lcore_list} "
             f"{self._memory_channels} "
@@ -72,11 +88,21 @@ def __str__(self) -> str:
 
 
 class SutNode(Node):
-    """
-    A class for managing connections to the System under Test, providing
-    methods that retrieve the necessary information about the node (such as
-    CPU, memory and NIC details) and configuration capabilities.
-    Another key capability is building DPDK according to given build target.
+    """The system under test node.
+
+    The SUT node extends :class:`Node` with DPDK specific features:
+
+        * DPDK build,
+        * Gathering of DPDK build info,
+        * The running of DPDK apps, interactively or as one-time execution,
+        * DPDK apps cleanup.
+
+    The :option:`--tarball` command line argument and the :envvar:`DTS_DPDK_TARBALL`
+    environment variable configure the path to the DPDK tarball
+    or the git commit ID, tag ID or tree ID to test.
+
+    Attributes:
+        config: The SUT node configuration.
     """
 
     config: SutNodeConfiguration
@@ -94,6 +120,11 @@ class SutNode(Node):
     _path_to_devbind_script: PurePath | None
 
     def __init__(self, node_config: SutNodeConfiguration):
+        """Extend the constructor with SUT node specifics.
+
+        Args:
+            node_config: The SUT node's test run configuration.
+        """
         super(SutNode, self).__init__(node_config)
         self._dpdk_prefix_list = []
         self._build_target_config = None
@@ -113,6 +144,12 @@ def __init__(self, node_config: SutNodeConfiguration):
 
     @property
     def _remote_dpdk_dir(self) -> PurePath:
+        """The remote DPDK dir.
+
+        This internal property should be set after extracting the DPDK tarball. If it's not set,
+        that implies the DPDK setup step has been skipped, in which case we can guess where
+        a previous build was located.
+        """
         if self.__remote_dpdk_dir is None:
             self.__remote_dpdk_dir = self._guess_dpdk_remote_dir()
         return self.__remote_dpdk_dir
@@ -123,6 +160,11 @@ def _remote_dpdk_dir(self, value: PurePath) -> None:
 
     @property
     def remote_dpdk_build_dir(self) -> PurePath:
+        """The remote DPDK build directory.
+
+        This is the directory where DPDK was built.
+        We assume it was built in a subdirectory of the extracted tarball.
+        """
         if self._build_target_config:
             return self.main_session.join_remote_path(
                 self._remote_dpdk_dir, self._build_target_config.name
@@ -132,6 +174,7 @@ def remote_dpdk_build_dir(self) -> PurePath:
 
     @property
     def dpdk_version(self) -> str:
+        """Last built DPDK version."""
         if self._dpdk_version is None:
             self._dpdk_version = self.main_session.get_dpdk_version(
                 self._remote_dpdk_dir
@@ -140,12 +183,14 @@ def dpdk_version(self) -> str:
 
     @property
     def node_info(self) -> NodeInfo:
+        """Additional node information."""
         if self._node_info is None:
             self._node_info = self.main_session.get_node_info()
         return self._node_info
 
     @property
     def compiler_version(self) -> str:
+        """The node's compiler version."""
         if self._compiler_version is None:
             if self._build_target_config is not None:
                 self._compiler_version = self.main_session.get_compiler_version(
@@ -161,6 +206,7 @@ def compiler_version(self) -> str:
 
     @property
     def path_to_devbind_script(self) -> PurePath:
+        """The path to the dpdk-devbind.py script on the node."""
         if self._path_to_devbind_script is None:
             self._path_to_devbind_script = self.main_session.join_remote_path(
                 self._remote_dpdk_dir, "usertools", "dpdk-devbind.py"
@@ -168,6 +214,11 @@ def path_to_devbind_script(self) -> PurePath:
         return self._path_to_devbind_script
 
     def get_build_target_info(self) -> BuildTargetInfo:
+        """Get additional build target information.
+
+        Returns:
+            The build target information.
+        """
         return BuildTargetInfo(
             dpdk_version=self.dpdk_version, compiler_version=self.compiler_version
         )
@@ -178,8 +229,9 @@ def _guess_dpdk_remote_dir(self) -> PurePath:
     def _set_up_build_target(
         self, build_target_config: BuildTargetConfiguration
     ) -> None:
-        """
-        Setup DPDK on the SUT node.
+        """Setup DPDK on the SUT node.
+
+        Additional build target setup steps on top of those in :class:`Node`.
         """
         # we want to ensure that dpdk_version and compiler_version is reset for new
         # build targets
@@ -200,9 +252,7 @@ def _tear_down_build_target(self) -> None:
     def _configure_build_target(
         self, build_target_config: BuildTargetConfiguration
     ) -> None:
-        """
-        Populate common environment variables and set build target config.
-        """
+        """Populate common environment variables and set build target config."""
         self._env_vars = {}
         self._build_target_config = build_target_config
         self._env_vars.update(
@@ -217,9 +267,7 @@ def _configure_build_target(
 
     @Node.skip_setup
     def _copy_dpdk_tarball(self) -> None:
-        """
-        Copy to and extract DPDK tarball on the SUT node.
-        """
+        """Copy to and extract DPDK tarball on the SUT node."""
         self._logger.info("Copying DPDK tarball to SUT.")
         self.main_session.copy_to(SETTINGS.dpdk_tarball_path, self._remote_tmp_dir)
 
@@ -250,8 +298,9 @@ def _copy_dpdk_tarball(self) -> None:
 
     @Node.skip_setup
     def _build_dpdk(self) -> None:
-        """
-        Build DPDK. Uses the already configured target. Assumes that the tarball has
+        """Build DPDK.
+
+        Uses the already configured target. Assumes that the tarball has
         already been copied to and extracted on the SUT node.
         """
         self.main_session.build_dpdk(
@@ -262,15 +311,19 @@ def _build_dpdk(self) -> None:
         )
 
     def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePath:
-        """
-        Build one or all DPDK apps. Requires DPDK to be already built on the SUT node.
-        When app_name is 'all', build all example apps.
-        When app_name is any other string, tries to build that example app.
-        Return the directory path of the built app. If building all apps, return
-        the path to the examples directory (where all apps reside).
-        The meson_dpdk_args are keyword arguments
-        found in meson_option.txt in root DPDK directory. Do not use -D with them,
-        for example: enable_kmods=True.
+        """Build one or all DPDK apps.
+
+        Requires DPDK to be already built on the SUT node.
+
+        Args:
+            app_name: The name of the DPDK app to build.
+                When `app_name` is ``all``, build all example apps.
+            meson_dpdk_args: The arguments found in ``meson_options.txt`` in the root DPDK directory.
+                Do not use ``-D`` with them.
+
+        Returns:
+            The directory path of the built app. If building all apps, return
+            the path to the examples directory (where all apps reside).
         """
         self.main_session.build_dpdk(
             self._env_vars,
@@ -291,9 +344,7 @@ def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePa
         )
 
     def kill_cleanup_dpdk_apps(self) -> None:
-        """
-        Kill all dpdk applications on the SUT. Cleanup hugepages.
-        """
+        """Kill all dpdk applications on the SUT, then clean up hugepages."""
         if self._dpdk_kill_session and self._dpdk_kill_session.is_alive():
             # we can use the session if it exists and responds
             self._dpdk_kill_session.kill_cleanup_dpdk_apps(self._dpdk_prefix_list)
@@ -312,33 +363,34 @@ def create_eal_parameters(
         vdevs: list[VirtualDevice] | None = None,
         other_eal_param: str = "",
     ) -> "EalParameters":
-        """
-        Generate eal parameters character string;
-        :param lcore_filter_specifier: a number of lcores/cores/sockets to use
-                        or a list of lcore ids to use.
-                        The default will select one lcore for each of two cores
-                        on one socket, in ascending order of core ids.
-        :param ascending_cores: True, use cores with the lowest numerical id first
-                        and continue in ascending order. If False, start with the
-                        highest id and continue in descending order. This ordering
-                        affects which sockets to consider first as well.
-        :param prefix: set file prefix string, eg:
-                        prefix='vf'
-        :param append_prefix_timestamp: if True, will append a timestamp to
-                        DPDK file prefix.
-        :param no_pci: switch of disable PCI bus eg:
-                        no_pci=True
-        :param vdevs: virtual device list, eg:
-                        vdevs=[
-                            VirtualDevice('net_ring0'),
-                            VirtualDevice('net_ring1')
-                        ]
-        :param other_eal_param: user defined DPDK eal parameters, eg:
-                        other_eal_param='--single-file-segments'
-        :return: eal param string, eg:
-                '-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420';
-        """
+        """Compose the EAL parameters.
+
+        Process the list of cores and the DPDK file prefix, then pass them along with
+        the rest of the arguments to :class:`EalParameters`.
 
+        Args:
+            lcore_filter_specifier: A number of lcores/cores/sockets to use
+                or a list of lcore ids to use.
+                The default will select one lcore for each of two cores
+                on one socket, in ascending order of core ids.
+            ascending_cores: Sort cores in ascending order (lowest to highest IDs).
+                If :data:`False`, sort in descending order.
+            prefix: Set the file prefix string with which to start DPDK, e.g.: ``prefix='vf'``.
+            append_prefix_timestamp: If :data:`True`, append a timestamp to the DPDK file prefix.
+            no_pci: Switch to disable the PCI bus, e.g.: ``no_pci=True``.
+            vdevs: Virtual devices, e.g.::
+
+                vdevs=[
+                    VirtualDevice('net_ring0'),
+                    VirtualDevice('net_ring1')
+                ]
+            other_eal_param: User-defined DPDK EAL parameters, e.g.:
+                ``other_eal_param='--single-file-segments'``.
+
+        Returns:
+            An EAL param string, such as
+            ``-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420``.
+        """
         lcore_list = LogicalCoreList(
             self.filter_lcores(lcore_filter_specifier, ascending_cores)
         )
@@ -364,14 +416,29 @@ def create_eal_parameters(
     def run_dpdk_app(
         self, app_path: PurePath, eal_args: "EalParameters", timeout: float = 30
     ) -> CommandResult:
-        """
-        Run DPDK application on the remote node.
+        """Run DPDK application on the remote node.
+
+        The application is not run interactively - the command that starts the application
+        is executed and then the call waits for it to finish execution.
+
+        Args:
+            app_path: The remote path to the DPDK application.
+            eal_args: EAL parameters to run the DPDK application with.
+            timeout: Wait at most this long in seconds to execute the command.
+
+        Returns:
+            The result of the DPDK app execution.
         """
         return self.main_session.send_command(
             f"{app_path} {eal_args}", timeout, privileged=True, verify=True
         )
 
     def configure_ipv4_forwarding(self, enable: bool) -> None:
+        """Enable/disable IPv4 forwarding on the node.
+
+        Args:
+            enable: If :data:`True`, enable the forwarding, otherwise disable it.
+        """
         self.main_session.configure_ipv4_forwarding(enable)
 
     def create_interactive_shell(
@@ -381,9 +448,13 @@ def create_interactive_shell(
         privileged: bool = False,
         eal_parameters: EalParameters | str | None = None,
     ) -> InteractiveShellType:
-        """Factory method for creating a handler for an interactive session.
+        """Extend the factory for interactive session handlers.
+
+        The extensions are SUT node specific:
 
-        Instantiate shell_cls according to the remote OS specifics.
+            * The default for `eal_parameters`,
+            * The interactive shell path `shell_cls.path` is prepended with the path to
+              the remote DPDK build directory for DPDK apps.
 
         Args:
             shell_cls: The class of the shell.
@@ -393,9 +464,10 @@ def create_interactive_shell(
             privileged: Whether to run the shell with administrative privileges.
             eal_parameters: List of EAL parameters to use to launch the app. If this
                 isn't provided or an empty string is passed, it will default to calling
-                create_eal_parameters().
+                :meth:`create_eal_parameters`.
+
         Returns:
-            Instance of the desired interactive application.
+            An instance of the desired interactive application shell.
         """
         if not eal_parameters:
             eal_parameters = self.create_eal_parameters()
@@ -414,8 +486,8 @@ def bind_ports_to_driver(self, for_dpdk: bool = True) -> None:
         """Bind all ports on the SUT to a driver.
 
         Args:
-            for_dpdk: Boolean that, when True, binds ports to os_driver_for_dpdk
-            or, when False, binds to os_driver. Defaults to True.
+            for_dpdk: If :data:`True`, binds ports to os_driver_for_dpdk.
+                If :data:`False`, binds to os_driver.
         """
         for port in self.ports:
             driver = port.os_driver_for_dpdk if for_dpdk else port.os_driver
diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
index 166eb8430e..69eb33ccb1 100644
--- a/dts/framework/testbed_model/tg_node.py
+++ b/dts/framework/testbed_model/tg_node.py
@@ -5,13 +5,8 @@
 
 """Traffic generator node.
 
-This is the node where the traffic generator resides.
-The distinction between a node and a traffic generator is as follows:
-A node is a host that DTS connects to. It could be a baremetal server,
-a VM or a container.
-A traffic generator is software running on the node.
-A traffic generator node is a node running a traffic generator.
-A node can be a traffic generator node as well as system under test node.
+A traffic generator (TG) generates traffic that's sent towards the SUT node.
+A TG node is where the TG runs.
 """
 
 from scapy.packet import Packet  # type: ignore[import]
@@ -24,13 +19,16 @@
 
 
 class TGNode(Node):
-    """Manage connections to a node with a traffic generator.
+    """The traffic generator node.
 
-    Apart from basic node management capabilities, the Traffic Generator node has
-    specialized methods for handling the traffic generator running on it.
+    The TG node extends :class:`Node` with TG specific features:
 
-    Arguments:
-        node_config: The user configuration of the traffic generator node.
+        * Traffic generator initialization,
+        * The sending of traffic and capturing of the received packets,
+        * The sending of traffic without capturing the received packets.
+
+    Not all traffic generators are capable of capturing traffic, which is why there
+    must be a way to send traffic without capturing it.
 
     Attributes:
         traffic_generator: The traffic generator running on the node.
@@ -39,6 +37,13 @@ class TGNode(Node):
     traffic_generator: CapturingTrafficGenerator
 
     def __init__(self, node_config: TGNodeConfiguration):
+        """Extend the constructor with TG node specifics.
+
+        Initialize the traffic generator on the TG node.
+
+        Args:
+            node_config: The TG node's test run configuration.
+        """
         super(TGNode, self).__init__(node_config)
         self.traffic_generator = create_traffic_generator(
             self, node_config.traffic_generator
@@ -52,17 +57,17 @@ def send_packet_and_capture(
         receive_port: Port,
         duration: float = 1,
     ) -> list[Packet]:
-        """Send a packet, return received traffic.
+        """Send `packet`, return received traffic.
 
-        Send a packet on the send_port and then return all traffic captured
-        on the receive_port for the given duration. Also record the captured traffic
+        Send `packet` on `send_port` and then return all traffic captured
+        on `receive_port` for the given duration. Also record the captured traffic
         in a pcap file.
 
         Args:
             packet: The packet to send.
             send_port: The egress port on the TG node.
             receive_port: The ingress port in the TG node.
-            duration: Capture traffic for this amount of time after sending the packet.
+            duration: Capture traffic for this amount of time after sending `packet`.
 
         Returns:
              A list of received packets. May be empty if no packets are captured.
@@ -72,6 +77,9 @@ def send_packet_and_capture(
         )
 
     def close(self) -> None:
-        """Free all resources used by the node"""
+        """Free all resources used by the node.
+
+        This extends the superclass method with TG cleanup.
+        """
         self.traffic_generator.close()
         super(TGNode, self).close()
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
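
To make the `EalParameters` composition above concrete, here is a pared-down
stand-in that mirrors the fields documented in the constructor (vdevs and the
prefix timestamp handling are omitted; the class name is illustrative):

    class EalParams:
        """A simplified stand-in for EalParameters from the patch."""

        def __init__(self, lcore_list: str, memory_channels: int,
                     prefix: str = "", no_pci: bool = False,
                     other_eal_param: str = "") -> None:
            self._lcore_list = f"-l {lcore_list}"
            self._memory_channels = f"-n {memory_channels}"
            self._prefix = f"--file-prefix={prefix}" if prefix else ""
            self._no_pci = "--no-pci" if no_pci else ""
            self._other_eal_param = other_eal_param

        def __str__(self) -> str:
            parts = [self._lcore_list, self._memory_channels,
                     self._prefix, self._no_pci, self._other_eal_param]
            return " ".join(part for part in parts if part)

    print(EalParams("0-3", 4, prefix="vf", other_eal_param="--single-file-segments"))
    # -l 0-3 -n 4 --file-prefix=vf --single-file-segments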

* [PATCH v7 19/21] dts: base traffic generators docstring update
  2023-11-15 13:09             ` [PATCH v7 00/21] dts: docstrings update Juraj Linkeš
                                 ` (17 preceding siblings ...)
  2023-11-15 13:09               ` [PATCH v7 18/21] dts: sut and tg nodes " Juraj Linkeš
@ 2023-11-15 13:09               ` Juraj Linkeš
  2023-11-21 16:20                 ` Yoan Picchi
  2023-11-15 13:09               ` [PATCH v7 20/21] dts: scapy tg " Juraj Linkeš
                                 ` (2 subsequent siblings)
  21 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15 13:09 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 .../traffic_generator/__init__.py             | 22 ++++++++-
 .../capturing_traffic_generator.py            | 46 +++++++++++--------
 .../traffic_generator/traffic_generator.py    | 33 +++++++------
 3 files changed, 68 insertions(+), 33 deletions(-)

diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
index 11bfa1ee0f..51cca77da4 100644
--- a/dts/framework/testbed_model/traffic_generator/__init__.py
+++ b/dts/framework/testbed_model/traffic_generator/__init__.py
@@ -1,6 +1,19 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""DTS traffic generators.
+
+A traffic generator is capable of generating traffic and then monitoring returning traffic.
+A traffic generator may just count the number of received packets
+or it may additionally capture individual packets.
+
+A traffic generator may be software running on generic hardware or it could be specialized hardware.
+
+Traffic generators that only count the number of received packets are suitable only for
+performance testing. In functional testing, we need to be able to dissect each arriving packet,
+so a capturing traffic generator is required.
+"""
+
 from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType
 from framework.exception import ConfigurationError
 from framework.testbed_model.node import Node
@@ -12,8 +25,15 @@
 def create_traffic_generator(
     tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig
 ) -> CapturingTrafficGenerator:
-    """A factory function for creating traffic generator object from user config."""
+    """The factory function for creating traffic generator objects from the test run configuration.
+
+    Args:
+        tg_node: The traffic generator node where the created traffic generator will be running.
+        traffic_generator_config: The traffic generator config.
 
+    Returns:
+        A traffic generator capable of capturing received packets.
+    """
     match traffic_generator_config.traffic_generator_type:
         case TrafficGeneratorType.SCAPY:
             return ScapyTrafficGenerator(tg_node, traffic_generator_config)
diff --git a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
index e521211ef0..b0a43ad003 100644
--- a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
+++ b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
@@ -23,19 +23,22 @@
 
 
 def _get_default_capture_name() -> str:
-    """
-    This is the function used for the default implementation of capture names.
-    """
     return str(uuid.uuid4())
 
 
 class CapturingTrafficGenerator(TrafficGenerator):
     """Capture packets after sending traffic.
 
-    A mixin interface which enables a packet generator to declare that it can capture
+    The intermediary interface which enables a packet generator to declare that it can capture
     packets and return them to the user.
 
+    Similarly to
+    :class:`~framework.testbed_model.traffic_generator.traffic_generator.TrafficGenerator`,
+    this class exposes the public methods specific to capturing traffic generators and defines
+    a private method that must implement the traffic generation and capturing logic in subclasses.
+
     The methods of capturing traffic generators obey the following workflow:
+
         1. send packets
         2. capture packets
         3. write the capture to a .pcap file
@@ -44,6 +47,7 @@ class CapturingTrafficGenerator(TrafficGenerator):
 
     @property
     def is_capturing(self) -> bool:
+        """This traffic generator can capture traffic."""
         return True
 
     def send_packet_and_capture(
@@ -54,11 +58,12 @@ def send_packet_and_capture(
         duration: float,
         capture_name: str = _get_default_capture_name(),
     ) -> list[Packet]:
-        """Send a packet, return received traffic.
+        """Send `packet` and capture received traffic.
+
+        Send `packet` on `send_port` and then return all traffic captured
+        on `receive_port` for the given `duration`.
 
-        Send a packet on the send_port and then return all traffic captured
-        on the receive_port for the given duration. Also record the captured traffic
-        in a pcap file.
+        The captured traffic is recorded in the `capture_name`.pcap file.
 
         Args:
             packet: The packet to send.
@@ -68,7 +73,7 @@ def send_packet_and_capture(
             capture_name: The name of the .pcap file where to store the capture.
 
         Returns:
-             A list of received packets. May be empty if no packets are captured.
+             The received packets. May be empty if no packets are captured.
         """
         return self.send_packets_and_capture(
             [packet], send_port, receive_port, duration, capture_name
@@ -82,11 +87,14 @@ def send_packets_and_capture(
         duration: float,
         capture_name: str = _get_default_capture_name(),
     ) -> list[Packet]:
-        """Send packets, return received traffic.
+        """Send `packets` and capture received traffic.
 
-        Send packets on the send_port and then return all traffic captured
-        on the receive_port for the given duration. Also record the captured traffic
-        in a pcap file.
+        Send `packets` on `send_port` and then return all traffic captured
+        on `receive_port` for the given `duration`.
+
+        The captured traffic is recorded in the `capture_name`.pcap file. The target directory
+        can be configured with the :option:`--output-dir` command line argument or
+        the :envvar:`DTS_OUTPUT_DIR` environment variable.
 
         Args:
             packets: The packets to send.
@@ -96,7 +104,7 @@ def send_packets_and_capture(
             capture_name: The name of the .pcap file where to store the capture.
 
         Returns:
-             A list of received packets. May be empty if no packets are captured.
+             The received packets. May be empty if no packets are captured.
         """
         self._logger.debug(get_packet_summaries(packets))
         self._logger.debug(
@@ -124,10 +132,12 @@ def _send_packets_and_capture(
         receive_port: Port,
         duration: float,
     ) -> list[Packet]:
-        """
-        The extended classes must implement this method which
-        sends packets on send_port and receives packets on the receive_port
-        for the specified duration. It must be able to handle no received packets.
+        """The implementation of :method:`send_packets_and_capture`.
+
+        The subclasses must implement this method which sends `packets` on `send_port`
+        and receives packets on `receive_port` for the specified `duration`.
+
+        It must be able to handle no received packets.
         """
 
     def _write_capture_from_packets(
diff --git a/dts/framework/testbed_model/traffic_generator/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
index ea7c3963da..ed396c6a2f 100644
--- a/dts/framework/testbed_model/traffic_generator/traffic_generator.py
+++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
@@ -22,7 +22,8 @@
 class TrafficGenerator(ABC):
     """The base traffic generator.
 
-    Defines the few basic methods that each traffic generator must implement.
+    Exposes the common public methods of all traffic generators and defines private methods
+    that must implement the traffic generation logic in subclasses.
     """
 
     _config: TrafficGeneratorConfig
@@ -30,6 +31,12 @@ class TrafficGenerator(ABC):
     _logger: DTSLOG
 
     def __init__(self, tg_node: Node, config: TrafficGeneratorConfig):
+        """Initialize the traffic generator.
+
+        Args:
+            tg_node: The traffic generator node where the created traffic generator will be running.
+            config: The traffic generator's test run configuration.
+        """
         self._config = config
         self._tg_node = tg_node
         self._logger = getLogger(
@@ -37,9 +44,9 @@ def __init__(self, tg_node: Node, config: TrafficGeneratorConfig):
         )
 
     def send_packet(self, packet: Packet, port: Port) -> None:
-        """Send a packet and block until it is fully sent.
+        """Send `packet` and block until it is fully sent.
 
-        What fully sent means is defined by the traffic generator.
+        Send `packet` on `port`, then wait until `packet` is fully sent.
 
         Args:
             packet: The packet to send.
@@ -48,9 +55,9 @@ def send_packet(self, packet: Packet, port: Port) -> None:
         self.send_packets([packet], port)
 
     def send_packets(self, packets: list[Packet], port: Port) -> None:
-        """Send packets and block until they are fully sent.
+        """Send `packets` and block until they are fully sent.
 
-        What fully sent means is defined by the traffic generator.
+        Send `packets` on `port`, then wait until `packets` are fully sent.
 
         Args:
             packets: The packets to send.
@@ -62,19 +69,17 @@ def send_packets(self, packets: list[Packet], port: Port) -> None:
 
     @abstractmethod
     def _send_packets(self, packets: list[Packet], port: Port) -> None:
-        """
-        The extended classes must implement this method which
-        sends packets on send_port. The method should block until all packets
-        are fully sent.
+        """The implementation of :method:`send_packets`.
+
+        The subclasses must implement this method which sends `packets` on `port`.
+        The method should block until all `packets` are fully sent.
+
+        What full sent means is defined by the traffic generator.
         """
 
     @property
     def is_capturing(self) -> bool:
-        """Whether this traffic generator can capture traffic.
-
-        Returns:
-            True if the traffic generator can capture traffic, False otherwise.
-        """
+        """This traffic generator can't capture traffic."""
         return False
 
     @abstractmethod
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
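
The docstrings above describe a recurring structure in the traffic generator
classes: a public method that handles common concerns and delegates to a private
abstract method that subclasses implement. A minimal sketch of that
template-method split (all names are invented for illustration):

    from abc import ABC, abstractmethod

    class Sender(ABC):
        def send_packets(self, packets: list[str]) -> None:
            # The public method handles the common concern (here, logging)...
            print(f"Sending {len(packets)} packet(s).")
            self._send_packets(packets)

        @abstractmethod
        def _send_packets(self, packets: list[str]) -> None:
            """...and subclasses implement the actual sending logic."""

    class PrintingSender(Sender):
        def _send_packets(self, packets: list[str]) -> None:
            for packet in packets:
                print(f"sent: {packet}")

    PrintingSender().send_packets(["p1", "p2"])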

* [PATCH v7 20/21] dts: scapy tg docstring update
  2023-11-15 13:09             ` [PATCH v7 00/21] dts: docstrings update Juraj Linkeš
                                 ` (18 preceding siblings ...)
  2023-11-15 13:09               ` [PATCH v7 19/21] dts: base traffic generators " Juraj Linkeš
@ 2023-11-15 13:09               ` Juraj Linkeš
  2023-11-21 16:33                 ` Yoan Picchi
  2023-11-15 13:09               ` [PATCH v7 21/21] dts: test suites " Juraj Linkeš
  2023-11-23 15:13               ` [PATCH v8 00/21] dts: docstrings update Juraj Linkeš
  21 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15 13:09 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 .../testbed_model/traffic_generator/scapy.py  | 91 +++++++++++--------
 1 file changed, 54 insertions(+), 37 deletions(-)

diff --git a/dts/framework/testbed_model/traffic_generator/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py
index 51864b6e6b..ed4f879925 100644
--- a/dts/framework/testbed_model/traffic_generator/scapy.py
+++ b/dts/framework/testbed_model/traffic_generator/scapy.py
@@ -2,14 +2,15 @@
 # Copyright(c) 2022 University of New Hampshire
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""Scapy traffic generator.
+"""The Scapy traffic generator.
 
-Traffic generator used for functional testing, implemented using the Scapy library.
+A traffic generator used for functional testing, implemented with
+`the Scapy library <https://scapy.readthedocs.io/en/latest/>`_.
 The traffic generator uses an XML-RPC server to run Scapy on the remote TG node.
 
-The XML-RPC server runs in an interactive remote SSH session running Python console,
-where we start the server. The communication with the server is facilitated with
-a local server proxy.
+The traffic generator uses the :mod:`xmlrpc.server` module to run an XML-RPC server
+in an interactive remote Python SSH session. The communication with the server is facilitated
+with a local server proxy from the :mod:`xmlrpc.client` module.
 """
 
 import inspect
@@ -69,20 +70,20 @@ def scapy_send_packets_and_capture(
     recv_iface: str,
     duration: float,
 ) -> list[bytes]:
-    """RPC function to send and capture packets.
+    """The RPC function to send and capture packets.
 
-    The function is meant to be executed on the remote TG node.
+    The function is meant to be executed on the remote TG node via the server proxy.
 
     Args:
         xmlrpc_packets: The packets to send. These need to be converted to
-            xmlrpc.client.Binary before sending to the remote server.
+            :class:`~xmlrpc.client.Binary` objects before sending to the remote server.
         send_iface: The logical name of the egress interface.
         recv_iface: The logical name of the ingress interface.
         duration: Capture for this amount of time, in seconds.
 
     Returns:
         A list of bytes. Each item in the list represents one packet, which needs
-            to be converted back upon transfer from the remote node.
+        to be converted back upon transfer from the remote node.
     """
     scapy_packets = [scapy.all.Packet(packet.data) for packet in xmlrpc_packets]
     sniffer = scapy.all.AsyncSniffer(
@@ -98,19 +99,15 @@ def scapy_send_packets_and_capture(
 def scapy_send_packets(
     xmlrpc_packets: list[xmlrpc.client.Binary], send_iface: str
 ) -> None:
-    """RPC function to send packets.
+    """The RPC function to send packets.
 
-    The function is meant to be executed on the remote TG node.
-    It doesn't return anything, only sends packets.
+    The function is meant to be executed on the remote TG node via the server proxy.
+    It only sends `xmlrpc_packets`, without capturing them.
 
     Args:
         xmlrpc_packets: The packets to send. These need to be converted to
-            xmlrpc.client.Binary before sending to the remote server.
+            :class:`~xmlrpc.client.Binary` objects before sending to the remote server.
         send_iface: The logical name of the egress interface.
-
-    Returns:
-        A list of bytes. Each item in the list represents one packet, which needs
-            to be converted back upon transfer from the remote node.
     """
     scapy_packets = [scapy.all.Packet(packet.data) for packet in xmlrpc_packets]
     scapy.all.sendp(scapy_packets, iface=send_iface, realtime=True, verbose=True)
@@ -130,11 +127,19 @@ def scapy_send_packets(
 
 
 class QuittableXMLRPCServer(SimpleXMLRPCServer):
-    """Basic XML-RPC server that may be extended
-    by functions serializable by the marshal module.
+    r"""Basic XML-RPC server.
+
+    The server may be augmented with functions serializable by the :mod:`marshal` module.
     """
 
     def __init__(self, *args, **kwargs):
+        """Extend the XML-RPC server initialization.
+
+        Args:
+            args: The positional arguments that will be passed to the superclass's constructor.
+            kwargs: The keyword arguments that will be passed to the superclass's constructor.
+                The `allow_none` argument will be set to :data:`True`.
+        """
         kwargs["allow_none"] = True
         super().__init__(*args, **kwargs)
         self.register_introspection_functions()
@@ -142,13 +147,12 @@ def __init__(self, *args, **kwargs):
         self.register_function(self.add_rpc_function)
 
     def quit(self) -> None:
+        """Quit the server."""
         self._BaseServer__shutdown_request = True
         return None
 
     def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> None:
-        """Add a function to the server.
-
-        This is meant to be executed remotely.
+        """Add a function to the server from the local server proxy.
 
         Args:
               name: The name of the function.
@@ -159,6 +163,11 @@ def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> N
         self.register_function(function)
 
     def serve_forever(self, poll_interval: float = 0.5) -> None:
+        """Extend the superclass method with an additional print.
+
+        The print gives us a clear string to expect in the remote Python console when starting
+        the server. Seeing the string means the function was executed on the XML-RPC server.
+        """
         print("XMLRPC OK")
         super().serve_forever(poll_interval)
 
@@ -166,19 +175,12 @@ def serve_forever(self, poll_interval: float = 0.5) -> None:
 class ScapyTrafficGenerator(CapturingTrafficGenerator):
     """Provides access to scapy functions via an RPC interface.
 
-    The traffic generator first starts an XML-RPC on the remote TG node.
-    Then it populates the server with functions which use the Scapy library
-    to send/receive traffic.
-
-    Any packets sent to the remote server are first converted to bytes.
-    They are received as xmlrpc.client.Binary objects on the server side.
-    When the server sends the packets back, they are also received as
-    xmlrpc.client.Binary object on the client side, are converted back to Scapy
-    packets and only then returned from the methods.
+    The class extends the base with remote execution of scapy functions.
 
-    Arguments:
-        tg_node: The node where the traffic generator resides.
-        config: The user configuration of the traffic generator.
+    Any packets sent to the remote server are first converted to bytes. They are received as
+    :class:`~xmlrpc.client.Binary` objects on the server side. When the server sends the packets
+    back, they are also received as :class:`~xmlrpc.client.Binary` objects on the client side, are
+    converted back to :class:`scapy.packet.Packet` objects and only then returned from the methods.
 
     Attributes:
         session: The exclusive interactive remote session created by the Scapy
@@ -192,6 +194,22 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator):
     _config: ScapyTrafficGeneratorConfig
 
     def __init__(self, tg_node: Node, config: ScapyTrafficGeneratorConfig):
+        """Extend the constructor with Scapy TG specifics.
+
+        The traffic generator first starts an XML-RPC server on the remote `tg_node`.
+        Then it populates the server with functions which use the Scapy library
+        to send/receive traffic:
+
+            * :func:`scapy_send_packets_and_capture`
+            * :func:`scapy_send_packets`
+
+        To enable verbose logging from the xmlrpc client, use the :option:`--verbose`
+        command line argument or the :envvar:`DTS_VERBOSE` environment variable.
+
+        Args:
+            tg_node: The node where the traffic generator resides.
+            config: The traffic generator's test run configuration.
+        """
         super().__init__(tg_node, config)
 
         assert (
@@ -237,10 +255,8 @@ def _start_xmlrpc_server_in_remote_python(self, listen_port: int) -> None:
             [line for line in src.splitlines() if not line.isspace() and line != ""]
         )
 
-        spacing = "\n" * 4
-
         # execute it in the python terminal
-        self.session.send_command(spacing + src + spacing)
+        self.session.send_command(src + "\n")
         self.session.send_command(
             f"server = QuittableXMLRPCServer(('0.0.0.0', {listen_port}));"
             f"server.serve_forever()",
@@ -274,6 +290,7 @@ def _send_packets_and_capture(
         return scapy_packets
 
     def close(self) -> None:
+        """Close the traffic generator."""
         try:
             self.rpc_server_proxy.quit()
         except ConnectionRefusedError:
-- 
2.34.1
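
To make the function transfer concrete, here is a minimal client-side
sketch of what add_rpc_function() enables (the server address and the
function itself are hypothetical, not taken from the patch):

import marshal
import xmlrpc.client

def scapy_ping(host: str) -> str:
    # Any function whose code object marshal can serialize will do.
    return f"pinged {host}"

# Client side: serialize the function's code object and register it
# remotely. Note marshal requires the same Python version on both ends.
proxy = xmlrpc.client.ServerProxy("http://192.168.0.2:8000")
proxy.add_rpc_function(
    "scapy_ping", xmlrpc.client.Binary(marshal.dumps(scapy_ping.__code__))
)
# The function now runs on the server when called through the proxy.
print(proxy.scapy_ping("192.168.0.1"))

On the server, judging by the docstring and the register_function()
call in the hunk above, the reverse happens roughly as:

function_code = marshal.loads(function_bytes.data)
function = types.FunctionType(function_code, globals(), name)
self.register_function(function)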


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v7 21/21] dts: test suites docstring update
  2023-11-15 13:09             ` [PATCH v7 00/21] dts: docstrings update Juraj Linkeš
                                 ` (19 preceding siblings ...)
  2023-11-15 13:09               ` [PATCH v7 20/21] dts: scapy tg " Juraj Linkeš
@ 2023-11-15 13:09               ` Juraj Linkeš
  2023-11-16 17:36                 ` Yoan Picchi
  2023-11-23 15:13               ` [PATCH v8 00/21] dts: docstrings update Juraj Linkeš
  21 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15 13:09 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/tests/TestSuite_hello_world.py | 16 +++++----
 dts/tests/TestSuite_os_udp.py      | 19 +++++++----
 dts/tests/TestSuite_smoke_tests.py | 53 +++++++++++++++++++++++++++---
 3 files changed, 70 insertions(+), 18 deletions(-)

diff --git a/dts/tests/TestSuite_hello_world.py b/dts/tests/TestSuite_hello_world.py
index 7e3d95c0cf..662a8f8726 100644
--- a/dts/tests/TestSuite_hello_world.py
+++ b/dts/tests/TestSuite_hello_world.py
@@ -1,7 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2010-2014 Intel Corporation
 
-"""
+"""The DPDK hello world app test suite.
+
 Run the helloworld example app and verify it prints a message for each used core.
 No other EAL parameters apart from cores are used.
 """
@@ -15,22 +16,25 @@
 
 
 class TestHelloWorld(TestSuite):
+    """DPDK hello world app test suite."""
+
     def set_up_suite(self) -> None:
-        """
+        """Set up the test suite.
+
         Setup:
             Build the app we're about to test - helloworld.
         """
         self.app_helloworld_path = self.sut_node.build_dpdk_app("helloworld")
 
     def test_hello_world_single_core(self) -> None:
-        """
+        """Single core test case.
+
         Steps:
             Run the helloworld app on the first usable logical core.
         Verify:
             The app prints a message from the used core:
             "hello from core <core_id>"
         """
-
         # get the first usable core
         lcore_amount = LogicalCoreCount(1, 1, 1)
         lcores = LogicalCoreCountFilter(self.sut_node.lcores, lcore_amount).filter()
@@ -44,14 +48,14 @@ def test_hello_world_single_core(self) -> None:
         )
 
     def test_hello_world_all_cores(self) -> None:
-        """
+        """All cores test case.
+
         Steps:
             Run the helloworld app on all usable logical cores.
         Verify:
             The app prints a message from all used cores:
             "hello from core <core_id>"
         """
-
         # get the maximum logical core number
         eal_para = self.sut_node.create_eal_parameters(
             lcore_filter_specifier=LogicalCoreList(self.sut_node.lcores)
diff --git a/dts/tests/TestSuite_os_udp.py b/dts/tests/TestSuite_os_udp.py
index bf6b93deb5..e0c5239612 100644
--- a/dts/tests/TestSuite_os_udp.py
+++ b/dts/tests/TestSuite_os_udp.py
@@ -1,7 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
+"""Basic IPv4 OS routing test suite.
+
 Configure SUT node to route traffic from if1 to if2.
 Send a packet to the SUT node, verify it comes back on the second port on the TG node.
 """
@@ -13,24 +14,27 @@
 
 
 class TestOSUdp(TestSuite):
+    """IPv4 UDP OS routing test suite."""
+
     def set_up_suite(self) -> None:
-        """
+        """Set up the test suite.
+
         Setup:
-            Configure SUT ports and SUT to route traffic from if1 to if2.
+            Bind the SUT ports to the OS driver, configure the ports and configure the SUT
+            to route traffic from if1 to if2.
         """
 
-        # This test uses kernel drivers
         self.sut_node.bind_ports_to_driver(for_dpdk=False)
         self.configure_testbed_ipv4()
 
     def test_os_udp(self) -> None:
-        """
+        """Basic UDP IPv4 traffic test case.
+
         Steps:
             Send a UDP packet.
         Verify:
             The packet with proper addresses arrives at the other TG port.
         """
-
         packet = Ether() / IP() / UDP()
 
         received_packets = self.send_packet_and_capture(packet)
@@ -40,7 +44,8 @@ def test_os_udp(self) -> None:
         self.verify_packets(expected_packet, received_packets)
 
     def tear_down_suite(self) -> None:
-        """
+        """Tear down the test suite.
+
         Teardown:
             Remove the SUT port configuration configured in setup.
         """
diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py
index e8016d1b54..6fae099a0e 100644
--- a/dts/tests/TestSuite_smoke_tests.py
+++ b/dts/tests/TestSuite_smoke_tests.py
@@ -1,6 +1,17 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 University of New Hampshire
 
+"""Smoke test suite.
+
+Smoke tests are a class of tests used to validate a minimal set of important features.
+These are the most important features without which (or when they're faulty) the software wouldn't
+work properly. Thus, if any failure occurs while testing these features,
+there is little reason to continue testing, as the software is fundamentally broken.
+
+These tests don't have to include only DPDK tests, as the reason for failures could be
+in the infrastructure (a faulty link between NICs or a misconfiguration).
+"""
+
 import re
 
 from framework.config import PortConfig
@@ -11,13 +22,25 @@
 
 
 class SmokeTests(TestSuite):
+    """DPDK and infrastructure smoke test suite.
+
+    The test cases validate the most basic DPDK functionality needed for all other test suites.
+    The infrastructure also needs to be tested, as that is also used by all other test suites.
+
+    Attributes:
+        is_blocking: This test suite will block the execution of all other test suites
+            in the build target after it.
+        nics_in_node: The NICs present on the SUT node.
+    """
+
     is_blocking = True
     # dicts in this list are expected to have two keys:
     # "pci_address" and "current_driver"
     nics_in_node: list[PortConfig] = []
 
     def set_up_suite(self) -> None:
-        """
+        """Set up the test suite.
+
         Setup:
             Set the build directory path and generate a list of NICs in the SUT node.
         """
@@ -25,7 +48,13 @@ def set_up_suite(self) -> None:
         self.nics_in_node = self.sut_node.config.ports
 
     def test_unit_tests(self) -> None:
-        """
+        """DPDK meson fast-tests unit tests.
+
+        The DPDK unit tests are basic tests that indicate regressions and other critical failures.
+        These need to be addressed before other testing.
+
+        The fast-tests unit tests are a subset with only the most basic tests.
+
         Test:
             Run the fast-test unit-test suite through meson.
         """
@@ -37,7 +66,14 @@ def test_unit_tests(self) -> None:
         )
 
     def test_driver_tests(self) -> None:
-        """
+        """DPDK meson driver-tests unit tests.
+
+        The DPDK unit tests are basic tests that indicate regressions and other critical failures.
+        These need to be addressed before other testing.
+
+        The driver-tests unit tests are a subset that test only drivers. These may be run
+        with virtual devices as well.
+
         Test:
             Run the driver-test unit-test suite through meson.
         """
@@ -63,7 +99,10 @@ def test_driver_tests(self) -> None:
         )
 
     def test_devices_listed_in_testpmd(self) -> None:
-        """
+        """Testpmd device discovery.
+
+        If the configured devices can't be found in testpmd, they can't be tested.
+
         Test:
             Uses testpmd driver to verify that devices have been found by testpmd.
         """
@@ -79,7 +118,11 @@ def test_devices_listed_in_testpmd(self) -> None:
             )
 
     def test_device_bound_to_driver(self) -> None:
-        """
+        """Device driver in OS.
+
+        The devices must be bound to the proper driver, otherwise they can't be used by DPDK
+        or the traffic generators.
+
         Test:
             Ensure that all drivers listed in the config are bound to the correct
             driver.
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v1 0/2] dts: api docs generation
  2023-11-08 12:53           ` [PATCH v6 23/23] dts: add doc generation Juraj Linkeš
  2023-11-15 13:09             ` [PATCH v7 00/21] dts: docstrings update Juraj Linkeš
@ 2023-11-15 13:36             ` Juraj Linkeš
  2023-11-15 13:36               ` [PATCH v1 1/2] dts: add doc generation dependencies Juraj Linkeš
                                 ` (4 more replies)
  1 sibling, 5 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15 13:36 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

The generation is done with Sphinx, which DPDK already uses, with a
slightly modified configuration (the sidebar: unlimited depth and better
collapsing; I'd welcome comments on this).

Dependencies are installed using Poetry from the dts directory:

poetry install --with docs

After installing, enter the Poetry shell:

poetry shell

And then run the build:
ninja -C <meson_build_dir> dts-doc

This patchset depends on the series which updates the DTS docstrings.
The technical reason for this dependency is the hash value of the
poetry.lock file, which wouldn't match the file's contents if the
patches were applied individually (the hash value would need to be
recomputed after applying the second patch).
The logical reason is that there's little point in generating the
documentation with the configuration in this patch series, which is
tailored to the Google docstring format introduced in the series it
depends on. Without that series, the generation would produce confusing
errors and incomplete, poorly formatted docs.
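
The mismatch is easy to check locally; with the Poetry 1.5 that
generated the lock file, something like this should flag a poetry.lock
whose content-hash no longer matches pyproject.toml:

poetry lock --check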

This patch series is much less important than the one updating the
docstrings. The docstrings series must be finished and applied as soon
as possible, as it has a dramatic impact on future development, while
this series doesn't hamper other development in any way.

Depends-on: series-30302 ("dts: docstrings update")

Juraj Linkeš (2):
  dts: add doc generation dependencies
  dts: add doc generation

 buildtools/call-sphinx-build.py |  29 +-
 doc/api/meson.build             |   1 +
 doc/guides/conf.py              |  34 ++-
 doc/guides/meson.build          |   1 +
 doc/guides/tools/dts.rst        |  32 +-
 dts/doc/conf_yaml_schema.json   |   1 +
 dts/doc/index.rst               |  17 ++
 dts/doc/meson.build             |  60 ++++
 dts/meson.build                 |  16 +
 dts/poetry.lock                 | 499 +++++++++++++++++++++++++++++++-
 dts/pyproject.toml              |   7 +
 meson.build                     |   1 +
 12 files changed, 681 insertions(+), 17 deletions(-)
 create mode 120000 dts/doc/conf_yaml_schema.json
 create mode 100644 dts/doc/index.rst
 create mode 100644 dts/doc/meson.build
 create mode 100644 dts/meson.build

-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v1 1/2] dts: add doc generation dependencies
  2023-11-15 13:36             ` [PATCH v1 0/2] dts: api docs generation Juraj Linkeš
@ 2023-11-15 13:36               ` Juraj Linkeš
  2023-11-15 13:36               ` [PATCH v1 2/2] dts: add doc generation Juraj Linkeš
                                 ` (3 subsequent siblings)
  4 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15 13:36 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

Sphinx imports every Python module when generating documentation from
docstrings, meaning all dts dependencies, including the Python version,
must be satisfied.
By adding Sphinx to the dts dependencies, we make sure that the proper
Python version and dependencies are used when Sphinx is executed.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/poetry.lock    | 499 ++++++++++++++++++++++++++++++++++++++++++++-
 dts/pyproject.toml |   7 +
 2 files changed, 505 insertions(+), 1 deletion(-)

diff --git a/dts/poetry.lock b/dts/poetry.lock
index a734fa71f0..8b27b0d751 100644
--- a/dts/poetry.lock
+++ b/dts/poetry.lock
@@ -1,5 +1,16 @@
 # This file is automatically @generated by Poetry 1.5.1 and should not be changed by hand.
 
+[[package]]
+name = "alabaster"
+version = "0.7.13"
+description = "A configurable sidebar-enabled Sphinx theme"
+optional = false
+python-versions = ">=3.6"
+files = [
+    {file = "alabaster-0.7.13-py3-none-any.whl", hash = "sha256:1ee19aca801bbabb5ba3f5f258e4422dfa86f82f3e9cefb0859b283cdd7f62a3"},
+    {file = "alabaster-0.7.13.tar.gz", hash = "sha256:a27a4a084d5e690e16e01e03ad2b2e552c61a65469419b907243193de1a84ae2"},
+]
+
 [[package]]
 name = "attrs"
 version = "23.1.0"
@@ -18,6 +29,23 @@ docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-
 tests = ["attrs[tests-no-zope]", "zope-interface"]
 tests-no-zope = ["cloudpickle", "hypothesis", "mypy (>=1.1.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
 
+[[package]]
+name = "babel"
+version = "2.13.1"
+description = "Internationalization utilities"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "Babel-2.13.1-py3-none-any.whl", hash = "sha256:7077a4984b02b6727ac10f1f7294484f737443d7e2e66c5e4380e41a3ae0b4ed"},
+    {file = "Babel-2.13.1.tar.gz", hash = "sha256:33e0952d7dd6374af8dbf6768cc4ddf3ccfefc244f9986d4074704f2fbd18900"},
+]
+
+[package.dependencies]
+setuptools = {version = "*", markers = "python_version >= \"3.12\""}
+
+[package.extras]
+dev = ["freezegun (>=1.0,<2.0)", "pytest (>=6.0)", "pytest-cov"]
+
 [[package]]
 name = "bcrypt"
 version = "4.0.1"
@@ -86,6 +114,17 @@ d = ["aiohttp (>=3.7.4)"]
 jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
 uvloop = ["uvloop (>=0.15.2)"]
 
+[[package]]
+name = "certifi"
+version = "2023.7.22"
+description = "Python package for providing Mozilla's CA Bundle."
+optional = false
+python-versions = ">=3.6"
+files = [
+    {file = "certifi-2023.7.22-py3-none-any.whl", hash = "sha256:92d6037539857d8206b8f6ae472e8b77db8058fec5937a1ef3f54304089edbb9"},
+    {file = "certifi-2023.7.22.tar.gz", hash = "sha256:539cc1d13202e33ca466e88b2807e29f4c13049d6d87031a3c110744495cb082"},
+]
+
 [[package]]
 name = "cffi"
 version = "1.15.1"
@@ -162,6 +201,105 @@ files = [
 [package.dependencies]
 pycparser = "*"
 
+[[package]]
+name = "charset-normalizer"
+version = "3.3.2"
+description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
+optional = false
+python-versions = ">=3.7.0"
+files = [
+    {file = "charset-normalizer-3.3.2.tar.gz", hash = "sha256:f30c3cb33b24454a82faecaf01b19c18562b1e89558fb6c56de4d9118a032fd5"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:25baf083bf6f6b341f4121c2f3c548875ee6f5339300e08be3f2b2ba1721cdd3"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:06435b539f889b1f6f4ac1758871aae42dc3a8c0e24ac9e60c2384973ad73027"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9063e24fdb1e498ab71cb7419e24622516c4a04476b17a2dab57e8baa30d6e03"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6897af51655e3691ff853668779c7bad41579facacf5fd7253b0133308cf000d"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1d3193f4a680c64b4b6a9115943538edb896edc190f0b222e73761716519268e"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cd70574b12bb8a4d2aaa0094515df2463cb429d8536cfb6c7ce983246983e5a6"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8465322196c8b4d7ab6d1e049e4c5cb460d0394da4a27d23cc242fbf0034b6b5"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a9a8e9031d613fd2009c182b69c7b2c1ef8239a0efb1df3f7c8da66d5dd3d537"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:beb58fe5cdb101e3a055192ac291b7a21e3b7ef4f67fa1d74e331a7f2124341c"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:e06ed3eb3218bc64786f7db41917d4e686cc4856944f53d5bdf83a6884432e12"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:2e81c7b9c8979ce92ed306c249d46894776a909505d8f5a4ba55b14206e3222f"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:572c3763a264ba47b3cf708a44ce965d98555f618ca42c926a9c1616d8f34269"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:fd1abc0d89e30cc4e02e4064dc67fcc51bd941eb395c502aac3ec19fab46b519"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-win32.whl", hash = "sha256:3d47fa203a7bd9c5b6cee4736ee84ca03b8ef23193c0d1ca99b5089f72645c73"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-win_amd64.whl", hash = "sha256:10955842570876604d404661fbccbc9c7e684caf432c09c715ec38fbae45ae09"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:802fe99cca7457642125a8a88a084cef28ff0cf9407060f7b93dca5aa25480db"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:573f6eac48f4769d667c4442081b1794f52919e7edada77495aaed9236d13a96"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:549a3a73da901d5bc3ce8d24e0600d1fa85524c10287f6004fbab87672bf3e1e"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f27273b60488abe721a075bcca6d7f3964f9f6f067c8c4c605743023d7d3944f"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1ceae2f17a9c33cb48e3263960dc5fc8005351ee19db217e9b1bb15d28c02574"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:65f6f63034100ead094b8744b3b97965785388f308a64cf8d7c34f2f2e5be0c4"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:753f10e867343b4511128c6ed8c82f7bec3bd026875576dfd88483c5c73b2fd8"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4a78b2b446bd7c934f5dcedc588903fb2f5eec172f3d29e52a9096a43722adfc"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e537484df0d8f426ce2afb2d0f8e1c3d0b114b83f8850e5f2fbea0e797bd82ae"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:eb6904c354526e758fda7167b33005998fb68c46fbc10e013ca97f21ca5c8887"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:deb6be0ac38ece9ba87dea880e438f25ca3eddfac8b002a2ec3d9183a454e8ae"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:4ab2fe47fae9e0f9dee8c04187ce5d09f48eabe611be8259444906793ab7cbce"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:80402cd6ee291dcb72644d6eac93785fe2c8b9cb30893c1af5b8fdd753b9d40f"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-win32.whl", hash = "sha256:7cd13a2e3ddeed6913a65e66e94b51d80a041145a026c27e6bb76c31a853c6ab"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-win_amd64.whl", hash = "sha256:663946639d296df6a2bb2aa51b60a2454ca1cb29835324c640dafb5ff2131a77"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:0b2b64d2bb6d3fb9112bafa732def486049e63de9618b5843bcdd081d8144cd8"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:ddbb2551d7e0102e7252db79ba445cdab71b26640817ab1e3e3648dad515003b"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:55086ee1064215781fff39a1af09518bc9255b50d6333f2e4c74ca09fac6a8f6"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8f4a014bc36d3c57402e2977dada34f9c12300af536839dc38c0beab8878f38a"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a10af20b82360ab00827f916a6058451b723b4e65030c5a18577c8b2de5b3389"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8d756e44e94489e49571086ef83b2bb8ce311e730092d2c34ca8f7d925cb20aa"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:90d558489962fd4918143277a773316e56c72da56ec7aa3dc3dbbe20fdfed15b"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6ac7ffc7ad6d040517be39eb591cac5ff87416c2537df6ba3cba3bae290c0fed"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:7ed9e526742851e8d5cc9e6cf41427dfc6068d4f5a3bb03659444b4cabf6bc26"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:8bdb58ff7ba23002a4c5808d608e4e6c687175724f54a5dade5fa8c67b604e4d"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_ppc64le.whl", hash = "sha256:6b3251890fff30ee142c44144871185dbe13b11bab478a88887a639655be1068"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_s390x.whl", hash = "sha256:b4a23f61ce87adf89be746c8a8974fe1c823c891d8f86eb218bb957c924bb143"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:efcb3f6676480691518c177e3b465bcddf57cea040302f9f4e6e191af91174d4"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-win32.whl", hash = "sha256:d965bba47ddeec8cd560687584e88cf699fd28f192ceb452d1d7ee807c5597b7"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-win_amd64.whl", hash = "sha256:96b02a3dc4381e5494fad39be677abcb5e6634bf7b4fa83a6dd3112607547001"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:95f2a5796329323b8f0512e09dbb7a1860c46a39da62ecb2324f116fa8fdc85c"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c002b4ffc0be611f0d9da932eb0f704fe2602a9a949d1f738e4c34c75b0863d5"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a981a536974bbc7a512cf44ed14938cf01030a99e9b3a06dd59578882f06f985"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3287761bc4ee9e33561a7e058c72ac0938c4f57fe49a09eae428fd88aafe7bb6"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:42cb296636fcc8b0644486d15c12376cb9fa75443e00fb25de0b8602e64c1714"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0a55554a2fa0d408816b3b5cedf0045f4b8e1a6065aec45849de2d6f3f8e9786"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:c083af607d2515612056a31f0a8d9e0fcb5876b7bfc0abad3ecd275bc4ebc2d5"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:87d1351268731db79e0f8e745d92493ee2841c974128ef629dc518b937d9194c"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:bd8f7df7d12c2db9fab40bdd87a7c09b1530128315d047a086fa3ae3435cb3a8"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:c180f51afb394e165eafe4ac2936a14bee3eb10debc9d9e4db8958fe36afe711"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:8c622a5fe39a48f78944a87d4fb8a53ee07344641b0562c540d840748571b811"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-win32.whl", hash = "sha256:db364eca23f876da6f9e16c9da0df51aa4f104a972735574842618b8c6d999d4"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-win_amd64.whl", hash = "sha256:86216b5cee4b06df986d214f664305142d9c76df9b6512be2738aa72a2048f99"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:6463effa3186ea09411d50efc7d85360b38d5f09b870c48e4600f63af490e56a"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:6c4caeef8fa63d06bd437cd4bdcf3ffefe6738fb1b25951440d80dc7df8c03ac"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:37e55c8e51c236f95b033f6fb391d7d7970ba5fe7ff453dad675e88cf303377a"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fb69256e180cb6c8a894fee62b3afebae785babc1ee98b81cdf68bbca1987f33"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ae5f4161f18c61806f411a13b0310bea87f987c7d2ecdbdaad0e94eb2e404238"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b2b0a0c0517616b6869869f8c581d4eb2dd83a4d79e0ebcb7d373ef9956aeb0a"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:45485e01ff4d3630ec0d9617310448a8702f70e9c01906b0d0118bdf9d124cf2"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:eb00ed941194665c332bf8e078baf037d6c35d7c4f3102ea2d4f16ca94a26dc8"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:2127566c664442652f024c837091890cb1942c30937add288223dc895793f898"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:a50aebfa173e157099939b17f18600f72f84eed3049e743b68ad15bd69b6bf99"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:4d0d1650369165a14e14e1e47b372cfcb31d6ab44e6e33cb2d4e57265290044d"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:923c0c831b7cfcb071580d3f46c4baf50f174be571576556269530f4bbd79d04"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:06a81e93cd441c56a9b65d8e1d043daeb97a3d0856d177d5c90ba85acb3db087"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-win32.whl", hash = "sha256:6ef1d82a3af9d3eecdba2321dc1b3c238245d890843e040e41e470ffa64c3e25"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-win_amd64.whl", hash = "sha256:eb8821e09e916165e160797a6c17edda0679379a4be5c716c260e836e122f54b"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:c235ebd9baae02f1b77bcea61bce332cb4331dc3617d254df3323aa01ab47bd4"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:5b4c145409bef602a690e7cfad0a15a55c13320ff7a3ad7ca59c13bb8ba4d45d"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:68d1f8a9e9e37c1223b656399be5d6b448dea850bed7d0f87a8311f1ff3dabb0"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:22afcb9f253dac0696b5a4be4a1c0f8762f8239e21b99680099abd9b2b1b2269"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e27ad930a842b4c5eb8ac0016b0a54f5aebbe679340c26101df33424142c143c"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1f79682fbe303db92bc2b1136016a38a42e835d932bab5b3b1bfcfbf0640e519"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b261ccdec7821281dade748d088bb6e9b69e6d15b30652b74cbbac25e280b796"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:122c7fa62b130ed55f8f285bfd56d5f4b4a5b503609d181f9ad85e55c89f4185"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:d0eccceffcb53201b5bfebb52600a5fb483a20b61da9dbc885f8b103cbe7598c"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:9f96df6923e21816da7e0ad3fd47dd8f94b2a5ce594e00677c0013018b813458"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:7f04c839ed0b6b98b1a7501a002144b76c18fb1c1850c8b98d458ac269e26ed2"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:34d1c8da1e78d2e001f363791c98a272bb734000fcef47a491c1e3b0505657a8"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:ff8fa367d09b717b2a17a052544193ad76cd49979c805768879cb63d9ca50561"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-win32.whl", hash = "sha256:aed38f6e4fb3f5d6bf81bfa990a07806be9d83cf7bacef998ab1a9bd660a581f"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-win_amd64.whl", hash = "sha256:b01b88d45a6fcb69667cd6d2f7a9aeb4bf53760d7fc536bf679ec94fe9f3ff3d"},
+    {file = "charset_normalizer-3.3.2-py3-none-any.whl", hash = "sha256:3e4d1f6587322d2788836a99c69062fbb091331ec940e02d12d179c1d53e25fc"},
+]
+
 [[package]]
 name = "click"
 version = "8.1.6"
@@ -232,6 +370,17 @@ ssh = ["bcrypt (>=3.1.5)"]
 test = ["pretend", "pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-xdist"]
 test-randomorder = ["pytest-randomly"]
 
+[[package]]
+name = "docutils"
+version = "0.18.1"
+description = "Docutils -- Python Documentation Utilities"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
+files = [
+    {file = "docutils-0.18.1-py2.py3-none-any.whl", hash = "sha256:23010f129180089fbcd3bc08cfefccb3b890b0050e1ca00c867036e9d161b98c"},
+    {file = "docutils-0.18.1.tar.gz", hash = "sha256:679987caf361a7539d76e584cbeddc311e3aee937877c87346f31debc63e9d06"},
+]
+
 [[package]]
 name = "fabric"
 version = "2.7.1"
@@ -252,6 +401,28 @@ pathlib2 = "*"
 pytest = ["mock (>=2.0.0,<3.0)", "pytest (>=3.2.5,<4.0)"]
 testing = ["mock (>=2.0.0,<3.0)"]
 
+[[package]]
+name = "idna"
+version = "3.4"
+description = "Internationalized Domain Names in Applications (IDNA)"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "idna-3.4-py3-none-any.whl", hash = "sha256:90b77e79eaa3eba6de819a0c442c0b4ceefc341a7a2ab77d7562bf49f425c5c2"},
+    {file = "idna-3.4.tar.gz", hash = "sha256:814f528e8dead7d329833b91c5faa87d60bf71824cd12a7530b5526063d02cb4"},
+]
+
+[[package]]
+name = "imagesize"
+version = "1.4.1"
+description = "Getting image size from png/jpeg/jpeg2000/gif file"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+files = [
+    {file = "imagesize-1.4.1-py2.py3-none-any.whl", hash = "sha256:0d8d18d08f840c19d0ee7ca1fd82490fdc3729b7ac93f49870406ddde8ef8d8b"},
+    {file = "imagesize-1.4.1.tar.gz", hash = "sha256:69150444affb9cb0d5cc5a92b3676f0b2fb7cd9ae39e947a5e11a36b4497cd4a"},
+]
+
 [[package]]
 name = "invoke"
 version = "1.7.3"
@@ -280,6 +451,23 @@ pipfile-deprecated-finder = ["pip-shims (>=0.5.2)", "pipreqs", "requirementslib"
 plugins = ["setuptools"]
 requirements-deprecated-finder = ["pip-api", "pipreqs"]
 
+[[package]]
+name = "jinja2"
+version = "3.1.2"
+description = "A very fast and expressive template engine."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"},
+    {file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"},
+]
+
+[package.dependencies]
+MarkupSafe = ">=2.0"
+
+[package.extras]
+i18n = ["Babel (>=2.7)"]
+
 [[package]]
 name = "jsonpatch"
 version = "1.33"
@@ -340,6 +528,65 @@ files = [
 [package.dependencies]
 referencing = ">=0.28.0"
 
+[[package]]
+name = "markupsafe"
+version = "2.1.3"
+description = "Safely add untrusted strings to HTML/XML markup."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "MarkupSafe-2.1.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:cd0f502fe016460680cd20aaa5a76d241d6f35a1c3350c474bac1273803893fa"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e09031c87a1e51556fdcb46e5bd4f59dfb743061cf93c4d6831bf894f125eb57"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:68e78619a61ecf91e76aa3e6e8e33fc4894a2bebe93410754bd28fce0a8a4f9f"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:65c1a9bcdadc6c28eecee2c119465aebff8f7a584dd719facdd9e825ec61ab52"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:525808b8019e36eb524b8c68acdd63a37e75714eac50e988180b169d64480a00"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:962f82a3086483f5e5f64dbad880d31038b698494799b097bc59c2edf392fce6"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:aa7bd130efab1c280bed0f45501b7c8795f9fdbeb02e965371bbef3523627779"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:c9c804664ebe8f83a211cace637506669e7890fec1b4195b505c214e50dd4eb7"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-win32.whl", hash = "sha256:10bbfe99883db80bdbaff2dcf681dfc6533a614f700da1287707e8a5d78a8431"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-win_amd64.whl", hash = "sha256:1577735524cdad32f9f694208aa75e422adba74f1baee7551620e43a3141f559"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:ad9e82fb8f09ade1c3e1b996a6337afac2b8b9e365f926f5a61aacc71adc5b3c"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3c0fae6c3be832a0a0473ac912810b2877c8cb9d76ca48de1ed31e1c68386575"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b076b6226fb84157e3f7c971a47ff3a679d837cf338547532ab866c57930dbee"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bfce63a9e7834b12b87c64d6b155fdd9b3b96191b6bd334bf37db7ff1fe457f2"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:338ae27d6b8745585f87218a3f23f1512dbf52c26c28e322dbe54bcede54ccb9"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e4dd52d80b8c83fdce44e12478ad2e85c64ea965e75d66dbeafb0a3e77308fcc"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:df0be2b576a7abbf737b1575f048c23fb1d769f267ec4358296f31c2479db8f9"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:5bbe06f8eeafd38e5d0a4894ffec89378b6c6a625ff57e3028921f8ff59318ac"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-win32.whl", hash = "sha256:dd15ff04ffd7e05ffcb7fe79f1b98041b8ea30ae9234aed2a9168b5797c3effb"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl", hash = "sha256:134da1eca9ec0ae528110ccc9e48041e0828d79f24121a1a146161103c76e686"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:8e254ae696c88d98da6555f5ace2279cf7cd5b3f52be2b5cf97feafe883b58d2"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cb0932dc158471523c9637e807d9bfb93e06a95cbf010f1a38b98623b929ef2b"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9402b03f1a1b4dc4c19845e5c749e3ab82d5078d16a2a4c2cd2df62d57bb0707"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ca379055a47383d02a5400cb0d110cef0a776fc644cda797db0c5696cfd7e18e"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:b7ff0f54cb4ff66dd38bebd335a38e2c22c41a8ee45aa608efc890ac3e3931bc"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:c011a4149cfbcf9f03994ec2edffcb8b1dc2d2aede7ca243746df97a5d41ce48"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:56d9f2ecac662ca1611d183feb03a3fa4406469dafe241673d521dd5ae92a155"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-win32.whl", hash = "sha256:8758846a7e80910096950b67071243da3e5a20ed2546e6392603c096778d48e0"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-win_amd64.whl", hash = "sha256:787003c0ddb00500e49a10f2844fac87aa6ce977b90b0feaaf9de23c22508b24"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:2ef12179d3a291be237280175b542c07a36e7f60718296278d8593d21ca937d4"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:2c1b19b3aaacc6e57b7e25710ff571c24d6c3613a45e905b1fde04d691b98ee0"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8afafd99945ead6e075b973fefa56379c5b5c53fd8937dad92c662da5d8fd5ee"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8c41976a29d078bb235fea9b2ecd3da465df42a562910f9022f1a03107bd02be"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d080e0a5eb2529460b30190fcfcc4199bd7f827663f858a226a81bc27beaa97e"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:69c0f17e9f5a7afdf2cc9fb2d1ce6aabdb3bafb7f38017c0b77862bcec2bbad8"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:504b320cd4b7eff6f968eddf81127112db685e81f7e36e75f9f84f0df46041c3"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:42de32b22b6b804f42c5d98be4f7e5e977ecdd9ee9b660fda1a3edf03b11792d"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-win32.whl", hash = "sha256:ceb01949af7121f9fc39f7d27f91be8546f3fb112c608bc4029aef0bab86a2a5"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-win_amd64.whl", hash = "sha256:1b40069d487e7edb2676d3fbdb2b0829ffa2cd63a2ec26c4938b2d34391b4ecc"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8023faf4e01efadfa183e863fefde0046de576c6f14659e8782065bcece22198"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6b2b56950d93e41f33b4223ead100ea0fe11f8e6ee5f641eb753ce4b77a7042b"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9dcdfd0eaf283af041973bff14a2e143b8bd64e069f4c383416ecd79a81aab58"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:05fb21170423db021895e1ea1e1f3ab3adb85d1c2333cbc2310f2a26bc77272e"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:282c2cb35b5b673bbcadb33a585408104df04f14b2d9b01d4c345a3b92861c2c"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:ab4a0df41e7c16a1392727727e7998a467472d0ad65f3ad5e6e765015df08636"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:7ef3cb2ebbf91e330e3bb937efada0edd9003683db6b57bb108c4001f37a02ea"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:0a4e4a1aff6c7ac4cd55792abf96c915634c2b97e3cc1c7129578aa68ebd754e"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-win32.whl", hash = "sha256:fec21693218efe39aa7f8599346e90c705afa52c5b31ae019b2e57e8f6542bb2"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-win_amd64.whl", hash = "sha256:3fd4abcb888d15a94f32b75d8fd18ee162ca0c064f35b11134be77050296d6ba"},
+    {file = "MarkupSafe-2.1.3.tar.gz", hash = "sha256:af598ed32d6ae86f1b747b82783958b1a4ab8f617b06fe68795c7f026abbdcad"},
+]
+
 [[package]]
 name = "mccabe"
 version = "0.7.0"
@@ -404,6 +651,17 @@ files = [
     {file = "mypy_extensions-1.0.0.tar.gz", hash = "sha256:75dbf8955dc00442a438fc4d0666508a9a97b6bd41aa2f0ffe9d2f2725af0782"},
 ]
 
+[[package]]
+name = "packaging"
+version = "23.2"
+description = "Core utilities for Python packages"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "packaging-23.2-py3-none-any.whl", hash = "sha256:8c491190033a9af7e1d931d0b5dacc2ef47509b34dd0de67ed209b5203fc88c7"},
+    {file = "packaging-23.2.tar.gz", hash = "sha256:048fb0e9405036518eaaf48a55953c750c11e1a1b68e0dd1a9d62ed0c092cfc5"},
+]
+
 [[package]]
 name = "paramiko"
 version = "3.2.0"
@@ -515,6 +773,20 @@ files = [
     {file = "pyflakes-2.5.0.tar.gz", hash = "sha256:491feb020dca48ccc562a8c0cbe8df07ee13078df59813b83959cbdada312ea3"},
 ]
 
+[[package]]
+name = "pygments"
+version = "2.16.1"
+description = "Pygments is a syntax highlighting package written in Python."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "Pygments-2.16.1-py3-none-any.whl", hash = "sha256:13fc09fa63bc8d8671a6d247e1eb303c4b343eaee81d861f3404db2935653692"},
+    {file = "Pygments-2.16.1.tar.gz", hash = "sha256:1daff0494820c69bc8941e407aa20f577374ee88364ee10a98fdbe0aece96e29"},
+]
+
+[package.extras]
+plugins = ["importlib-metadata"]
+
 [[package]]
 name = "pylama"
 version = "8.4.1"
@@ -632,6 +904,27 @@ files = [
 attrs = ">=22.2.0"
 rpds-py = ">=0.7.0"
 
+[[package]]
+name = "requests"
+version = "2.31.0"
+description = "Python HTTP for Humans."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "requests-2.31.0-py3-none-any.whl", hash = "sha256:58cd2187c01e70e6e26505bca751777aa9f2ee0b7f4300988b709f44e013003f"},
+    {file = "requests-2.31.0.tar.gz", hash = "sha256:942c5a758f98d790eaed1a29cb6eefc7ffb0d1cf7af05c3d2791656dbd6ad1e1"},
+]
+
+[package.dependencies]
+certifi = ">=2017.4.17"
+charset-normalizer = ">=2,<4"
+idna = ">=2.5,<4"
+urllib3 = ">=1.21.1,<3"
+
+[package.extras]
+socks = ["PySocks (>=1.5.6,!=1.5.7)"]
+use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"]
+
 [[package]]
 name = "rpds-py"
 version = "0.9.2"
@@ -753,6 +1046,22 @@ basic = ["ipython"]
 complete = ["cryptography (>=2.0)", "ipython", "matplotlib", "pyx"]
 docs = ["sphinx (>=3.0.0)", "sphinx_rtd_theme (>=0.4.3)", "tox (>=3.0.0)"]
 
+[[package]]
+name = "setuptools"
+version = "68.2.2"
+description = "Easily download, build, install, upgrade, and uninstall Python packages"
+optional = false
+python-versions = ">=3.8"
+files = [
+    {file = "setuptools-68.2.2-py3-none-any.whl", hash = "sha256:b454a35605876da60632df1a60f736524eb73cc47bbc9f3f1ef1b644de74fd2a"},
+    {file = "setuptools-68.2.2.tar.gz", hash = "sha256:4ac1475276d2f1c48684874089fefcd83bd7162ddaafb81fac866ba0db282a87"},
+]
+
+[package.extras]
+docs = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "pygments-github-lexers (==0.0.5)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-favicon", "sphinx-hoverxref (<2)", "sphinx-inline-tabs", "sphinx-lint", "sphinx-notfound-page (>=1,<2)", "sphinx-reredirects", "sphinxcontrib-towncrier"]
+testing = ["build[virtualenv]", "filelock (>=3.4.0)", "flake8-2020", "ini2toml[lite] (>=0.9)", "jaraco.develop (>=7.21)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "pip (>=19.1)", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-mypy (>=0.9.1)", "pytest-perf", "pytest-ruff", "pytest-timeout", "pytest-xdist", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"]
+testing-integration = ["build[virtualenv] (>=1.0.3)", "filelock (>=3.4.0)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "packaging (>=23.1)", "pytest", "pytest-enabler", "pytest-xdist", "tomli", "virtualenv (>=13.0.0)", "wheel"]
+
 [[package]]
 name = "six"
 version = "1.16.0"
@@ -775,6 +1084,177 @@ files = [
     {file = "snowballstemmer-2.2.0.tar.gz", hash = "sha256:09b16deb8547d3412ad7b590689584cd0fe25ec8db3be37788be3810cbf19cb1"},
 ]
 
+[[package]]
+name = "sphinx"
+version = "6.2.1"
+description = "Python documentation generator"
+optional = false
+python-versions = ">=3.8"
+files = [
+    {file = "Sphinx-6.2.1.tar.gz", hash = "sha256:6d56a34697bb749ffa0152feafc4b19836c755d90a7c59b72bc7dfd371b9cc6b"},
+    {file = "sphinx-6.2.1-py3-none-any.whl", hash = "sha256:97787ff1fa3256a3eef9eda523a63dbf299f7b47e053cfcf684a1c2a8380c912"},
+]
+
+[package.dependencies]
+alabaster = ">=0.7,<0.8"
+babel = ">=2.9"
+colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""}
+docutils = ">=0.18.1,<0.20"
+imagesize = ">=1.3"
+Jinja2 = ">=3.0"
+packaging = ">=21.0"
+Pygments = ">=2.13"
+requests = ">=2.25.0"
+snowballstemmer = ">=2.0"
+sphinxcontrib-applehelp = "*"
+sphinxcontrib-devhelp = "*"
+sphinxcontrib-htmlhelp = ">=2.0.0"
+sphinxcontrib-jsmath = "*"
+sphinxcontrib-qthelp = "*"
+sphinxcontrib-serializinghtml = ">=1.1.5"
+
+[package.extras]
+docs = ["sphinxcontrib-websupport"]
+lint = ["docutils-stubs", "flake8 (>=3.5.0)", "flake8-simplify", "isort", "mypy (>=0.990)", "ruff", "sphinx-lint", "types-requests"]
+test = ["cython", "filelock", "html5lib", "pytest (>=4.6)"]
+
+[[package]]
+name = "sphinx-rtd-theme"
+version = "1.2.2"
+description = "Read the Docs theme for Sphinx"
+optional = false
+python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7"
+files = [
+    {file = "sphinx_rtd_theme-1.2.2-py2.py3-none-any.whl", hash = "sha256:6a7e7d8af34eb8fc57d52a09c6b6b9c46ff44aea5951bc831eeb9245378f3689"},
+    {file = "sphinx_rtd_theme-1.2.2.tar.gz", hash = "sha256:01c5c5a72e2d025bd23d1f06c59a4831b06e6ce6c01fdd5ebfe9986c0a880fc7"},
+]
+
+[package.dependencies]
+docutils = "<0.19"
+sphinx = ">=1.6,<7"
+sphinxcontrib-jquery = ">=4,<5"
+
+[package.extras]
+dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client", "wheel"]
+
+[[package]]
+name = "sphinxcontrib-applehelp"
+version = "1.0.7"
+description = "sphinxcontrib-applehelp is a Sphinx extension which outputs Apple help books"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_applehelp-1.0.7-py3-none-any.whl", hash = "sha256:094c4d56209d1734e7d252f6e0b3ccc090bd52ee56807a5d9315b19c122ab15d"},
+    {file = "sphinxcontrib_applehelp-1.0.7.tar.gz", hash = "sha256:39fdc8d762d33b01a7d8f026a3b7d71563ea3b72787d5f00ad8465bd9d6dfbfa"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-devhelp"
+version = "1.0.5"
+description = "sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp documents"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_devhelp-1.0.5-py3-none-any.whl", hash = "sha256:fe8009aed765188f08fcaadbb3ea0d90ce8ae2d76710b7e29ea7d047177dae2f"},
+    {file = "sphinxcontrib_devhelp-1.0.5.tar.gz", hash = "sha256:63b41e0d38207ca40ebbeabcf4d8e51f76c03e78cd61abe118cf4435c73d4212"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-htmlhelp"
+version = "2.0.4"
+description = "sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_htmlhelp-2.0.4-py3-none-any.whl", hash = "sha256:8001661c077a73c29beaf4a79968d0726103c5605e27db92b9ebed8bab1359e9"},
+    {file = "sphinxcontrib_htmlhelp-2.0.4.tar.gz", hash = "sha256:6c26a118a05b76000738429b724a0568dbde5b72391a688577da08f11891092a"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["html5lib", "pytest"]
+
+[[package]]
+name = "sphinxcontrib-jquery"
+version = "4.1"
+description = "Extension to include jQuery on newer Sphinx releases"
+optional = false
+python-versions = ">=2.7"
+files = [
+    {file = "sphinxcontrib-jquery-4.1.tar.gz", hash = "sha256:1620739f04e36a2c779f1a131a2dfd49b2fd07351bf1968ced074365933abc7a"},
+    {file = "sphinxcontrib_jquery-4.1-py2.py3-none-any.whl", hash = "sha256:f936030d7d0147dd026a4f2b5a57343d233f1fc7b363f68b3d4f1cb0993878ae"},
+]
+
+[package.dependencies]
+Sphinx = ">=1.8"
+
+[[package]]
+name = "sphinxcontrib-jsmath"
+version = "1.0.1"
+description = "A sphinx extension which renders display math in HTML via JavaScript"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "sphinxcontrib-jsmath-1.0.1.tar.gz", hash = "sha256:a9925e4a4587247ed2191a22df5f6970656cb8ca2bd6284309578f2153e0c4b8"},
+    {file = "sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl", hash = "sha256:2ec2eaebfb78f3f2078e73666b1415417a116cc848b72e5172e596c871103178"},
+]
+
+[package.extras]
+test = ["flake8", "mypy", "pytest"]
+
+[[package]]
+name = "sphinxcontrib-qthelp"
+version = "1.0.6"
+description = "sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp documents"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_qthelp-1.0.6-py3-none-any.whl", hash = "sha256:bf76886ee7470b934e363da7a954ea2825650013d367728588732c7350f49ea4"},
+    {file = "sphinxcontrib_qthelp-1.0.6.tar.gz", hash = "sha256:62b9d1a186ab7f5ee3356d906f648cacb7a6bdb94d201ee7adf26db55092982d"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-serializinghtml"
+version = "1.1.9"
+description = "sphinxcontrib-serializinghtml is a sphinx extension which outputs \"serialized\" HTML files (json and pickle)"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_serializinghtml-1.1.9-py3-none-any.whl", hash = "sha256:9b36e503703ff04f20e9675771df105e58aa029cfcbc23b8ed716019b7416ae1"},
+    {file = "sphinxcontrib_serializinghtml-1.1.9.tar.gz", hash = "sha256:0c64ff898339e1fac29abd2bf5f11078f3ec413cfe9c046d3120d7ca65530b54"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
 [[package]]
 name = "toml"
 version = "0.10.2"
@@ -819,6 +1299,23 @@ files = [
     {file = "typing_extensions-4.7.1.tar.gz", hash = "sha256:b75ddc264f0ba5615db7ba217daeb99701ad295353c45f9e95963337ceeeffb2"},
 ]
 
+[[package]]
+name = "urllib3"
+version = "2.0.7"
+description = "HTTP library with thread-safe connection pooling, file post, and more."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "urllib3-2.0.7-py3-none-any.whl", hash = "sha256:fdb6d215c776278489906c2f8916e6e7d4f5a9b602ccbcfdf7f016fc8da0596e"},
+    {file = "urllib3-2.0.7.tar.gz", hash = "sha256:c97dfde1f7bd43a71c8d2a58e369e9b2bf692d1334ea9f9cae55add7d0dd0f84"},
+]
+
+[package.extras]
+brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)"]
+secure = ["certifi", "cryptography (>=1.9)", "idna (>=2.0.0)", "pyopenssl (>=17.1.0)", "urllib3-secure-extra"]
+socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"]
+zstd = ["zstandard (>=0.18.0)"]
+
 [[package]]
 name = "warlock"
 version = "2.0.1"
@@ -837,4 +1334,4 @@ jsonschema = ">=4,<5"
 [metadata]
 lock-version = "2.0"
 python-versions = "^3.10"
-content-hash = "3501e97b3dadc19fe8ae179fe21b1edd2488001da9a8e86ff2bca0b86b99b89b"
+content-hash = "44c0fd4ebd7a1630ad104d55a25ef85d361abc968157254e757d44e340bca06f"
diff --git a/dts/pyproject.toml b/dts/pyproject.toml
index 3943c87c87..98df431b3b 100644
--- a/dts/pyproject.toml
+++ b/dts/pyproject.toml
@@ -35,6 +35,13 @@ pylama = "^8.4.1"
 pyflakes = "^2.5.0"
 toml = "^0.10.2"
 
+[tool.poetry.group.docs]
+optional = true
+
+[tool.poetry.group.docs.dependencies]
+sphinx = "<7"
+sphinx-rtd-theme = "^1.2.2"
+
 [build-system]
 requires = ["poetry-core>=1.0.0"]
 build-backend = "poetry.core.masonry.api"
-- 
2.34.1
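
Note that because the docs group is optional, a plain poetry install
skips it; the Sphinx dependencies have to be requested explicitly, as
in the cover letter:

poetry install --with docs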


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v1 2/2] dts: add doc generation
  2023-11-15 13:36             ` [PATCH v1 0/2] dts: api docs generation Juraj Linkeš
  2023-11-15 13:36               ` [PATCH v1 1/2] dts: add doc generation dependencies Juraj Linkeš
@ 2023-11-15 13:36               ` Juraj Linkeš
  2024-01-22 12:00               ` [PATCH v2 0/3] dts: API docs generation Juraj Linkeš
                                 ` (2 subsequent siblings)
  4 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-15 13:36 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi
  Cc: dev, Juraj Linkeš

The tool used to generate developer docs is Sphinx, which is already
used in DPDK. The same configuration is used to preserve style, but it's
been augmented with doc-generating configuration and a change to how the
sidebar displays the content hierarchy.

Sphinx generates the documentation from Python docstrings. The docstring
format is the Google format [0] which requires the sphinx.ext.napoleon
extension. The other extension, sphinx.ext.intersphinx, enables linking
to objects in external documentation, such as the Python documentation.

There are two requirements for building DTS docs:
* The same Python version as DTS or higher, because Sphinx imports the
  code.
* The same Python packages as DTS, for the same reason.

[0] https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings
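
For illustration, a Google-format docstring that napoleon parses, with
an intersphinx cross-reference into the Python docs, could look like
this (the function is hypothetical, not part of the patch):

def wait_for_output(self, timeout: float = 15) -> str:
    """Wait for output from the remote session.

    The :mod:`marshal` role resolves against the Python documentation
    thanks to sphinx.ext.intersphinx; the sections below are parsed by
    sphinx.ext.napoleon.

    Args:
        timeout: How long to wait for the output, in seconds.

    Returns:
        The captured output.

    Raises:
        TimeoutError: If no output arrives within `timeout` seconds.
    """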

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 buildtools/call-sphinx-build.py | 29 ++++++++++------
 doc/api/meson.build             |  1 +
 doc/guides/conf.py              | 34 ++++++++++++++++---
 doc/guides/meson.build          |  1 +
 doc/guides/tools/dts.rst        | 32 +++++++++++++++++-
 dts/doc/conf_yaml_schema.json   |  1 +
 dts/doc/index.rst               | 17 ++++++++++
 dts/doc/meson.build             | 60 +++++++++++++++++++++++++++++++++
 dts/meson.build                 | 16 +++++++++
 meson.build                     |  1 +
 10 files changed, 176 insertions(+), 16 deletions(-)
 create mode 120000 dts/doc/conf_yaml_schema.json
 create mode 100644 dts/doc/index.rst
 create mode 100644 dts/doc/meson.build
 create mode 100644 dts/meson.build

diff --git a/buildtools/call-sphinx-build.py b/buildtools/call-sphinx-build.py
index 39a60d09fa..c2f3acfb1d 100755
--- a/buildtools/call-sphinx-build.py
+++ b/buildtools/call-sphinx-build.py
@@ -3,37 +3,46 @@
 # Copyright(c) 2019 Intel Corporation
 #
 
+import argparse
 import sys
 import os
 from os.path import join
 from subprocess import run, PIPE, STDOUT
 from packaging.version import Version
 
-# assign parameters to variables
-(sphinx, version, src, dst, *extra_args) = sys.argv[1:]
+parser = argparse.ArgumentParser()
+parser.add_argument('sphinx')
+parser.add_argument('version')
+parser.add_argument('src')
+parser.add_argument('dst')
+parser.add_argument('--dts-root', default='.')
+args, extra_args = parser.parse_known_args()
 
 # set the version in environment for sphinx to pick up
-os.environ['DPDK_VERSION'] = version
+os.environ['DPDK_VERSION'] = args.version
+os.environ['DTS_ROOT'] = args.dts_root
 
 # for sphinx version >= 1.7 add parallelism using "-j auto"
-ver = run([sphinx, '--version'], stdout=PIPE,
+ver = run([args.sphinx, '--version'], stdout=PIPE,
           stderr=STDOUT).stdout.decode().split()[-1]
-sphinx_cmd = [sphinx] + extra_args
+sphinx_cmd = [args.sphinx] + extra_args
 if Version(ver) >= Version('1.7'):
     sphinx_cmd += ['-j', 'auto']
 
 # find all the files sphinx will process so we can write them as dependencies
 srcfiles = []
-for root, dirs, files in os.walk(src):
+for root, dirs, files in os.walk(args.src):
     srcfiles.extend([join(root, f) for f in files])
 
 # run sphinx, putting the html output in a "html" directory
-with open(join(dst, 'sphinx_html.out'), 'w') as out:
-    process = run(sphinx_cmd + ['-b', 'html', src, join(dst, 'html')],
-                  stdout=out)
+with open(join(args.dst, 'sphinx_html.out'), 'w') as out:
+    process = run(
+        sphinx_cmd + ['-b', 'html', args.src, join(args.dst, 'html')],
+        stdout=out
+    )
 
 # create a gcc format .d file giving all the dependencies of this doc build
-with open(join(dst, '.html.d'), 'w') as d:
+with open(join(args.dst, '.html.d'), 'w') as d:
     d.write('html: ' + ' '.join(srcfiles) + '\n')
 
 sys.exit(process.returncode)
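
For reference, argparse's parse_known_args() is what lets the wrapper
accept its own options while forwarding anything it doesn't recognize
(such as -W or -E) to sphinx-build. A minimal standalone sketch of the
mechanism:

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('sphinx')
    parser.add_argument('--dts-root', default='.')

    # Unrecognized options come back untouched in 'extra'.
    args, extra = parser.parse_known_args(
        ['sphinx-build', '--dts-root', '/src/dts', '-W', '-E'])
    print(args.sphinx, args.dts_root)  # sphinx-build /src/dts
    print(extra)                       # ['-W', '-E']
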
diff --git a/doc/api/meson.build b/doc/api/meson.build
index 5b50692df9..92fe10d9e7 100644
--- a/doc/api/meson.build
+++ b/doc/api/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Luca Boccassi <bluca@debian.org>
 
+doc_api_build_dir = meson.current_build_dir()
 doxygen = find_program('doxygen', required: get_option('enable_docs'))
 
 if not doxygen.found()
diff --git a/doc/guides/conf.py b/doc/guides/conf.py
index 0f7ff5282d..169b1d24bc 100644
--- a/doc/guides/conf.py
+++ b/doc/guides/conf.py
@@ -7,10 +7,9 @@
 from sphinx import __version__ as sphinx_version
 from os import listdir
 from os import environ
-from os.path import basename
-from os.path import dirname
+from os.path import basename, dirname
 from os.path import join as path_join
-from sys import argv, stderr
+from sys import argv, stderr, path
 
 import configparser
 
@@ -24,6 +23,31 @@
           file=stderr)
     pass
 
+extensions = ['sphinx.ext.napoleon', 'sphinx.ext.intersphinx']
+
+# Python docstring options
+autodoc_default_options = {
+    'members': True,
+    'member-order': 'bysource',
+    'show-inheritance': True,
+}
+autodoc_class_signature = 'separated'
+autodoc_typehints = 'both'
+autodoc_typehints_format = 'short'
+autodoc_typehints_description_target = 'documented'
+napoleon_numpy_docstring = False
+napoleon_attr_annotations = True
+napoleon_preprocess_types = True
+add_module_names = False
+toc_object_entries = False
+intersphinx_mapping = {'python': ('https://docs.python.org/3', None)}
+
+# Sidebar config
+html_theme_options = {
+    'collapse_navigation': False,
+    'navigation_depth': -1,
+}
+
 stop_on_error = ('-W' in argv)
 
 project = 'Data Plane Development Kit'
@@ -35,8 +59,8 @@
 html_show_copyright = False
 highlight_language = 'none'
 
-release = environ.setdefault('DPDK_VERSION', "None")
-version = release
+path.append(environ.get('DTS_ROOT'))
+version = environ.setdefault('DPDK_VERSION', "None")
 
 master_doc = 'index'
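
For context: Sphinx's autodoc imports the documented modules, so conf.py
has to make the DTS sources importable. Appending DTS_ROOT to sys.path
is what makes the import below succeed during a DTS docs build (a sketch
of the equivalent effect; DTS_ROOT is set by call-sphinx-build.py):

    import os
    import sys

    # The same effect as the conf.py change above.
    sys.path.append(os.environ.get('DTS_ROOT'))

    # autodoc can now import the DTS framework packages.
    import framework.testbed_model.node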
 
diff --git a/doc/guides/meson.build b/doc/guides/meson.build
index 51f81da2e3..8933d75f6b 100644
--- a/doc/guides/meson.build
+++ b/doc/guides/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Intel Corporation
 
+doc_guides_source_dir = meson.current_source_dir()
 sphinx = find_program('sphinx-build', required: get_option('enable_docs'))
 
 if not sphinx.found()
diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
index cd771a428c..77d9434c1c 100644
--- a/doc/guides/tools/dts.rst
+++ b/doc/guides/tools/dts.rst
@@ -283,7 +283,10 @@ When adding code to the DTS framework, pay attention to the rest of the code
 and try not to divert much from it. The :ref:`DTS developer tools <dts_dev_tools>` will issue
 warnings when some of the basics are not met.
 
-The code must be properly documented with docstrings. The style must conform to
+The API documentation, which is a helpful reference when developing, may be read
+directly in the code or generated with the :ref:`API docs build steps <building_api_docs>`.
+
+To that end, the code must be properly documented with docstrings. The style must conform to
 the `Google style <https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings>`_.
 See an example of the style
 `here <https://www.sphinx-doc.org/en/master/usage/extensions/example_google.html>`_.
@@ -408,3 +411,30 @@ There are three tools used in DTS to help with code checking, style and formatti
 These three tools are all used in ``devtools/dts-check-format.sh``,
 the DTS code check and format script.
 Refer to the script for usage: ``devtools/dts-check-format.sh -h``.
+
+
+.. _building_api_docs:
+
+Building DTS API docs
+---------------------
+
+To build DTS API docs, install the dependencies with Poetry, then enter its shell:
+
+.. code-block:: console
+
+   poetry install --with docs
+   poetry shell
+
+The documentation is built using the standard DPDK build system. After executing the meson command
+and entering Poetry's shell, build the documentation with:
+
+.. code-block:: console
+
+   ninja -C build dts-doc
+
+The output is generated in ``build/doc/api/dts/html``.
+
+.. Note::
+
+   Make sure to fix any Sphinx warnings when adding or updating docstrings. Also make sure to run
+   the ``devtools/dts-check-format.sh`` script and address any issues it finds.
diff --git a/dts/doc/conf_yaml_schema.json b/dts/doc/conf_yaml_schema.json
new file mode 120000
index 0000000000..d89eb81b72
--- /dev/null
+++ b/dts/doc/conf_yaml_schema.json
@@ -0,0 +1 @@
+../framework/config/conf_yaml_schema.json
\ No newline at end of file
diff --git a/dts/doc/index.rst b/dts/doc/index.rst
new file mode 100644
index 0000000000..f5dcd553f2
--- /dev/null
+++ b/dts/doc/index.rst
@@ -0,0 +1,17 @@
+.. DPDK Test Suite documentation.
+
+Welcome to DPDK Test Suite's documentation!
+===========================================
+
+.. toctree::
+   :titlesonly:
+   :caption: Contents:
+
+   framework
+
+Indices and tables
+==================
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`
diff --git a/dts/doc/meson.build b/dts/doc/meson.build
new file mode 100644
index 0000000000..e11ab83843
--- /dev/null
+++ b/dts/doc/meson.build
@@ -0,0 +1,60 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+sphinx = find_program('sphinx-build', required: false)
+sphinx_apidoc = find_program('sphinx-apidoc', required: false)
+
+if not sphinx.found() or not sphinx_apidoc.found()
+    subdir_done()
+endif
+
+dts_api_framework_dir = join_paths(dts_dir, 'framework')
+dts_api_build_dir = join_paths(doc_api_build_dir, 'dts')
+if meson.version().version_compare('>=0.57.0')
+    dts_api_src = custom_target('dts_api_src',
+            output: 'modules.rst',
+            env: {'SPHINX_APIDOC_OPTIONS': 'members,show-inheritance'},
+            command: [sphinx_apidoc, '--append-syspath', '--force',
+                '--module-first', '--separate', '-V', meson.project_version(),
+                '--output-dir', dts_api_build_dir, '--no-toc', '--implicit-namespaces',
+                dts_api_framework_dir],
+            build_by_default: false)
+else
+    dts_api_src = custom_target('dts_api_src',
+            output: 'modules.rst',
+            command: ['SPHINX_APIDOC_OPTIONS=members,show-inheritance',
+                sphinx_apidoc, '--append-syspath', '--force',
+                '--module-first', '--separate', '-V', meson.project_version(),
+                '--output-dir', dts_api_build_dir, '--no-toc', '--implicit-namespaces',
+                dts_api_framework_dir],
+            build_by_default: false)
+endif
+doc_targets += dts_api_src
+doc_target_names += 'DTS_API_sphinx_sources'
+
+cp = find_program('cp')
+cp_index = custom_target('cp_index',
+        input: ['index.rst', 'conf_yaml_schema.json'],
+        output: 'index.rst',
+        depends: dts_api_src,
+        command: [cp, '--dereference', '@INPUT@', dts_api_build_dir],
+        build_by_default: false)
+doc_targets += cp_index
+doc_target_names += 'DTS_API_sphinx_index'
+
+extra_sphinx_args = ['-E', '-c', doc_guides_source_dir, '--dts-root', dts_dir]
+if get_option('werror')
+    extra_sphinx_args += '-W'
+endif
+
+htmldir = join_paths(get_option('datadir'), 'doc', 'dpdk')
+dts_api_html = custom_target('dts_api_html',
+        output: 'html',
+        depends: cp_index,
+        command: [sphinx_wrapper, sphinx, meson.project_version(),
+            dts_api_build_dir, dts_api_build_dir, extra_sphinx_args],
+        build_by_default: false,
+        install: false,
+        install_dir: htmldir)
+doc_targets += dts_api_html
+doc_target_names += 'DTS_API_HTML'
diff --git a/dts/meson.build b/dts/meson.build
new file mode 100644
index 0000000000..e8ce0f06ac
--- /dev/null
+++ b/dts/meson.build
@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+doc_targets = []
+doc_target_names = []
+dts_dir = meson.current_source_dir()
+
+subdir('doc')
+
+if doc_targets.length() == 0
+    message = 'No docs targets found'
+else
+    message = 'Built docs:'
+endif
+run_target('dts-doc', command: [echo, message, doc_target_names],
+    depends: doc_targets)
diff --git a/meson.build b/meson.build
index 5e161f43e5..001fdcbbbf 100644
--- a/meson.build
+++ b/meson.build
@@ -87,6 +87,7 @@ subdir('app')
 
 # build docs
 subdir('doc')
+subdir('dts')
 
 # build any examples explicitly requested - useful for developers - and
 # install any example code into the appropriate install path
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 21/21] dts: test suites docstring update
  2023-11-15 13:09               ` [PATCH v7 21/21] dts: test suites " Juraj Linkeš
@ 2023-11-16 17:36                 ` Yoan Picchi
  2023-11-20 10:17                   ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Yoan Picchi @ 2023-11-16 17:36 UTC (permalink / raw)
  To: Juraj Linkeš,
	thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek
  Cc: dev

On 11/15/23 13:09, Juraj Linkeš wrote:
> Format according to the Google format and PEP257, with slight
> deviations.
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>   dts/tests/TestSuite_hello_world.py | 16 +++++----
>   dts/tests/TestSuite_os_udp.py      | 19 +++++++----
>   dts/tests/TestSuite_smoke_tests.py | 53 +++++++++++++++++++++++++++---
>   3 files changed, 70 insertions(+), 18 deletions(-)
> 
> diff --git a/dts/tests/TestSuite_hello_world.py b/dts/tests/TestSuite_hello_world.py
> index 7e3d95c0cf..662a8f8726 100644
> --- a/dts/tests/TestSuite_hello_world.py
> +++ b/dts/tests/TestSuite_hello_world.py
> @@ -1,7 +1,8 @@
>   # SPDX-License-Identifier: BSD-3-Clause
>   # Copyright(c) 2010-2014 Intel Corporation
>   
> -"""
> +"""The DPDK hello world app test suite.
> +
>   Run the helloworld example app and verify it prints a message for each used core.
>   No other EAL parameters apart from cores are used.
>   """
> @@ -15,22 +16,25 @@
>   
>   
>   class TestHelloWorld(TestSuite):
> +    """DPDK hello world app test suite."""
> +
>       def set_up_suite(self) -> None:
> -        """
> +        """Set up the test suite.
> +
>           Setup:
>               Build the app we're about to test - helloworld.
>           """
>           self.app_helloworld_path = self.sut_node.build_dpdk_app("helloworld")
>   
>       def test_hello_world_single_core(self) -> None:
> -        """
> +        """Single core test case.
> +
>           Steps:
>               Run the helloworld app on the first usable logical core.
>           Verify:
>               The app prints a message from the used core:
>               "hello from core <core_id>"
>           """
> -
>           # get the first usable core
>           lcore_amount = LogicalCoreCount(1, 1, 1)
>           lcores = LogicalCoreCountFilter(self.sut_node.lcores, lcore_amount).filter()
> @@ -44,14 +48,14 @@ def test_hello_world_single_core(self) -> None:
>           )
>   
>       def test_hello_world_all_cores(self) -> None:
> -        """
> +        """All cores test case.
> +
>           Steps:
>               Run the helloworld app on all usable logical cores.
>           Verify:
>               The app prints a message from all used cores:
>               "hello from core <core_id>"
>           """
> -
>           # get the maximum logical core number
>           eal_para = self.sut_node.create_eal_parameters(
>               lcore_filter_specifier=LogicalCoreList(self.sut_node.lcores)
> diff --git a/dts/tests/TestSuite_os_udp.py b/dts/tests/TestSuite_os_udp.py
> index bf6b93deb5..e0c5239612 100644
> --- a/dts/tests/TestSuite_os_udp.py
> +++ b/dts/tests/TestSuite_os_udp.py
> @@ -1,7 +1,8 @@
>   # SPDX-License-Identifier: BSD-3-Clause
>   # Copyright(c) 2023 PANTHEON.tech s.r.o.
>   
> -"""
> +"""Basic IPv4 OS routing test suite.
> +
>   Configure SUT node to route traffic from if1 to if2.
>   Send a packet to the SUT node, verify it comes back on the second port on the TG node.
>   """
> @@ -13,24 +14,27 @@
>   
>   
>   class TestOSUdp(TestSuite):
> +    """IPv4 UDP OS routing test suite."""
> +
>       def set_up_suite(self) -> None:
> -        """
> +        """Set up the test suite.
> +
>           Setup:
> -            Configure SUT ports and SUT to route traffic from if1 to if2.
> +            Bind the SUT ports to the OS driver, configure the ports and configure the SUT
> +            to route traffic from if1 to if2.
>           """
>   
> -        # This test uses kernel drivers
>           self.sut_node.bind_ports_to_driver(for_dpdk=False)
>           self.configure_testbed_ipv4()
>   
>       def test_os_udp(self) -> None:
> -        """
> +        """Basic UDP IPv4 traffic test case.
> +
>           Steps:
>               Send a UDP packet.
>           Verify:
>               The packet with proper addresses arrives at the other TG port.
>           """
> -
>           packet = Ether() / IP() / UDP()
>   
>           received_packets = self.send_packet_and_capture(packet)
> @@ -40,7 +44,8 @@ def test_os_udp(self) -> None:
>           self.verify_packets(expected_packet, received_packets)
>   
>       def tear_down_suite(self) -> None:
> -        """
> +        """Tear down the test suite.
> +
>           Teardown:
>               Remove the SUT port configuration configured in setup.
>           """
> diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py
> index e8016d1b54..6fae099a0e 100644
> --- a/dts/tests/TestSuite_smoke_tests.py
> +++ b/dts/tests/TestSuite_smoke_tests.py
> @@ -1,6 +1,17 @@
>   # SPDX-License-Identifier: BSD-3-Clause
>   # Copyright(c) 2023 University of New Hampshire
>   
> +"""Smoke test suite.
> +
> +Smoke tests are a class of tests which are used for validating a minimal set of important features.
> +These are the most important features without which (or when they're faulty) the software wouldn't
> +work properly. Thus, if any failure occurs while testing these features,
> +there isn't that much of a reason to continue testing, as the software is fundamentally broken.
> +
> +These tests don't have to include only DPDK tests, as the reason for failures could be
> +in the infrastructure (a faulty link between NICs or a misconfiguration).
> +"""
> +
>   import re
>   
>   from framework.config import PortConfig
> @@ -11,13 +22,25 @@
>   
>   
>   class SmokeTests(TestSuite):
> +    """DPDK and infrastructure smoke test suite.
> +
> +    The test cases validate the most basic DPDK functionality needed for all other test suites.
> +    The infrastructure also needs to be tested, as that is also used by all other test suites.
> +
> +    Attributes:
> +        is_blocking: This test suite will block the execution of all other test suites
> +            in the build target after it.
> +        nics_in_node: The NICs present on the SUT node.
> +    """
> +
>       is_blocking = True
>       # dicts in this list are expected to have two keys:
>       # "pci_address" and "current_driver"
>       nics_in_node: list[PortConfig] = []
>   
>       def set_up_suite(self) -> None:
> -        """
> +        """Set up the test suite.
> +
>           Setup:
>               Set the build directory path and generate a list of NICs in the SUT node.
>           """
> @@ -25,7 +48,13 @@ def set_up_suite(self) -> None:
>           self.nics_in_node = self.sut_node.config.ports
>   
>       def test_unit_tests(self) -> None:
> -        """
> +        """DPDK meson fast-tests unit tests.
> +
> +        The DPDK unit tests are basic tests that indicate regressions and other critical failures.
> +        These need to be addressed before other testing.
> +
> +        The fast-tests unit tests are a subset with only the most basic tests.
> +
>           Test:
>               Run the fast-test unit-test suite through meson.
>           """
> @@ -37,7 +66,14 @@ def test_unit_tests(self) -> None:
>           )
>   
>       def test_driver_tests(self) -> None:
> -        """
> +        """DPDK meson driver-tests unit tests.
> +

This looks like a copy-paste from the previous unit test into the driver
tests. If it is on purpose, because both are considered unit tests, note
that the previous function is test_unit_tests and deals with fast-tests.

> +
> +        The driver-tests unit tests are a subset that test only drivers. These may be run
> +        with virtual devices as well.
> +
>           Test:
>               Run the driver-test unit-test suite through meson.
>           """
> @@ -63,7 +99,10 @@ def test_driver_tests(self) -> None:
>           )
>   
>       def test_devices_listed_in_testpmd(self) -> None:
> -        """
> +        """Testpmd device discovery.
> +
> +        If the configured devices can't be found in testpmd, they can't be tested.

Maybe a bit nitpicky. This is more of a statement as to why the test
exists than a description of the test. Suggestion: "Tests that the
configured devices can be found in testpmd. If they aren't, the
configuration might be wrong and tests might be skipped."
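
For illustration, that suggestion could translate into a docstring
along these lines (a sketch, not the patch's wording):

    def test_devices_listed_in_testpmd(self) -> None:
        """Testpmd device discovery.

        Test that the devices configured in conf.yaml can be found in
        testpmd; if they can't, the configuration might be wrong and
        dependent tests might be skipped.

        Test:
            Use testpmd to verify that the configured devices have
            been discovered.
        """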

> +
>           Test:
>               Uses testpmd driver to verify that devices have been found by testpmd.
>           """
> @@ -79,7 +118,11 @@ def test_devices_listed_in_testpmd(self) -> None:
>               )
>   
>       def test_device_bound_to_driver(self) -> None:
> -        """
> +        """Device driver in OS.
> +
> +        The devices must be bound to the proper driver, otherwise they can't be used by DPDK
> +        or the traffic generators.

Same as the previous comment. It is more of a statement as to why the 
test exist than a description of the test

> +
>           Test:
>               Ensure that all drivers listed in the config are bound to the correct
>               driver.

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 01/21] dts: code adjustments for doc generation
  2023-11-15 13:09               ` [PATCH v7 01/21] dts: code adjustments for doc generation Juraj Linkeš
@ 2023-11-16 21:04                 ` Jeremy Spewock
  2023-11-20 16:10                   ` Juraj Linkeš
  2023-11-20 16:02                 ` Yoan Picchi
  1 sibling, 1 reply; 255+ messages in thread
From: Jeremy Spewock @ 2023-11-16 21:04 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: thomas, Honnappa.Nagarahalli, probb, paul.szczepanek, yoan.picchi, dev

[-- Attachment #1: Type: text/plain, Size: 53719 bytes --]

On Wed, Nov 15, 2023 at 8:11 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
wrote:

> The standard Python tool for generating API documentation, Sphinx,
> imports modules one-by-one when generating the documentation. This
> requires code changes:
> * properly guarding argument parsing in the if __name__ == '__main__'
>   block,
> * the logger used by DTS runner underwent the same treatment so that it
>   doesn't create log files outside of a DTS run,
> * however, DTS uses the arguments to construct an object holding global
>   variables. The defaults for the global variables needed to be moved
>   out of argument parsing,
> * importing the remote_session module from framework resulted in
>   circular imports because of one module trying to import another
>   module. This is fixed by reorganizing the code,
> * some code reorganization was done because the resulting structure
>   makes more sense, improving documentation clarity.
>
> There are some other changes which are documentation related:
> * added missing type annotations so they appear in the generated docs,
> * reordered arguments in some methods,
> * removed superfluous arguments and attributes,
> * changed some functions/methods/attributes from public to private and vice-versa.
>
> All of the above appear in the generated documentation and, with them,
> the documentation is improved.
>
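The __main__ guard mentioned in the first bullet is the standard
pattern; roughly (a simplified sketch, not the literal diff):

    # dts/main.py, simplified: parsing runs only when executed as a
    # script, so importing the module (as Sphinx's autodoc does) has
    # no side effects.
    from framework.settings import get_settings

    def main() -> None:
        settings = get_settings()  # parses sys.argv and the environment
        ...

    if __name__ == '__main__':
        main()
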
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>  dts/framework/config/__init__.py              | 10 ++-
>  dts/framework/dts.py                          | 33 +++++--
>  dts/framework/exception.py                    | 54 +++++-------
>  dts/framework/remote_session/__init__.py      | 41 ++++-----
>  .../interactive_remote_session.py             |  0
>  .../{remote => }/interactive_shell.py         |  0
>  .../{remote => }/python_shell.py              |  0
>  .../remote_session/remote/__init__.py         | 27 ------
>  .../{remote => }/remote_session.py            |  0
>  .../{remote => }/ssh_session.py               | 12 +--
>  .../{remote => }/testpmd_shell.py             |  0
>  dts/framework/settings.py                     | 87 +++++++++++--------
>  dts/framework/test_result.py                  |  4 +-
>  dts/framework/test_suite.py                   |  7 +-
>  dts/framework/testbed_model/__init__.py       | 12 +--
>  dts/framework/testbed_model/{hw => }/cpu.py   | 13 +++
>  dts/framework/testbed_model/hw/__init__.py    | 27 ------
>  .../linux_session.py                          |  6 +-
>  dts/framework/testbed_model/node.py           | 25 ++++--
>  .../os_session.py                             | 22 ++---
>  dts/framework/testbed_model/{hw => }/port.py  |  0
>  .../posix_session.py                          |  4 +-
>  dts/framework/testbed_model/sut_node.py       |  8 +-
>  dts/framework/testbed_model/tg_node.py        | 30 +------
>  .../traffic_generator/__init__.py             | 24 +++++
>  .../capturing_traffic_generator.py            |  6 +-
>  .../{ => traffic_generator}/scapy.py          | 23 ++---
>  .../traffic_generator.py                      | 16 +++-
>  .../testbed_model/{hw => }/virtual_device.py  |  0
>  dts/framework/utils.py                        | 46 +++-------
>  dts/main.py                                   |  9 +-
>  31 files changed, 258 insertions(+), 288 deletions(-)
>  rename dts/framework/remote_session/{remote =>
> }/interactive_remote_session.py (100%)
>  rename dts/framework/remote_session/{remote => }/interactive_shell.py
> (100%)
>  rename dts/framework/remote_session/{remote => }/python_shell.py (100%)
>  delete mode 100644 dts/framework/remote_session/remote/__init__.py
>  rename dts/framework/remote_session/{remote => }/remote_session.py (100%)
>  rename dts/framework/remote_session/{remote => }/ssh_session.py (91%)
>  rename dts/framework/remote_session/{remote => }/testpmd_shell.py (100%)
>  rename dts/framework/testbed_model/{hw => }/cpu.py (95%)
>  delete mode 100644 dts/framework/testbed_model/hw/__init__.py
>  rename dts/framework/{remote_session => testbed_model}/linux_session.py
> (97%)
>  rename dts/framework/{remote_session => testbed_model}/os_session.py (95%)
>  rename dts/framework/testbed_model/{hw => }/port.py (100%)
>  rename dts/framework/{remote_session => testbed_model}/posix_session.py
> (98%)
>  create mode 100644
> dts/framework/testbed_model/traffic_generator/__init__.py
>  rename dts/framework/testbed_model/{ =>
> traffic_generator}/capturing_traffic_generator.py (96%)
>  rename dts/framework/testbed_model/{ => traffic_generator}/scapy.py (95%)
>  rename dts/framework/testbed_model/{ =>
> traffic_generator}/traffic_generator.py (80%)
>  rename dts/framework/testbed_model/{hw => }/virtual_device.py (100%)
>
> diff --git a/dts/framework/config/__init__.py
> b/dts/framework/config/__init__.py
> index cb7e00ba34..2044c82611 100644
> --- a/dts/framework/config/__init__.py
> +++ b/dts/framework/config/__init__.py
> @@ -17,6 +17,7 @@
>  import warlock  # type: ignore[import]
>  import yaml
>
> +from framework.exception import ConfigurationError
>  from framework.settings import SETTINGS
>  from framework.utils import StrEnum
>
> @@ -89,7 +90,7 @@ class TrafficGeneratorConfig:
>      traffic_generator_type: TrafficGeneratorType
>
>      @staticmethod
> -    def from_dict(d: dict):
> +    def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig":
>          # This looks useless now, but is designed to allow expansion to
> traffic
>          # generators that require more configuration later.
>          match TrafficGeneratorType(d["type"]):
> @@ -97,6 +98,10 @@ def from_dict(d: dict):
>                  return ScapyTrafficGeneratorConfig(
>                      traffic_generator_type=TrafficGeneratorType.SCAPY
>                  )
> +            case _:
> +                raise ConfigurationError(
> +                    f'Unknown traffic generator type "{d["type"]}".'
> +                )
>
>
>  @dataclass(slots=True, frozen=True)
> @@ -324,6 +329,3 @@ def load_config() -> Configuration:
>      config: dict[str, Any] = warlock.model_factory(schema,
> name="_Config")(config_data)
>      config_obj: Configuration = Configuration.from_dict(dict(config))
>      return config_obj
> -
> -
> -CONFIGURATION = load_config()
> diff --git a/dts/framework/dts.py b/dts/framework/dts.py
> index f773f0c38d..4c7fb0c40a 100644
> --- a/dts/framework/dts.py
> +++ b/dts/framework/dts.py
> @@ -6,19 +6,19 @@
>  import sys
>
>  from .config import (
> -    CONFIGURATION,
>      BuildTargetConfiguration,
>      ExecutionConfiguration,
>      TestSuiteConfig,
> +    load_config,
>  )
>  from .exception import BlockingTestSuiteError
>  from .logger import DTSLOG, getLogger
>  from .test_result import BuildTargetResult, DTSResult, ExecutionResult,
> Result
>  from .test_suite import get_test_suites
>  from .testbed_model import SutNode, TGNode
> -from .utils import check_dts_python_version
>
> -dts_logger: DTSLOG = getLogger("DTSRunner")
> +# dummy defaults to satisfy linters
> +dts_logger: DTSLOG = None  # type: ignore[assignment]
>  result: DTSResult = DTSResult(dts_logger)
>
>
> @@ -30,14 +30,18 @@ def run_all() -> None:
>      global dts_logger
>      global result
>
> +    # create a regular DTS logger and create a new result with it
> +    dts_logger = getLogger("DTSRunner")
> +    result = DTSResult(dts_logger)
> +
>      # check the python version of the server that run dts
> -    check_dts_python_version()
> +    _check_dts_python_version()
>
>      sut_nodes: dict[str, SutNode] = {}
>      tg_nodes: dict[str, TGNode] = {}
>      try:
>          # for all Execution sections
> -        for execution in CONFIGURATION.executions:
> +        for execution in load_config().executions:
>              sut_node = sut_nodes.get(
> execution.system_under_test_node.name)
>              tg_node = tg_nodes.get(execution.traffic_generator_node.name)
>
> @@ -82,6 +86,25 @@ def run_all() -> None:
>      _exit_dts()
>
>
> +def _check_dts_python_version() -> None:
> +    def RED(text: str) -> str:
> +        return f"\u001B[31;1m{str(text)}\u001B[0m"
> +
> +    if sys.version_info.major < 3 or (
> +        sys.version_info.major == 3 and sys.version_info.minor < 10
> +    ):
> +        print(
> +            RED(
> +                (
> +                    "WARNING: DTS execution node's python version is lower than "
> +                    "python 3.10, is deprecated and will not work in future releases."
> +                )
> +            ),
> +            file=sys.stderr,
> +        )
> +        print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
> +
> +
>  def _run_execution(
>      sut_node: SutNode,
>      tg_node: TGNode,
> diff --git a/dts/framework/exception.py b/dts/framework/exception.py
> index 001a5a5496..7489c03570 100644
> --- a/dts/framework/exception.py
> +++ b/dts/framework/exception.py
> @@ -42,19 +42,14 @@ class SSHTimeoutError(DTSError):
>      Command execution timeout.
>      """
>
> -    command: str
> -    output: str
>      severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
> +    _command: str
>
> -    def __init__(self, command: str, output: str):
> -        self.command = command
> -        self.output = output
> +    def __init__(self, command: str):
> +        self._command = command
>
>      def __str__(self) -> str:
> -        return f"TIMEOUT on {self.command}"
> -
> -    def get_output(self) -> str:
> -        return self.output
> +        return f"TIMEOUT on {self._command}"
>
>
>  class SSHConnectionError(DTSError):
> @@ -62,18 +57,18 @@ class SSHConnectionError(DTSError):
>      SSH connection error.
>      """
>
> -    host: str
> -    errors: list[str]
>      severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
> +    _host: str
> +    _errors: list[str]
>
>      def __init__(self, host: str, errors: list[str] | None = None):
> -        self.host = host
> -        self.errors = [] if errors is None else errors
> +        self._host = host
> +        self._errors = [] if errors is None else errors
>
>      def __str__(self) -> str:
> -        message = f"Error trying to connect with {self.host}."
> -        if self.errors:
> -            message += f" Errors encountered while retrying: {',
> '.join(self.errors)}"
> +        message = f"Error trying to connect with {self._host}."
> +        if self._errors:
> +            message += f" Errors encountered while retrying: {',
> '.join(self._errors)}"
>
>          return message
>
> @@ -84,14 +79,14 @@ class SSHSessionDeadError(DTSError):
>      It can no longer be used.
>      """
>
> -    host: str
>      severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
> +    _host: str
>
>      def __init__(self, host: str):
> -        self.host = host
> +        self._host = host
>
>      def __str__(self) -> str:
> -        return f"SSH session with {self.host} has died"
> +        return f"SSH session with {self._host} has died"
>
>
>  class ConfigurationError(DTSError):
> @@ -107,18 +102,18 @@ class RemoteCommandExecutionError(DTSError):
>      Raised when a command executed on a Node returns a non-zero exit
> status.
>      """
>
> -    command: str
> -    command_return_code: int
>      severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
> +    command: str
> +    _command_return_code: int
>
>      def __init__(self, command: str, command_return_code: int):
>          self.command = command
> -        self.command_return_code = command_return_code
> +        self._command_return_code = command_return_code
>
>      def __str__(self) -> str:
>          return (
>              f"Command {self.command} returned a non-zero exit code: "
> -            f"{self.command_return_code}"
> +            f"{self._command_return_code}"
>          )
>
>
> @@ -143,22 +138,15 @@ class TestCaseVerifyError(DTSError):
>      Used in test cases to verify the expected behavior.
>      """
>
> -    value: str
>      severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR
>
> -    def __init__(self, value: str):
> -        self.value = value
> -
> -    def __str__(self) -> str:
> -        return repr(self.value)
> -
>

Does this change mean we are no longer providing descriptions for what
failing the verification means? I guess there isn't really harm in removing
that functionality, but I'm not sure I see the value in removing the extra
information either.


>
>  class BlockingTestSuiteError(DTSError):
> -    suite_name: str
>      severity: ClassVar[ErrorSeverity] =
> ErrorSeverity.BLOCKING_TESTSUITE_ERR
> +    _suite_name: str
>
>      def __init__(self, suite_name: str) -> None:
> -        self.suite_name = suite_name
> +        self._suite_name = suite_name
>
>      def __str__(self) -> str:
> -        return f"Blocking suite {self.suite_name} failed."
> +        return f"Blocking suite {self._suite_name} failed."
> diff --git a/dts/framework/remote_session/__init__.py
> b/dts/framework/remote_session/__init__.py
> index 00b6d1f03a..5e7ddb2b05 100644
> --- a/dts/framework/remote_session/__init__.py
> +++ b/dts/framework/remote_session/__init__.py
> @@ -12,29 +12,24 @@
>
>  # pylama:ignore=W0611
>
> -from framework.config import OS, NodeConfiguration
> -from framework.exception import ConfigurationError
> +from framework.config import NodeConfiguration
>  from framework.logger import DTSLOG
>
> -from .linux_session import LinuxSession
> -from .os_session import InteractiveShellType, OSSession
> -from .remote import (
> -    CommandResult,
> -    InteractiveRemoteSession,
> -    InteractiveShell,
> -    PythonShell,
> -    RemoteSession,
> -    SSHSession,
> -    TestPmdDevice,
> -    TestPmdShell,
> -)
> -
> -
> -def create_session(
> +from .interactive_remote_session import InteractiveRemoteSession
> +from .interactive_shell import InteractiveShell
> +from .python_shell import PythonShell
> +from .remote_session import CommandResult, RemoteSession
> +from .ssh_session import SSHSession
> +from .testpmd_shell import TestPmdShell
> +
> +
> +def create_remote_session(
>      node_config: NodeConfiguration, name: str, logger: DTSLOG
> -) -> OSSession:
> -    match node_config.os:
> -        case OS.linux:
> -            return LinuxSession(node_config, name, logger)
> -        case _:
> -            raise ConfigurationError(f"Unsupported OS {node_config.os}")
> +) -> RemoteSession:
> +    return SSHSession(node_config, name, logger)
> +
> +
> +def create_interactive_session(
> +    node_config: NodeConfiguration, logger: DTSLOG
> +) -> InteractiveRemoteSession:
> +    return InteractiveRemoteSession(node_config, logger)
> diff --git
> a/dts/framework/remote_session/remote/interactive_remote_session.py
> b/dts/framework/remote_session/interactive_remote_session.py
> similarity index 100%
> rename from
> dts/framework/remote_session/remote/interactive_remote_session.py
> rename to dts/framework/remote_session/interactive_remote_session.py
> diff --git a/dts/framework/remote_session/remote/interactive_shell.py
> b/dts/framework/remote_session/interactive_shell.py
> similarity index 100%
> rename from dts/framework/remote_session/remote/interactive_shell.py
> rename to dts/framework/remote_session/interactive_shell.py
> diff --git a/dts/framework/remote_session/remote/python_shell.py
> b/dts/framework/remote_session/python_shell.py
> similarity index 100%
> rename from dts/framework/remote_session/remote/python_shell.py
> rename to dts/framework/remote_session/python_shell.py
> diff --git a/dts/framework/remote_session/remote/__init__.py
> b/dts/framework/remote_session/remote/__init__.py
> deleted file mode 100644
> index 06403691a5..0000000000
> --- a/dts/framework/remote_session/remote/__init__.py
> +++ /dev/null
> @@ -1,27 +0,0 @@
> -# SPDX-License-Identifier: BSD-3-Clause
> -# Copyright(c) 2023 PANTHEON.tech s.r.o.
> -# Copyright(c) 2023 University of New Hampshire
> -
> -# pylama:ignore=W0611
> -
> -from framework.config import NodeConfiguration
> -from framework.logger import DTSLOG
> -
> -from .interactive_remote_session import InteractiveRemoteSession
> -from .interactive_shell import InteractiveShell
> -from .python_shell import PythonShell
> -from .remote_session import CommandResult, RemoteSession
> -from .ssh_session import SSHSession
> -from .testpmd_shell import TestPmdDevice, TestPmdShell
> -
> -
> -def create_remote_session(
> -    node_config: NodeConfiguration, name: str, logger: DTSLOG
> -) -> RemoteSession:
> -    return SSHSession(node_config, name, logger)
> -
> -
> -def create_interactive_session(
> -    node_config: NodeConfiguration, logger: DTSLOG
> -) -> InteractiveRemoteSession:
> -    return InteractiveRemoteSession(node_config, logger)
> diff --git a/dts/framework/remote_session/remote/remote_session.py
> b/dts/framework/remote_session/remote_session.py
> similarity index 100%
> rename from dts/framework/remote_session/remote/remote_session.py
> rename to dts/framework/remote_session/remote_session.py
> diff --git a/dts/framework/remote_session/remote/ssh_session.py
> b/dts/framework/remote_session/ssh_session.py
> similarity index 91%
> rename from dts/framework/remote_session/remote/ssh_session.py
> rename to dts/framework/remote_session/ssh_session.py
> index 8d127f1601..cee11d14d6 100644
> --- a/dts/framework/remote_session/remote/ssh_session.py
> +++ b/dts/framework/remote_session/ssh_session.py
> @@ -18,9 +18,7 @@
>      SSHException,
>  )
>
> -from framework.config import NodeConfiguration
>  from framework.exception import SSHConnectionError, SSHSessionDeadError,
> SSHTimeoutError
> -from framework.logger import DTSLOG
>
>  from .remote_session import CommandResult, RemoteSession
>
> @@ -45,14 +43,6 @@ class SSHSession(RemoteSession):
>
>      session: Connection
>
> -    def __init__(
> -        self,
> -        node_config: NodeConfiguration,
> -        session_name: str,
> -        logger: DTSLOG,
> -    ):
> -        super(SSHSession, self).__init__(node_config, session_name,
> logger)
> -
>      def _connect(self) -> None:
>          errors = []
>          retry_attempts = 10
> @@ -117,7 +107,7 @@ def _send_command(
>
>          except CommandTimedOut as e:
>              self._logger.exception(e)
> -            raise SSHTimeoutError(command, e.result.stderr) from e
> +            raise SSHTimeoutError(command) from e
>
>          return CommandResult(
>              self.name, command, output.stdout, output.stderr,
> output.return_code
> diff --git a/dts/framework/remote_session/remote/testpmd_shell.py
> b/dts/framework/remote_session/testpmd_shell.py
> similarity index 100%
> rename from dts/framework/remote_session/remote/testpmd_shell.py
> rename to dts/framework/remote_session/testpmd_shell.py
> diff --git a/dts/framework/settings.py b/dts/framework/settings.py
> index cfa39d011b..7f5841d073 100644
> --- a/dts/framework/settings.py
> +++ b/dts/framework/settings.py
> @@ -6,7 +6,7 @@
>  import argparse
>  import os
>  from collections.abc import Callable, Iterable, Sequence
> -from dataclasses import dataclass
> +from dataclasses import dataclass, field
>  from pathlib import Path
>  from typing import Any, TypeVar
>
> @@ -22,8 +22,8 @@ def __init__(
>              option_strings: Sequence[str],
>              dest: str,
>              nargs: str | int | None = None,
> -            const: str | None = None,
> -            default: str = None,
> +            const: bool | None = None,
> +            default: Any = None,
>              type: Callable[[str], _T | argparse.FileType | None] = None,
>              choices: Iterable[_T] | None = None,
>              required: bool = False,
> @@ -32,6 +32,12 @@ def __init__(
>          ) -> None:
>              env_var_value = os.environ.get(env_var)
>              default = env_var_value or default
> +            if const is not None:
> +                nargs = 0
> +                default = const if env_var_value else default
> +                type = None
> +                choices = None
> +                metavar = None
>              super(_EnvironmentArgument, self).__init__(
>                  option_strings,
>                  dest,
> @@ -52,22 +58,28 @@ def __call__(
>              values: Any,
>              option_string: str = None,
>          ) -> None:
> -            setattr(namespace, self.dest, values)
> +            if self.const is not None:
> +                setattr(namespace, self.dest, self.const)
> +            else:
> +                setattr(namespace, self.dest, values)
>
>      return _EnvironmentArgument
>
>
> -@dataclass(slots=True, frozen=True)
> -class _Settings:
> -    config_file_path: str
> -    output_dir: str
> -    timeout: float
> -    verbose: bool
> -    skip_setup: bool
> -    dpdk_tarball_path: Path
> -    compile_timeout: float
> -    test_cases: list
> -    re_run: int
> +@dataclass(slots=True)
> +class Settings:
> +    config_file_path: Path =
> Path(__file__).parent.parent.joinpath("conf.yaml")
> +    output_dir: str = "output"
> +    timeout: float = 15
> +    verbose: bool = False
> +    skip_setup: bool = False
> +    dpdk_tarball_path: Path | str = "dpdk.tar.xz"
> +    compile_timeout: float = 1200
> +    test_cases: list[str] = field(default_factory=list)
> +    re_run: int = 0
> +
> +
> +SETTINGS: Settings = Settings()
>
>
>  def _get_parser() -> argparse.ArgumentParser:
> @@ -81,7 +93,8 @@ def _get_parser() -> argparse.ArgumentParser:
>      parser.add_argument(
>          "--config-file",
>          action=_env_arg("DTS_CFG_FILE"),
> -        default="conf.yaml",
> +        default=SETTINGS.config_file_path,
> +        type=Path,
>          help="[DTS_CFG_FILE] configuration file that describes the test
> cases, SUTs "
>          "and targets.",
>      )
> @@ -90,7 +103,7 @@ def _get_parser() -> argparse.ArgumentParser:
>          "--output-dir",
>          "--output",
>          action=_env_arg("DTS_OUTPUT_DIR"),
> -        default="output",
> +        default=SETTINGS.output_dir,
>          help="[DTS_OUTPUT_DIR] Output directory where dts logs and
> results are saved.",
>      )
>
> @@ -98,7 +111,7 @@ def _get_parser() -> argparse.ArgumentParser:
>          "-t",
>          "--timeout",
>          action=_env_arg("DTS_TIMEOUT"),
> -        default=15,
> +        default=SETTINGS.timeout,
>          type=float,
>          help="[DTS_TIMEOUT] The default timeout for all DTS operations
> except for "
>          "compiling DPDK.",
> @@ -108,8 +121,9 @@ def _get_parser() -> argparse.ArgumentParser:
>          "-v",
>          "--verbose",
>          action=_env_arg("DTS_VERBOSE"),
> -        default="N",
> -        help="[DTS_VERBOSE] Set to 'Y' to enable verbose output, logging
> all messages "
> +        default=SETTINGS.verbose,
> +        const=True,
> +        help="[DTS_VERBOSE] Specify to enable verbose output, logging all
> messages "
>          "to the console.",
>      )
>
> @@ -117,8 +131,8 @@ def _get_parser() -> argparse.ArgumentParser:
>          "-s",
>          "--skip-setup",
>          action=_env_arg("DTS_SKIP_SETUP"),
> -        default="N",
> -        help="[DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT
> and TG nodes.",
> +        const=True,
> +        help="[DTS_SKIP_SETUP] Specify to skip all setup steps on SUT and
> TG nodes.",
>      )
>
>      parser.add_argument(
> @@ -126,7 +140,7 @@ def _get_parser() -> argparse.ArgumentParser:
>          "--snapshot",
>          "--git-ref",
>          action=_env_arg("DTS_DPDK_TARBALL"),
> -        default="dpdk.tar.xz",
> +        default=SETTINGS.dpdk_tarball_path,
>          type=Path,
>          help="[DTS_DPDK_TARBALL] Path to DPDK source code tarball or a
> git commit ID, "
>          "tag ID or tree ID to test. To test local changes, first commit
> them, "
> @@ -136,7 +150,7 @@ def _get_parser() -> argparse.ArgumentParser:
>      parser.add_argument(
>          "--compile-timeout",
>          action=_env_arg("DTS_COMPILE_TIMEOUT"),
> -        default=1200,
> +        default=SETTINGS.compile_timeout,
>          type=float,
>          help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
>      )
> @@ -153,7 +167,7 @@ def _get_parser() -> argparse.ArgumentParser:
>          "--re-run",
>          "--re_run",
>          action=_env_arg("DTS_RERUN"),
> -        default=0,
> +        default=SETTINGS.re_run,
>          type=int,
>          help="[DTS_RERUN] Re-run each test case the specified amount of
> times "
>          "if a test failure occurs",
> @@ -162,23 +176,22 @@ def _get_parser() -> argparse.ArgumentParser:
>      return parser
>
>
> -def _get_settings() -> _Settings:
> +def get_settings() -> Settings:
>      parsed_args = _get_parser().parse_args()
> -    return _Settings(
> +    return Settings(
>          config_file_path=parsed_args.config_file,
>          output_dir=parsed_args.output_dir,
>          timeout=parsed_args.timeout,
> -        verbose=(parsed_args.verbose == "Y"),
> -        skip_setup=(parsed_args.skip_setup == "Y"),
> +        verbose=parsed_args.verbose,
> +        skip_setup=parsed_args.skip_setup,
>          dpdk_tarball_path=Path(
> -            DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir)
> -        )
> -        if not os.path.exists(parsed_args.tarball)
> -        else Path(parsed_args.tarball),
> +            Path(DPDKGitTarball(parsed_args.tarball,
> parsed_args.output_dir))
> +            if not os.path.exists(parsed_args.tarball)
> +            else Path(parsed_args.tarball)
> +        ),
>          compile_timeout=parsed_args.compile_timeout,
> -        test_cases=parsed_args.test_cases.split(",") if
> parsed_args.test_cases else [],
> +        test_cases=(
> +            parsed_args.test_cases.split(",") if parsed_args.test_cases
> else []
> +        ),
>          re_run=parsed_args.re_run,
>      )
> -
> -
> -SETTINGS: _Settings = _get_settings()
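
To make the settings rework concrete: flags backed by environment
variables now rely on argparse's const machinery instead of 'Y'/'N'
string comparisons. A simplified standalone sketch of the same idea:

    import argparse
    import os

    parser = argparse.ArgumentParser()
    # Settable either with --verbose or by exporting DTS_VERBOSE.
    parser.add_argument(
        '-v', '--verbose',
        action='store_const', const=True,
        default=bool(os.environ.get('DTS_VERBOSE')),
        help='[DTS_VERBOSE] Specify to enable verbose output.')

    print(parser.parse_args([]).verbose)      # False unless DTS_VERBOSE is set
    print(parser.parse_args(['-v']).verbose)  # True
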
> diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
> index f0fbe80f6f..603e18872c 100644
> --- a/dts/framework/test_result.py
> +++ b/dts/framework/test_result.py
> @@ -254,7 +254,7 @@ def add_build_target(
>          self._inner_results.append(build_target_result)
>          return build_target_result
>
> -    def add_sut_info(self, sut_info: NodeInfo):
> +    def add_sut_info(self, sut_info: NodeInfo) -> None:
>          self.sut_os_name = sut_info.os_name
>          self.sut_os_version = sut_info.os_version
>          self.sut_kernel_version = sut_info.kernel_version
> @@ -297,7 +297,7 @@ def add_execution(self, sut_node: NodeConfiguration)
> -> ExecutionResult:
>          self._inner_results.append(execution_result)
>          return execution_result
>
> -    def add_error(self, error) -> None:
> +    def add_error(self, error: Exception) -> None:
>          self._errors.append(error)
>
>      def process(self) -> None:
> diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
> index 3b890c0451..d53553bf34 100644
> --- a/dts/framework/test_suite.py
> +++ b/dts/framework/test_suite.py
> @@ -11,7 +11,7 @@
>  import re
>  from ipaddress import IPv4Interface, IPv6Interface, ip_interface
>  from types import MethodType
> -from typing import Union
> +from typing import Any, Union
>
>  from scapy.layers.inet import IP  # type: ignore[import]
>  from scapy.layers.l2 import Ether  # type: ignore[import]
> @@ -26,8 +26,7 @@
>  from .logger import DTSLOG, getLogger
>  from .settings import SETTINGS
>  from .test_result import BuildTargetResult, Result, TestCaseResult,
> TestSuiteResult
> -from .testbed_model import SutNode, TGNode
> -from .testbed_model.hw.port import Port, PortLink
> +from .testbed_model import Port, PortLink, SutNode, TGNode
>  from .utils import get_packet_summaries
>
>
> @@ -453,7 +452,7 @@ def _execute_test_case(
>
>
>  def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
> -    def is_test_suite(object) -> bool:
> +    def is_test_suite(object: Any) -> bool:
>          try:
>              if issubclass(object, TestSuite) and object is not TestSuite:
>                  return True
> diff --git a/dts/framework/testbed_model/__init__.py
> b/dts/framework/testbed_model/__init__.py
> index 5cbb859e47..8ced05653b 100644
> --- a/dts/framework/testbed_model/__init__.py
> +++ b/dts/framework/testbed_model/__init__.py
> @@ -9,15 +9,9 @@
>
>  # pylama:ignore=W0611
>
> -from .hw import (
> -    LogicalCore,
> -    LogicalCoreCount,
> -    LogicalCoreCountFilter,
> -    LogicalCoreList,
> -    LogicalCoreListFilter,
> -    VirtualDevice,
> -    lcore_filter,
> -)
> +from .cpu import LogicalCoreCount, LogicalCoreCountFilter, LogicalCoreList
>  from .node import Node
> +from .port import Port, PortLink
>  from .sut_node import SutNode
>  from .tg_node import TGNode
> +from .virtual_device import VirtualDevice
> diff --git a/dts/framework/testbed_model/hw/cpu.py
> b/dts/framework/testbed_model/cpu.py
> similarity index 95%
> rename from dts/framework/testbed_model/hw/cpu.py
> rename to dts/framework/testbed_model/cpu.py
> index d1918a12dc..8fe785dfe4 100644
> --- a/dts/framework/testbed_model/hw/cpu.py
> +++ b/dts/framework/testbed_model/cpu.py
> @@ -272,3 +272,16 @@ def filter(self) -> list[LogicalCore]:
>              )
>
>          return filtered_lcores
> +
> +
> +def lcore_filter(
> +    core_list: list[LogicalCore],
> +    filter_specifier: LogicalCoreCount | LogicalCoreList,
> +    ascending: bool,
> +) -> LogicalCoreFilter:
> +    if isinstance(filter_specifier, LogicalCoreList):
> +        return LogicalCoreListFilter(core_list, filter_specifier,
> ascending)
> +    elif isinstance(filter_specifier, LogicalCoreCount):
> +        return LogicalCoreCountFilter(core_list, filter_specifier,
> ascending)
> +    else:
> +        raise ValueError(f"Unsupported filter r{filter_specifier}")
> diff --git a/dts/framework/testbed_model/hw/__init__.py
> b/dts/framework/testbed_model/hw/__init__.py
> deleted file mode 100644
> index 88ccac0b0e..0000000000
> --- a/dts/framework/testbed_model/hw/__init__.py
> +++ /dev/null
> @@ -1,27 +0,0 @@
> -# SPDX-License-Identifier: BSD-3-Clause
> -# Copyright(c) 2023 PANTHEON.tech s.r.o.
> -
> -# pylama:ignore=W0611
> -
> -from .cpu import (
> -    LogicalCore,
> -    LogicalCoreCount,
> -    LogicalCoreCountFilter,
> -    LogicalCoreFilter,
> -    LogicalCoreList,
> -    LogicalCoreListFilter,
> -)
> -from .virtual_device import VirtualDevice
> -
> -
> -def lcore_filter(
> -    core_list: list[LogicalCore],
> -    filter_specifier: LogicalCoreCount | LogicalCoreList,
> -    ascending: bool,
> -) -> LogicalCoreFilter:
> -    if isinstance(filter_specifier, LogicalCoreList):
> -        return LogicalCoreListFilter(core_list, filter_specifier,
> ascending)
> -    elif isinstance(filter_specifier, LogicalCoreCount):
> -        return LogicalCoreCountFilter(core_list, filter_specifier,
> ascending)
> -    else:
> -        raise ValueError(f"Unsupported filter r{filter_specifier}")
> diff --git a/dts/framework/remote_session/linux_session.py
> b/dts/framework/testbed_model/linux_session.py
> similarity index 97%
> rename from dts/framework/remote_session/linux_session.py
> rename to dts/framework/testbed_model/linux_session.py
> index a3f1a6bf3b..f472bb8f0f 100644
> --- a/dts/framework/remote_session/linux_session.py
> +++ b/dts/framework/testbed_model/linux_session.py
> @@ -9,10 +9,10 @@
>  from typing_extensions import NotRequired
>
>  from framework.exception import RemoteCommandExecutionError
> -from framework.testbed_model import LogicalCore
> -from framework.testbed_model.hw.port import Port
>  from framework.utils import expand_range
>
> +from .cpu import LogicalCore
> +from .port import Port
>  from .posix_session import PosixSession
>
>
> @@ -64,7 +64,7 @@ def get_remote_cpus(self, use_first_core: bool) ->
> list[LogicalCore]:
>              lcores.append(LogicalCore(lcore, core, socket, node))
>          return lcores
>
> -    def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
> +    def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
>          return dpdk_prefix
>
>      def setup_hugepages(self, hugepage_amount: int, force_first_numa:
> bool) -> None:
> diff --git a/dts/framework/testbed_model/node.py
> b/dts/framework/testbed_model/node.py
> index fc01e0bf8e..fa5b143cdd 100644
> --- a/dts/framework/testbed_model/node.py
> +++ b/dts/framework/testbed_model/node.py
> @@ -12,23 +12,26 @@
>  from typing import Any, Callable, Type, Union
>
>  from framework.config import (
> +    OS,
>      BuildTargetConfiguration,
>      ExecutionConfiguration,
>      NodeConfiguration,
>  )
> +from framework.exception import ConfigurationError
>  from framework.logger import DTSLOG, getLogger
> -from framework.remote_session import InteractiveShellType, OSSession,
> create_session
>  from framework.settings import SETTINGS
>
> -from .hw import (
> +from .cpu import (
>      LogicalCore,
>      LogicalCoreCount,
>      LogicalCoreList,
>      LogicalCoreListFilter,
> -    VirtualDevice,
>      lcore_filter,
>  )
> -from .hw.port import Port
> +from .linux_session import LinuxSession
> +from .os_session import InteractiveShellType, OSSession
> +from .port import Port
> +from .virtual_device import VirtualDevice
>
>
>  class Node(ABC):
> @@ -172,9 +175,9 @@ def create_interactive_shell(
>
>          return self.main_session.create_interactive_shell(
>              shell_cls,
> -            app_args,
>              timeout,
>              privileged,
> +            app_args,
>          )
>
>      def filter_lcores(
> @@ -205,7 +208,7 @@ def _get_remote_cpus(self) -> None:
>          self._logger.info("Getting CPU information.")
>          self.lcores =
> self.main_session.get_remote_cpus(self.config.use_first_core)
>
> -    def _setup_hugepages(self):
> +    def _setup_hugepages(self) -> None:
>          """
>          Setup hugepages on the Node. Different architectures can supply
> different
>          amounts of memory for hugepages and numa-based hugepage
> allocation may need
> @@ -249,3 +252,13 @@ def skip_setup(func: Callable[..., Any]) ->
> Callable[..., Any]:
>              return lambda *args: None
>          else:
>              return func
> +
> +
> +def create_session(
> +    node_config: NodeConfiguration, name: str, logger: DTSLOG
> +) -> OSSession:
> +    match node_config.os:
> +        case OS.linux:
> +            return LinuxSession(node_config, name, logger)
> +        case _:
> +            raise ConfigurationError(f"Unsupported OS {node_config.os}")
> diff --git a/dts/framework/remote_session/os_session.py
> b/dts/framework/testbed_model/os_session.py
> similarity index 95%
> rename from dts/framework/remote_session/os_session.py
> rename to dts/framework/testbed_model/os_session.py
> index 8a709eac1c..76e595a518 100644
> --- a/dts/framework/remote_session/os_session.py
> +++ b/dts/framework/testbed_model/os_session.py
> @@ -10,19 +10,19 @@
>
>  from framework.config import Architecture, NodeConfiguration, NodeInfo
>  from framework.logger import DTSLOG
> -from framework.remote_session.remote import InteractiveShell
> -from framework.settings import SETTINGS
> -from framework.testbed_model import LogicalCore
> -from framework.testbed_model.hw.port import Port
> -from framework.utils import MesonArgs
> -
> -from .remote import (
> +from framework.remote_session import (
>      CommandResult,
>      InteractiveRemoteSession,
> +    InteractiveShell,
>      RemoteSession,
>      create_interactive_session,
>      create_remote_session,
>  )
> +from framework.settings import SETTINGS
> +from framework.utils import MesonArgs
> +
> +from .cpu import LogicalCore
> +from .port import Port
>
>  InteractiveShellType = TypeVar("InteractiveShellType",
> bound=InteractiveShell)
>
> @@ -85,9 +85,9 @@ def send_command(
>      def create_interactive_shell(
>          self,
>          shell_cls: Type[InteractiveShellType],
> -        eal_parameters: str,
>          timeout: float,
>          privileged: bool,
> +        app_args: str,
>      ) -> InteractiveShellType:
>          """
>          See "create_interactive_shell" in SutNode
> @@ -96,7 +96,7 @@ def create_interactive_shell(
>              self.interactive_session.session,
>              self._logger,
>              self._get_privileged_command if privileged else None,
> -            eal_parameters,
> +            app_args,
>              timeout,
>          )
>
> @@ -113,7 +113,7 @@ def _get_privileged_command(command: str) -> str:
>          """
>
>      @abstractmethod
> -    def guess_dpdk_remote_dir(self, remote_dir) -> PurePath:
> +    def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePath:
>          """
>          Try to find DPDK remote dir in remote_dir.
>          """
> @@ -227,7 +227,7 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
>          """
>
>      @abstractmethod
> -    def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
> +    def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
>          """
>          Get the DPDK file prefix that will be used when running DPDK apps.
>          """
> diff --git a/dts/framework/testbed_model/hw/port.py b/dts/framework/testbed_model/port.py
> similarity index 100%
> rename from dts/framework/testbed_model/hw/port.py
> rename to dts/framework/testbed_model/port.py
> diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/testbed_model/posix_session.py
> similarity index 98%
> rename from dts/framework/remote_session/posix_session.py
> rename to dts/framework/testbed_model/posix_session.py
> index 5da0516e05..1d1d5b1b26 100644
> --- a/dts/framework/remote_session/posix_session.py
> +++ b/dts/framework/testbed_model/posix_session.py
> @@ -32,7 +32,7 @@ def combine_short_options(**opts: bool) -> str:
>
>          return ret_opts
>
> -    def guess_dpdk_remote_dir(self, remote_dir) -> PurePosixPath:
> +    def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePosixPath:
>          remote_guess = self.join_remote_path(remote_dir, "dpdk-*")
>          result = self.send_command(f"ls -d {remote_guess} | tail -1")
>          return PurePosixPath(result.stdout)
> @@ -219,7 +219,7 @@ def _remove_dpdk_runtime_dirs(
>          for dpdk_runtime_dir in dpdk_runtime_dirs:
>              self.remove_remote_dir(dpdk_runtime_dir)
>
> -    def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
> +    def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
>          return ""
>
>      def get_compiler_version(self, compiler_name: str) -> str:
> diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
> index 4161d3a4d5..17deea06e2 100644
> --- a/dts/framework/testbed_model/sut_node.py
> +++ b/dts/framework/testbed_model/sut_node.py
> @@ -15,12 +15,14 @@
>      NodeInfo,
>      SutNodeConfiguration,
>  )
> -from framework.remote_session import CommandResult, InteractiveShellType, OSSession
> +from framework.remote_session import CommandResult
>  from framework.settings import SETTINGS
>  from framework.utils import MesonArgs
>
> -from .hw import LogicalCoreCount, LogicalCoreList, VirtualDevice
> +from .cpu import LogicalCoreCount, LogicalCoreList
>  from .node import Node
> +from .os_session import InteractiveShellType, OSSession
> +from .virtual_device import VirtualDevice
>
>
>  class EalParameters(object):
> @@ -307,7 +309,7 @@ def create_eal_parameters(
>          prefix: str = "dpdk",
>          append_prefix_timestamp: bool = True,
>          no_pci: bool = False,
> -        vdevs: list[VirtualDevice] = None,
> +        vdevs: list[VirtualDevice] | None = None,
>          other_eal_param: str = "",
>      ) -> "EalParameters":
>          """
> diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
> index 27025cfa31..166eb8430e 100644
> --- a/dts/framework/testbed_model/tg_node.py
> +++ b/dts/framework/testbed_model/tg_node.py
> @@ -16,16 +16,11 @@
>
>  from scapy.packet import Packet  # type: ignore[import]
>
> -from framework.config import (
> -    ScapyTrafficGeneratorConfig,
> -    TGNodeConfiguration,
> -    TrafficGeneratorType,
> -)
> -from framework.exception import ConfigurationError
> -
> -from .capturing_traffic_generator import CapturingTrafficGenerator
> -from .hw.port import Port
> +from framework.config import TGNodeConfiguration
> +
>  from .node import Node
> +from .port import Port
> +from .traffic_generator import CapturingTrafficGenerator, create_traffic_generator
>
>
>  class TGNode(Node):
> @@ -80,20 +75,3 @@ def close(self) -> None:
>          """Free all resources used by the node"""
>          self.traffic_generator.close()
>          super(TGNode, self).close()
> -
> -
> -def create_traffic_generator(
> -    tg_node: TGNode, traffic_generator_config: ScapyTrafficGeneratorConfig
> -) -> CapturingTrafficGenerator:
> -    """A factory function for creating traffic generator object from user
> config."""
> -
> -    from .scapy import ScapyTrafficGenerator
> -
> -    match traffic_generator_config.traffic_generator_type:
> -        case TrafficGeneratorType.SCAPY:
> -            return ScapyTrafficGenerator(tg_node,
> traffic_generator_config)
> -        case _:
> -            raise ConfigurationError(
> -                "Unknown traffic generator: "
> -                f"{traffic_generator_config.traffic_generator_type}"
> -            )
> diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
> new file mode 100644
> index 0000000000..11bfa1ee0f
> --- /dev/null
> +++ b/dts/framework/testbed_model/traffic_generator/__init__.py
> @@ -0,0 +1,24 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2023 PANTHEON.tech s.r.o.
> +
> +from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType
> +from framework.exception import ConfigurationError
> +from framework.testbed_model.node import Node
> +
> +from .capturing_traffic_generator import CapturingTrafficGenerator
> +from .scapy import ScapyTrafficGenerator
> +
> +
> +def create_traffic_generator(
> +    tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig
> +) -> CapturingTrafficGenerator:
> +    """A factory function for creating traffic generator object from user
> config."""
> +
> +    match traffic_generator_config.traffic_generator_type:
> +        case TrafficGeneratorType.SCAPY:
> +            return ScapyTrafficGenerator(tg_node,
> traffic_generator_config)
> +        case _:
> +            raise ConfigurationError(
> +                "Unknown traffic generator: "
> +                f"{traffic_generator_config.traffic_generator_type}"
> +            )
> diff --git a/dts/framework/testbed_model/capturing_traffic_generator.py b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
> similarity index 96%
> rename from dts/framework/testbed_model/capturing_traffic_generator.py
> rename to dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
> index ab98987f8e..e521211ef0 100644
> --- a/dts/framework/testbed_model/capturing_traffic_generator.py
> +++ b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
> @@ -16,9 +16,9 @@
>  from scapy.packet import Packet  # type: ignore[import]
>
>  from framework.settings import SETTINGS
> +from framework.testbed_model.port import Port
>  from framework.utils import get_packet_summaries
>
> -from .hw.port import Port
>  from .traffic_generator import TrafficGenerator
>
>
> @@ -130,7 +130,9 @@ def _send_packets_and_capture(
>          for the specified duration. It must be able to handle no received
> packets.
>          """
>
> -    def _write_capture_from_packets(self, capture_name: str, packets: list[Packet]):
> +    def _write_capture_from_packets(
> +        self, capture_name: str, packets: list[Packet]
> +    ) -> None:
>          file_name = f"{SETTINGS.output_dir}/{capture_name}.pcap"
>          self._logger.debug(f"Writing packets to {file_name}.")
>          scapy.utils.wrpcap(file_name, packets)
> diff --git a/dts/framework/testbed_model/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py
> similarity index 95%
> rename from dts/framework/testbed_model/scapy.py
> rename to dts/framework/testbed_model/traffic_generator/scapy.py
> index af0d4dbb25..51864b6e6b 100644
> --- a/dts/framework/testbed_model/scapy.py
> +++ b/dts/framework/testbed_model/traffic_generator/scapy.py
> @@ -24,16 +24,15 @@
>  from scapy.packet import Packet  # type: ignore[import]
>
>  from framework.config import OS, ScapyTrafficGeneratorConfig
> -from framework.logger import DTSLOG, getLogger
>  from framework.remote_session import PythonShell
>  from framework.settings import SETTINGS
> +from framework.testbed_model.node import Node
> +from framework.testbed_model.port import Port
>
>  from .capturing_traffic_generator import (
>      CapturingTrafficGenerator,
>      _get_default_capture_name,
>  )
> -from .hw.port import Port
> -from .tg_node import TGNode
>
>  """
>  ========= BEGIN RPC FUNCTIONS =========
> @@ -146,7 +145,7 @@ def quit(self) -> None:
>          self._BaseServer__shutdown_request = True
>          return None
>
> -    def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary):
> +    def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> None:
>          """Add a function to the server.
>
>          This is meant to be executed remotely.
> @@ -191,15 +190,9 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator):
>      session: PythonShell
>      rpc_server_proxy: xmlrpc.client.ServerProxy
>      _config: ScapyTrafficGeneratorConfig
> -    _tg_node: TGNode
> -    _logger: DTSLOG
> -
> -    def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig):
> -        self._config = config
> -        self._tg_node = tg_node
> -        self._logger = getLogger(
> -            f"{self._tg_node.name} {self._config.traffic_generator_type}"
> -        )
> +
> +    def __init__(self, tg_node: Node, config: ScapyTrafficGeneratorConfig):
> +        super().__init__(tg_node, config)
>
>          assert (
>              self._tg_node.config.os == OS.linux
> @@ -235,7 +228,7 @@ def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig):
>              function_bytes = marshal.dumps(function.__code__)
>              self.rpc_server_proxy.add_rpc_function(function.__name__, function_bytes)
>
> -    def _start_xmlrpc_server_in_remote_python(self, listen_port: int):
> +    def _start_xmlrpc_server_in_remote_python(self, listen_port: int) -> None:
>          # load the source of the function
>          src = inspect.getsource(QuittableXMLRPCServer)
>          # Lines with only whitespace break the repl if in the middle of a function
> @@ -280,7 +273,7 @@ def _send_packets_and_capture(
>          scapy_packets = [Ether(packet.data) for packet in xmlrpc_packets]
>          return scapy_packets
>
> -    def close(self):
> +    def close(self) -> None:
>          try:
>              self.rpc_server_proxy.quit()
>          except ConnectionRefusedError:
> diff --git a/dts/framework/testbed_model/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
> similarity index 80%
> rename from dts/framework/testbed_model/traffic_generator.py
> rename to dts/framework/testbed_model/traffic_generator/traffic_generator.py
> index 28c35d3ce4..ea7c3963da 100644
> --- a/dts/framework/testbed_model/traffic_generator.py
> +++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
> @@ -12,11 +12,12 @@
>
>  from scapy.packet import Packet  # type: ignore[import]
>
> -from framework.logger import DTSLOG
> +from framework.config import TrafficGeneratorConfig
> +from framework.logger import DTSLOG, getLogger
> +from framework.testbed_model.node import Node
> +from framework.testbed_model.port import Port
>  from framework.utils import get_packet_summaries
>
> -from .hw.port import Port
> -
>
>  class TrafficGenerator(ABC):
>      """The base traffic generator.
> @@ -24,8 +25,17 @@ class TrafficGenerator(ABC):
>      Defines the few basic methods that each traffic generator must
> implement.
>      """
>
> +    _config: TrafficGeneratorConfig
> +    _tg_node: Node
>

Is there a benefit to changing this to be a node instead of a TGNode?
Wouldn't we want the capabilities of the TGNode to be accessible in the
TrafficGenerator class?
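
Something like this minimal sketch illustrates the concern (member names are
taken from the quoted patches; the type checker behavior is assumed):

    from abc import ABC

    from framework.testbed_model.node import Node

    class TrafficGenerator(ABC):
        _tg_node: Node  # typed as the base Node, not TGNode

        def _example(self) -> None:
            self._tg_node.main_session       # fine, declared on Node
            self._tg_node.traffic_generator  # mypy: "Node" has no attribute
                                             # "traffic_generator"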


>      _logger: DTSLOG
>
> +    def __init__(self, tg_node: Node, config: TrafficGeneratorConfig):
> +        self._config = config
> +        self._tg_node = tg_node
> +        self._logger = getLogger(
> +            f"{self._tg_node.name} {self._config.traffic_generator_type}"
> +        )
> +
>      def send_packet(self, packet: Packet, port: Port) -> None:
>          """Send a packet and block until it is fully sent.
>
> diff --git a/dts/framework/testbed_model/hw/virtual_device.py b/dts/framework/testbed_model/virtual_device.py
> similarity index 100%
> rename from dts/framework/testbed_model/hw/virtual_device.py
> rename to dts/framework/testbed_model/virtual_device.py
> diff --git a/dts/framework/utils.py b/dts/framework/utils.py
> index d27c2c5b5f..f0c916471c 100644
> --- a/dts/framework/utils.py
> +++ b/dts/framework/utils.py
> @@ -7,7 +7,6 @@
>  import json
>  import os
>  import subprocess
> -import sys
>  from enum import Enum
>  from pathlib import Path
>  from subprocess import SubprocessError
> @@ -16,35 +15,7 @@
>
>  from .exception import ConfigurationError
>
> -
> -class StrEnum(Enum):
> -    @staticmethod
> -    def _generate_next_value_(
> -        name: str, start: int, count: int, last_values: object
> -    ) -> str:
> -        return name
> -
> -    def __str__(self) -> str:
> -        return self.name
> -
> -
> -REGEX_FOR_PCI_ADDRESS = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/"
> -
> -
> -def check_dts_python_version() -> None:
> -    if sys.version_info.major < 3 or (
> -        sys.version_info.major == 3 and sys.version_info.minor < 10
> -    ):
> -        print(
> -            RED(
> -                (
> -                    "WARNING: DTS execution node's python version is
> lower than"
> -                    "python 3.10, is deprecated and will not work in
> future releases."
> -                )
> -            ),
> -            file=sys.stderr,
> -        )
> -        print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
> +REGEX_FOR_PCI_ADDRESS: str = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/"
>
>
>  def expand_range(range_str: str) -> list[int]:
> @@ -67,7 +38,7 @@ def expand_range(range_str: str) -> list[int]:
>      return expanded_range
>
>
> -def get_packet_summaries(packets: list[Packet]):
> +def get_packet_summaries(packets: list[Packet]) -> str:
>      if len(packets) == 1:
>          packet_summaries = packets[0].summary()
>      else:
> @@ -77,8 +48,15 @@ def get_packet_summaries(packets: list[Packet]):
>      return f"Packet contents: \n{packet_summaries}"
>
>
> -def RED(text: str) -> str:
> -    return f"\u001B[31;1m{str(text)}\u001B[0m"
> +class StrEnum(Enum):
> +    @staticmethod
> +    def _generate_next_value_(
> +        name: str, start: int, count: int, last_values: object
> +    ) -> str:
> +        return name
> +
> +    def __str__(self) -> str:
> +        return self.name
>
>
>  class MesonArgs(object):
> @@ -225,5 +203,5 @@ def _delete_tarball(self) -> None:
>          if self._tarball_path and os.path.exists(self._tarball_path):
>              os.remove(self._tarball_path)
>
> -    def __fspath__(self):
> +    def __fspath__(self) -> str:
>          return str(self._tarball_path)
> diff --git a/dts/main.py b/dts/main.py
> index 43311fa847..5d4714b0c3 100755
> --- a/dts/main.py
> +++ b/dts/main.py
> @@ -10,10 +10,17 @@
>
>  import logging
>
> -from framework import dts
> +from framework import settings
>
>
>  def main() -> None:
> +    """Set DTS settings, then run DTS.
> +
> +    The DTS settings are taken from the command line arguments and the
> environment variables.
> +    """
> +    settings.SETTINGS = settings.get_settings()
> +    from framework import dts
> +
>      dts.run_all()
>
>
> --
> 2.34.1
>
>


* Re: [PATCH v7 07/21] dts: dts runner and main docstring update
  2023-11-15 13:09               ` [PATCH v7 07/21] dts: dts runner and main " Juraj Linkeš
@ 2023-11-16 21:51                 ` Jeremy Spewock
  2023-11-20 16:13                   ` Juraj Linkeš
  2023-11-20 17:43                 ` Yoan Picchi
  1 sibling, 1 reply; 255+ messages in thread
From: Jeremy Spewock @ 2023-11-16 21:51 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: thomas, Honnappa.Nagarahalli, probb, paul.szczepanek, yoan.picchi, dev


On Wed, Nov 15, 2023 at 8:11 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
wrote:

> Format according to the Google format and PEP257, with slight
> deviations.
>
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>  dts/framework/dts.py | 128 ++++++++++++++++++++++++++++++++++++-------
>  dts/main.py          |   8 ++-
>  2 files changed, 112 insertions(+), 24 deletions(-)
>
> diff --git a/dts/framework/dts.py b/dts/framework/dts.py
> index 4c7fb0c40a..331fed7dc4 100644
> --- a/dts/framework/dts.py
> +++ b/dts/framework/dts.py
> @@ -3,6 +3,33 @@
>  # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
>  # Copyright(c) 2022-2023 University of New Hampshire
>
> +r"""Test suite runner module.
> +
> +A DTS run is split into stages:
> +
> +    #. Execution stage,
> +    #. Build target stage,
> +    #. Test suite stage,
> +    #. Test case stage.
> +
> +The module is responsible for running tests on testbeds defined in the
> test run configuration.
> +Each setup or teardown of each stage is recorded in a
> :class:`~framework.test_result.DTSResult` or
> +one of its subclasses. The test case results are also recorded.
> +
> +If an error occurs, the current stage is aborted, the error is recorded
> and the run continues in
> +the next iteration of the same stage. The return code is the highest
> `severity` of all
> +:class:`~.framework.exception.DTSError`\s.
> +
> +Example:
> +    An error occurs in a build target setup. The current build target is
> aborted and the run
> +    continues with the next build target. If the errored build target was
> the last one in the given
> +    execution, the next execution begins.
> +
> +Attributes:
> +    dts_logger: The logger instance used in this module.
> +    result: The top level result used in the module.
> +"""
> +
>  import sys
>
>  from .config import (
> @@ -23,9 +50,38 @@
>
>
>  def run_all() -> None:
> -    """
> -    The main process of DTS. Runs all build targets in all executions
> from the main
> -    config file.
> +    """Run all build targets in all executions from the test run
> configuration.
> +
> +    Before running test suites, executions and build targets are first
> set up.
> +    The executions and build targets defined in the test run
> configuration are iterated over.
> +    The executions define which tests to run and where to run them and
> build targets define
> +    the DPDK build setup.
> +
> +    The test suites are set up for each execution/build target tuple and
> each scheduled
> +    test case within the test suite is set up, executed and torn down.
> After all test cases
> +    have been executed, the test suite is torn down and the next build
> target will be tested.
> +
> +    All the nested steps look like this:
> +
> +        #. Execution setup
> +
> +            #. Build target setup
> +
> +                #. Test suite setup
> +
> +                    #. Test case setup
> +                    #. Test case logic
> +                    #. Test case teardown
> +
> +                #. Test suite teardown
> +
> +            #. Build target teardown
> +
> +        #. Execution teardown
> +
> +    The test cases are filtered according to the specification in the
> test run configuration and
> +    the :option:`--test-cases` command line argument or
> +    the :envvar:`DTS_TESTCASES` environment variable.
>      """
>      global dts_logger
>      global result
> @@ -87,6 +143,8 @@ def run_all() -> None:
>
>
>  def _check_dts_python_version() -> None:
> +    """Check the required Python version - v3.10."""
> +
>      def RED(text: str) -> str:
>          return f"\u001B[31;1m{str(text)}\u001B[0m"
>
> @@ -111,9 +169,16 @@ def _run_execution(
>      execution: ExecutionConfiguration,
>      result: DTSResult,
>  ) -> None:
> -    """
> -    Run the given execution. This involves running the execution setup as
> well as
> -    running all build targets in the given execution.
> +    """Run the given execution.
> +
> +    This involves running the execution setup as well as running all
> build targets
> +    in the given execution. After that, execution teardown is run.
> +
> +    Args:
> +        sut_node: The execution's SUT node.
> +        tg_node: The execution's TG node.
> +        execution: An execution's test run configuration.
> +        result: The top level result object.
>      """
>      dts_logger.info(
>          f"Running execution with SUT '{
> execution.system_under_test_node.name}'."
> @@ -150,8 +215,18 @@ def _run_build_target(
>      execution: ExecutionConfiguration,
>      execution_result: ExecutionResult,
>  ) -> None:
> -    """
> -    Run the given build target.
> +    """Run the given build target.
> +
> +    This involves running the build target setup as well as running all
> test suites
> +    in the given execution the build target is defined in.
> +    After that, build target teardown is run.
> +
> +    Args:
> +        sut_node: The execution's SUT node.
> +        tg_node: The execution's TG node.
> +        build_target: A build target's test run configuration.
> +        execution: The build target's execution's test run configuration.
> +        execution_result: The execution level result object associated
> with the execution.
>      """
>      dts_logger.info(f"Running build target '{build_target.name}'.")
>      build_target_result = execution_result.add_build_target(build_target)
> @@ -183,10 +258,17 @@ def _run_all_suites(
>      execution: ExecutionConfiguration,
>      build_target_result: BuildTargetResult,
>  ) -> None:
> -    """
> -    Use the given build_target to run execution's test suites
> -    with possibly only a subset of test cases.
> -    If no subset is specified, run all test cases.
> +    """Run the execution's (possibly a subset) test suites using the
> current build_target.
> +
> +    The function assumes the build target we're testing has already been
> built on the SUT node.
> +    The current build target thus corresponds to the current DPDK build
> present on the SUT node.
> +
> +    Args:
> +        sut_node: The execution's SUT node.
> +        tg_node: The execution's TG node.
> +        execution: The execution's test run configuration associated with
> the current build target.
> +        build_target_result: The build target level result object
> associated
> +            with the current build target.
>      """
>

Is it worth mentioning in this method or the _run_build_target method that,
when a blocking suite fails, no more suites will be run on that build
target?
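
For reference, a rough sketch of the flow being discussed, assuming the loop
in _run_all_suites matches the quoted hunks:

    end_build_target = False
    for test_suite_config in execution.test_suites:
        try:
            _run_single_suite(sut_node, tg_node, execution,
                              build_target_result, test_suite_config)
        except BlockingTestSuiteError:
            end_build_target = True  # a blocking suite (e.g. smoke tests) failed
        if end_build_target:
            break  # no further suites run on this build target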


>      end_build_target = False
>      if not execution.skip_smoke_tests:
> @@ -215,16 +297,22 @@ def _run_single_suite(
>      build_target_result: BuildTargetResult,
>      test_suite_config: TestSuiteConfig,
>  ) -> None:
> -    """Runs a single test suite.
> +    """Run all test suite in a single test suite module.
> +
> +    The function assumes the build target we're testing has already been
> built on the SUT node.
> +    The current build target thus corresponds to the current DPDK build
> present on the SUT node.
>
>      Args:
> -        sut_node: Node to run tests on.
> -        execution: Execution the test case belongs to.
> -        build_target_result: Build target configuration test case is run
> on
> -        test_suite_config: Test suite configuration
> +        sut_node: The execution's SUT node.
> +        tg_node: The execution's TG node.
> +        execution: The execution's test run configuration associated with
> the current build target.
> +        build_target_result: The build target level result object
> associated
> +            with the current build target.
> +        test_suite_config: Test suite test run configuration specifying
> the test suite module
> +            and possibly a subset of test cases of test suites in that
> module.
>
>      Raises:
> -        BlockingTestSuiteError: If a test suite that was marked as
> blocking fails.
> +        BlockingTestSuiteError: If a blocking test suite fails.
>      """
>      try:
>          full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}"
> @@ -248,9 +336,7 @@ def _run_single_suite(
>
>
>  def _exit_dts() -> None:
> -    """
> -    Process all errors and exit with the proper exit code.
> -    """
> +    """Process all errors and exit with the proper exit code."""
>      result.process()
>
>      if dts_logger:
> diff --git a/dts/main.py b/dts/main.py
> index 5d4714b0c3..f703615d11 100755
> --- a/dts/main.py
> +++ b/dts/main.py
> @@ -4,9 +4,7 @@
>  # Copyright(c) 2022 PANTHEON.tech s.r.o.
>  # Copyright(c) 2022 University of New Hampshire
>
> -"""
> -A test framework for testing DPDK.
> -"""
> +"""The DTS executable."""
>
>  import logging
>
> @@ -17,6 +15,10 @@ def main() -> None:
>      """Set DTS settings, then run DTS.
>
>      The DTS settings are taken from the command line arguments and the
> environment variables.
> +    The settings object is stored in the module-level variable
> settings.SETTINGS which the entire
> +    framework uses. After importing the module (or the variable), any
> changes to the variable are
> +    not going to be reflected without a re-import. This means that the
> SETTINGS variable must
> +    be modified before the settings module is imported anywhere else in
> the framework.
>      """
>      settings.SETTINGS = settings.get_settings()
>      from framework import dts
> --
> 2.34.1
>
>
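
The caveat described in the docstring comes down to Python's name binding; a
minimal standalone sketch (module and variable names are illustrative):

    # settings.py
    SETTINGS = None

    # a consumer module
    import settings
    settings.SETTINGS            # follows later rebinds of settings.SETTINGS

    from settings import SETTINGS
    SETTINGS                     # a snapshot of the binding at import time;
                                 # settings.SETTINGS = ... afterwards is not
                                 # seen through this name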


* Re: [PATCH v7 08/21] dts: test suite docstring update
  2023-11-15 13:09               ` [PATCH v7 08/21] dts: test suite " Juraj Linkeš
@ 2023-11-16 22:16                 ` Jeremy Spewock
  2023-11-20 16:25                   ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Jeremy Spewock @ 2023-11-16 22:16 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: thomas, Honnappa.Nagarahalli, probb, paul.szczepanek, yoan.picchi, dev


On Wed, Nov 15, 2023 at 8:12 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
wrote:

> Format according to the Google format and PEP257, with slight
> deviations.
>
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>  dts/framework/test_suite.py | 223 +++++++++++++++++++++++++++---------
>  1 file changed, 168 insertions(+), 55 deletions(-)
>
> diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
> index d53553bf34..9e5251ffc6 100644
> --- a/dts/framework/test_suite.py
> +++ b/dts/framework/test_suite.py
> @@ -2,8 +2,19 @@
>  # Copyright(c) 2010-2014 Intel Corporation
>  # Copyright(c) 2023 PANTHEON.tech s.r.o.
>
> -"""
> -Base class for creating DTS test cases.
> +"""Features common to all test suites.
> +
> +The module defines the :class:`TestSuite` class which doesn't contain any
> test cases, and as such
> +must be extended by subclasses which add test cases. The
> :class:`TestSuite` contains the basics
> +needed by subclasses:
> +
> +    * Test suite and test case execution flow,
> +    * Testbed (SUT, TG) configuration,
> +    * Packet sending and verification,
> +    * Test case verification.
> +
> +The module also defines a function, :func:`get_test_suites`,
> +for gathering test suites from a Python module.
>  """
>
>  import importlib
> @@ -31,25 +42,44 @@
>
>
>  class TestSuite(object):
> -    """
> -    The base TestSuite class provides methods for handling basic flow of
> a test suite:
> -    * test case filtering and collection
> -    * test suite setup/cleanup
> -    * test setup/cleanup
> -    * test case execution
> -    * error handling and results storage
> -    Test cases are implemented by derived classes. Test cases are all
> methods
> -    starting with test_, further divided into performance test cases
> -    (starting with test_perf_) and functional test cases (all other test
> cases).
> -    By default, all test cases will be executed. A list of testcase str
> names
> -    may be specified in conf.yaml or on the command line
> -    to filter which test cases to run.
> -    The methods named [set_up|tear_down]_[suite|test_case] should be
> overridden
> -    in derived classes if the appropriate suite/test case fixtures are
> needed.
> +    """The base class with methods for handling the basic flow of a test
> suite.
> +
> +        * Test case filtering and collection,
> +        * Test suite setup/cleanup,
> +        * Test setup/cleanup,
> +        * Test case execution,
> +        * Error handling and results storage.
> +
> +    Test cases are implemented by subclasses. Test cases are all methods
> starting with ``test_``,
> +    further divided into performance test cases (starting with
> ``test_perf_``)
> +    and functional test cases (all other test cases).
> +
> +    By default, all test cases will be executed. A list of testcase names
> may be specified
> +    in the YAML test run configuration file and in the
> :option:`--test-cases` command line argument
> +    or in the :envvar:`DTS_TESTCASES` environment variable to filter
> which test cases to run.
> +    The union of both lists will be used. Any unknown test cases from the
> latter lists
> +    will be silently ignored.
> +
> +    If the :option:`--re-run` command line argument or the
> :envvar:`DTS_RERUN` environment variable
> +    is set, in case of a test case failure, the test case will be
> executed again until it passes
> +    or it fails that many times in addition to the first failure.
> +
> +    The methods named ``[set_up|tear_down]_[suite|test_case]`` should be
> overridden in subclasses
> +    if the appropriate test suite/test case fixtures are needed.
> +
> +    The test suite is aware of the testbed (the SUT and TG) it's running
> on. From this, it can
> +    properly choose the IP addresses and other configuration that must be
> tailored to the testbed.
> +
> +    Attributes:
> +        sut_node: The SUT node where the test suite is running.
> +        tg_node: The TG node where the test suite is running.
> +        is_blocking: Whether the test suite is blocking. A failure of a
> blocking test suite
> +            will block the execution of all subsequent test suites in the
> current build target.
>      """
>

Should this attribute section instead be comments in the form "#:" because
they are class variables instead of instance ones?
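
For reference, the "#:" form mentioned above would look like this (a sketch
reusing is_blocking from the quoted class):

    class TestSuite(object):
        #: Whether the test suite is blocking. A failure of a blocking test
        #: suite blocks all subsequent test suites in the current build target.
        is_blocking: bool = False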


>
>      sut_node: SutNode
> -    is_blocking = False
> +    tg_node: TGNode
> +    is_blocking: bool = False
>      _logger: DTSLOG
>      _test_cases_to_run: list[str]
>      _func: bool
> @@ -72,6 +102,19 @@ def __init__(
>          func: bool,
>          build_target_result: BuildTargetResult,
>      ):
> +        """Initialize the test suite testbed information and basic
> configuration.
> +
> +        Process what test cases to run, create the associated
> :class:`TestSuiteResult`,
> +        find links between ports and set up default IP addresses to be
> used when configuring them.
> +
> +        Args:
> +            sut_node: The SUT node where the test suite will run.
> +            tg_node: The TG node where the test suite will run.
> +            test_cases: The list of test cases to execute.
> +                If empty, all test cases will be executed.
> +            func: Whether to run functional tests.
> +            build_target_result: The build target result this test suite
> is run in.
> +        """
>          self.sut_node = sut_node
>          self.tg_node = tg_node
>          self._logger = getLogger(self.__class__.__name__)
> @@ -95,6 +138,7 @@ def __init__(
>          self._tg_ip_address_ingress = ip_interface("192.168.101.3/24")
>
>      def _process_links(self) -> None:
> +        """Construct links between SUT and TG ports."""
>          for sut_port in self.sut_node.ports:
>              for tg_port in self.tg_node.ports:
>                  if (sut_port.identifier, sut_port.peer) == (
> @@ -106,27 +150,42 @@ def _process_links(self) -> None:
>                      )
>
>      def set_up_suite(self) -> None:
> -        """
> -        Set up test fixtures common to all test cases; this is done before
> -        any test case is run.
> +        """Set up test fixtures common to all test cases.
> +
> +        This is done before any test case has been run.
>          """
>
>      def tear_down_suite(self) -> None:
> -        """
> -        Tear down the previously created test fixtures common to all test
> cases.
> +        """Tear down the previously created test fixtures common to all
> test cases.
> +
> +        This is done after all tests have been run.
>          """
>
>      def set_up_test_case(self) -> None:
> -        """
> -        Set up test fixtures before each test case.
> +        """Set up test fixtures before each test case.
> +
> +        This is done before *each* test case.
>          """
>
>      def tear_down_test_case(self) -> None:
> -        """
> -        Tear down the previously created test fixtures after each test
> case.
> +        """Tear down the previously created test fixtures after each test
> case.
> +
> +        This is done after *each* test case.
>          """
>
>      def configure_testbed_ipv4(self, restore: bool = False) -> None:
> +        """Configure IPv4 addresses on all testbed ports.
> +
> +        The configured ports are:
> +
> +        * SUT ingress port,
> +        * SUT egress port,
> +        * TG ingress port,
> +        * TG egress port.
> +
> +        Args:
> +            restore: If :data:`True`, will remove the configuration
> instead.
> +        """
>          delete = True if restore else False
>          enable = False if restore else True
>          self._configure_ipv4_forwarding(enable)
> @@ -153,11 +212,13 @@ def _configure_ipv4_forwarding(self, enable: bool) -> None:
>      def send_packet_and_capture(
>          self, packet: Packet, duration: float = 1
>      ) -> list[Packet]:
> -        """
> -        Send a packet through the appropriate interface and
> -        receive on the appropriate interface.
> -        Modify the packet with l3/l2 addresses corresponding
> -        to the testbed and desired traffic.
> +        """Send and receive `packet` using the associated TG.
> +
> +        Send `packet` through the appropriate interface and receive on
> the appropriate interface.
> +        Modify the packet with l3/l2 addresses corresponding to the
> testbed and desired traffic.
> +
> +        Returns:
> +            A list of received packets.
>          """
>          packet = self._adjust_addresses(packet)
>          return self.tg_node.send_packet_and_capture(
> @@ -165,13 +226,25 @@ def send_packet_and_capture(
>          )
>
>      def get_expected_packet(self, packet: Packet) -> Packet:
> +        """Inject the proper L2/L3 addresses into `packet`.
> +
> +        Args:
> +            packet: The packet to modify.
> +
> +        Returns:
> +            `packet` with injected L2/L3 addresses.
> +        """
>          return self._adjust_addresses(packet, expected=True)
>
>      def _adjust_addresses(self, packet: Packet, expected: bool = False) -> Packet:
> -        """
> +        """L2 and L3 address additions in both directions.
> +
>          Assumptions:
> -            Two links between SUT and TG, one link is TG -> SUT,
> -            the other SUT -> TG.
> +            Two links between SUT and TG, one link is TG -> SUT, the
> other SUT -> TG.
> +
> +        Args:
> +            packet: The packet to modify.
> +            expected: If :data:`True`, the direction is SUT -> TG,
> otherwise the direction is TG -> SUT.
>          """
>          if expected:
>              # The packet enters the TG from SUT
> @@ -197,6 +270,19 @@ def _adjust_addresses(self, packet: Packet, expected: bool = False) -> Packet:
>          return Ether(packet.build())
>
>      def verify(self, condition: bool, failure_description: str) -> None:
> +        """Verify `condition` and handle failures.
> +
> +        When `condition` is :data:`False`, raise an exception and log the
> last 10 commands
> +        executed on both the SUT and TG.
> +
> +        Args:
> +            condition: The condition to check.
> +            failure_description: A short description of the failure
> +                that will be stored in the raised exception.
> +
> +        Raises:
> +            TestCaseVerifyError: `condition` is :data:`False`.
> +        """
>          if not condition:
>              self._fail_test_case_verify(failure_description)
>
> @@ -216,6 +302,19 @@ def _fail_test_case_verify(self, failure_description: str) -> None:
>      def verify_packets(
>          self, expected_packet: Packet, received_packets: list[Packet]
>      ) -> None:
> +        """Verify that `expected_packet` has been received.
> +
> +        Go through `received_packets` and check that `expected_packet` is
> among them.
> +        If not, raise an exception and log the last 10 commands
> +        executed on both the SUT and TG.
> +
> +        Args:
> +            expected_packet: The packet we're expecting to receive.
> +            received_packets: The packets where we're looking for
> `expected_packet`.
> +
> +        Raises:
> +            TestCaseVerifyError: `expected_packet` is not among
> `received_packets`.
> +        """
>          for received_packet in received_packets:
>              if self._compare_packets(expected_packet, received_packet):
>                  break
> @@ -303,10 +402,14 @@ def _verify_l3_packet(self, received_packet: IP,
> expected_packet: IP) -> bool:
>          return True
>
>      def run(self) -> None:
> -        """
> -        Setup, execute and teardown the whole suite.
> -        Suite execution consists of running all test cases scheduled to
> be executed.
> -        A test cast run consists of setup, execution and teardown of said
> test case.
> +        """Set up, execute and tear down the whole suite.
> +
> +        Test suite execution consists of running all test cases scheduled
> to be executed.
> +        A test case run consists of setup, execution and teardown of said
> test case.
> +
> +        Record the setup and the teardown and handle failures.
> +
> +        The list of scheduled test cases is constructed when creating the
> :class:`TestSuite` object.
>          """
>          test_suite_name = self.__class__.__name__
>
> @@ -338,9 +441,7 @@ def run(self) -> None:
>                  raise BlockingTestSuiteError(test_suite_name)
>
>      def _execute_test_suite(self) -> None:
> -        """
> -        Execute all test cases scheduled to be executed in this suite.
> -        """
> +        """Execute all test cases scheduled to be executed in this
> suite."""
>          if self._func:
>              for test_case_method in self._get_functional_test_cases():
>                  test_case_name = test_case_method.__name__
> @@ -357,14 +458,18 @@ def _execute_test_suite(self) -> None:
>                      self._run_test_case(test_case_method,
> test_case_result)
>
>      def _get_functional_test_cases(self) -> list[MethodType]:
> -        """
> -        Get all functional test cases.
> +        """Get all functional test cases defined in this TestSuite.
> +
> +        Returns:
> +            The list of functional test cases of this TestSuite.
>          """
>          return self._get_test_cases(r"test_(?!perf_)")
>
>      def _get_test_cases(self, test_case_regex: str) -> list[MethodType]:
> -        """
> -        Return a list of test cases matching test_case_regex.
> +        """Return a list of test cases matching test_case_regex.
> +
> +        Returns:
> +            The list of test cases matching test_case_regex of this
> TestSuite.
>          """
>          self._logger.debug(f"Searching for test cases in
> {self.__class__.__name__}.")
>          filtered_test_cases = []
> @@ -378,9 +483,7 @@ def _get_test_cases(self, test_case_regex: str) ->
> list[MethodType]:
>          return filtered_test_cases
>
>      def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool:
> -        """
> -        Check whether the test case should be executed.
> -        """
> +        """Check whether the test case should be scheduled to be
> executed."""
>          match = bool(re.match(test_case_regex, test_case_name))
>          if self._test_cases_to_run:
>              return match and test_case_name in self._test_cases_to_run
> @@ -390,9 +493,9 @@ def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool
>      def _run_test_case(
>          self, test_case_method: MethodType, test_case_result: TestCaseResult
>      ) -> None:
> -        """
> -        Setup, execute and teardown a test case in this suite.
> -        Exceptions are caught and recorded in logs and results.
> +        """Setup, execute and teardown a test case in this suite.
> +
> +        Record the result of the setup and the teardown and handle
> failures.
>          """
>          test_case_name = test_case_method.__name__
>
> @@ -427,9 +530,7 @@ def _run_test_case(
>      def _execute_test_case(
>          self, test_case_method: MethodType, test_case_result: TestCaseResult
>      ) -> None:
> -        """
> -        Execute one test case and handle failures.
> -        """
> +        """Execute one test case, record the result and handle
> failures."""
>          test_case_name = test_case_method.__name__
>          try:
>              self._logger.info(f"Starting test case execution:
> {test_case_name}")
> @@ -452,6 +553,18 @@ def _execute_test_case(
>
>
>  def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
> +    r"""Find all :class:`TestSuite`\s in a Python module.
> +
> +    Args:
> +        testsuite_module_path: The path to the Python module.
> +
> +    Returns:
> +        The list of :class:`TestSuite`\s found within the Python module.
> +
> +    Raises:
> +        ConfigurationError: The test suite module was not found.
> +    """
> +
>      def is_test_suite(object: Any) -> bool:
>          try:
>              if issubclass(object, TestSuite) and object is not TestSuite:
> --
> 2.34.1
>
>


* Re: [PATCH v7 09/21] dts: test result docstring update
  2023-11-15 13:09               ` [PATCH v7 09/21] dts: test result " Juraj Linkeš
@ 2023-11-16 22:47                 ` Jeremy Spewock
  2023-11-20 16:33                   ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Jeremy Spewock @ 2023-11-16 22:47 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: thomas, Honnappa.Nagarahalli, probb, paul.szczepanek, yoan.picchi, dev


The only comments I had on this were a few places where I think the
attribute sections should be class variables instead. I tried to mark all
of the places I saw; because of the way these classes are subclassed they
might be handled differently, but I'm unsure.

On Wed, Nov 15, 2023 at 8:12 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
wrote:

> Format according to the Google format and PEP257, with slight
> deviations.
>
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>  dts/framework/test_result.py | 292 ++++++++++++++++++++++++++++-------
>  1 file changed, 234 insertions(+), 58 deletions(-)
>
> diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
> index 603e18872c..05e210f6e7 100644
> --- a/dts/framework/test_result.py
> +++ b/dts/framework/test_result.py
> @@ -2,8 +2,25 @@
>  # Copyright(c) 2023 PANTHEON.tech s.r.o.
>  # Copyright(c) 2023 University of New Hampshire
>
> -"""
> -Generic result container and reporters
> +r"""Record and process DTS results.
> +
> +The results are recorded in a hierarchical manner:
> +
> +    * :class:`DTSResult` contains
> +    * :class:`ExecutionResult` contains
> +    * :class:`BuildTargetResult` contains
> +    * :class:`TestSuiteResult` contains
> +    * :class:`TestCaseResult`
> +
> +Each result may contain multiple lower level results, e.g. there are
> multiple
> +:class:`TestSuiteResult`\s in a :class:`BuildTargetResult`.
> +The results have common parts, such as setup and teardown results,
> captured in :class:`BaseResult`,
> +which also defines some common behaviors in its methods.
> +
> +Each result class has its own idiosyncrasies which they implement in
> overridden methods.
> +
> +The :option:`--output` command line argument and the
> :envvar:`DTS_OUTPUT_DIR` environment
> +variable modify the directory where the files with results will be stored.
>  """
>
>  import os.path
> @@ -26,26 +43,34 @@
>
>
>  class Result(Enum):
> -    """
> -    An Enum defining the possible states that
> -    a setup, a teardown or a test case may end up in.
> -    """
> +    """The possible states that a setup, a teardown or a test case may
> end up in."""
>
> +    #:
>      PASS = auto()
> +    #:
>      FAIL = auto()
> +    #:
>      ERROR = auto()
> +    #:
>      SKIP = auto()
>
>      def __bool__(self) -> bool:
> +        """Only PASS is True."""
>          return self is self.PASS
>
>
>  class FixtureResult(object):
> -    """
> -    A record that stored the result of a setup or a teardown.
> -    The default is FAIL because immediately after creating the object
> -    the setup of the corresponding stage will be executed, which also
> guarantees
> -    the execution of teardown.
> +    """A record that stores the result of a setup or a teardown.
> +
> +    FAIL is a sensible default since it prevents false positives
> +        (which could happen if the default was PASS).
> +
> +    Preventing false positives or other false results is preferable since
> a failure
> +    is most likely to be investigated (the other false results may not
> be investigated at all).
> +
> +    Attributes:
> +        result: The associated result.
> +        error: The error in case of a failure.
>      """
>

I think the items in the attributes section should instead be "#:" because
they are class variables.


>
>      result: Result
> @@ -56,21 +81,32 @@ def __init__(
>          result: Result = Result.FAIL,
>          error: Exception | None = None,
>      ):
> +        """Initialize the constructor with the fixture result and store a
> possible error.
> +
> +        Args:
> +            result: The result to store.
> +            error: The error which happened when a failure occurred.
> +        """
>          self.result = result
>          self.error = error
>
>      def __bool__(self) -> bool:
> +        """A wrapper around the stored :class:`Result`."""
>          return bool(self.result)
>
>
>  class Statistics(dict):
> -    """
> -    A helper class used to store the number of test cases by its result
> -    along a few other basic information.
> -    Using a dict provides a convenient way to format the data.
> +    """How many test cases ended in which result state along some other
> basic information.
> +
> +    Subclassing :class:`dict` provides a convenient way to format the
> data.
>      """
>
>      def __init__(self, dpdk_version: str | None):
> +        """Extend the constructor with relevant keys.
> +
> +        Args:
> +            dpdk_version: The version of tested DPDK.
> +        """
>

Should we maybe mark the "PASS RATE" and the "DPDK VERSION" as instance
variables of the class?
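
As a side note, since Statistics subclasses dict, "PASS RATE" and
"DPDK VERSION" end up as dict items rather than attributes; a small sketch
of the difference (assuming the quoted __init__ is the whole constructor):

    stats = Statistics(dpdk_version="23.11")
    stats["DPDK VERSION"]  # dict item set in __init__
    stats.dpdk_version     # AttributeError: never set as an attribute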


>          super(Statistics, self).__init__()
>          for result in Result:
>              self[result.name] = 0
> @@ -78,8 +114,17 @@ def __init__(self, dpdk_version: str | None):
>          self["DPDK VERSION"] = dpdk_version
>
>      def __iadd__(self, other: Result) -> "Statistics":
> -        """
> -        Add a Result to the final count.
> +        """Add a Result to the final count.
> +
> +        Example:
> +            stats: Statistics = Statistics()  # empty Statistics
> +            stats += Result.PASS  # add a Result to `stats`
> +
> +        Args:
> +            other: The Result to add to this statistics object.
> +
> +        Returns:
> +            The modified statistics object.
>          """
>          self[other.name] += 1
>          self["PASS RATE"] = (
> @@ -90,9 +135,7 @@ def __iadd__(self, other: Result) -> "Statistics":
>          return self
>
>      def __str__(self) -> str:
> -        """
> -        Provide a string representation of the data.
> -        """
> +        """Each line contains the formatted key = value pair."""
>          stats_str = ""
>          for key, value in self.items():
>              stats_str += f"{key:<12} = {value}\n"
> @@ -102,10 +145,16 @@ def __str__(self) -> str:
>
>
>  class BaseResult(object):
> -    """
> -    The Base class for all results. Stores the results of
> -    the setup and teardown portions of the corresponding stage
> -    and a list of results from each inner stage in _inner_results.
> +    """Common data and behavior of DTS results.
> +
> +    Stores the results of the setup and teardown portions of the
> corresponding stage.
> +    The hierarchical nature of DTS results is captured recursively in an
> internal list.
> +    A stage is each level in this particular hierarchy (pre-execution or
> the top-most level,
> +    execution, build target, test suite and test case.)
> +
> +    Attributes:
> +        setup_result: The result of the setup of the particular stage.
> +        teardown_result: The results of the teardown of the particular
> stage.
>      """
>

I think this might be another case where the attributes should be marked as
class variables instead of instance variables.


>
>      setup_result: FixtureResult
> @@ -113,15 +162,28 @@ class BaseResult(object):
>      _inner_results: MutableSequence["BaseResult"]
>
>      def __init__(self):
> +        """Initialize the constructor."""
>          self.setup_result = FixtureResult()
>          self.teardown_result = FixtureResult()
>          self._inner_results = []
>
>      def update_setup(self, result: Result, error: Exception | None = None) -> None:
> +        """Store the setup result.
> +
> +        Args:
> +            result: The result of the setup.
> +            error: The error that occurred in case of a failure.
> +        """
>          self.setup_result.result = result
>          self.setup_result.error = error
>
>      def update_teardown(self, result: Result, error: Exception | None = None) -> None:
> +        """Store the teardown result.
> +
> +        Args:
> +            result: The result of the teardown.
> +            error: The error that occurred in case of a failure.
> +        """
>          self.teardown_result.result = result
>          self.teardown_result.error = error
>
> @@ -141,27 +203,55 @@ def _get_inner_errors(self) -> list[Exception]:
>          ]
>
>      def get_errors(self) -> list[Exception]:
> +        """Compile errors from the whole result hierarchy.
> +
> +        Returns:
> +            The errors from setup, teardown and all errors found in the
> whole result hierarchy.
> +        """
>          return self._get_setup_teardown_errors() + self._get_inner_errors()
>
>      def add_stats(self, statistics: Statistics) -> None:
> +        """Collate stats from the whole result hierarchy.
> +
> +        Args:
> +            statistics: The :class:`Statistics` object where the stats
> will be collated.
> +        """
>          for inner_result in self._inner_results:
>              inner_result.add_stats(statistics)
>
>
>  class TestCaseResult(BaseResult, FixtureResult):
> -    """
> -    The test case specific result.
> -    Stores the result of the actual test case.
> -    Also stores the test case name.
> +    r"""The test case specific result.
> +
> +    Stores the result of the actual test case. This is done by adding an
> extra superclass
> +    in :class:`FixtureResult`. The setup and teardown results are
> :class:`FixtureResult`\s and
> +    the class is itself a record of the test case.
> +
> +    Attributes:
> +        test_case_name: The test case name.
>      """
>
>
Another spot where I think this should have a class variable comment.


>      test_case_name: str
>
>      def __init__(self, test_case_name: str):
> +        """Extend the constructor with `test_case_name`.
> +
> +        Args:
> +            test_case_name: The test case's name.
> +        """
>          super(TestCaseResult, self).__init__()
>          self.test_case_name = test_case_name
>
>      def update(self, result: Result, error: Exception | None = None) -> None:
> +        """Update the test case result.
> +
> +        This updates the result of the test case itself and doesn't affect
> +        the results of the setup and teardown steps in any way.
> +
> +        Args:
> +            result: The result of the test case.
> +            error: The error that occurred in case of a failure.
> +        """
>          self.result = result
>          self.error = error
>
> @@ -171,38 +261,66 @@ def _get_inner_errors(self) -> list[Exception]:
>          return []
>
>      def add_stats(self, statistics: Statistics) -> None:
> +        r"""Add the test case result to statistics.
> +
> +        The base method goes through the hierarchy recursively and this
> method is here to stop
> +        the recursion, as the :class:`TestCaseResult`\s are the leaves of
> the hierarchy tree.
> +
> +        Args:
> +            statistics: The :class:`Statistics` object where the stats
> will be added.
> +        """
>          statistics += self.result
>
>      def __bool__(self) -> bool:
> +        """The test case passed only if setup, teardown and the test case
> itself passed."""
>          return (
>              bool(self.setup_result) and bool(self.teardown_result) and bool(self.result)
>          )
>
>
>  class TestSuiteResult(BaseResult):
> -    """
> -    The test suite specific result.
> -    The _inner_results list stores results of test cases in a given test
> suite.
> -    Also stores the test suite name.
> +    """The test suite specific result.
> +
> +    The internal list stores the results of all test cases in a given
> test suite.
> +
> +    Attributes:
> +        suite_name: The test suite name.
>      """
>
>
I think this should also be a class variable.



>      suite_name: str
>
>      def __init__(self, suite_name: str):
> +        """Extend the constructor with `suite_name`.
> +
> +        Args:
> +            suite_name: The test suite's name.
> +        """
>          super(TestSuiteResult, self).__init__()
>          self.suite_name = suite_name
>
>      def add_test_case(self, test_case_name: str) -> TestCaseResult:
> +        """Add and return the inner result (test case).
> +
> +        Returns:
> +            The test case's result.
> +        """
>          test_case_result = TestCaseResult(test_case_name)
>          self._inner_results.append(test_case_result)
>          return test_case_result
>
>
>  class BuildTargetResult(BaseResult):
> -    """
> -    The build target specific result.
> -    The _inner_results list stores results of test suites in a given
> build target.
> -    Also stores build target specifics, such as compiler used to build
> DPDK.
> +    """The build target specific result.
> +
> +    The internal list stores the results of all test suites in a given
> build target.
> +
> +    Attributes:
> +        arch: The DPDK build target architecture.
> +        os: The DPDK build target operating system.
> +        cpu: The DPDK build target CPU.
> +        compiler: The DPDK build target compiler.
> +        compiler_version: The DPDK build target compiler version.
> +        dpdk_version: The built DPDK version.
>      """
>

I think this should be broken into class variables as well.


>
>      arch: Architecture
> @@ -213,6 +331,11 @@ class BuildTargetResult(BaseResult):
>      dpdk_version: str | None
>
>      def __init__(self, build_target: BuildTargetConfiguration):
> +        """Extend the constructor with the `build_target`'s build target
> config.
> +
> +        Args:
> +            build_target: The build target's test run configuration.
> +        """
>          super(BuildTargetResult, self).__init__()
>          self.arch = build_target.arch
>          self.os = build_target.os
> @@ -222,20 +345,35 @@ def __init__(self, build_target: BuildTargetConfiguration):
>          self.dpdk_version = None
>
>      def add_build_target_info(self, versions: BuildTargetInfo) -> None:
> +        """Add information about the build target gathered at runtime.
> +
> +        Args:
> +            versions: The additional information.
> +        """
>          self.compiler_version = versions.compiler_version
>          self.dpdk_version = versions.dpdk_version
>
>      def add_test_suite(self, test_suite_name: str) -> TestSuiteResult:
> +        """Add and return the inner result (test suite).
> +
> +        Returns:
> +            The test suite's result.
> +        """
>          test_suite_result = TestSuiteResult(test_suite_name)
>          self._inner_results.append(test_suite_result)
>          return test_suite_result
>
>
>  class ExecutionResult(BaseResult):
> -    """
> -    The execution specific result.
> -    The _inner_results list stores results of build targets in a given execution.
> -    Also stores the SUT node configuration.
> +    """The execution specific result.
> +
> +    The internal list stores the results of all build targets in a given execution.
> +
> +    Attributes:
> +        sut_node: The SUT node used in the execution.
> +        sut_os_name: The operating system of the SUT node.
> +        sut_os_version: The operating system version of the SUT node.
> +        sut_kernel_version: The operating system kernel version of the SUT node.
>      """
>
>
I think these should be class variables as well.


>      sut_node: NodeConfiguration
> @@ -244,36 +382,55 @@ class ExecutionResult(BaseResult):
>      sut_kernel_version: str
>
>      def __init__(self, sut_node: NodeConfiguration):
> +        """Extend the constructor with the `sut_node`'s config.
> +
> +        Args:
> +            sut_node: The SUT node's test run configuration used in the execution.
> +        """
>          super(ExecutionResult, self).__init__()
>          self.sut_node = sut_node
>
>      def add_build_target(
>          self, build_target: BuildTargetConfiguration
>      ) -> BuildTargetResult:
> +        """Add and return the inner result (build target).
> +
> +        Args:
> +            build_target: The build target's test run configuration.
> +
> +        Returns:
> +            The build target's result.
> +        """
>          build_target_result = BuildTargetResult(build_target)
>          self._inner_results.append(build_target_result)
>          return build_target_result
>
>      def add_sut_info(self, sut_info: NodeInfo) -> None:
> +        """Add SUT information gathered at runtime.
> +
> +        Args:
> +            sut_info: The additional SUT node information.
> +        """
>          self.sut_os_name = sut_info.os_name
>          self.sut_os_version = sut_info.os_version
>          self.sut_kernel_version = sut_info.kernel_version
>
>
>  class DTSResult(BaseResult):
> -    """
> -    Stores environment information and test results from a DTS run, which are:
> -    * Execution level information, such as SUT and TG hardware.
> -    * Build target level information, such as compiler, target OS and cpu.
> -    * Test suite results.
> -    * All errors that are caught and recorded during DTS execution.
> +    """Stores environment information and test results from a DTS run.
>
> -    The information is stored in nested objects.
> +        * Execution level information, such as testbed and the test suite list,
> +        * Build target level information, such as compiler, target OS and cpu,
> +        * Test suite and test case results,
> +        * All errors that are caught and recorded during DTS execution.
>
> -    The class is capable of computing the return code used to exit DTS with
> -    from the stored error.
> +    The information is stored hierarchically. This is the first level of the hierarchy
> +    and as such is where the data from the whole hierarchy is collated or processed.
>
> -    It also provides a brief statistical summary of passed/failed test cases.
> +    The internal list stores the results of all executions.
> +
> +    Attributes:
> +        dpdk_version: The DPDK version to record.
>      """
>
>
I think this should be a class variable as well.


>      dpdk_version: str | None
> @@ -284,6 +441,11 @@ class DTSResult(BaseResult):
>      _stats_filename: str
>
>      def __init__(self, logger: DTSLOG):
> +        """Extend the constructor with top-level specifics.
> +
> +        Args:
> +            logger: The logger instance the whole result will use.
> +        """
>          super(DTSResult, self).__init__()
>          self.dpdk_version = None
>          self._logger = logger
> @@ -293,21 +455,33 @@ def __init__(self, logger: DTSLOG):
>          self._stats_filename = os.path.join(SETTINGS.output_dir, "statistics.txt")
>
>      def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult:
> +        """Add and return the inner result (execution).
> +
> +        Args:
> +            sut_node: The SUT node's test run configuration.
> +
> +        Returns:
> +            The execution's result.
> +        """
>          execution_result = ExecutionResult(sut_node)
>          self._inner_results.append(execution_result)
>          return execution_result
>
>      def add_error(self, error: Exception) -> None:
> +        """Record an error that occurred outside any execution.
> +
> +        Args:
> +            error: The exception to record.
> +        """
>          self._errors.append(error)
>
>      def process(self) -> None:
> -        """
> -        Process the data after a DTS run.
> -        The data is added to nested objects during runtime and this parent object
> -        is not updated at that time. This requires us to process the nested data
> -        after it's all been gathered.
> +        """Process the data after a whole DTS run.
> +
> +        The data is added to inner objects during runtime and this object is not updated
> +        at that time. This requires us to process the inner data after it's all been gathered.
>
> -        The processing gathers all errors and the result statistics of test cases.
> +        The processing gathers all errors and the statistics of test case results.
>          """
>          self._errors += self.get_errors()
>          if self._errors and self._logger:
> @@ -321,8 +495,10 @@ def process(self) -> None:
>              stats_file.write(str(self._stats_result))
>
>      def get_return_code(self) -> int:
> -        """
> -        Go through all stored Exceptions and return the highest error code found.
> +        """Go through all stored Exceptions and return the final DTS error code.
> +
> +        Returns:
> +            The highest error code found.
>          """
>          for error in self._errors:
>              error_return_code = ErrorSeverity.GENERIC_ERR
> --
> 2.34.1
>
>


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 21/21] dts: test suites docstring update
  2023-11-16 17:36                 ` Yoan Picchi
@ 2023-11-20 10:17                   ` Juraj Linkeš
  2023-11-20 12:50                     ` Yoan Picchi
  0 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-20 10:17 UTC (permalink / raw)
  To: Yoan Picchi
  Cc: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek, dev

On Thu, Nov 16, 2023 at 6:36 PM Yoan Picchi <yoan.picchi@foss.arm.com> wrote:
>
> On 11/15/23 13:09, Juraj Linkeš wrote:
> > Format according to the Google format and PEP257, with slight
> > deviations.
> >
> > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > ---
> >   dts/tests/TestSuite_hello_world.py | 16 +++++----
> >   dts/tests/TestSuite_os_udp.py      | 19 +++++++----
> >   dts/tests/TestSuite_smoke_tests.py | 53 +++++++++++++++++++++++++++---
> >   3 files changed, 70 insertions(+), 18 deletions(-)
> >
> > diff --git a/dts/tests/TestSuite_hello_world.py b/dts/tests/TestSuite_hello_world.py
> > index 7e3d95c0cf..662a8f8726 100644
> > --- a/dts/tests/TestSuite_hello_world.py
> > +++ b/dts/tests/TestSuite_hello_world.py
> > @@ -1,7 +1,8 @@
> >   # SPDX-License-Identifier: BSD-3-Clause
> >   # Copyright(c) 2010-2014 Intel Corporation
> >
> > -"""
> > +"""The DPDK hello world app test suite.
> > +
> >   Run the helloworld example app and verify it prints a message for each used core.
> >   No other EAL parameters apart from cores are used.
> >   """
> > @@ -15,22 +16,25 @@
> >
> >
> >   class TestHelloWorld(TestSuite):
> > +    """DPDK hello world app test suite."""
> > +
> >       def set_up_suite(self) -> None:
> > -        """
> > +        """Set up the test suite.
> > +
> >           Setup:
> >               Build the app we're about to test - helloworld.
> >           """
> >           self.app_helloworld_path = self.sut_node.build_dpdk_app("helloworld")
> >
> >       def test_hello_world_single_core(self) -> None:
> > -        """
> > +        """Single core test case.
> > +
> >           Steps:
> >               Run the helloworld app on the first usable logical core.
> >           Verify:
> >               The app prints a message from the used core:
> >               "hello from core <core_id>"
> >           """
> > -
> >           # get the first usable core
> >           lcore_amount = LogicalCoreCount(1, 1, 1)
> >           lcores = LogicalCoreCountFilter(self.sut_node.lcores, lcore_amount).filter()
> > @@ -44,14 +48,14 @@ def test_hello_world_single_core(self) -> None:
> >           )
> >
> >       def test_hello_world_all_cores(self) -> None:
> > -        """
> > +        """All cores test case.
> > +
> >           Steps:
> >               Run the helloworld app on all usable logical cores.
> >           Verify:
> >               The app prints a message from all used cores:
> >               "hello from core <core_id>"
> >           """
> > -
> >           # get the maximum logical core number
> >           eal_para = self.sut_node.create_eal_parameters(
> >               lcore_filter_specifier=LogicalCoreList(self.sut_node.lcores)
> > diff --git a/dts/tests/TestSuite_os_udp.py b/dts/tests/TestSuite_os_udp.py
> > index bf6b93deb5..e0c5239612 100644
> > --- a/dts/tests/TestSuite_os_udp.py
> > +++ b/dts/tests/TestSuite_os_udp.py
> > @@ -1,7 +1,8 @@
> >   # SPDX-License-Identifier: BSD-3-Clause
> >   # Copyright(c) 2023 PANTHEON.tech s.r.o.
> >
> > -"""
> > +"""Basic IPv4 OS routing test suite.
> > +
> >   Configure SUT node to route traffic from if1 to if2.
> >   Send a packet to the SUT node, verify it comes back on the second port on the TG node.
> >   """
> > @@ -13,24 +14,27 @@
> >
> >
> >   class TestOSUdp(TestSuite):
> > +    """IPv4 UDP OS routing test suite."""
> > +
> >       def set_up_suite(self) -> None:
> > -        """
> > +        """Set up the test suite.
> > +
> >           Setup:
> > -            Configure SUT ports and SUT to route traffic from if1 to if2.
> > +            Bind the SUT ports to the OS driver, configure the ports and configure the SUT
> > +            to route traffic from if1 to if2.
> >           """
> >
> > -        # This test uses kernel drivers
> >           self.sut_node.bind_ports_to_driver(for_dpdk=False)
> >           self.configure_testbed_ipv4()
> >
> >       def test_os_udp(self) -> None:
> > -        """
> > +        """Basic UDP IPv4 traffic test case.
> > +
> >           Steps:
> >               Send a UDP packet.
> >           Verify:
> >               The packet with proper addresses arrives at the other TG port.
> >           """
> > -
> >           packet = Ether() / IP() / UDP()
> >
> >           received_packets = self.send_packet_and_capture(packet)
> > @@ -40,7 +44,8 @@ def test_os_udp(self) -> None:
> >           self.verify_packets(expected_packet, received_packets)
> >
> >       def tear_down_suite(self) -> None:
> > -        """
> > +        """Tear down the test suite.
> > +
> >           Teardown:
> >               Remove the SUT port configuration configured in setup.
> >           """
> > diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py
> > index e8016d1b54..6fae099a0e 100644
> > --- a/dts/tests/TestSuite_smoke_tests.py
> > +++ b/dts/tests/TestSuite_smoke_tests.py
> > @@ -1,6 +1,17 @@
> >   # SPDX-License-Identifier: BSD-3-Clause
> >   # Copyright(c) 2023 University of New Hampshire
> >
> > +"""Smoke test suite.
> > +
> > +Smoke tests are a class of tests which are used for validating a minimal set of important features.
> > +These are the most important features without which (or when they're faulty) the software wouldn't
> > +work properly. Thus, if any failure occurs while testing these features,
> > +there isn't that much of a reason to continue testing, as the software is fundamentally broken.
> > +
> > +These tests don't have to include only DPDK tests, as the reason for failures could be
> > +in the infrastructure (a faulty link between NICs or a misconfiguration).
> > +"""
> > +
> >   import re
> >
> >   from framework.config import PortConfig
> > @@ -11,13 +22,25 @@
> >
> >
> >   class SmokeTests(TestSuite):
> > +    """DPDK and infrastructure smoke test suite.
> > +
> > +    The test cases validate the most basic DPDK functionality needed for all other test suites.
> > +    The infrastructure also needs to be tested, as that is also used by all other test suites.
> > +
> > +    Attributes:
> > +        is_blocking: This test suite will block the execution of all other test suites
> > +            in the build target after it.
> > +        nics_in_node: The NICs present on the SUT node.
> > +    """
> > +
> >       is_blocking = True
> >       # dicts in this list are expected to have two keys:
> >       # "pci_address" and "current_driver"
> >       nics_in_node: list[PortConfig] = []
> >
> >       def set_up_suite(self) -> None:
> > -        """
> > +        """Set up the test suite.
> > +
> >           Setup:
> >               Set the build directory path and generate a list of NICs in the SUT node.
> >           """
> > @@ -25,7 +48,13 @@ def set_up_suite(self) -> None:
> >           self.nics_in_node = self.sut_node.config.ports
> >
> >       def test_unit_tests(self) -> None:
> > -        """
> > +        """DPDK meson fast-tests unit tests.
> > +
> > +        The DPDK unit tests are basic tests that indicate regressions and other critical failures.
> > +        These need to be addressed before other testing.
> > +
> > +        The fast-tests unit tests are a subset with only the most basic tests.
> > +
> >           Test:
> >               Run the fast-test unit-test suite through meson.
> >           """
> > @@ -37,7 +66,14 @@ def test_unit_tests(self) -> None:
> >           )
> >
> >       def test_driver_tests(self) -> None:
> > -        """
> > +        """DPDK meson driver-tests unit tests.
> > +
>
> This looks like a copy-paste from the previous unit test into the driver
> tests. If it is on purpose, as both are considered unit tests, then note
> that the previous function is test_unit_tests and deals with fast-tests.
>

I'm not sure what you mean. The two are separate tests (one runs the
fast-tests unit test suite, the other the driver-tests unit test suite)
and the docstrings do capture the differences.
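
For context, the two cases map to different meson test suites. Assuming
the usual DPDK invocation, they correspond roughly to:

    meson test -C <meson_build_dir> --suite fast-tests
    meson test -C <meson_build_dir> --suite driver-tests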

> > +
> > +        The driver-tests unit tests are a subset that test only drivers. These may be run
> > +        with virtual devices as well.
> > +
> >           Test:
> >               Run the driver-test unit-test suite through meson.
> >           """
> > @@ -63,7 +99,10 @@ def test_driver_tests(self) -> None:
> >           )
> >
> >       def test_devices_listed_in_testpmd(self) -> None:
> > -        """
> > +        """Testpmd device discovery.
> > +
> > +        If the configured devices can't be found in testpmd, they can't be tested.
>
> Maybe a bit nitpicky. This is more of a statement as to why the test
> exists than a description of the test. Suggestion: "Tests that the
> configured devices can be found in testpmd. If they aren't, the
> configuration might be wrong and tests might be skipped"
>

This is more of a reason why this particular test is a smoke test.
Since a smoke test failure results in all test suites being blocked,
this seemed like key information.

We also don't have an exact format for what should be included in
test case/suite documentation. We should use this opportunity to
document what we deem important in these test cases at this point in
time and improve the docs as we continue adding test cases. We can add
more custom sections (such as the "Setup:" and "Test:" sections,
which can be added to Sphinx); I like adding a section with an
explanation of why a test is a particular type of test (in this case,
a smoke test). The regular body could contain a description as you
suggested. What do you think?
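
As a sketch of what registering such sections could look like in the
Sphinx configuration (the section names here are only placeholders,
since the exact set is what we'd be deciding):

    # doc/guides/conf.py
    extensions = ["sphinx.ext.napoleon"]

    # Plain strings add a generic section with that title to the
    # Google-style docstring sections napoleon recognizes.
    napoleon_custom_sections = ["Setup", "Teardown", "Steps", "Verify", "Test"]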

> > +
> >           Test:
> >               Uses testpmd driver to verify that devices have been found by testpmd.
> >           """
> > @@ -79,7 +118,11 @@ def test_devices_listed_in_testpmd(self) -> None:
> >               )
> >
> >       def test_device_bound_to_driver(self) -> None:
> > -        """
> > +        """Device driver in OS.
> > +
> > +        The devices must be bound to the proper driver, otherwise they can't be used by DPDK
> > +        or the traffic generators.
>
> Same as the previous comment. It is more of a statement as to why the
> test exists than a description of the test.
>

Ack.

> > +
> >           Test:
> >               Ensure that all drivers listed in the config are bound to the correct
> >               driver.

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 21/21] dts: test suites docstring update
  2023-11-20 10:17                   ` Juraj Linkeš
@ 2023-11-20 12:50                     ` Yoan Picchi
  2023-11-22 13:40                       ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Yoan Picchi @ 2023-11-20 12:50 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek, dev

On 11/20/23 10:17, Juraj Linkeš wrote:
> On Thu, Nov 16, 2023 at 6:36 PM Yoan Picchi <yoan.picchi@foss.arm.com> wrote:
>>
>> On 11/15/23 13:09, Juraj Linkeš wrote:
>>> Format according to the Google format and PEP257, with slight
>>> deviations.
>>>
>>> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
>>> ---
>>>    dts/tests/TestSuite_hello_world.py | 16 +++++----
>>>    dts/tests/TestSuite_os_udp.py      | 19 +++++++----
>>>    dts/tests/TestSuite_smoke_tests.py | 53 +++++++++++++++++++++++++++---
>>>    3 files changed, 70 insertions(+), 18 deletions(-)
>>>
>>> diff --git a/dts/tests/TestSuite_hello_world.py b/dts/tests/TestSuite_hello_world.py
>>> index 7e3d95c0cf..662a8f8726 100644
>>> --- a/dts/tests/TestSuite_hello_world.py
>>> +++ b/dts/tests/TestSuite_hello_world.py
>>> @@ -1,7 +1,8 @@
>>>    # SPDX-License-Identifier: BSD-3-Clause
>>>    # Copyright(c) 2010-2014 Intel Corporation
>>>
>>> -"""
>>> +"""The DPDK hello world app test suite.
>>> +
>>>    Run the helloworld example app and verify it prints a message for each used core.
>>>    No other EAL parameters apart from cores are used.
>>>    """
>>> @@ -15,22 +16,25 @@
>>>
>>>
>>>    class TestHelloWorld(TestSuite):
>>> +    """DPDK hello world app test suite."""
>>> +
>>>        def set_up_suite(self) -> None:
>>> -        """
>>> +        """Set up the test suite.
>>> +
>>>            Setup:
>>>                Build the app we're about to test - helloworld.
>>>            """
>>>            self.app_helloworld_path = self.sut_node.build_dpdk_app("helloworld")
>>>
>>>        def test_hello_world_single_core(self) -> None:
>>> -        """
>>> +        """Single core test case.
>>> +
>>>            Steps:
>>>                Run the helloworld app on the first usable logical core.
>>>            Verify:
>>>                The app prints a message from the used core:
>>>                "hello from core <core_id>"
>>>            """
>>> -
>>>            # get the first usable core
>>>            lcore_amount = LogicalCoreCount(1, 1, 1)
>>>            lcores = LogicalCoreCountFilter(self.sut_node.lcores, lcore_amount).filter()
>>> @@ -44,14 +48,14 @@ def test_hello_world_single_core(self) -> None:
>>>            )
>>>
>>>        def test_hello_world_all_cores(self) -> None:
>>> -        """
>>> +        """All cores test case.
>>> +
>>>            Steps:
>>>                Run the helloworld app on all usable logical cores.
>>>            Verify:
>>>                The app prints a message from all used cores:
>>>                "hello from core <core_id>"
>>>            """
>>> -
>>>            # get the maximum logical core number
>>>            eal_para = self.sut_node.create_eal_parameters(
>>>                lcore_filter_specifier=LogicalCoreList(self.sut_node.lcores)
>>> diff --git a/dts/tests/TestSuite_os_udp.py b/dts/tests/TestSuite_os_udp.py
>>> index bf6b93deb5..e0c5239612 100644
>>> --- a/dts/tests/TestSuite_os_udp.py
>>> +++ b/dts/tests/TestSuite_os_udp.py
>>> @@ -1,7 +1,8 @@
>>>    # SPDX-License-Identifier: BSD-3-Clause
>>>    # Copyright(c) 2023 PANTHEON.tech s.r.o.
>>>
>>> -"""
>>> +"""Basic IPv4 OS routing test suite.
>>> +
>>>    Configure SUT node to route traffic from if1 to if2.
>>>    Send a packet to the SUT node, verify it comes back on the second port on the TG node.
>>>    """
>>> @@ -13,24 +14,27 @@
>>>
>>>
>>>    class TestOSUdp(TestSuite):
>>> +    """IPv4 UDP OS routing test suite."""
>>> +
>>>        def set_up_suite(self) -> None:
>>> -        """
>>> +        """Set up the test suite.
>>> +
>>>            Setup:
>>> -            Configure SUT ports and SUT to route traffic from if1 to if2.
>>> +            Bind the SUT ports to the OS driver, configure the ports and configure the SUT
>>> +            to route traffic from if1 to if2.
>>>            """
>>>
>>> -        # This test uses kernel drivers
>>>            self.sut_node.bind_ports_to_driver(for_dpdk=False)
>>>            self.configure_testbed_ipv4()
>>>
>>>        def test_os_udp(self) -> None:
>>> -        """
>>> +        """Basic UDP IPv4 traffic test case.
>>> +
>>>            Steps:
>>>                Send a UDP packet.
>>>            Verify:
>>>                The packet with proper addresses arrives at the other TG port.
>>>            """
>>> -
>>>            packet = Ether() / IP() / UDP()
>>>
>>>            received_packets = self.send_packet_and_capture(packet)
>>> @@ -40,7 +44,8 @@ def test_os_udp(self) -> None:
>>>            self.verify_packets(expected_packet, received_packets)
>>>
>>>        def tear_down_suite(self) -> None:
>>> -        """
>>> +        """Tear down the test suite.
>>> +
>>>            Teardown:
>>>                Remove the SUT port configuration configured in setup.
>>>            """
>>> diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py
>>> index e8016d1b54..6fae099a0e 100644
>>> --- a/dts/tests/TestSuite_smoke_tests.py
>>> +++ b/dts/tests/TestSuite_smoke_tests.py
>>> @@ -1,6 +1,17 @@
>>>    # SPDX-License-Identifier: BSD-3-Clause
>>>    # Copyright(c) 2023 University of New Hampshire
>>>
>>> +"""Smoke test suite.
>>> +
>>> +Smoke tests are a class of tests which are used for validating a minimal set of important features.
>>> +These are the most important features without which (or when they're faulty) the software wouldn't
>>> +work properly. Thus, if any failure occurs while testing these features,
>>> +there isn't that much of a reason to continue testing, as the software is fundamentally broken.
>>> +
>>> +These tests don't have to include only DPDK tests, as the reason for failures could be
>>> +in the infrastructure (a faulty link between NICs or a misconfiguration).
>>> +"""
>>> +
>>>    import re
>>>
>>>    from framework.config import PortConfig
>>> @@ -11,13 +22,25 @@
>>>
>>>
>>>    class SmokeTests(TestSuite):
>>> +    """DPDK and infrastructure smoke test suite.
>>> +
>>> +    The test cases validate the most basic DPDK functionality needed for all other test suites.
>>> +    The infrastructure also needs to be tested, as that is also used by all other test suites.
>>> +
>>> +    Attributes:
>>> +        is_blocking: This test suite will block the execution of all other test suites
>>> +            in the build target after it.
>>> +        nics_in_node: The NICs present on the SUT node.
>>> +    """
>>> +
>>>        is_blocking = True
>>>        # dicts in this list are expected to have two keys:
>>>        # "pci_address" and "current_driver"
>>>        nics_in_node: list[PortConfig] = []
>>>
>>>        def set_up_suite(self) -> None:
>>> -        """
>>> +        """Set up the test suite.
>>> +
>>>            Setup:
>>>                Set the build directory path and generate a list of NICs in the SUT node.
>>>            """
>>> @@ -25,7 +48,13 @@ def set_up_suite(self) -> None:
>>>            self.nics_in_node = self.sut_node.config.ports
>>>
>>>        def test_unit_tests(self) -> None:
>>> -        """
>>> +        """DPDK meson fast-tests unit tests.
>>> +
>>> +        The DPDK unit tests are basic tests that indicate regressions and other critical failures.
>>> +        These need to be addressed before other testing.
>>> +
>>> +        The fast-tests unit tests are a subset with only the most basic tests.
>>> +
>>>            Test:
>>>                Run the fast-test unit-test suite through meson.
>>>            """
>>> @@ -37,7 +66,14 @@ def test_unit_tests(self) -> None:
>>>            )
>>>
>>>        def test_driver_tests(self) -> None:
>>> -        """
>>> +        """DPDK meson driver-tests unit tests.
>>> +
>>
>> This looks like a copy-paste from the previous unit test into the driver
>> tests. If it is on purpose, as both are considered unit tests, then note
>> that the previous function is test_unit_tests and deals with fast-tests.
>>
> 
> I'm not sure what you mean. The two are separate tests (one runs the
> fast-tests unit test suite, the other the driver-tests unit test suite)
> and the docstrings do capture the differences.

I am a little bit confused as to how I deleted it in my reply, but I was
referring to this sentence in the patch:
"The DPDK unit tests are basic tests that indicate regressions and other 
critical failures.
These need to be addressed before other testing."
But in any case, reading it again, I agree with you.

> 
>>> +
>>> +        The driver-tests unit tests are a subset that test only drivers. These may be run
>>> +        with virtual devices as well.
>>> +
>>>            Test:
>>>                Run the driver-test unit-test suite through meson.
>>>            """
>>> @@ -63,7 +99,10 @@ def test_driver_tests(self) -> None:
>>>            )
>>>
>>>        def test_devices_listed_in_testpmd(self) -> None:
>>> -        """
>>> +        """Testpmd device discovery.
>>> +
>>> +        If the configured devices can't be found in testpmd, they can't be tested.
>>
>> Maybe a bit nitpicky. This is more of a statement as to why the test
>> exists than a description of the test. Suggestion: "Tests that the
>> configured devices can be found in testpmd. If they aren't, the
>> configuration might be wrong and tests might be skipped"
>>
> 
> This is more of a reason why this particular test is a smoke test.
> Since a smoke test failure results in all test suites being blocked,
> this seemed like key information.
> 
> We also don't have an exact format for what should be included in
> test case/suite documentation. We should use this opportunity to
> document what we deem important in these test cases at this point in
> time and improve the docs as we continue adding test cases. We can add
> more custom sections (such as the "Setup:" and "Test:" sections,
> which can be added to Sphinx); I like adding a section with an
> explanation of why a test is a particular type of test (in this case,
> a smoke test). The regular body could contain a description as you
> suggested. What do you think?
> 

I'm not really sure which way to go here. The thing I noticed was
mainly the lack of consistency between this test's description and the
previous one. I agree that making it clear it's a smoke test is good,
but compare it to test_device_bound_to_driver's description, for
instance. Both state clearly that they are a smoke test, but the
formulation is quite different.

I'm not entirely sure about adding more custom sections. I fear it might
be more hassle than it's worth. A short guideline on how to write the
doc and which sections to use could be handy though.

Reading the previous test, I think I see what you mean by having a
section to describe the test type and another for the description.
In short, the type would be: DPDK unit tests, testing critical failures,
needing to run first; that would then be followed by the test's
description. But I think this type is redundant with the test suite's
description? If so, only a description would be needed.
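
A hypothetical template for such a guideline, combining the ideas from
this exchange (the sections and their order are only a suggestion):

    def test_example(self) -> None:
        """One-line summary of what the test case covers.

        A short description of the test. A sentence on why the test
        belongs in this suite (e.g. why it's a smoke test) could go
        here or into a dedicated custom section.

        Steps:
            What the test does, step by step.
        Verify:
            What exactly is checked for the test to pass.
        """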

>>> +
>>>            Test:
>>>                Uses testpmd driver to verify that devices have been found by testpmd.
>>>            """
>>> @@ -79,7 +118,11 @@ def test_devices_listed_in_testpmd(self) -> None:
>>>                )
>>>
>>>        def test_device_bound_to_driver(self) -> None:
>>> -        """
>>> +        """Device driver in OS.
>>> +
>>> +        The devices must be bound to the proper driver, otherwise they can't be used by DPDK
>>> +        or the traffic generators.
>>
>> Same as the previous comment. It is more of a statement as to why the
>> test exists than a description of the test.
>>
> 
> Ack.
> 
>>> +
>>>            Test:
>>>                Ensure that all drivers listed in the config are bound to the correct
>>>                driver.


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 01/21] dts: code adjustments for doc generation
  2023-11-15 13:09               ` [PATCH v7 01/21] dts: code adjustments for doc generation Juraj Linkeš
  2023-11-16 21:04                 ` Jeremy Spewock
@ 2023-11-20 16:02                 ` Yoan Picchi
  1 sibling, 0 replies; 255+ messages in thread
From: Yoan Picchi @ 2023-11-20 16:02 UTC (permalink / raw)
  To: Juraj Linkeš,
	thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek
  Cc: dev

On 11/15/23 13:09, Juraj Linkeš wrote:
> The standard Python tool for generating API documentation, Sphinx,
> imports modules one-by-one when generating the documentation. This
> requires code changes:
> * properly guarding argument parsing in the if __name__ == '__main__'
>    block,
> * the logger used by DTS runner underwent the same treatment so that it
>    doesn't create log files outside of a DTS run,
> * however, DTS uses the arguments to construct an object holding global
>    variables. The defaults for the global variables needed to be moved
>    out of argument parsing and defined elsewhere,
> * importing the remote_session module from framework resulted in
>    circular imports because of one module trying to import another
>    module. This is fixed by reorganizing the code,
> * some code reorganization was done because the resulting structure
>    makes more sense, improving documentation clarity.
> 
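
For illustration, a minimal sketch of the import guard described in the
first point (the module layout is assumed):

    # dts/main.py (sketch)
    from framework import dts

    def main() -> None:
        dts.run_all()

    # DTS runs only when executed directly, so importing this module
    # (as Sphinx does) has no side effects.
    if __name__ == "__main__":
        main()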
> There are some other changes which are documentation related:
> * added missing type annotations so they appear in the generated docs,
> * reordered arguments in some methods,
> * removed superfluous arguments and attributes,
> * changed some functions/methods/attributes from public to private and
>   vice-versa.
> 
> All of the above appear in the generated documentation and, with them,
> the documentation is improved.
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>   dts/framework/config/__init__.py              | 10 ++-
>   dts/framework/dts.py                          | 33 +++++--
>   dts/framework/exception.py                    | 54 +++++-------
>   dts/framework/remote_session/__init__.py      | 41 ++++-----
>   .../interactive_remote_session.py             |  0
>   .../{remote => }/interactive_shell.py         |  0
>   .../{remote => }/python_shell.py              |  0
>   .../remote_session/remote/__init__.py         | 27 ------
>   .../{remote => }/remote_session.py            |  0
>   .../{remote => }/ssh_session.py               | 12 +--
>   .../{remote => }/testpmd_shell.py             |  0
>   dts/framework/settings.py                     | 87 +++++++++++--------
>   dts/framework/test_result.py                  |  4 +-
>   dts/framework/test_suite.py                   |  7 +-
>   dts/framework/testbed_model/__init__.py       | 12 +--
>   dts/framework/testbed_model/{hw => }/cpu.py   | 13 +++
>   dts/framework/testbed_model/hw/__init__.py    | 27 ------
>   .../linux_session.py                          |  6 +-
>   dts/framework/testbed_model/node.py           | 25 ++++--
>   .../os_session.py                             | 22 ++---
>   dts/framework/testbed_model/{hw => }/port.py  |  0
>   .../posix_session.py                          |  4 +-
>   dts/framework/testbed_model/sut_node.py       |  8 +-
>   dts/framework/testbed_model/tg_node.py        | 30 +------
>   .../traffic_generator/__init__.py             | 24 +++++
>   .../capturing_traffic_generator.py            |  6 +-
>   .../{ => traffic_generator}/scapy.py          | 23 ++---
>   .../traffic_generator.py                      | 16 +++-
>   .../testbed_model/{hw => }/virtual_device.py  |  0
>   dts/framework/utils.py                        | 46 +++-------
>   dts/main.py                                   |  9 +-
>   31 files changed, 258 insertions(+), 288 deletions(-)
>   rename dts/framework/remote_session/{remote => }/interactive_remote_session.py (100%)
>   rename dts/framework/remote_session/{remote => }/interactive_shell.py (100%)
>   rename dts/framework/remote_session/{remote => }/python_shell.py (100%)
>   delete mode 100644 dts/framework/remote_session/remote/__init__.py
>   rename dts/framework/remote_session/{remote => }/remote_session.py (100%)
>   rename dts/framework/remote_session/{remote => }/ssh_session.py (91%)
>   rename dts/framework/remote_session/{remote => }/testpmd_shell.py (100%)
>   rename dts/framework/testbed_model/{hw => }/cpu.py (95%)
>   delete mode 100644 dts/framework/testbed_model/hw/__init__.py
>   rename dts/framework/{remote_session => testbed_model}/linux_session.py (97%)
>   rename dts/framework/{remote_session => testbed_model}/os_session.py (95%)
>   rename dts/framework/testbed_model/{hw => }/port.py (100%)
>   rename dts/framework/{remote_session => testbed_model}/posix_session.py (98%)
>   create mode 100644 dts/framework/testbed_model/traffic_generator/__init__.py
>   rename dts/framework/testbed_model/{ => traffic_generator}/capturing_traffic_generator.py (96%)
>   rename dts/framework/testbed_model/{ => traffic_generator}/scapy.py (95%)
>   rename dts/framework/testbed_model/{ => traffic_generator}/traffic_generator.py (80%)
>   rename dts/framework/testbed_model/{hw => }/virtual_device.py (100%)
> 
> diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
> index cb7e00ba34..2044c82611 100644
> --- a/dts/framework/config/__init__.py
> +++ b/dts/framework/config/__init__.py
> @@ -17,6 +17,7 @@
>   import warlock  # type: ignore[import]
>   import yaml
>   
> +from framework.exception import ConfigurationError
>   from framework.settings import SETTINGS
>   from framework.utils import StrEnum
>   
> @@ -89,7 +90,7 @@ class TrafficGeneratorConfig:
>       traffic_generator_type: TrafficGeneratorType
>   
>       @staticmethod
> -    def from_dict(d: dict):
> +    def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig":
>           # This looks useless now, but is designed to allow expansion to traffic
>           # generators that require more configuration later.
>           match TrafficGeneratorType(d["type"]):
> @@ -97,6 +98,10 @@ def from_dict(d: dict):
>                   return ScapyTrafficGeneratorConfig(
>                       traffic_generator_type=TrafficGeneratorType.SCAPY
>                   )
> +            case _:
> +                raise ConfigurationError(
> +                    f'Unknown traffic generator type "{d["type"]}".'
> +                )
>   
>   
>   @dataclass(slots=True, frozen=True)
> @@ -324,6 +329,3 @@ def load_config() -> Configuration:
>       config: dict[str, Any] = warlock.model_factory(schema, name="_Config")(config_data)
>       config_obj: Configuration = Configuration.from_dict(dict(config))
>       return config_obj
> -
> -
> -CONFIGURATION = load_config()
> diff --git a/dts/framework/dts.py b/dts/framework/dts.py
> index f773f0c38d..4c7fb0c40a 100644
> --- a/dts/framework/dts.py
> +++ b/dts/framework/dts.py
> @@ -6,19 +6,19 @@
>   import sys
>   
>   from .config import (
> -    CONFIGURATION,
>       BuildTargetConfiguration,
>       ExecutionConfiguration,
>       TestSuiteConfig,
> +    load_config,
>   )
>   from .exception import BlockingTestSuiteError
>   from .logger import DTSLOG, getLogger
>   from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result
>   from .test_suite import get_test_suites
>   from .testbed_model import SutNode, TGNode
> -from .utils import check_dts_python_version
>   
> -dts_logger: DTSLOG = getLogger("DTSRunner")
> +# dummy defaults to satisfy linters
> +dts_logger: DTSLOG = None  # type: ignore[assignment]
>   result: DTSResult = DTSResult(dts_logger)
>   
>   
> @@ -30,14 +30,18 @@ def run_all() -> None:
>       global dts_logger
>       global result
>   
> +    # create a regular DTS logger and create a new result with it
> +    dts_logger = getLogger("DTSRunner")
> +    result = DTSResult(dts_logger)
> +
>       # check the python version of the server that run dts
> -    check_dts_python_version()
> +    _check_dts_python_version()
>   
>       sut_nodes: dict[str, SutNode] = {}
>       tg_nodes: dict[str, TGNode] = {}
>       try:
>           # for all Execution sections
> -        for execution in CONFIGURATION.executions:
> +        for execution in load_config().executions:
>               sut_node = sut_nodes.get(execution.system_under_test_node.name)
>               tg_node = tg_nodes.get(execution.traffic_generator_node.name)
>   
> @@ -82,6 +86,25 @@ def run_all() -> None:
>       _exit_dts()
>   
>   
> +def _check_dts_python_version() -> None:
> +    def RED(text: str) -> str:
> +        return f"\u001B[31;1m{str(text)}\u001B[0m"
> +
> +    if sys.version_info.major < 3 or (
> +        sys.version_info.major == 3 and sys.version_info.minor < 10
> +    ):
> +        print(
> +            RED(
> +                (
> +                    "WARNING: DTS execution node's python version is lower than"
> +                    "python 3.10, is deprecated and will not work in future releases."
> +                )
> +            ),
> +            file=sys.stderr,
> +        )
> +        print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
> +
> +
>   def _run_execution(
>       sut_node: SutNode,
>       tg_node: TGNode,
> diff --git a/dts/framework/exception.py b/dts/framework/exception.py
> index 001a5a5496..7489c03570 100644
> --- a/dts/framework/exception.py
> +++ b/dts/framework/exception.py
> @@ -42,19 +42,14 @@ class SSHTimeoutError(DTSError):
>       Command execution timeout.
>       """
>   
> -    command: str
> -    output: str
>       severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
> +    _command: str
>   
> -    def __init__(self, command: str, output: str):
> -        self.command = command
> -        self.output = output
> +    def __init__(self, command: str):
> +        self._command = command
>   
>       def __str__(self) -> str:
> -        return f"TIMEOUT on {self.command}"
> -
> -    def get_output(self) -> str:
> -        return self.output
> +        return f"TIMEOUT on {self._command}"
>   
>   
>   class SSHConnectionError(DTSError):
> @@ -62,18 +57,18 @@ class SSHConnectionError(DTSError):
>       SSH connection error.
>       """
>   
> -    host: str
> -    errors: list[str]
>       severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
> +    _host: str
> +    _errors: list[str]
>   
>       def __init__(self, host: str, errors: list[str] | None = None):
> -        self.host = host
> -        self.errors = [] if errors is None else errors
> +        self._host = host
> +        self._errors = [] if errors is None else errors
>   
>       def __str__(self) -> str:
> -        message = f"Error trying to connect with {self.host}."
> -        if self.errors:
> -            message += f" Errors encountered while retrying: {', '.join(self.errors)}"
> +        message = f"Error trying to connect with {self._host}."
> +        if self._errors:
> +            message += f" Errors encountered while retrying: {', '.join(self._errors)}"
>   
>           return message
>   
> @@ -84,14 +79,14 @@ class SSHSessionDeadError(DTSError):
>       It can no longer be used.
>       """
>   
> -    host: str
>       severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
> +    _host: str
>   
>       def __init__(self, host: str):
> -        self.host = host
> +        self._host = host
>   
>       def __str__(self) -> str:
> -        return f"SSH session with {self.host} has died"
> +        return f"SSH session with {self._host} has died"
>   
>   
>   class ConfigurationError(DTSError):
> @@ -107,18 +102,18 @@ class RemoteCommandExecutionError(DTSError):
>       Raised when a command executed on a Node returns a non-zero exit status.
>       """
>   
> -    command: str
> -    command_return_code: int
>       severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
> +    command: str
> +    _command_return_code: int
>   
>       def __init__(self, command: str, command_return_code: int):
>           self.command = command
> -        self.command_return_code = command_return_code
> +        self._command_return_code = command_return_code
>   
>       def __str__(self) -> str:
>           return (
>               f"Command {self.command} returned a non-zero exit code: "
> -            f"{self.command_return_code}"
> +            f"{self._command_return_code}"
>           )
>   
>   
> @@ -143,22 +138,15 @@ class TestCaseVerifyError(DTSError):
>       Used in test cases to verify the expected behavior.
>       """
>   
> -    value: str
>       severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR
>   
> -    def __init__(self, value: str):
> -        self.value = value
> -
> -    def __str__(self) -> str:
> -        return repr(self.value)
> -
>   
>   class BlockingTestSuiteError(DTSError):
> -    suite_name: str
>       severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR
> +    _suite_name: str
>   
>       def __init__(self, suite_name: str) -> None:
> -        self.suite_name = suite_name
> +        self._suite_name = suite_name
>   
>       def __str__(self) -> str:
> -        return f"Blocking suite {self.suite_name} failed."
> +        return f"Blocking suite {self._suite_name} failed."
> diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
> index 00b6d1f03a..5e7ddb2b05 100644
> --- a/dts/framework/remote_session/__init__.py
> +++ b/dts/framework/remote_session/__init__.py
> @@ -12,29 +12,24 @@
>   
>   # pylama:ignore=W0611
>   
> -from framework.config import OS, NodeConfiguration
> -from framework.exception import ConfigurationError
> +from framework.config import NodeConfiguration
>   from framework.logger import DTSLOG
>   
> -from .linux_session import LinuxSession
> -from .os_session import InteractiveShellType, OSSession
> -from .remote import (
> -    CommandResult,
> -    InteractiveRemoteSession,
> -    InteractiveShell,
> -    PythonShell,
> -    RemoteSession,
> -    SSHSession,
> -    TestPmdDevice,
> -    TestPmdShell,
> -)
> -
> -
> -def create_session(
> +from .interactive_remote_session import InteractiveRemoteSession
> +from .interactive_shell import InteractiveShell
> +from .python_shell import PythonShell
> +from .remote_session import CommandResult, RemoteSession
> +from .ssh_session import SSHSession
> +from .testpmd_shell import TestPmdShell
> +
> +
> +def create_remote_session(
>       node_config: NodeConfiguration, name: str, logger: DTSLOG
> -) -> OSSession:
> -    match node_config.os:
> -        case OS.linux:
> -            return LinuxSession(node_config, name, logger)
> -        case _:
> -            raise ConfigurationError(f"Unsupported OS {node_config.os}")
> +) -> RemoteSession:
> +    return SSHSession(node_config, name, logger)
> +
> +
> +def create_interactive_session(
> +    node_config: NodeConfiguration, logger: DTSLOG
> +) -> InteractiveRemoteSession:
> +    return InteractiveRemoteSession(node_config, logger)
> diff --git a/dts/framework/remote_session/remote/interactive_remote_session.py b/dts/framework/remote_session/interactive_remote_session.py
> similarity index 100%
> rename from dts/framework/remote_session/remote/interactive_remote_session.py
> rename to dts/framework/remote_session/interactive_remote_session.py
> diff --git a/dts/framework/remote_session/remote/interactive_shell.py b/dts/framework/remote_session/interactive_shell.py
> similarity index 100%
> rename from dts/framework/remote_session/remote/interactive_shell.py
> rename to dts/framework/remote_session/interactive_shell.py
> diff --git a/dts/framework/remote_session/remote/python_shell.py b/dts/framework/remote_session/python_shell.py
> similarity index 100%
> rename from dts/framework/remote_session/remote/python_shell.py
> rename to dts/framework/remote_session/python_shell.py
> diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
> deleted file mode 100644
> index 06403691a5..0000000000
> --- a/dts/framework/remote_session/remote/__init__.py
> +++ /dev/null
> @@ -1,27 +0,0 @@
> -# SPDX-License-Identifier: BSD-3-Clause
> -# Copyright(c) 2023 PANTHEON.tech s.r.o.
> -# Copyright(c) 2023 University of New Hampshire
> -
> -# pylama:ignore=W0611
> -
> -from framework.config import NodeConfiguration
> -from framework.logger import DTSLOG
> -
> -from .interactive_remote_session import InteractiveRemoteSession
> -from .interactive_shell import InteractiveShell
> -from .python_shell import PythonShell
> -from .remote_session import CommandResult, RemoteSession
> -from .ssh_session import SSHSession
> -from .testpmd_shell import TestPmdDevice, TestPmdShell
> -
> -
> -def create_remote_session(
> -    node_config: NodeConfiguration, name: str, logger: DTSLOG
> -) -> RemoteSession:
> -    return SSHSession(node_config, name, logger)
> -
> -
> -def create_interactive_session(
> -    node_config: NodeConfiguration, logger: DTSLOG
> -) -> InteractiveRemoteSession:
> -    return InteractiveRemoteSession(node_config, logger)
> diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote_session.py
> similarity index 100%
> rename from dts/framework/remote_session/remote/remote_session.py
> rename to dts/framework/remote_session/remote_session.py
> diff --git a/dts/framework/remote_session/remote/ssh_session.py b/dts/framework/remote_session/ssh_session.py
> similarity index 91%
> rename from dts/framework/remote_session/remote/ssh_session.py
> rename to dts/framework/remote_session/ssh_session.py
> index 8d127f1601..cee11d14d6 100644
> --- a/dts/framework/remote_session/remote/ssh_session.py
> +++ b/dts/framework/remote_session/ssh_session.py
> @@ -18,9 +18,7 @@
>       SSHException,
>   )
>   
> -from framework.config import NodeConfiguration
>   from framework.exception import SSHConnectionError, SSHSessionDeadError, SSHTimeoutError
> -from framework.logger import DTSLOG
>   
>   from .remote_session import CommandResult, RemoteSession
>   
> @@ -45,14 +43,6 @@ class SSHSession(RemoteSession):
>   
>       session: Connection
>   
> -    def __init__(
> -        self,
> -        node_config: NodeConfiguration,
> -        session_name: str,
> -        logger: DTSLOG,
> -    ):
> -        super(SSHSession, self).__init__(node_config, session_name, logger)
> -
>       def _connect(self) -> None:
>           errors = []
>           retry_attempts = 10
> @@ -117,7 +107,7 @@ def _send_command(
>   
>           except CommandTimedOut as e:
>               self._logger.exception(e)
> -            raise SSHTimeoutError(command, e.result.stderr) from e
> +            raise SSHTimeoutError(command) from e
>   
>           return CommandResult(
>               self.name, command, output.stdout, output.stderr, output.return_code
> diff --git a/dts/framework/remote_session/remote/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py
> similarity index 100%
> rename from dts/framework/remote_session/remote/testpmd_shell.py
> rename to dts/framework/remote_session/testpmd_shell.py
> diff --git a/dts/framework/settings.py b/dts/framework/settings.py
> index cfa39d011b..7f5841d073 100644
> --- a/dts/framework/settings.py
> +++ b/dts/framework/settings.py
> @@ -6,7 +6,7 @@
>   import argparse
>   import os
>   from collections.abc import Callable, Iterable, Sequence
> -from dataclasses import dataclass
> +from dataclasses import dataclass, field
>   from pathlib import Path
>   from typing import Any, TypeVar
>   
> @@ -22,8 +22,8 @@ def __init__(
>               option_strings: Sequence[str],
>               dest: str,
>               nargs: str | int | None = None,
> -            const: str | None = None,
> -            default: str = None,
> +            const: bool | None = None,
> +            default: Any = None,
>               type: Callable[[str], _T | argparse.FileType | None] = None,
>               choices: Iterable[_T] | None = None,
>               required: bool = False,
> @@ -32,6 +32,12 @@ def __init__(
>           ) -> None:
>               env_var_value = os.environ.get(env_var)
>               default = env_var_value or default
> +            if const is not None:
> +                nargs = 0
> +                default = const if env_var_value else default
> +                type = None
> +                choices = None
> +                metavar = None
>               super(_EnvironmentArgument, self).__init__(
>                   option_strings,
>                   dest,
> @@ -52,22 +58,28 @@ def __call__(
>               values: Any,
>               option_string: str = None,
>           ) -> None:
> -            setattr(namespace, self.dest, values)
> +            if self.const is not None:
> +                setattr(namespace, self.dest, self.const)
> +            else:
> +                setattr(namespace, self.dest, values)
>   
>       return _EnvironmentArgument
>   
>   
> -@dataclass(slots=True, frozen=True)
> -class _Settings:
> -    config_file_path: str
> -    output_dir: str
> -    timeout: float
> -    verbose: bool
> -    skip_setup: bool
> -    dpdk_tarball_path: Path
> -    compile_timeout: float
> -    test_cases: list
> -    re_run: int
> +@dataclass(slots=True)
> +class Settings:
> +    config_file_path: Path = Path(__file__).parent.parent.joinpath("conf.yaml")
> +    output_dir: str = "output"
> +    timeout: float = 15
> +    verbose: bool = False
> +    skip_setup: bool = False
> +    dpdk_tarball_path: Path | str = "dpdk.tar.xz"
> +    compile_timeout: float = 1200
> +    test_cases: list[str] = field(default_factory=list)
> +    re_run: int = 0
> +
> +
> +SETTINGS: Settings = Settings()
>   
>   
>   def _get_parser() -> argparse.ArgumentParser:
> @@ -81,7 +93,8 @@ def _get_parser() -> argparse.ArgumentParser:
>       parser.add_argument(
>           "--config-file",
>           action=_env_arg("DTS_CFG_FILE"),
> -        default="conf.yaml",
> +        default=SETTINGS.config_file_path,
> +        type=Path,
>           help="[DTS_CFG_FILE] configuration file that describes the test cases, SUTs "
>           "and targets.",
>       )
> @@ -90,7 +103,7 @@ def _get_parser() -> argparse.ArgumentParser:
>           "--output-dir",
>           "--output",
>           action=_env_arg("DTS_OUTPUT_DIR"),
> -        default="output",
> +        default=SETTINGS.output_dir,
>           help="[DTS_OUTPUT_DIR] Output directory where dts logs and results are saved.",
>       )
>   
> @@ -98,7 +111,7 @@ def _get_parser() -> argparse.ArgumentParser:
>           "-t",
>           "--timeout",
>           action=_env_arg("DTS_TIMEOUT"),
> -        default=15,
> +        default=SETTINGS.timeout,
>           type=float,
>           help="[DTS_TIMEOUT] The default timeout for all DTS operations except for "
>           "compiling DPDK.",
> @@ -108,8 +121,9 @@ def _get_parser() -> argparse.ArgumentParser:
>           "-v",
>           "--verbose",
>           action=_env_arg("DTS_VERBOSE"),
> -        default="N",
> -        help="[DTS_VERBOSE] Set to 'Y' to enable verbose output, logging all messages "
> +        default=SETTINGS.verbose,
> +        const=True,
> +        help="[DTS_VERBOSE] Specify to enable verbose output, logging all messages "
>           "to the console.",
>       )
>   
> @@ -117,8 +131,8 @@ def _get_parser() -> argparse.ArgumentParser:
>           "-s",
>           "--skip-setup",
>           action=_env_arg("DTS_SKIP_SETUP"),
> -        default="N",
> -        help="[DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT and TG nodes.",
> +        const=True,
> +        help="[DTS_SKIP_SETUP] Specify to skip all setup steps on SUT and TG nodes.",
>       )
>   
>       parser.add_argument(
> @@ -126,7 +140,7 @@ def _get_parser() -> argparse.ArgumentParser:
>           "--snapshot",
>           "--git-ref",
>           action=_env_arg("DTS_DPDK_TARBALL"),
> -        default="dpdk.tar.xz",
> +        default=SETTINGS.dpdk_tarball_path,
>           type=Path,
>           help="[DTS_DPDK_TARBALL] Path to DPDK source code tarball or a git commit ID, "
>           "tag ID or tree ID to test. To test local changes, first commit them, "
> @@ -136,7 +150,7 @@ def _get_parser() -> argparse.ArgumentParser:
>       parser.add_argument(
>           "--compile-timeout",
>           action=_env_arg("DTS_COMPILE_TIMEOUT"),
> -        default=1200,
> +        default=SETTINGS.compile_timeout,
>           type=float,
>           help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
>       )
> @@ -153,7 +167,7 @@ def _get_parser() -> argparse.ArgumentParser:
>           "--re-run",
>           "--re_run",
>           action=_env_arg("DTS_RERUN"),
> -        default=0,
> +        default=SETTINGS.re_run,
>           type=int,
>           help="[DTS_RERUN] Re-run each test case the specified amount of times "
>           "if a test failure occurs",
> @@ -162,23 +176,22 @@ def _get_parser() -> argparse.ArgumentParser:
>       return parser
>   
>   
> -def _get_settings() -> _Settings:
> +def get_settings() -> Settings:
>       parsed_args = _get_parser().parse_args()
> -    return _Settings(
> +    return Settings(
>           config_file_path=parsed_args.config_file,
>           output_dir=parsed_args.output_dir,
>           timeout=parsed_args.timeout,
> -        verbose=(parsed_args.verbose == "Y"),
> -        skip_setup=(parsed_args.skip_setup == "Y"),
> +        verbose=parsed_args.verbose,
> +        skip_setup=parsed_args.skip_setup,
>           dpdk_tarball_path=Path(
> -            DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir)
> -        )
> -        if not os.path.exists(parsed_args.tarball)
> -        else Path(parsed_args.tarball),
> +            Path(DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir))
> +            if not os.path.exists(parsed_args.tarball)
> +            else Path(parsed_args.tarball)
> +        ),
>           compile_timeout=parsed_args.compile_timeout,
> -        test_cases=parsed_args.test_cases.split(",") if parsed_args.test_cases else [],
> +        test_cases=(
> +            parsed_args.test_cases.split(",") if parsed_args.test_cases else []
> +        ),
>           re_run=parsed_args.re_run,
>       )
> -
> -
> -SETTINGS: _Settings = _get_settings()
> diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
> index f0fbe80f6f..603e18872c 100644
> --- a/dts/framework/test_result.py
> +++ b/dts/framework/test_result.py
> @@ -254,7 +254,7 @@ def add_build_target(
>           self._inner_results.append(build_target_result)
>           return build_target_result
>   
> -    def add_sut_info(self, sut_info: NodeInfo):
> +    def add_sut_info(self, sut_info: NodeInfo) -> None:
>           self.sut_os_name = sut_info.os_name
>           self.sut_os_version = sut_info.os_version
>           self.sut_kernel_version = sut_info.kernel_version
> @@ -297,7 +297,7 @@ def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult:
>           self._inner_results.append(execution_result)
>           return execution_result
>   
> -    def add_error(self, error) -> None:
> +    def add_error(self, error: Exception) -> None:
>           self._errors.append(error)
>   
>       def process(self) -> None:
> diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
> index 3b890c0451..d53553bf34 100644
> --- a/dts/framework/test_suite.py
> +++ b/dts/framework/test_suite.py
> @@ -11,7 +11,7 @@
>   import re
>   from ipaddress import IPv4Interface, IPv6Interface, ip_interface
>   from types import MethodType
> -from typing import Union
> +from typing import Any, Union
>   
>   from scapy.layers.inet import IP  # type: ignore[import]
>   from scapy.layers.l2 import Ether  # type: ignore[import]
> @@ -26,8 +26,7 @@
>   from .logger import DTSLOG, getLogger
>   from .settings import SETTINGS
>   from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult
> -from .testbed_model import SutNode, TGNode
> -from .testbed_model.hw.port import Port, PortLink
> +from .testbed_model import Port, PortLink, SutNode, TGNode
>   from .utils import get_packet_summaries
>   
>   
> @@ -453,7 +452,7 @@ def _execute_test_case(
>   
>   
>   def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
> -    def is_test_suite(object) -> bool:
> +    def is_test_suite(object: Any) -> bool:
>           try:
>               if issubclass(object, TestSuite) and object is not TestSuite:
>                   return True
> diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
> index 5cbb859e47..8ced05653b 100644
> --- a/dts/framework/testbed_model/__init__.py
> +++ b/dts/framework/testbed_model/__init__.py
> @@ -9,15 +9,9 @@
>   
>   # pylama:ignore=W0611
>   
> -from .hw import (
> -    LogicalCore,
> -    LogicalCoreCount,
> -    LogicalCoreCountFilter,
> -    LogicalCoreList,
> -    LogicalCoreListFilter,
> -    VirtualDevice,
> -    lcore_filter,
> -)
> +from .cpu import LogicalCoreCount, LogicalCoreCountFilter, LogicalCoreList
>   from .node import Node
> +from .port import Port, PortLink
>   from .sut_node import SutNode
>   from .tg_node import TGNode
> +from .virtual_device import VirtualDevice
> diff --git a/dts/framework/testbed_model/hw/cpu.py b/dts/framework/testbed_model/cpu.py
> similarity index 95%
> rename from dts/framework/testbed_model/hw/cpu.py
> rename to dts/framework/testbed_model/cpu.py
> index d1918a12dc..8fe785dfe4 100644
> --- a/dts/framework/testbed_model/hw/cpu.py
> +++ b/dts/framework/testbed_model/cpu.py
> @@ -272,3 +272,16 @@ def filter(self) -> list[LogicalCore]:
>               )
>   
>           return filtered_lcores
> +
> +
> +def lcore_filter(
> +    core_list: list[LogicalCore],
> +    filter_specifier: LogicalCoreCount | LogicalCoreList,
> +    ascending: bool,
> +) -> LogicalCoreFilter:
> +    if isinstance(filter_specifier, LogicalCoreList):
> +        return LogicalCoreListFilter(core_list, filter_specifier, ascending)
> +    elif isinstance(filter_specifier, LogicalCoreCount):
> +        return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
> +    else:
> +        raise ValueError(f"Unsupported filter {filter_specifier!r}")
> diff --git a/dts/framework/testbed_model/hw/__init__.py b/dts/framework/testbed_model/hw/__init__.py
> deleted file mode 100644
> index 88ccac0b0e..0000000000
> --- a/dts/framework/testbed_model/hw/__init__.py
> +++ /dev/null
> @@ -1,27 +0,0 @@
> -# SPDX-License-Identifier: BSD-3-Clause
> -# Copyright(c) 2023 PANTHEON.tech s.r.o.
> -
> -# pylama:ignore=W0611
> -
> -from .cpu import (
> -    LogicalCore,
> -    LogicalCoreCount,
> -    LogicalCoreCountFilter,
> -    LogicalCoreFilter,
> -    LogicalCoreList,
> -    LogicalCoreListFilter,
> -)
> -from .virtual_device import VirtualDevice
> -
> -
> -def lcore_filter(
> -    core_list: list[LogicalCore],
> -    filter_specifier: LogicalCoreCount | LogicalCoreList,
> -    ascending: bool,
> -) -> LogicalCoreFilter:
> -    if isinstance(filter_specifier, LogicalCoreList):
> -        return LogicalCoreListFilter(core_list, filter_specifier, ascending)
> -    elif isinstance(filter_specifier, LogicalCoreCount):
> -        return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
> -    else:
> -        raise ValueError(f"Unsupported filter r{filter_specifier}")
> diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/testbed_model/linux_session.py
> similarity index 97%
> rename from dts/framework/remote_session/linux_session.py
> rename to dts/framework/testbed_model/linux_session.py
> index a3f1a6bf3b..f472bb8f0f 100644
> --- a/dts/framework/remote_session/linux_session.py
> +++ b/dts/framework/testbed_model/linux_session.py
> @@ -9,10 +9,10 @@
>   from typing_extensions import NotRequired
>   
>   from framework.exception import RemoteCommandExecutionError
> -from framework.testbed_model import LogicalCore
> -from framework.testbed_model.hw.port import Port
>   from framework.utils import expand_range
>   
> +from .cpu import LogicalCore
> +from .port import Port
>   from .posix_session import PosixSession
>   
>   
> @@ -64,7 +64,7 @@ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
>               lcores.append(LogicalCore(lcore, core, socket, node))
>           return lcores
>   
> -    def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
> +    def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
>           return dpdk_prefix
>   
>       def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
> diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
> index fc01e0bf8e..fa5b143cdd 100644
> --- a/dts/framework/testbed_model/node.py
> +++ b/dts/framework/testbed_model/node.py
> @@ -12,23 +12,26 @@
>   from typing import Any, Callable, Type, Union
>   
>   from framework.config import (
> +    OS,
>       BuildTargetConfiguration,
>       ExecutionConfiguration,
>       NodeConfiguration,
>   )
> +from framework.exception import ConfigurationError
>   from framework.logger import DTSLOG, getLogger
> -from framework.remote_session import InteractiveShellType, OSSession, create_session
>   from framework.settings import SETTINGS
>   
> -from .hw import (
> +from .cpu import (
>       LogicalCore,
>       LogicalCoreCount,
>       LogicalCoreList,
>       LogicalCoreListFilter,
> -    VirtualDevice,
>       lcore_filter,
>   )
> -from .hw.port import Port
> +from .linux_session import LinuxSession
> +from .os_session import InteractiveShellType, OSSession
> +from .port import Port
> +from .virtual_device import VirtualDevice
>   
>   
>   class Node(ABC):
> @@ -172,9 +175,9 @@ def create_interactive_shell(
>   
>           return self.main_session.create_interactive_shell(
>               shell_cls,
> -            app_args,
>               timeout,
>               privileged,
> +            app_args,
>           )
>   
>       def filter_lcores(
> @@ -205,7 +208,7 @@ def _get_remote_cpus(self) -> None:
>           self._logger.info("Getting CPU information.")
>           self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
>   
> -    def _setup_hugepages(self):
> +    def _setup_hugepages(self) -> None:
>           """
>           Setup hugepages on the Node. Different architectures can supply different
>           amounts of memory for hugepages and numa-based hugepage allocation may need
> @@ -249,3 +252,13 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
>               return lambda *args: None
>           else:
>               return func
> +
> +
> +def create_session(
> +    node_config: NodeConfiguration, name: str, logger: DTSLOG
> +) -> OSSession:
> +    match node_config.os:
> +        case OS.linux:
> +            return LinuxSession(node_config, name, logger)
> +        case _:
> +            raise ConfigurationError(f"Unsupported OS {node_config.os}")
> diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/testbed_model/os_session.py
> similarity index 95%
> rename from dts/framework/remote_session/os_session.py
> rename to dts/framework/testbed_model/os_session.py
> index 8a709eac1c..76e595a518 100644
> --- a/dts/framework/remote_session/os_session.py
> +++ b/dts/framework/testbed_model/os_session.py
> @@ -10,19 +10,19 @@
>   
>   from framework.config import Architecture, NodeConfiguration, NodeInfo
>   from framework.logger import DTSLOG
> -from framework.remote_session.remote import InteractiveShell
> -from framework.settings import SETTINGS
> -from framework.testbed_model import LogicalCore
> -from framework.testbed_model.hw.port import Port
> -from framework.utils import MesonArgs
> -
> -from .remote import (
> +from framework.remote_session import (
>       CommandResult,
>       InteractiveRemoteSession,
> +    InteractiveShell,
>       RemoteSession,
>       create_interactive_session,
>       create_remote_session,
>   )
> +from framework.settings import SETTINGS
> +from framework.utils import MesonArgs
> +
> +from .cpu import LogicalCore
> +from .port import Port
>   
>   InteractiveShellType = TypeVar("InteractiveShellType", bound=InteractiveShell)
>   
> @@ -85,9 +85,9 @@ def send_command(
>       def create_interactive_shell(
>           self,
>           shell_cls: Type[InteractiveShellType],
> -        eal_parameters: str,
>           timeout: float,
>           privileged: bool,
> +        app_args: str,
>       ) -> InteractiveShellType:
>           """
>           See "create_interactive_shell" in SutNode
> @@ -96,7 +96,7 @@ def create_interactive_shell(
>               self.interactive_session.session,
>               self._logger,
>               self._get_privileged_command if privileged else None,
> -            eal_parameters,
> +            app_args,
>               timeout,
>           )
>   
> @@ -113,7 +113,7 @@ def _get_privileged_command(command: str) -> str:
>           """
>   
>       @abstractmethod
> -    def guess_dpdk_remote_dir(self, remote_dir) -> PurePath:
> +    def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePath:
>           """
>           Try to find DPDK remote dir in remote_dir.
>           """
> @@ -227,7 +227,7 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
>           """
>   
>       @abstractmethod
> -    def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
> +    def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
>           """
>           Get the DPDK file prefix that will be used when running DPDK apps.
>           """
> diff --git a/dts/framework/testbed_model/hw/port.py b/dts/framework/testbed_model/port.py
> similarity index 100%
> rename from dts/framework/testbed_model/hw/port.py
> rename to dts/framework/testbed_model/port.py
> diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/testbed_model/posix_session.py
> similarity index 98%
> rename from dts/framework/remote_session/posix_session.py
> rename to dts/framework/testbed_model/posix_session.py
> index 5da0516e05..1d1d5b1b26 100644
> --- a/dts/framework/remote_session/posix_session.py
> +++ b/dts/framework/testbed_model/posix_session.py
> @@ -32,7 +32,7 @@ def combine_short_options(**opts: bool) -> str:
>   
>           return ret_opts
>   
> -    def guess_dpdk_remote_dir(self, remote_dir) -> PurePosixPath:
> +    def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePosixPath:
>           remote_guess = self.join_remote_path(remote_dir, "dpdk-*")
>           result = self.send_command(f"ls -d {remote_guess} | tail -1")
>           return PurePosixPath(result.stdout)
> @@ -219,7 +219,7 @@ def _remove_dpdk_runtime_dirs(
>           for dpdk_runtime_dir in dpdk_runtime_dirs:
>               self.remove_remote_dir(dpdk_runtime_dir)
>   
> -    def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
> +    def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
>           return ""
>   
>       def get_compiler_version(self, compiler_name: str) -> str:
> diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
> index 4161d3a4d5..17deea06e2 100644
> --- a/dts/framework/testbed_model/sut_node.py
> +++ b/dts/framework/testbed_model/sut_node.py
> @@ -15,12 +15,14 @@
>       NodeInfo,
>       SutNodeConfiguration,
>   )
> -from framework.remote_session import CommandResult, InteractiveShellType, OSSession
> +from framework.remote_session import CommandResult
>   from framework.settings import SETTINGS
>   from framework.utils import MesonArgs
>   
> -from .hw import LogicalCoreCount, LogicalCoreList, VirtualDevice
> +from .cpu import LogicalCoreCount, LogicalCoreList
>   from .node import Node
> +from .os_session import InteractiveShellType, OSSession
> +from .virtual_device import VirtualDevice
>   
>   
>   class EalParameters(object):
> @@ -307,7 +309,7 @@ def create_eal_parameters(
>           prefix: str = "dpdk",
>           append_prefix_timestamp: bool = True,
>           no_pci: bool = False,
> -        vdevs: list[VirtualDevice] = None,
> +        vdevs: list[VirtualDevice] | None = None,
>           other_eal_param: str = "",
>       ) -> "EalParameters":
>           """
> diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
> index 27025cfa31..166eb8430e 100644
> --- a/dts/framework/testbed_model/tg_node.py
> +++ b/dts/framework/testbed_model/tg_node.py
> @@ -16,16 +16,11 @@
>   
>   from scapy.packet import Packet  # type: ignore[import]
>   
> -from framework.config import (
> -    ScapyTrafficGeneratorConfig,
> -    TGNodeConfiguration,
> -    TrafficGeneratorType,
> -)
> -from framework.exception import ConfigurationError
> -
> -from .capturing_traffic_generator import CapturingTrafficGenerator
> -from .hw.port import Port
> +from framework.config import TGNodeConfiguration
> +
>   from .node import Node
> +from .port import Port
> +from .traffic_generator import CapturingTrafficGenerator, create_traffic_generator
>   
>   
>   class TGNode(Node):
> @@ -80,20 +75,3 @@ def close(self) -> None:
>           """Free all resources used by the node"""
>           self.traffic_generator.close()
>           super(TGNode, self).close()
> -
> -
> -def create_traffic_generator(
> -    tg_node: TGNode, traffic_generator_config: ScapyTrafficGeneratorConfig
> -) -> CapturingTrafficGenerator:
> -    """A factory function for creating traffic generator object from user config."""
> -
> -    from .scapy import ScapyTrafficGenerator
> -
> -    match traffic_generator_config.traffic_generator_type:
> -        case TrafficGeneratorType.SCAPY:
> -            return ScapyTrafficGenerator(tg_node, traffic_generator_config)
> -        case _:
> -            raise ConfigurationError(
> -                "Unknown traffic generator: "
> -                f"{traffic_generator_config.traffic_generator_type}"
> -            )
> diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
> new file mode 100644
> index 0000000000..11bfa1ee0f
> --- /dev/null
> +++ b/dts/framework/testbed_model/traffic_generator/__init__.py
> @@ -0,0 +1,24 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2023 PANTHEON.tech s.r.o.
> +
> +from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType
> +from framework.exception import ConfigurationError
> +from framework.testbed_model.node import Node
> +
> +from .capturing_traffic_generator import CapturingTrafficGenerator
> +from .scapy import ScapyTrafficGenerator
> +
> +
> +def create_traffic_generator(
> +    tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig
> +) -> CapturingTrafficGenerator:
> +    """A factory function for creating traffic generator object from user config."""
> +
> +    match traffic_generator_config.traffic_generator_type:
> +        case TrafficGeneratorType.SCAPY:
> +            return ScapyTrafficGenerator(tg_node, traffic_generator_config)
> +        case _:
> +            raise ConfigurationError(
> +                "Unknown traffic generator: "
> +                f"{traffic_generator_config.traffic_generator_type}"
> +            )
> diff --git a/dts/framework/testbed_model/capturing_traffic_generator.py b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
> similarity index 96%
> rename from dts/framework/testbed_model/capturing_traffic_generator.py
> rename to dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
> index ab98987f8e..e521211ef0 100644
> --- a/dts/framework/testbed_model/capturing_traffic_generator.py
> +++ b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
> @@ -16,9 +16,9 @@
>   from scapy.packet import Packet  # type: ignore[import]
>   
>   from framework.settings import SETTINGS
> +from framework.testbed_model.port import Port
>   from framework.utils import get_packet_summaries
>   
> -from .hw.port import Port
>   from .traffic_generator import TrafficGenerator
>   
>   
> @@ -130,7 +130,9 @@ def _send_packets_and_capture(
>           for the specified duration. It must be able to handle no received packets.
>           """
>   
> -    def _write_capture_from_packets(self, capture_name: str, packets: list[Packet]):
> +    def _write_capture_from_packets(
> +        self, capture_name: str, packets: list[Packet]
> +    ) -> None:
>           file_name = f"{SETTINGS.output_dir}/{capture_name}.pcap"
>           self._logger.debug(f"Writing packets to {file_name}.")
>           scapy.utils.wrpcap(file_name, packets)
> diff --git a/dts/framework/testbed_model/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py
> similarity index 95%
> rename from dts/framework/testbed_model/scapy.py
> rename to dts/framework/testbed_model/traffic_generator/scapy.py
> index af0d4dbb25..51864b6e6b 100644
> --- a/dts/framework/testbed_model/scapy.py
> +++ b/dts/framework/testbed_model/traffic_generator/scapy.py
> @@ -24,16 +24,15 @@
>   from scapy.packet import Packet  # type: ignore[import]
>   
>   from framework.config import OS, ScapyTrafficGeneratorConfig
> -from framework.logger import DTSLOG, getLogger
>   from framework.remote_session import PythonShell
>   from framework.settings import SETTINGS
> +from framework.testbed_model.node import Node
> +from framework.testbed_model.port import Port
>   
>   from .capturing_traffic_generator import (
>       CapturingTrafficGenerator,
>       _get_default_capture_name,
>   )
> -from .hw.port import Port
> -from .tg_node import TGNode
>   
>   """
>   ========= BEGIN RPC FUNCTIONS =========
> @@ -146,7 +145,7 @@ def quit(self) -> None:
>           self._BaseServer__shutdown_request = True
>           return None
>   
> -    def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary):
> +    def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> None:
>           """Add a function to the server.
>   
>           This is meant to be executed remotely.
> @@ -191,15 +190,9 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator):
>       session: PythonShell
>       rpc_server_proxy: xmlrpc.client.ServerProxy
>       _config: ScapyTrafficGeneratorConfig
> -    _tg_node: TGNode
> -    _logger: DTSLOG
> -
> -    def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig):
> -        self._config = config
> -        self._tg_node = tg_node
> -        self._logger = getLogger(
> -            f"{self._tg_node.name} {self._config.traffic_generator_type}"
> -        )
> +
> +    def __init__(self, tg_node: Node, config: ScapyTrafficGeneratorConfig):
> +        super().__init__(tg_node, config)
>   
>           assert (
>               self._tg_node.config.os == OS.linux
> @@ -235,7 +228,7 @@ def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig):
>               function_bytes = marshal.dumps(function.__code__)
>               self.rpc_server_proxy.add_rpc_function(function.__name__, function_bytes)
>   
> -    def _start_xmlrpc_server_in_remote_python(self, listen_port: int):
> +    def _start_xmlrpc_server_in_remote_python(self, listen_port: int) -> None:
>           # load the source of the function
>           src = inspect.getsource(QuittableXMLRPCServer)
>           # Lines with only whitespace break the repl if in the middle of a function
> @@ -280,7 +273,7 @@ def _send_packets_and_capture(
>           scapy_packets = [Ether(packet.data) for packet in xmlrpc_packets]
>           return scapy_packets
>   
> -    def close(self):
> +    def close(self) -> None:
>           try:
>               self.rpc_server_proxy.quit()
>           except ConnectionRefusedError:
> diff --git a/dts/framework/testbed_model/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
> similarity index 80%
> rename from dts/framework/testbed_model/traffic_generator.py
> rename to dts/framework/testbed_model/traffic_generator/traffic_generator.py
> index 28c35d3ce4..ea7c3963da 100644
> --- a/dts/framework/testbed_model/traffic_generator.py
> +++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
> @@ -12,11 +12,12 @@
>   
>   from scapy.packet import Packet  # type: ignore[import]
>   
> -from framework.logger import DTSLOG
> +from framework.config import TrafficGeneratorConfig
> +from framework.logger import DTSLOG, getLogger
> +from framework.testbed_model.node import Node
> +from framework.testbed_model.port import Port
>   from framework.utils import get_packet_summaries
>   
> -from .hw.port import Port
> -
>   
>   class TrafficGenerator(ABC):
>       """The base traffic generator.
> @@ -24,8 +25,17 @@ class TrafficGenerator(ABC):
>       Defines the few basic methods that each traffic generator must implement.
>       """
>   
> +    _config: TrafficGeneratorConfig
> +    _tg_node: Node
>       _logger: DTSLOG
>   
> +    def __init__(self, tg_node: Node, config: TrafficGeneratorConfig):
> +        self._config = config
> +        self._tg_node = tg_node
> +        self._logger = getLogger(
> +            f"{self._tg_node.name} {self._config.traffic_generator_type}"
> +        )
> +
>       def send_packet(self, packet: Packet, port: Port) -> None:
>           """Send a packet and block until it is fully sent.
>   
> diff --git a/dts/framework/testbed_model/hw/virtual_device.py b/dts/framework/testbed_model/virtual_device.py
> similarity index 100%
> rename from dts/framework/testbed_model/hw/virtual_device.py
> rename to dts/framework/testbed_model/virtual_device.py
> diff --git a/dts/framework/utils.py b/dts/framework/utils.py
> index d27c2c5b5f..f0c916471c 100644
> --- a/dts/framework/utils.py
> +++ b/dts/framework/utils.py
> @@ -7,7 +7,6 @@
>   import json
>   import os
>   import subprocess
> -import sys
>   from enum import Enum
>   from pathlib import Path
>   from subprocess import SubprocessError
> @@ -16,35 +15,7 @@
>   
>   from .exception import ConfigurationError
>   
> -
> -class StrEnum(Enum):
> -    @staticmethod
> -    def _generate_next_value_(
> -        name: str, start: int, count: int, last_values: object
> -    ) -> str:
> -        return name
> -
> -    def __str__(self) -> str:
> -        return self.name
> -
> -
> -REGEX_FOR_PCI_ADDRESS = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/"
> -
> -
> -def check_dts_python_version() -> None:
> -    if sys.version_info.major < 3 or (
> -        sys.version_info.major == 3 and sys.version_info.minor < 10
> -    ):
> -        print(
> -            RED(
> -                (
> -                    "WARNING: DTS execution node's python version is lower than"
> -                    "python 3.10, is deprecated and will not work in future releases."
> -                )
> -            ),
> -            file=sys.stderr,
> -        )
> -        print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
> +REGEX_FOR_PCI_ADDRESS: str = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/"
>   
>   
>   def expand_range(range_str: str) -> list[int]:
> @@ -67,7 +38,7 @@ def expand_range(range_str: str) -> list[int]:
>       return expanded_range
>   
>   
> -def get_packet_summaries(packets: list[Packet]):
> +def get_packet_summaries(packets: list[Packet]) -> str:
>       if len(packets) == 1:
>           packet_summaries = packets[0].summary()
>       else:
> @@ -77,8 +48,15 @@ def get_packet_summaries(packets: list[Packet]):
>       return f"Packet contents: \n{packet_summaries}"
>   
>   
> -def RED(text: str) -> str:
> -    return f"\u001B[31;1m{str(text)}\u001B[0m"
> +class StrEnum(Enum):
> +    @staticmethod
> +    def _generate_next_value_(
> +        name: str, start: int, count: int, last_values: object
> +    ) -> str:
> +        return name
> +
> +    def __str__(self) -> str:
> +        return self.name
>   
>   
>   class MesonArgs(object):
> @@ -225,5 +203,5 @@ def _delete_tarball(self) -> None:
>           if self._tarball_path and os.path.exists(self._tarball_path):
>               os.remove(self._tarball_path)
>   
> -    def __fspath__(self):
> +    def __fspath__(self) -> str:
>           return str(self._tarball_path)
> diff --git a/dts/main.py b/dts/main.py
> index 43311fa847..5d4714b0c3 100755
> --- a/dts/main.py
> +++ b/dts/main.py
> @@ -10,10 +10,17 @@
>   
>   import logging
>   
> -from framework import dts
> +from framework import settings
>   
>   
>   def main() -> None:
> +    """Set DTS settings, then run DTS.
> +
> +    The DTS settings are taken from the command line arguments and the environment variables.
> +    """
> +    settings.SETTINGS = settings.get_settings()
> +    from framework import dts
> +
>       dts.run_all()
>   
>   
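A side note on the deferred `from framework import dts` above: it has to
come after the SETTINGS assignment because of Python's import-time name
binding. A self-contained illustration (plain Python with a hypothetical
module name, not DTS code):

    import sys
    import types

    cfg = types.ModuleType("cfg")
    cfg.VALUE = None
    sys.modules["cfg"] = cfg

    ns: dict = {}
    exec("from cfg import VALUE", ns)  # binds the *current* object: None
    cfg.VALUE = 42                     # later assignment is invisible to ns
    assert ns["VALUE"] is None

Any module doing `from framework.settings import SETTINGS` captures
whatever object SETTINGS names at import time, so main() must populate
SETTINGS before importing the modules that consume it.
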
Reviewed-by: Yoan Picchi <yoan.picchi@arm.com>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 02/21] dts: add docstring checker
  2023-11-15 13:09               ` [PATCH v7 02/21] dts: add docstring checker Juraj Linkeš
@ 2023-11-20 16:03                 ` Yoan Picchi
  0 siblings, 0 replies; 255+ messages in thread
From: Yoan Picchi @ 2023-11-20 16:03 UTC (permalink / raw)
  To: Juraj Linkeš,
	thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek
  Cc: dev

On 11/15/23 13:09, Juraj Linkeš wrote:
> Python docstrings are the in-code way to document the code. The
> docstring checker of choice is pydocstyle which we're executing from
> Pylama, but the current latest versions are not compatible due to [0],
> so pin the pydocstyle version to the latest working version.
> 
> [0] https://github.com/klen/pylama/issues/232
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>   dts/poetry.lock    | 12 ++++++------
>   dts/pyproject.toml |  6 +++++-
>   2 files changed, 11 insertions(+), 7 deletions(-)
> 
> diff --git a/dts/poetry.lock b/dts/poetry.lock
> index f7b3b6d602..a734fa71f0 100644
> --- a/dts/poetry.lock
> +++ b/dts/poetry.lock
> @@ -489,20 +489,20 @@ files = [
>   
>   [[package]]
>   name = "pydocstyle"
> -version = "6.3.0"
> +version = "6.1.1"
>   description = "Python docstring style checker"
>   optional = false
>   python-versions = ">=3.6"
>   files = [
> -    {file = "pydocstyle-6.3.0-py3-none-any.whl", hash = "sha256:118762d452a49d6b05e194ef344a55822987a462831ade91ec5c06fd2169d019"},
> -    {file = "pydocstyle-6.3.0.tar.gz", hash = "sha256:7ce43f0c0ac87b07494eb9c0b462c0b73e6ff276807f204d6b53edc72b7e44e1"},
> +    {file = "pydocstyle-6.1.1-py3-none-any.whl", hash = "sha256:6987826d6775056839940041beef5c08cc7e3d71d63149b48e36727f70144dc4"},
> +    {file = "pydocstyle-6.1.1.tar.gz", hash = "sha256:1d41b7c459ba0ee6c345f2eb9ae827cab14a7533a88c5c6f7e94923f72df92dc"},
>   ]
>   
>   [package.dependencies]
> -snowballstemmer = ">=2.2.0"
> +snowballstemmer = "*"
>   
>   [package.extras]
> -toml = ["tomli (>=1.2.3)"]
> +toml = ["toml"]
>   
>   [[package]]
>   name = "pyflakes"
> @@ -837,4 +837,4 @@ jsonschema = ">=4,<5"
>   [metadata]
>   lock-version = "2.0"
>   python-versions = "^3.10"
> -content-hash = "0b1e4a1cb8323e17e5ee5951c97e74bde6e60d0413d7b25b1803d5b2bab39639"
> +content-hash = "3501e97b3dadc19fe8ae179fe21b1edd2488001da9a8e86ff2bca0b86b99b89b"
> diff --git a/dts/pyproject.toml b/dts/pyproject.toml
> index 6762edfa6b..3943c87c87 100644
> --- a/dts/pyproject.toml
> +++ b/dts/pyproject.toml
> @@ -25,6 +25,7 @@ PyYAML = "^6.0"
>   types-PyYAML = "^6.0.8"
>   fabric = "^2.7.1"
>   scapy = "^2.5.0"
> +pydocstyle = "6.1.1"
>   
>   [tool.poetry.group.dev.dependencies]
>   mypy = "^0.961"
> @@ -39,10 +40,13 @@ requires = ["poetry-core>=1.0.0"]
>   build-backend = "poetry.core.masonry.api"
>   
>   [tool.pylama]
> -linters = "mccabe,pycodestyle,pyflakes"
> +linters = "mccabe,pycodestyle,pydocstyle,pyflakes"
>   format = "pylint"
>   max_line_length = 88 # https://black.readthedocs.io/en/stable/the_black_code_style/current_style.html#line-length
>   
> +[tool.pylama.linter.pydocstyle]
> +convention = "google"
> +
>   [tool.mypy]
>   python_version = "3.10"
>   enable_error_code = ["ignore-without-code"]
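
For anyone wondering what the new checker looks for: a minimal sketch
(hypothetical function, not from the patch) of a docstring that the
google convention accepts, with the Args:/Returns: sections it expects:

    def add(a: int, b: int) -> int:
        """Add two numbers.

        Args:
            a: The first addend.
            b: The second addend.

        Returns:
            The sum of `a` and `b`.
        """
        return a + b
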
Reviewed-by: Yoan Picchi <yoan.picchi@arm.com>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 03/21] dts: add basic developer docs
  2023-11-15 13:09               ` [PATCH v7 03/21] dts: add basic developer docs Juraj Linkeš
@ 2023-11-20 16:03                 ` Yoan Picchi
  0 siblings, 0 replies; 255+ messages in thread
From: Yoan Picchi @ 2023-11-20 16:03 UTC (permalink / raw)
  To: Juraj Linkeš,
	thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek
  Cc: dev

On 11/15/23 13:09, Juraj Linkeš wrote:
> Expand the framework contribution guidelines and add how to document the
> code with Python docstrings.
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>   doc/guides/tools/dts.rst | 73 ++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 73 insertions(+)
> 
> diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
> index 32c18ee472..cd771a428c 100644
> --- a/doc/guides/tools/dts.rst
> +++ b/doc/guides/tools/dts.rst
> @@ -264,6 +264,65 @@ which be changed with the ``--output-dir`` command line argument.
>   The results contain basic statistics of passed/failed test cases and DPDK version.
>   
>   
> +Contributing to DTS
> +-------------------
> +
> +There are two areas of contribution: The DTS framework and DTS test suites.
> +
> +The framework contains the logic needed to run test cases, such as connecting to nodes,
> +running DPDK apps and collecting results.
> +
> +The test cases call APIs from the framework to test their scenarios. Adding test cases may
> +require adding code to the framework as well.
> +
> +
> +Framework Coding Guidelines
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +When adding code to the DTS framework, pay attention to the rest of the code
> +and try not to divert much from it. The :ref:`DTS developer tools <dts_dev_tools>` will issue
> +warnings when some of the basics are not met.
> +
> +The code must be properly documented with docstrings. The style must conform to
> +the `Google style <https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings>`_.
> +See an example of the style
> +`here <https://www.sphinx-doc.org/en/master/usage/extensions/example_google.html>`_.
> +For cases which are not covered by the Google style, refer
> +to `PEP 257 <https://peps.python.org/pep-0257/>`_. There are some cases which are not covered by
> +the two style guides, where we deviate or where some additional clarification is helpful:
> +
> +   * The __init__() methods of classes are documented separately from the docstring of the class
> +     itself.
> +   * The docstrings of implemented abstract methods should refer to the superclass's definition
> +     if there's no deviation.
> +   * Instance variables/attributes should be documented in the docstring of the class
> +     in the ``Attributes:`` section.
> +   * The dataclass.dataclass decorator changes how the attributes are processed. The dataclass
> +     attributes which result in instance variables/attributes should also be recorded
> +     in the ``Attributes:`` section.
> +   * Class variables/attributes, on the other hand, should be documented with ``#:`` above
> +     the type annotated line. The description may be omitted if the meaning is obvious.
> +   * The Enum and TypedDict also process the attributes in particular ways and should be documented
> +     with ``#:`` as well. This is mainly so that the autogenerated docs contain the assigned value.
> +   * When referencing a parameter of a function or a method in their docstring, don't use
> +     any articles and put the parameter into single backticks. This mimics the style of
> +     `Python's documentation <https://docs.python.org/3/index.html>`_.
> +   * When specifying a value, use double backticks::
> +
> +        def foo(greet: bool) -> None:
> +            """Demonstration of single and double backticks.
> +
> +            `greet` controls whether ``Hello World`` is printed.
> +
> +            Args:
> +               greet: Whether to print the ``Hello World`` message.
> +            """
> +            if greet:
> +               print(f"Hello World")
> +
> +   * The docstring maximum line length is the same as the code maximum line length.
> +
> +
>   How To Write a Test Suite
>   -------------------------
>   
> @@ -293,6 +352,18 @@ There are four types of methods that comprise a test suite:
>      | These methods don't need to be implemented if there's no need for them in a test suite.
>      In that case, nothing will happen when they're executed.
>   
> +#. **Configuration, traffic and other logic**
> +
> +   The ``TestSuite`` class contains a variety of methods for anything that
> +   a test suite setup, a teardown, or a test case may need to do.
> +
> +   The test suites also frequently use a DPDK app, such as testpmd, in interactive mode
> +   and use the interactive shell instances directly.
> +
> +   These are the two main ways to call the framework logic in test suites. If there's any
> +   functionality or logic missing from the framework, it should be implemented so that
> +   the test suites can use one of these two ways.
> +
>   #. **Test case verification**
>   
>      Test case verification should be done with the ``verify`` method, which records the result.
> @@ -308,6 +379,8 @@ There are four types of methods that comprise a test suite:
>      and used by the test suite via the ``sut_node`` field.
>   
>   
> +.. _dts_dev_tools:
> +
>   DTS Developer Tools
>   -------------------
>   
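A minimal sketch of the ``#:`` convention described above (hypothetical
class, not from the patch), which is what makes the assigned values show
up in the autogenerated docs:

    from enum import Enum, auto

    class TrafficDirection(Enum):
        """The direction of traffic relative to the SUT."""

        #: Traffic flows towards the SUT.
        ingress = auto()
        #: Traffic flows away from the SUT.
        egress = auto()
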
Reviewed-by: Yoan Picchi <yoan.picchi@arm.com>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 01/21] dts: code adjustments for doc generation
  2023-11-16 21:04                 ` Jeremy Spewock
@ 2023-11-20 16:10                   ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-20 16:10 UTC (permalink / raw)
  To: Jeremy Spewock
  Cc: thomas, Honnappa.Nagarahalli, probb, paul.szczepanek, yoan.picchi, dev

On Thu, Nov 16, 2023 at 10:05 PM Jeremy Spewock <jspewock@iol.unh.edu> wrote:
>
>
>
> On Wed, Nov 15, 2023 at 8:11 AM Juraj Linkeš <juraj.linkes@pantheon.tech> wrote:
>>
>> The standard Python tool for generating API documentation, Sphinx,
>> imports modules one-by-one when generating the documentation. This
>> requires code changes:
>> * properly guarding argument parsing in the if __name__ == '__main__'
>>   block,
>> * the logger used by DTS runner underwent the same treatment so that it
>>   doesn't create log files outside of a DTS run,
>> * however, DTS uses the arguments to construct an object holding global
>>   variables. The defaults for the global variables needed to be moved
>>   out of argument parsing,
>> * importing the remote_session module from framework resulted in
>>   circular imports because of one module trying to import another
>>   module. This is fixed by reorganizing the code,
>> * some code reorganization was done because the resulting structure
>>   makes more sense, improving documentation clarity.
>>
>> There are some other changes which are documentation related:
>> * added missing type annotations so they appear in the generated docs,
>> * reordered arguments in some methods,
>> * removed superfluous arguments and attributes,
>> * changed functions/methods/attributes from public to private and vice-versa.
>>
>> All of the above appear in the generated documentation and, with them,
>> the documentation is improved.
>>
>> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
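
To make the first bullet concrete, a minimal sketch (hypothetical module,
not DTS code) of the guard pattern: Sphinx can import the module without
side effects, because argument parsing only runs when the file is
executed as a script.

    import argparse

    def _get_parser() -> argparse.ArgumentParser:
        parser = argparse.ArgumentParser(description="demo")
        parser.add_argument("--verbose", action="store_true")
        return parser

    if __name__ == "__main__":
        args = _get_parser().parse_args()
        print(args.verbose)
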
>> ---
>>  dts/framework/config/__init__.py              | 10 ++-
>>  dts/framework/dts.py                          | 33 +++++--
>>  dts/framework/exception.py                    | 54 +++++-------
>>  dts/framework/remote_session/__init__.py      | 41 ++++-----
>>  .../interactive_remote_session.py             |  0
>>  .../{remote => }/interactive_shell.py         |  0
>>  .../{remote => }/python_shell.py              |  0
>>  .../remote_session/remote/__init__.py         | 27 ------
>>  .../{remote => }/remote_session.py            |  0
>>  .../{remote => }/ssh_session.py               | 12 +--
>>  .../{remote => }/testpmd_shell.py             |  0
>>  dts/framework/settings.py                     | 87 +++++++++++--------
>>  dts/framework/test_result.py                  |  4 +-
>>  dts/framework/test_suite.py                   |  7 +-
>>  dts/framework/testbed_model/__init__.py       | 12 +--
>>  dts/framework/testbed_model/{hw => }/cpu.py   | 13 +++
>>  dts/framework/testbed_model/hw/__init__.py    | 27 ------
>>  .../linux_session.py                          |  6 +-
>>  dts/framework/testbed_model/node.py           | 25 ++++--
>>  .../os_session.py                             | 22 ++---
>>  dts/framework/testbed_model/{hw => }/port.py  |  0
>>  .../posix_session.py                          |  4 +-
>>  dts/framework/testbed_model/sut_node.py       |  8 +-
>>  dts/framework/testbed_model/tg_node.py        | 30 +------
>>  .../traffic_generator/__init__.py             | 24 +++++
>>  .../capturing_traffic_generator.py            |  6 +-
>>  .../{ => traffic_generator}/scapy.py          | 23 ++---
>>  .../traffic_generator.py                      | 16 +++-
>>  .../testbed_model/{hw => }/virtual_device.py  |  0
>>  dts/framework/utils.py                        | 46 +++-------
>>  dts/main.py                                   |  9 +-
>>  31 files changed, 258 insertions(+), 288 deletions(-)
>>  rename dts/framework/remote_session/{remote => }/interactive_remote_session.py (100%)
>>  rename dts/framework/remote_session/{remote => }/interactive_shell.py (100%)
>>  rename dts/framework/remote_session/{remote => }/python_shell.py (100%)
>>  delete mode 100644 dts/framework/remote_session/remote/__init__.py
>>  rename dts/framework/remote_session/{remote => }/remote_session.py (100%)
>>  rename dts/framework/remote_session/{remote => }/ssh_session.py (91%)
>>  rename dts/framework/remote_session/{remote => }/testpmd_shell.py (100%)
>>  rename dts/framework/testbed_model/{hw => }/cpu.py (95%)
>>  delete mode 100644 dts/framework/testbed_model/hw/__init__.py
>>  rename dts/framework/{remote_session => testbed_model}/linux_session.py (97%)
>>  rename dts/framework/{remote_session => testbed_model}/os_session.py (95%)
>>  rename dts/framework/testbed_model/{hw => }/port.py (100%)
>>  rename dts/framework/{remote_session => testbed_model}/posix_session.py (98%)
>>  create mode 100644 dts/framework/testbed_model/traffic_generator/__init__.py
>>  rename dts/framework/testbed_model/{ => traffic_generator}/capturing_traffic_generator.py (96%)
>>  rename dts/framework/testbed_model/{ => traffic_generator}/scapy.py (95%)
>>  rename dts/framework/testbed_model/{ => traffic_generator}/traffic_generator.py (80%)
>>  rename dts/framework/testbed_model/{hw => }/virtual_device.py (100%)
>>
>> diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
>> index cb7e00ba34..2044c82611 100644
>> --- a/dts/framework/config/__init__.py
>> +++ b/dts/framework/config/__init__.py
>> @@ -17,6 +17,7 @@
>>  import warlock  # type: ignore[import]
>>  import yaml
>>
>> +from framework.exception import ConfigurationError
>>  from framework.settings import SETTINGS
>>  from framework.utils import StrEnum
>>
>> @@ -89,7 +90,7 @@ class TrafficGeneratorConfig:
>>      traffic_generator_type: TrafficGeneratorType
>>
>>      @staticmethod
>> -    def from_dict(d: dict):
>> +    def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig":
>>          # This looks useless now, but is designed to allow expansion to traffic
>>          # generators that require more configuration later.
>>          match TrafficGeneratorType(d["type"]):
>> @@ -97,6 +98,10 @@ def from_dict(d: dict):
>>                  return ScapyTrafficGeneratorConfig(
>>                      traffic_generator_type=TrafficGeneratorType.SCAPY
>>                  )
>> +            case _:
>> +                raise ConfigurationError(
>> +                    f'Unknown traffic generator type "{d["type"]}".'
>> +                )
>>
>>
>>  @dataclass(slots=True, frozen=True)
>> @@ -324,6 +329,3 @@ def load_config() -> Configuration:
>>      config: dict[str, Any] = warlock.model_factory(schema, name="_Config")(config_data)
>>      config_obj: Configuration = Configuration.from_dict(dict(config))
>>      return config_obj
>> -
>> -
>> -CONFIGURATION = load_config()
>> diff --git a/dts/framework/dts.py b/dts/framework/dts.py
>> index f773f0c38d..4c7fb0c40a 100644
>> --- a/dts/framework/dts.py
>> +++ b/dts/framework/dts.py
>> @@ -6,19 +6,19 @@
>>  import sys
>>
>>  from .config import (
>> -    CONFIGURATION,
>>      BuildTargetConfiguration,
>>      ExecutionConfiguration,
>>      TestSuiteConfig,
>> +    load_config,
>>  )
>>  from .exception import BlockingTestSuiteError
>>  from .logger import DTSLOG, getLogger
>>  from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result
>>  from .test_suite import get_test_suites
>>  from .testbed_model import SutNode, TGNode
>> -from .utils import check_dts_python_version
>>
>> -dts_logger: DTSLOG = getLogger("DTSRunner")
>> +# dummy defaults to satisfy linters
>> +dts_logger: DTSLOG = None  # type: ignore[assignment]
>>  result: DTSResult = DTSResult(dts_logger)
>>
>>
>> @@ -30,14 +30,18 @@ def run_all() -> None:
>>      global dts_logger
>>      global result
>>
>> +    # create a regular DTS logger and create a new result with it
>> +    dts_logger = getLogger("DTSRunner")
>> +    result = DTSResult(dts_logger)
>> +
>>      # check the python version of the server that run dts
>> -    check_dts_python_version()
>> +    _check_dts_python_version()
>>
>>      sut_nodes: dict[str, SutNode] = {}
>>      tg_nodes: dict[str, TGNode] = {}
>>      try:
>>          # for all Execution sections
>> -        for execution in CONFIGURATION.executions:
>> +        for execution in load_config().executions:
>>              sut_node = sut_nodes.get(execution.system_under_test_node.name)
>>              tg_node = tg_nodes.get(execution.traffic_generator_node.name)
>>
>> @@ -82,6 +86,25 @@ def run_all() -> None:
>>      _exit_dts()
>>
>>
>> +def _check_dts_python_version() -> None:
>> +    def RED(text: str) -> str:
>> +        return f"\u001B[31;1m{str(text)}\u001B[0m"
>> +
>> +    if sys.version_info.major < 3 or (
>> +        sys.version_info.major == 3 and sys.version_info.minor < 10
>> +    ):
>> +        print(
>> +            RED(
>> +                (
>> +                    "WARNING: DTS execution node's python version is lower than"
>> +                    "python 3.10, is deprecated and will not work in future releases."
>> +                )
>> +            ),
>> +            file=sys.stderr,
>> +        )
>> +        print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
>> +
>> +
>>  def _run_execution(
>>      sut_node: SutNode,
>>      tg_node: TGNode,
>> diff --git a/dts/framework/exception.py b/dts/framework/exception.py
>> index 001a5a5496..7489c03570 100644
>> --- a/dts/framework/exception.py
>> +++ b/dts/framework/exception.py
>> @@ -42,19 +42,14 @@ class SSHTimeoutError(DTSError):
>>      Command execution timeout.
>>      """
>>
>> -    command: str
>> -    output: str
>>      severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
>> +    _command: str
>>
>> -    def __init__(self, command: str, output: str):
>> -        self.command = command
>> -        self.output = output
>> +    def __init__(self, command: str):
>> +        self._command = command
>>
>>      def __str__(self) -> str:
>> -        return f"TIMEOUT on {self.command}"
>> -
>> -    def get_output(self) -> str:
>> -        return self.output
>> +        return f"TIMEOUT on {self._command}"
>>
>>
>>  class SSHConnectionError(DTSError):
>> @@ -62,18 +57,18 @@ class SSHConnectionError(DTSError):
>>      SSH connection error.
>>      """
>>
>> -    host: str
>> -    errors: list[str]
>>      severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
>> +    _host: str
>> +    _errors: list[str]
>>
>>      def __init__(self, host: str, errors: list[str] | None = None):
>> -        self.host = host
>> -        self.errors = [] if errors is None else errors
>> +        self._host = host
>> +        self._errors = [] if errors is None else errors
>>
>>      def __str__(self) -> str:
>> -        message = f"Error trying to connect with {self.host}."
>> -        if self.errors:
>> -            message += f" Errors encountered while retrying: {', '.join(self.errors)}"
>> +        message = f"Error trying to connect with {self._host}."
>> +        if self._errors:
>> +            message += f" Errors encountered while retrying: {', '.join(self._errors)}"
>>
>>          return message
>>
>> @@ -84,14 +79,14 @@ class SSHSessionDeadError(DTSError):
>>      It can no longer be used.
>>      """
>>
>> -    host: str
>>      severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
>> +    _host: str
>>
>>      def __init__(self, host: str):
>> -        self.host = host
>> +        self._host = host
>>
>>      def __str__(self) -> str:
>> -        return f"SSH session with {self.host} has died"
>> +        return f"SSH session with {self._host} has died"
>>
>>
>>  class ConfigurationError(DTSError):
>> @@ -107,18 +102,18 @@ class RemoteCommandExecutionError(DTSError):
>>      Raised when a command executed on a Node returns a non-zero exit status.
>>      """
>>
>> -    command: str
>> -    command_return_code: int
>>      severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
>> +    command: str
>> +    _command_return_code: int
>>
>>      def __init__(self, command: str, command_return_code: int):
>>          self.command = command
>> -        self.command_return_code = command_return_code
>> +        self._command_return_code = command_return_code
>>
>>      def __str__(self) -> str:
>>          return (
>>              f"Command {self.command} returned a non-zero exit code: "
>> -            f"{self.command_return_code}"
>> +            f"{self._command_return_code}"
>>          )
>>
>>
>> @@ -143,22 +138,15 @@ class TestCaseVerifyError(DTSError):
>>      Used in test cases to verify the expected behavior.
>>      """
>>
>> -    value: str
>>      severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR
>>
>> -    def __init__(self, value: str):
>> -        self.value = value
>> -
>> -    def __str__(self) -> str:
>> -        return repr(self.value)
>> -
>
>
> Does this change mean we are no longer providing descriptions for what failing the verification means? I guess there isn't really harm in removing that functionality, but I'm not sure I see the value in removing the extra information either.
>

This shouldn't have any impact on the existing functionality. The
error message will be stored even without the variable (that's the
default behavior of exceptions), and the string representation is not
used anywhere in the code; even if it were, the only difference is that
the self.value string would be in quotes. This just removes unnecessary
code, which I didn't want to document as that would be just confusing.
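
To illustrate (plain Python, not DTS code): the message passed to an
exception is kept in args and printed by the default __str__, so dropping
the explicit instance variable and __str__ loses nothing:

    try:
        raise Exception("verification failed: link is down")
    except Exception as e:
        print(e)        # verification failed: link is down
        print(repr(e))  # Exception('verification failed: link is down')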

>>
>>
>>  class BlockingTestSuiteError(DTSError):
>> -    suite_name: str
>>      severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR
>> +    _suite_name: str
>>
>>      def __init__(self, suite_name: str) -> None:
>> -        self.suite_name = suite_name
>> +        self._suite_name = suite_name
>>
>>      def __str__(self) -> str:
>> -        return f"Blocking suite {self.suite_name} failed."
>> +        return f"Blocking suite {self._suite_name} failed."
>> diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
>> index 00b6d1f03a..5e7ddb2b05 100644
>> --- a/dts/framework/remote_session/__init__.py
>> +++ b/dts/framework/remote_session/__init__.py
>> @@ -12,29 +12,24 @@
>>
>>  # pylama:ignore=W0611
>>
>> -from framework.config import OS, NodeConfiguration
>> -from framework.exception import ConfigurationError
>> +from framework.config import NodeConfiguration
>>  from framework.logger import DTSLOG
>>
>> -from .linux_session import LinuxSession
>> -from .os_session import InteractiveShellType, OSSession
>> -from .remote import (
>> -    CommandResult,
>> -    InteractiveRemoteSession,
>> -    InteractiveShell,
>> -    PythonShell,
>> -    RemoteSession,
>> -    SSHSession,
>> -    TestPmdDevice,
>> -    TestPmdShell,
>> -)
>> -
>> -
>> -def create_session(
>> +from .interactive_remote_session import InteractiveRemoteSession
>> +from .interactive_shell import InteractiveShell
>> +from .python_shell import PythonShell
>> +from .remote_session import CommandResult, RemoteSession
>> +from .ssh_session import SSHSession
>> +from .testpmd_shell import TestPmdShell
>> +
>> +
>> +def create_remote_session(
>>      node_config: NodeConfiguration, name: str, logger: DTSLOG
>> -) -> OSSession:
>> -    match node_config.os:
>> -        case OS.linux:
>> -            return LinuxSession(node_config, name, logger)
>> -        case _:
>> -            raise ConfigurationError(f"Unsupported OS {node_config.os}")
>> +) -> RemoteSession:
>> +    return SSHSession(node_config, name, logger)
>> +
>> +
>> +def create_interactive_session(
>> +    node_config: NodeConfiguration, logger: DTSLOG
>> +) -> InteractiveRemoteSession:
>> +    return InteractiveRemoteSession(node_config, logger)
>> diff --git a/dts/framework/remote_session/remote/interactive_remote_session.py b/dts/framework/remote_session/interactive_remote_session.py
>> similarity index 100%
>> rename from dts/framework/remote_session/remote/interactive_remote_session.py
>> rename to dts/framework/remote_session/interactive_remote_session.py
>> diff --git a/dts/framework/remote_session/remote/interactive_shell.py b/dts/framework/remote_session/interactive_shell.py
>> similarity index 100%
>> rename from dts/framework/remote_session/remote/interactive_shell.py
>> rename to dts/framework/remote_session/interactive_shell.py
>> diff --git a/dts/framework/remote_session/remote/python_shell.py b/dts/framework/remote_session/python_shell.py
>> similarity index 100%
>> rename from dts/framework/remote_session/remote/python_shell.py
>> rename to dts/framework/remote_session/python_shell.py
>> diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
>> deleted file mode 100644
>> index 06403691a5..0000000000
>> --- a/dts/framework/remote_session/remote/__init__.py
>> +++ /dev/null
>> @@ -1,27 +0,0 @@
>> -# SPDX-License-Identifier: BSD-3-Clause
>> -# Copyright(c) 2023 PANTHEON.tech s.r.o.
>> -# Copyright(c) 2023 University of New Hampshire
>> -
>> -# pylama:ignore=W0611
>> -
>> -from framework.config import NodeConfiguration
>> -from framework.logger import DTSLOG
>> -
>> -from .interactive_remote_session import InteractiveRemoteSession
>> -from .interactive_shell import InteractiveShell
>> -from .python_shell import PythonShell
>> -from .remote_session import CommandResult, RemoteSession
>> -from .ssh_session import SSHSession
>> -from .testpmd_shell import TestPmdDevice, TestPmdShell
>> -
>> -
>> -def create_remote_session(
>> -    node_config: NodeConfiguration, name: str, logger: DTSLOG
>> -) -> RemoteSession:
>> -    return SSHSession(node_config, name, logger)
>> -
>> -
>> -def create_interactive_session(
>> -    node_config: NodeConfiguration, logger: DTSLOG
>> -) -> InteractiveRemoteSession:
>> -    return InteractiveRemoteSession(node_config, logger)
>> diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote_session.py
>> similarity index 100%
>> rename from dts/framework/remote_session/remote/remote_session.py
>> rename to dts/framework/remote_session/remote_session.py
>> diff --git a/dts/framework/remote_session/remote/ssh_session.py b/dts/framework/remote_session/ssh_session.py
>> similarity index 91%
>> rename from dts/framework/remote_session/remote/ssh_session.py
>> rename to dts/framework/remote_session/ssh_session.py
>> index 8d127f1601..cee11d14d6 100644
>> --- a/dts/framework/remote_session/remote/ssh_session.py
>> +++ b/dts/framework/remote_session/ssh_session.py
>> @@ -18,9 +18,7 @@
>>      SSHException,
>>  )
>>
>> -from framework.config import NodeConfiguration
>>  from framework.exception import SSHConnectionError, SSHSessionDeadError, SSHTimeoutError
>> -from framework.logger import DTSLOG
>>
>>  from .remote_session import CommandResult, RemoteSession
>>
>> @@ -45,14 +43,6 @@ class SSHSession(RemoteSession):
>>
>>      session: Connection
>>
>> -    def __init__(
>> -        self,
>> -        node_config: NodeConfiguration,
>> -        session_name: str,
>> -        logger: DTSLOG,
>> -    ):
>> -        super(SSHSession, self).__init__(node_config, session_name, logger)
>> -
>>      def _connect(self) -> None:
>>          errors = []
>>          retry_attempts = 10
>> @@ -117,7 +107,7 @@ def _send_command(
>>
>>          except CommandTimedOut as e:
>>              self._logger.exception(e)
>> -            raise SSHTimeoutError(command, e.result.stderr) from e
>> +            raise SSHTimeoutError(command) from e
>>
>>          return CommandResult(
>>              self.name, command, output.stdout, output.stderr, output.return_code
>> diff --git a/dts/framework/remote_session/remote/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py
>> similarity index 100%
>> rename from dts/framework/remote_session/remote/testpmd_shell.py
>> rename to dts/framework/remote_session/testpmd_shell.py
>> diff --git a/dts/framework/settings.py b/dts/framework/settings.py
>> index cfa39d011b..7f5841d073 100644
>> --- a/dts/framework/settings.py
>> +++ b/dts/framework/settings.py
>> @@ -6,7 +6,7 @@
>>  import argparse
>>  import os
>>  from collections.abc import Callable, Iterable, Sequence
>> -from dataclasses import dataclass
>> +from dataclasses import dataclass, field
>>  from pathlib import Path
>>  from typing import Any, TypeVar
>>
>> @@ -22,8 +22,8 @@ def __init__(
>>              option_strings: Sequence[str],
>>              dest: str,
>>              nargs: str | int | None = None,
>> -            const: str | None = None,
>> -            default: str = None,
>> +            const: bool | None = None,
>> +            default: Any = None,
>>              type: Callable[[str], _T | argparse.FileType | None] = None,
>>              choices: Iterable[_T] | None = None,
>>              required: bool = False,
>> @@ -32,6 +32,12 @@ def __init__(
>>          ) -> None:
>>              env_var_value = os.environ.get(env_var)
>>              default = env_var_value or default
>> +            if const is not None:
>> +                nargs = 0
>> +                default = const if env_var_value else default
>> +                type = None
>> +                choices = None
>> +                metavar = None
>>              super(_EnvironmentArgument, self).__init__(
>>                  option_strings,
>>                  dest,
>> @@ -52,22 +58,28 @@ def __call__(
>>              values: Any,
>>              option_string: str = None,
>>          ) -> None:
>> -            setattr(namespace, self.dest, values)
>> +            if self.const is not None:
>> +                setattr(namespace, self.dest, self.const)
>> +            else:
>> +                setattr(namespace, self.dest, values)
>>
>>      return _EnvironmentArgument
>>
>>
>> -@dataclass(slots=True, frozen=True)
>> -class _Settings:
>> -    config_file_path: str
>> -    output_dir: str
>> -    timeout: float
>> -    verbose: bool
>> -    skip_setup: bool
>> -    dpdk_tarball_path: Path
>> -    compile_timeout: float
>> -    test_cases: list
>> -    re_run: int
>> +@dataclass(slots=True)
>> +class Settings:
>> +    config_file_path: Path = Path(__file__).parent.parent.joinpath("conf.yaml")
>> +    output_dir: str = "output"
>> +    timeout: float = 15
>> +    verbose: bool = False
>> +    skip_setup: bool = False
>> +    dpdk_tarball_path: Path | str = "dpdk.tar.xz"
>> +    compile_timeout: float = 1200
>> +    test_cases: list[str] = field(default_factory=list)
>> +    re_run: int = 0
>> +
>> +
>> +SETTINGS: Settings = Settings()
>>
>>
>>  def _get_parser() -> argparse.ArgumentParser:
>> @@ -81,7 +93,8 @@ def _get_parser() -> argparse.ArgumentParser:
>>      parser.add_argument(
>>          "--config-file",
>>          action=_env_arg("DTS_CFG_FILE"),
>> -        default="conf.yaml",
>> +        default=SETTINGS.config_file_path,
>> +        type=Path,
>>          help="[DTS_CFG_FILE] configuration file that describes the test cases, SUTs "
>>          "and targets.",
>>      )
>> @@ -90,7 +103,7 @@ def _get_parser() -> argparse.ArgumentParser:
>>          "--output-dir",
>>          "--output",
>>          action=_env_arg("DTS_OUTPUT_DIR"),
>> -        default="output",
>> +        default=SETTINGS.output_dir,
>>          help="[DTS_OUTPUT_DIR] Output directory where dts logs and results are saved.",
>>      )
>>
>> @@ -98,7 +111,7 @@ def _get_parser() -> argparse.ArgumentParser:
>>          "-t",
>>          "--timeout",
>>          action=_env_arg("DTS_TIMEOUT"),
>> -        default=15,
>> +        default=SETTINGS.timeout,
>>          type=float,
>>          help="[DTS_TIMEOUT] The default timeout for all DTS operations except for "
>>          "compiling DPDK.",
>> @@ -108,8 +121,9 @@ def _get_parser() -> argparse.ArgumentParser:
>>          "-v",
>>          "--verbose",
>>          action=_env_arg("DTS_VERBOSE"),
>> -        default="N",
>> -        help="[DTS_VERBOSE] Set to 'Y' to enable verbose output, logging all messages "
>> +        default=SETTINGS.verbose,
>> +        const=True,
>> +        help="[DTS_VERBOSE] Specify to enable verbose output, logging all messages "
>>          "to the console.",
>>      )
>>
>> @@ -117,8 +131,8 @@ def _get_parser() -> argparse.ArgumentParser:
>>          "-s",
>>          "--skip-setup",
>>          action=_env_arg("DTS_SKIP_SETUP"),
>> -        default="N",
>> -        help="[DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT and TG nodes.",
>> +        const=True,
>> +        help="[DTS_SKIP_SETUP] Specify to skip all setup steps on SUT and TG nodes.",
>>      )
>>
>>      parser.add_argument(
>> @@ -126,7 +140,7 @@ def _get_parser() -> argparse.ArgumentParser:
>>          "--snapshot",
>>          "--git-ref",
>>          action=_env_arg("DTS_DPDK_TARBALL"),
>> -        default="dpdk.tar.xz",
>> +        default=SETTINGS.dpdk_tarball_path,
>>          type=Path,
>>          help="[DTS_DPDK_TARBALL] Path to DPDK source code tarball or a git commit ID, "
>>          "tag ID or tree ID to test. To test local changes, first commit them, "
>> @@ -136,7 +150,7 @@ def _get_parser() -> argparse.ArgumentParser:
>>      parser.add_argument(
>>          "--compile-timeout",
>>          action=_env_arg("DTS_COMPILE_TIMEOUT"),
>> -        default=1200,
>> +        default=SETTINGS.compile_timeout,
>>          type=float,
>>          help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
>>      )
>> @@ -153,7 +167,7 @@ def _get_parser() -> argparse.ArgumentParser:
>>          "--re-run",
>>          "--re_run",
>>          action=_env_arg("DTS_RERUN"),
>> -        default=0,
>> +        default=SETTINGS.re_run,
>>          type=int,
>>          help="[DTS_RERUN] Re-run each test case the specified amount of times "
>>          "if a test failure occurs",
>> @@ -162,23 +176,22 @@ def _get_parser() -> argparse.ArgumentParser:
>>      return parser
>>
>>
>> -def _get_settings() -> _Settings:
>> +def get_settings() -> Settings:
>>      parsed_args = _get_parser().parse_args()
>> -    return _Settings(
>> +    return Settings(
>>          config_file_path=parsed_args.config_file,
>>          output_dir=parsed_args.output_dir,
>>          timeout=parsed_args.timeout,
>> -        verbose=(parsed_args.verbose == "Y"),
>> -        skip_setup=(parsed_args.skip_setup == "Y"),
>> +        verbose=parsed_args.verbose,
>> +        skip_setup=parsed_args.skip_setup,
>>          dpdk_tarball_path=Path(
>> -            DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir)
>> -        )
>> -        if not os.path.exists(parsed_args.tarball)
>> -        else Path(parsed_args.tarball),
>> +            Path(DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir))
>> +            if not os.path.exists(parsed_args.tarball)
>> +            else Path(parsed_args.tarball)
>> +        ),
>>          compile_timeout=parsed_args.compile_timeout,
>> -        test_cases=parsed_args.test_cases.split(",") if parsed_args.test_cases else [],
>> +        test_cases=(
>> +            parsed_args.test_cases.split(",") if parsed_args.test_cases else []
>> +        ),
>>          re_run=parsed_args.re_run,
>>      )
>> -
>> -
>> -SETTINGS: _Settings = _get_settings()
>> diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
>> index f0fbe80f6f..603e18872c 100644
>> --- a/dts/framework/test_result.py
>> +++ b/dts/framework/test_result.py
>> @@ -254,7 +254,7 @@ def add_build_target(
>>          self._inner_results.append(build_target_result)
>>          return build_target_result
>>
>> -    def add_sut_info(self, sut_info: NodeInfo):
>> +    def add_sut_info(self, sut_info: NodeInfo) -> None:
>>          self.sut_os_name = sut_info.os_name
>>          self.sut_os_version = sut_info.os_version
>>          self.sut_kernel_version = sut_info.kernel_version
>> @@ -297,7 +297,7 @@ def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult:
>>          self._inner_results.append(execution_result)
>>          return execution_result
>>
>> -    def add_error(self, error) -> None:
>> +    def add_error(self, error: Exception) -> None:
>>          self._errors.append(error)
>>
>>      def process(self) -> None:
>> diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
>> index 3b890c0451..d53553bf34 100644
>> --- a/dts/framework/test_suite.py
>> +++ b/dts/framework/test_suite.py
>> @@ -11,7 +11,7 @@
>>  import re
>>  from ipaddress import IPv4Interface, IPv6Interface, ip_interface
>>  from types import MethodType
>> -from typing import Union
>> +from typing import Any, Union
>>
>>  from scapy.layers.inet import IP  # type: ignore[import]
>>  from scapy.layers.l2 import Ether  # type: ignore[import]
>> @@ -26,8 +26,7 @@
>>  from .logger import DTSLOG, getLogger
>>  from .settings import SETTINGS
>>  from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult
>> -from .testbed_model import SutNode, TGNode
>> -from .testbed_model.hw.port import Port, PortLink
>> +from .testbed_model import Port, PortLink, SutNode, TGNode
>>  from .utils import get_packet_summaries
>>
>>
>> @@ -453,7 +452,7 @@ def _execute_test_case(
>>
>>
>>  def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
>> -    def is_test_suite(object) -> bool:
>> +    def is_test_suite(object: Any) -> bool:
>>          try:
>>              if issubclass(object, TestSuite) and object is not TestSuite:
>>                  return True
>> diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
>> index 5cbb859e47..8ced05653b 100644
>> --- a/dts/framework/testbed_model/__init__.py
>> +++ b/dts/framework/testbed_model/__init__.py
>> @@ -9,15 +9,9 @@
>>
>>  # pylama:ignore=W0611
>>
>> -from .hw import (
>> -    LogicalCore,
>> -    LogicalCoreCount,
>> -    LogicalCoreCountFilter,
>> -    LogicalCoreList,
>> -    LogicalCoreListFilter,
>> -    VirtualDevice,
>> -    lcore_filter,
>> -)
>> +from .cpu import LogicalCoreCount, LogicalCoreCountFilter, LogicalCoreList
>>  from .node import Node
>> +from .port import Port, PortLink
>>  from .sut_node import SutNode
>>  from .tg_node import TGNode
>> +from .virtual_device import VirtualDevice
>> diff --git a/dts/framework/testbed_model/hw/cpu.py b/dts/framework/testbed_model/cpu.py
>> similarity index 95%
>> rename from dts/framework/testbed_model/hw/cpu.py
>> rename to dts/framework/testbed_model/cpu.py
>> index d1918a12dc..8fe785dfe4 100644
>> --- a/dts/framework/testbed_model/hw/cpu.py
>> +++ b/dts/framework/testbed_model/cpu.py
>> @@ -272,3 +272,16 @@ def filter(self) -> list[LogicalCore]:
>>              )
>>
>>          return filtered_lcores
>> +
>> +
>> +def lcore_filter(
>> +    core_list: list[LogicalCore],
>> +    filter_specifier: LogicalCoreCount | LogicalCoreList,
>> +    ascending: bool,
>> +) -> LogicalCoreFilter:
>> +    if isinstance(filter_specifier, LogicalCoreList):
>> +        return LogicalCoreListFilter(core_list, filter_specifier, ascending)
>> +    elif isinstance(filter_specifier, LogicalCoreCount):
>> +        return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
>> +    else:
>> +        raise ValueError(f"Unsupported filter {filter_specifier}")
>> diff --git a/dts/framework/testbed_model/hw/__init__.py b/dts/framework/testbed_model/hw/__init__.py
>> deleted file mode 100644
>> index 88ccac0b0e..0000000000
>> --- a/dts/framework/testbed_model/hw/__init__.py
>> +++ /dev/null
>> @@ -1,27 +0,0 @@
>> -# SPDX-License-Identifier: BSD-3-Clause
>> -# Copyright(c) 2023 PANTHEON.tech s.r.o.
>> -
>> -# pylama:ignore=W0611
>> -
>> -from .cpu import (
>> -    LogicalCore,
>> -    LogicalCoreCount,
>> -    LogicalCoreCountFilter,
>> -    LogicalCoreFilter,
>> -    LogicalCoreList,
>> -    LogicalCoreListFilter,
>> -)
>> -from .virtual_device import VirtualDevice
>> -
>> -
>> -def lcore_filter(
>> -    core_list: list[LogicalCore],
>> -    filter_specifier: LogicalCoreCount | LogicalCoreList,
>> -    ascending: bool,
>> -) -> LogicalCoreFilter:
>> -    if isinstance(filter_specifier, LogicalCoreList):
>> -        return LogicalCoreListFilter(core_list, filter_specifier, ascending)
>> -    elif isinstance(filter_specifier, LogicalCoreCount):
>> -        return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
>> -    else:
>> -        raise ValueError(f"Unsupported filter r{filter_specifier}")
>> diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/testbed_model/linux_session.py
>> similarity index 97%
>> rename from dts/framework/remote_session/linux_session.py
>> rename to dts/framework/testbed_model/linux_session.py
>> index a3f1a6bf3b..f472bb8f0f 100644
>> --- a/dts/framework/remote_session/linux_session.py
>> +++ b/dts/framework/testbed_model/linux_session.py
>> @@ -9,10 +9,10 @@
>>  from typing_extensions import NotRequired
>>
>>  from framework.exception import RemoteCommandExecutionError
>> -from framework.testbed_model import LogicalCore
>> -from framework.testbed_model.hw.port import Port
>>  from framework.utils import expand_range
>>
>> +from .cpu import LogicalCore
>> +from .port import Port
>>  from .posix_session import PosixSession
>>
>>
>> @@ -64,7 +64,7 @@ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
>>              lcores.append(LogicalCore(lcore, core, socket, node))
>>          return lcores
>>
>> -    def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
>> +    def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
>>          return dpdk_prefix
>>
>>      def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
>> diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
>> index fc01e0bf8e..fa5b143cdd 100644
>> --- a/dts/framework/testbed_model/node.py
>> +++ b/dts/framework/testbed_model/node.py
>> @@ -12,23 +12,26 @@
>>  from typing import Any, Callable, Type, Union
>>
>>  from framework.config import (
>> +    OS,
>>      BuildTargetConfiguration,
>>      ExecutionConfiguration,
>>      NodeConfiguration,
>>  )
>> +from framework.exception import ConfigurationError
>>  from framework.logger import DTSLOG, getLogger
>> -from framework.remote_session import InteractiveShellType, OSSession, create_session
>>  from framework.settings import SETTINGS
>>
>> -from .hw import (
>> +from .cpu import (
>>      LogicalCore,
>>      LogicalCoreCount,
>>      LogicalCoreList,
>>      LogicalCoreListFilter,
>> -    VirtualDevice,
>>      lcore_filter,
>>  )
>> -from .hw.port import Port
>> +from .linux_session import LinuxSession
>> +from .os_session import InteractiveShellType, OSSession
>> +from .port import Port
>> +from .virtual_device import VirtualDevice
>>
>>
>>  class Node(ABC):
>> @@ -172,9 +175,9 @@ def create_interactive_shell(
>>
>>          return self.main_session.create_interactive_shell(
>>              shell_cls,
>> -            app_args,
>>              timeout,
>>              privileged,
>> +            app_args,
>>          )
>>
>>      def filter_lcores(
>> @@ -205,7 +208,7 @@ def _get_remote_cpus(self) -> None:
>>          self._logger.info("Getting CPU information.")
>>          self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
>>
>> -    def _setup_hugepages(self):
>> +    def _setup_hugepages(self) -> None:
>>          """
>>          Setup hugepages on the Node. Different architectures can supply different
>>          amounts of memory for hugepages and numa-based hugepage allocation may need
>> @@ -249,3 +252,13 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
>>              return lambda *args: None
>>          else:
>>              return func
>> +
>> +
>> +def create_session(
>> +    node_config: NodeConfiguration, name: str, logger: DTSLOG
>> +) -> OSSession:
>> +    match node_config.os:
>> +        case OS.linux:
>> +            return LinuxSession(node_config, name, logger)
>> +        case _:
>> +            raise ConfigurationError(f"Unsupported OS {node_config.os}")
>> diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/testbed_model/os_session.py
>> similarity index 95%
>> rename from dts/framework/remote_session/os_session.py
>> rename to dts/framework/testbed_model/os_session.py
>> index 8a709eac1c..76e595a518 100644
>> --- a/dts/framework/remote_session/os_session.py
>> +++ b/dts/framework/testbed_model/os_session.py
>> @@ -10,19 +10,19 @@
>>
>>  from framework.config import Architecture, NodeConfiguration, NodeInfo
>>  from framework.logger import DTSLOG
>> -from framework.remote_session.remote import InteractiveShell
>> -from framework.settings import SETTINGS
>> -from framework.testbed_model import LogicalCore
>> -from framework.testbed_model.hw.port import Port
>> -from framework.utils import MesonArgs
>> -
>> -from .remote import (
>> +from framework.remote_session import (
>>      CommandResult,
>>      InteractiveRemoteSession,
>> +    InteractiveShell,
>>      RemoteSession,
>>      create_interactive_session,
>>      create_remote_session,
>>  )
>> +from framework.settings import SETTINGS
>> +from framework.utils import MesonArgs
>> +
>> +from .cpu import LogicalCore
>> +from .port import Port
>>
>>  InteractiveShellType = TypeVar("InteractiveShellType", bound=InteractiveShell)
>>
>> @@ -85,9 +85,9 @@ def send_command(
>>      def create_interactive_shell(
>>          self,
>>          shell_cls: Type[InteractiveShellType],
>> -        eal_parameters: str,
>>          timeout: float,
>>          privileged: bool,
>> +        app_args: str,
>>      ) -> InteractiveShellType:
>>          """
>>          See "create_interactive_shell" in SutNode
>> @@ -96,7 +96,7 @@ def create_interactive_shell(
>>              self.interactive_session.session,
>>              self._logger,
>>              self._get_privileged_command if privileged else None,
>> -            eal_parameters,
>> +            app_args,
>>              timeout,
>>          )
>>
>> @@ -113,7 +113,7 @@ def _get_privileged_command(command: str) -> str:
>>          """
>>
>>      @abstractmethod
>> -    def guess_dpdk_remote_dir(self, remote_dir) -> PurePath:
>> +    def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePath:
>>          """
>>          Try to find DPDK remote dir in remote_dir.
>>          """
>> @@ -227,7 +227,7 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
>>          """
>>
>>      @abstractmethod
>> -    def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
>> +    def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
>>          """
>>          Get the DPDK file prefix that will be used when running DPDK apps.
>>          """
>> diff --git a/dts/framework/testbed_model/hw/port.py b/dts/framework/testbed_model/port.py
>> similarity index 100%
>> rename from dts/framework/testbed_model/hw/port.py
>> rename to dts/framework/testbed_model/port.py
>> diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/testbed_model/posix_session.py
>> similarity index 98%
>> rename from dts/framework/remote_session/posix_session.py
>> rename to dts/framework/testbed_model/posix_session.py
>> index 5da0516e05..1d1d5b1b26 100644
>> --- a/dts/framework/remote_session/posix_session.py
>> +++ b/dts/framework/testbed_model/posix_session.py
>> @@ -32,7 +32,7 @@ def combine_short_options(**opts: bool) -> str:
>>
>>          return ret_opts
>>
>> -    def guess_dpdk_remote_dir(self, remote_dir) -> PurePosixPath:
>> +    def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePosixPath:
>>          remote_guess = self.join_remote_path(remote_dir, "dpdk-*")
>>          result = self.send_command(f"ls -d {remote_guess} | tail -1")
>>          return PurePosixPath(result.stdout)
>> @@ -219,7 +219,7 @@ def _remove_dpdk_runtime_dirs(
>>          for dpdk_runtime_dir in dpdk_runtime_dirs:
>>              self.remove_remote_dir(dpdk_runtime_dir)
>>
>> -    def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
>> +    def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
>>          return ""
>>
>>      def get_compiler_version(self, compiler_name: str) -> str:
>> diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
>> index 4161d3a4d5..17deea06e2 100644
>> --- a/dts/framework/testbed_model/sut_node.py
>> +++ b/dts/framework/testbed_model/sut_node.py
>> @@ -15,12 +15,14 @@
>>      NodeInfo,
>>      SutNodeConfiguration,
>>  )
>> -from framework.remote_session import CommandResult, InteractiveShellType, OSSession
>> +from framework.remote_session import CommandResult
>>  from framework.settings import SETTINGS
>>  from framework.utils import MesonArgs
>>
>> -from .hw import LogicalCoreCount, LogicalCoreList, VirtualDevice
>> +from .cpu import LogicalCoreCount, LogicalCoreList
>>  from .node import Node
>> +from .os_session import InteractiveShellType, OSSession
>> +from .virtual_device import VirtualDevice
>>
>>
>>  class EalParameters(object):
>> @@ -307,7 +309,7 @@ def create_eal_parameters(
>>          prefix: str = "dpdk",
>>          append_prefix_timestamp: bool = True,
>>          no_pci: bool = False,
>> -        vdevs: list[VirtualDevice] = None,
>> +        vdevs: list[VirtualDevice] | None = None,
>>          other_eal_param: str = "",
>>      ) -> "EalParameters":
>>          """
>> diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
>> index 27025cfa31..166eb8430e 100644
>> --- a/dts/framework/testbed_model/tg_node.py
>> +++ b/dts/framework/testbed_model/tg_node.py
>> @@ -16,16 +16,11 @@
>>
>>  from scapy.packet import Packet  # type: ignore[import]
>>
>> -from framework.config import (
>> -    ScapyTrafficGeneratorConfig,
>> -    TGNodeConfiguration,
>> -    TrafficGeneratorType,
>> -)
>> -from framework.exception import ConfigurationError
>> -
>> -from .capturing_traffic_generator import CapturingTrafficGenerator
>> -from .hw.port import Port
>> +from framework.config import TGNodeConfiguration
>> +
>>  from .node import Node
>> +from .port import Port
>> +from .traffic_generator import CapturingTrafficGenerator, create_traffic_generator
>>
>>
>>  class TGNode(Node):
>> @@ -80,20 +75,3 @@ def close(self) -> None:
>>          """Free all resources used by the node"""
>>          self.traffic_generator.close()
>>          super(TGNode, self).close()
>> -
>> -
>> -def create_traffic_generator(
>> -    tg_node: TGNode, traffic_generator_config: ScapyTrafficGeneratorConfig
>> -) -> CapturingTrafficGenerator:
>> -    """A factory function for creating traffic generator object from user config."""
>> -
>> -    from .scapy import ScapyTrafficGenerator
>> -
>> -    match traffic_generator_config.traffic_generator_type:
>> -        case TrafficGeneratorType.SCAPY:
>> -            return ScapyTrafficGenerator(tg_node, traffic_generator_config)
>> -        case _:
>> -            raise ConfigurationError(
>> -                "Unknown traffic generator: "
>> -                f"{traffic_generator_config.traffic_generator_type}"
>> -            )
>> diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
>> new file mode 100644
>> index 0000000000..11bfa1ee0f
>> --- /dev/null
>> +++ b/dts/framework/testbed_model/traffic_generator/__init__.py
>> @@ -0,0 +1,24 @@
>> +# SPDX-License-Identifier: BSD-3-Clause
>> +# Copyright(c) 2023 PANTHEON.tech s.r.o.
>> +
>> +from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType
>> +from framework.exception import ConfigurationError
>> +from framework.testbed_model.node import Node
>> +
>> +from .capturing_traffic_generator import CapturingTrafficGenerator
>> +from .scapy import ScapyTrafficGenerator
>> +
>> +
>> +def create_traffic_generator(
>> +    tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig
>> +) -> CapturingTrafficGenerator:
>> +    """A factory function for creating traffic generator object from user config."""
>> +
>> +    match traffic_generator_config.traffic_generator_type:
>> +        case TrafficGeneratorType.SCAPY:
>> +            return ScapyTrafficGenerator(tg_node, traffic_generator_config)
>> +        case _:
>> +            raise ConfigurationError(
>> +                "Unknown traffic generator: "
>> +                f"{traffic_generator_config.traffic_generator_type}"
>> +            )
>> diff --git a/dts/framework/testbed_model/capturing_traffic_generator.py b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
>> similarity index 96%
>> rename from dts/framework/testbed_model/capturing_traffic_generator.py
>> rename to dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
>> index ab98987f8e..e521211ef0 100644
>> --- a/dts/framework/testbed_model/capturing_traffic_generator.py
>> +++ b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
>> @@ -16,9 +16,9 @@
>>  from scapy.packet import Packet  # type: ignore[import]
>>
>>  from framework.settings import SETTINGS
>> +from framework.testbed_model.port import Port
>>  from framework.utils import get_packet_summaries
>>
>> -from .hw.port import Port
>>  from .traffic_generator import TrafficGenerator
>>
>>
>> @@ -130,7 +130,9 @@ def _send_packets_and_capture(
>>          for the specified duration. It must be able to handle no received packets.
>>          """
>>
>> -    def _write_capture_from_packets(self, capture_name: str, packets: list[Packet]):
>> +    def _write_capture_from_packets(
>> +        self, capture_name: str, packets: list[Packet]
>> +    ) -> None:
>>          file_name = f"{SETTINGS.output_dir}/{capture_name}.pcap"
>>          self._logger.debug(f"Writing packets to {file_name}.")
>>          scapy.utils.wrpcap(file_name, packets)
>> diff --git a/dts/framework/testbed_model/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py
>> similarity index 95%
>> rename from dts/framework/testbed_model/scapy.py
>> rename to dts/framework/testbed_model/traffic_generator/scapy.py
>> index af0d4dbb25..51864b6e6b 100644
>> --- a/dts/framework/testbed_model/scapy.py
>> +++ b/dts/framework/testbed_model/traffic_generator/scapy.py
>> @@ -24,16 +24,15 @@
>>  from scapy.packet import Packet  # type: ignore[import]
>>
>>  from framework.config import OS, ScapyTrafficGeneratorConfig
>> -from framework.logger import DTSLOG, getLogger
>>  from framework.remote_session import PythonShell
>>  from framework.settings import SETTINGS
>> +from framework.testbed_model.node import Node
>> +from framework.testbed_model.port import Port
>>
>>  from .capturing_traffic_generator import (
>>      CapturingTrafficGenerator,
>>      _get_default_capture_name,
>>  )
>> -from .hw.port import Port
>> -from .tg_node import TGNode
>>
>>  """
>>  ========= BEGIN RPC FUNCTIONS =========
>> @@ -146,7 +145,7 @@ def quit(self) -> None:
>>          self._BaseServer__shutdown_request = True
>>          return None
>>
>> -    def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary):
>> +    def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> None:
>>          """Add a function to the server.
>>
>>          This is meant to be executed remotely.
>> @@ -191,15 +190,9 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator):
>>      session: PythonShell
>>      rpc_server_proxy: xmlrpc.client.ServerProxy
>>      _config: ScapyTrafficGeneratorConfig
>> -    _tg_node: TGNode
>> -    _logger: DTSLOG
>> -
>> -    def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig):
>> -        self._config = config
>> -        self._tg_node = tg_node
>> -        self._logger = getLogger(
>> -            f"{self._tg_node.name} {self._config.traffic_generator_type}"
>> -        )
>> +
>> +    def __init__(self, tg_node: Node, config: ScapyTrafficGeneratorConfig):
>> +        super().__init__(tg_node, config)
>>
>>          assert (
>>              self._tg_node.config.os == OS.linux
>> @@ -235,7 +228,7 @@ def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig):
>>              function_bytes = marshal.dumps(function.__code__)
>>              self.rpc_server_proxy.add_rpc_function(function.__name__, function_bytes)
>>
>> -    def _start_xmlrpc_server_in_remote_python(self, listen_port: int):
>> +    def _start_xmlrpc_server_in_remote_python(self, listen_port: int) -> None:
>>          # load the source of the function
>>          src = inspect.getsource(QuittableXMLRPCServer)
>>          # Lines with only whitespace break the repl if in the middle of a function
>> @@ -280,7 +273,7 @@ def _send_packets_and_capture(
>>          scapy_packets = [Ether(packet.data) for packet in xmlrpc_packets]
>>          return scapy_packets
>>
>> -    def close(self):
>> +    def close(self) -> None:
>>          try:
>>              self.rpc_server_proxy.quit()
>>          except ConnectionRefusedError:
>> diff --git a/dts/framework/testbed_model/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
>> similarity index 80%
>> rename from dts/framework/testbed_model/traffic_generator.py
>> rename to dts/framework/testbed_model/traffic_generator/traffic_generator.py
>> index 28c35d3ce4..ea7c3963da 100644
>> --- a/dts/framework/testbed_model/traffic_generator.py
>> +++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
>> @@ -12,11 +12,12 @@
>>
>>  from scapy.packet import Packet  # type: ignore[import]
>>
>> -from framework.logger import DTSLOG
>> +from framework.config import TrafficGeneratorConfig
>> +from framework.logger import DTSLOG, getLogger
>> +from framework.testbed_model.node import Node
>> +from framework.testbed_model.port import Port
>>  from framework.utils import get_packet_summaries
>>
>> -from .hw.port import Port
>> -
>>
>>  class TrafficGenerator(ABC):
>>      """The base traffic generator.
>> @@ -24,8 +25,17 @@ class TrafficGenerator(ABC):
>>      Defines the few basic methods that each traffic generator must implement.
>>      """
>>
>> +    _config: TrafficGeneratorConfig
>> +    _tg_node: Node
>
>
> Is there a benefit to changing this to be a Node instead of a TGNode? Wouldn't we want the capabilities of the TGNode to be accessible in the TrafficGenerator class?
>

The benefit is that it works :-). If this were TGNode, there would be
circular imports. It's possible this should be done differently, but I
wanted to do as little as possible to make the doc generation work.
Anything more would be out of scope of this patch, I feel.
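
To illustrate, here's a minimal self-contained sketch (not the actual DTS
modules) of the idea; typing against the base class means the traffic
generator module never has to import TGNode's module back:

    # Node stands in for framework.testbed_model.node.Node.
    class Node:
        name = "tg-node"

    # If this module imported TGNode (whose module imports the
    # traffic_generator package), Python would hit a circular import.
    # Typing against the base class needs no such import:
    class TrafficGenerator:
        def __init__(self, tg_node: Node) -> None:  # Node, not TGNode
            self._tg_node = tg_node

    TrafficGenerator(Node())  # a TGNode instance works the same way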

>>
>>      _logger: DTSLOG
>>
>> +    def __init__(self, tg_node: Node, config: TrafficGeneratorConfig):
>> +        self._config = config
>> +        self._tg_node = tg_node
>> +        self._logger = getLogger(
>> +            f"{self._tg_node.name} {self._config.traffic_generator_type}"
>> +        )
>> +
>>      def send_packet(self, packet: Packet, port: Port) -> None:
>>          """Send a packet and block until it is fully sent.
>>
>> diff --git a/dts/framework/testbed_model/hw/virtual_device.py b/dts/framework/testbed_model/virtual_device.py
>> similarity index 100%
>> rename from dts/framework/testbed_model/hw/virtual_device.py
>> rename to dts/framework/testbed_model/virtual_device.py
>> diff --git a/dts/framework/utils.py b/dts/framework/utils.py
>> index d27c2c5b5f..f0c916471c 100644
>> --- a/dts/framework/utils.py
>> +++ b/dts/framework/utils.py
>> @@ -7,7 +7,6 @@
>>  import json
>>  import os
>>  import subprocess
>> -import sys
>>  from enum import Enum
>>  from pathlib import Path
>>  from subprocess import SubprocessError
>> @@ -16,35 +15,7 @@
>>
>>  from .exception import ConfigurationError
>>
>> -
>> -class StrEnum(Enum):
>> -    @staticmethod
>> -    def _generate_next_value_(
>> -        name: str, start: int, count: int, last_values: object
>> -    ) -> str:
>> -        return name
>> -
>> -    def __str__(self) -> str:
>> -        return self.name
>> -
>> -
>> -REGEX_FOR_PCI_ADDRESS = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/"
>> -
>> -
>> -def check_dts_python_version() -> None:
>> -    if sys.version_info.major < 3 or (
>> -        sys.version_info.major == 3 and sys.version_info.minor < 10
>> -    ):
>> -        print(
>> -            RED(
>> -                (
>> -                    "WARNING: DTS execution node's python version is lower than"
>> -                    "python 3.10, is deprecated and will not work in future releases."
>> -                )
>> -            ),
>> -            file=sys.stderr,
>> -        )
>> -        print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
>> +REGEX_FOR_PCI_ADDRESS: str = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/"
>>
>>
>>  def expand_range(range_str: str) -> list[int]:
>> @@ -67,7 +38,7 @@ def expand_range(range_str: str) -> list[int]:
>>      return expanded_range
>>
>>
>> -def get_packet_summaries(packets: list[Packet]):
>> +def get_packet_summaries(packets: list[Packet]) -> str:
>>      if len(packets) == 1:
>>          packet_summaries = packets[0].summary()
>>      else:
>> @@ -77,8 +48,15 @@ def get_packet_summaries(packets: list[Packet]):
>>      return f"Packet contents: \n{packet_summaries}"
>>
>>
>> -def RED(text: str) -> str:
>> -    return f"\u001B[31;1m{str(text)}\u001B[0m"
>> +class StrEnum(Enum):
>> +    @staticmethod
>> +    def _generate_next_value_(
>> +        name: str, start: int, count: int, last_values: object
>> +    ) -> str:
>> +        return name
>> +
>> +    def __str__(self) -> str:
>> +        return self.name
>>
>>
>>  class MesonArgs(object):
>> @@ -225,5 +203,5 @@ def _delete_tarball(self) -> None:
>>          if self._tarball_path and os.path.exists(self._tarball_path):
>>              os.remove(self._tarball_path)
>>
>> -    def __fspath__(self):
>> +    def __fspath__(self) -> str:
>>          return str(self._tarball_path)
>> diff --git a/dts/main.py b/dts/main.py
>> index 43311fa847..5d4714b0c3 100755
>> --- a/dts/main.py
>> +++ b/dts/main.py
>> @@ -10,10 +10,17 @@
>>
>>  import logging
>>
>> -from framework import dts
>> +from framework import settings
>>
>>
>>  def main() -> None:
>> +    """Set DTS settings, then run DTS.
>> +
>> +    The DTS settings are taken from the command line arguments and the environment variables.
>> +    """
>> +    settings.SETTINGS = settings.get_settings()
>> +    from framework import dts
>> +
>>      dts.run_all()
>>
>>
>> --
>> 2.34.1
>>
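As an aside, the boolean flags above (--verbose, --skip-setup) follow a
common argparse pattern; a standalone sketch (hypothetical names, not the
DTS code) of an env-var-backed flag:

    import argparse
    import os

    class _EnvFlag(argparse.Action):
        """Store True when the flag is given; the default comes from an env var."""

        def __init__(self, option_strings, dest, env_var="DEMO_VERBOSE", **kwargs):
            kwargs["nargs"] = 0  # a flag takes no value
            kwargs["default"] = bool(os.environ.get(env_var))
            super().__init__(option_strings, dest, **kwargs)

        def __call__(self, parser, namespace, values, option_string=None):
            setattr(namespace, self.dest, True)

    parser = argparse.ArgumentParser()
    parser.add_argument("-v", "--verbose", action=_EnvFlag)
    print(parser.parse_args([]).verbose)      # False unless DEMO_VERBOSE is set
    print(parser.parse_args(["-v"]).verbose)  # True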

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 07/21] dts: dts runner and main docstring update
  2023-11-16 21:51                 ` Jeremy Spewock
@ 2023-11-20 16:13                   ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-20 16:13 UTC (permalink / raw)
  To: Jeremy Spewock
  Cc: thomas, Honnappa.Nagarahalli, probb, paul.szczepanek, yoan.picchi, dev

On Thu, Nov 16, 2023 at 10:51 PM Jeremy Spewock <jspewock@iol.unh.edu> wrote:
>
>
>
> On Wed, Nov 15, 2023 at 8:11 AM Juraj Linkeš <juraj.linkes@pantheon.tech> wrote:
>>
>> Format according to the Google format and PEP257, with slight
>> deviations.
>>
>> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
>> ---
>>  dts/framework/dts.py | 128 ++++++++++++++++++++++++++++++++++++-------
>>  dts/main.py          |   8 ++-
>>  2 files changed, 112 insertions(+), 24 deletions(-)
>>
>> diff --git a/dts/framework/dts.py b/dts/framework/dts.py
>> index 4c7fb0c40a..331fed7dc4 100644
>> --- a/dts/framework/dts.py
>> +++ b/dts/framework/dts.py
>> @@ -3,6 +3,33 @@
>>  # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
>>  # Copyright(c) 2022-2023 University of New Hampshire
>>
>> +r"""Test suite runner module.
>> +
>> +A DTS run is split into stages:
>> +
>> +    #. Execution stage,
>> +    #. Build target stage,
>> +    #. Test suite stage,
>> +    #. Test case stage.
>> +
>> +The module is responsible for running tests on testbeds defined in the test run configuration.
>> +Each setup or teardown of each stage is recorded in a :class:`~framework.test_result.DTSResult` or
>> +one of its subclasses. The test case results are also recorded.
>> +
>> +If an error occurs, the current stage is aborted, the error is recorded and the run continues in
>> +the next iteration of the same stage. The return code is the highest `severity` of all
>> +:class:`~.framework.exception.DTSError`\s.
>> +
>> +Example:
>> +    An error occurs in a build target setup. The current build target is aborted and the run
>> +    continues with the next build target. If the errored build target was the last one in the given
>> +    execution, the next execution begins.
>> +
>> +Attributes:
>> +    dts_logger: The logger instance used in this module.
>> +    result: The top level result used in the module.
>> +"""
>> +
>>  import sys
>>
>>  from .config import (
>> @@ -23,9 +50,38 @@
>>
>>
>>  def run_all() -> None:
>> -    """
>> -    The main process of DTS. Runs all build targets in all executions from the main
>> -    config file.
>> +    """Run all build targets in all executions from the test run configuration.
>> +
>> +    Before running test suites, executions and build targets are first set up.
>> +    The executions and build targets defined in the test run configuration are iterated over.
>> +    The executions define which tests to run and where to run them and build targets define
>> +    the DPDK build setup.
>> +
>> +    The test suites are set up for each execution/build target tuple and each scheduled
>> +    test case within the test suite is set up, executed and torn down. After all test cases
>> +    have been executed, the test suite is torn down and the next build target will be tested.
>> +
>> +    All the nested steps look like this:
>> +
>> +        #. Execution setup
>> +
>> +            #. Build target setup
>> +
>> +                #. Test suite setup
>> +
>> +                    #. Test case setup
>> +                    #. Test case logic
>> +                    #. Test case teardown
>> +
>> +                #. Test suite teardown
>> +
>> +            #. Build target teardown
>> +
>> +        #. Execution teardown
>> +
>> +    The test cases are filtered according to the specification in the test run configuration and
>> +    the :option:`--test-cases` command line argument or
>> +    the :envvar:`DTS_TESTCASES` environment variable.
>>      """
>>      global dts_logger
>>      global result
>> @@ -87,6 +143,8 @@ def run_all() -> None:
>>
>>
>>  def _check_dts_python_version() -> None:
>> +    """Check the required Python version - v3.10."""
>> +
>>      def RED(text: str) -> str:
>>          return f"\u001B[31;1m{str(text)}\u001B[0m"
>>
>> @@ -111,9 +169,16 @@ def _run_execution(
>>      execution: ExecutionConfiguration,
>>      result: DTSResult,
>>  ) -> None:
>> -    """
>> -    Run the given execution. This involves running the execution setup as well as
>> -    running all build targets in the given execution.
>> +    """Run the given execution.
>> +
>> +    This involves running the execution setup as well as running all build targets
>> +    in the given execution. After that, execution teardown is run.
>> +
>> +    Args:
>> +        sut_node: The execution's SUT node.
>> +        tg_node: The execution's TG node.
>> +        execution: An execution's test run configuration.
>> +        result: The top level result object.
>>      """
>>      dts_logger.info(
>>          f"Running execution with SUT '{execution.system_under_test_node.name}'."
>> @@ -150,8 +215,18 @@ def _run_build_target(
>>      execution: ExecutionConfiguration,
>>      execution_result: ExecutionResult,
>>  ) -> None:
>> -    """
>> -    Run the given build target.
>> +    """Run the given build target.
>> +
>> +    This involves running the build target setup as well as running all test suites
>> +    in the given execution the build target is defined in.
>> +    After that, build target teardown is run.
>> +
>> +    Args:
>> +        sut_node: The execution's SUT node.
>> +        tg_node: The execution's TG node.
>> +        build_target: A build target's test run configuration.
>> +        execution: The build target's execution's test run configuration.
>> +        execution_result: The execution level result object associated with the execution.
>>      """
>>      dts_logger.info(f"Running build target '{build_target.name}'.")
>>      build_target_result = execution_result.add_build_target(build_target)
>> @@ -183,10 +258,17 @@ def _run_all_suites(
>>      execution: ExecutionConfiguration,
>>      build_target_result: BuildTargetResult,
>>  ) -> None:
>> -    """
>> -    Use the given build_target to run execution's test suites
>> -    with possibly only a subset of test cases.
>> -    If no subset is specified, run all test cases.
>> +    """Run the execution's (possibly a subset) test suites using the current build_target.
>> +
>> +    The function assumes the build target we're testing has already been built on the SUT node.
>> +    The current build target thus corresponds to the current DPDK build present on the SUT node.
>> +
>> +    Args:
>> +        sut_node: The execution's SUT node.
>> +        tg_node: The execution's TG node.
>> +        execution: The execution's test run configuration associated with the current build target.
>> +        build_target_result: The build target level result object associated
>> +            with the current build target.
>>      """
>
>
> Is it worth mentioning in this method or the _run_build_target method that when a blocking suite fails, no more suites will be run on that build target?
>

Absolutely, I'll add that. Thanks for the catch.
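
For reference, a minimal sketch (assumed names, not the actual DTS control
flow) of what that behavior amounts to:

    class BlockingTestSuiteError(Exception):
        pass

    def run_all_suites(suites) -> None:
        for suite in suites:
            try:
                suite()
            except BlockingTestSuiteError:
                break  # a blocking suite failed; skip the rest for this build target

    def ok() -> None: ...
    def blocking_failure() -> None:
        raise BlockingTestSuiteError

    run_all_suites([ok, blocking_failure, ok])  # the third suite never runs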

>>
>>      end_build_target = False
>>      if not execution.skip_smoke_tests:
>> @@ -215,16 +297,22 @@ def _run_single_suite(
>>      build_target_result: BuildTargetResult,
>>      test_suite_config: TestSuiteConfig,
>>  ) -> None:
>> -    """Runs a single test suite.
>> +    """Run all test suite in a single test suite module.
>> +
>> +    The function assumes the build target we're testing has already been built on the SUT node.
>> +    The current build target thus corresponds to the current DPDK build present on the SUT node.
>>
>>      Args:
>> -        sut_node: Node to run tests on.
>> -        execution: Execution the test case belongs to.
>> -        build_target_result: Build target configuration test case is run on
>> -        test_suite_config: Test suite configuration
>> +        sut_node: The execution's SUT node.
>> +        tg_node: The execution's TG node.
>> +        execution: The execution's test run configuration associated with the current build target.
>> +        build_target_result: The build target level result object associated
>> +            with the current build target.
>> +        test_suite_config: Test suite test run configuration specifying the test suite module
>> +            and possibly a subset of test cases of test suites in that module.
>>
>>      Raises:
>> -        BlockingTestSuiteError: If a test suite that was marked as blocking fails.
>> +        BlockingTestSuiteError: If a blocking test suite fails.
>>      """
>>      try:
>>          full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}"
>> @@ -248,9 +336,7 @@ def _run_single_suite(
>>
>>
>>  def _exit_dts() -> None:
>> -    """
>> -    Process all errors and exit with the proper exit code.
>> -    """
>> +    """Process all errors and exit with the proper exit code."""
>>      result.process()
>>
>>      if dts_logger:
>> diff --git a/dts/main.py b/dts/main.py
>> index 5d4714b0c3..f703615d11 100755
>> --- a/dts/main.py
>> +++ b/dts/main.py
>> @@ -4,9 +4,7 @@
>>  # Copyright(c) 2022 PANTHEON.tech s.r.o.
>>  # Copyright(c) 2022 University of New Hampshire
>>
>> -"""
>> -A test framework for testing DPDK.
>> -"""
>> +"""The DTS executable."""
>>
>>  import logging
>>
>> @@ -17,6 +15,10 @@ def main() -> None:
>>      """Set DTS settings, then run DTS.
>>
>>      The DTS settings are taken from the command line arguments and the environment variables.
>> +    The settings object is stored in the module-level variable settings.SETTINGS which the entire
>> +    framework uses. After importing the module (or the variable), any changes to the variable are
>> +    not going to be reflected without a re-import. This means that the SETTINGS variable must
>> +    be modified before the settings module is imported anywhere else in the framework.
>>      """
>>      settings.SETTINGS = settings.get_settings()
>>      from framework import dts
>> --
>> 2.34.1
>>
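The early-binding subtlety in that last docstring can be demonstrated with a
short, runnable sketch (not DTS code):

    import types

    settings = types.ModuleType("settings")
    settings.SETTINGS = "default"

    copied = settings.SETTINGS       # "from settings import SETTINGS" binds like this
    settings.SETTINGS = "from CLI"   # what main() does before importing framework.dts

    print(copied)             # default  -> the stale, early-bound object
    print(settings.SETTINGS)  # from CLI -> attribute access sees the reassignment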

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 04/21] dts: exceptions docstring update
  2023-11-15 13:09               ` [PATCH v7 04/21] dts: exceptions docstring update Juraj Linkeš
@ 2023-11-20 16:22                 ` Yoan Picchi
  2023-11-20 16:35                   ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Yoan Picchi @ 2023-11-20 16:22 UTC (permalink / raw)
  To: Juraj Linkeš,
	thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek
  Cc: dev

On 11/15/23 13:09, Juraj Linkeš wrote:
> Format according to the Google format and PEP257, with slight
> deviations.
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>   dts/framework/__init__.py  |  12 ++++-
>   dts/framework/exception.py | 106 +++++++++++++++++++++++++------------
>   2 files changed, 83 insertions(+), 35 deletions(-)
> 
> diff --git a/dts/framework/__init__.py b/dts/framework/__init__.py
> index d551ad4bf0..662e6ccad2 100644
> --- a/dts/framework/__init__.py
> +++ b/dts/framework/__init__.py
> @@ -1,3 +1,13 @@
>   # SPDX-License-Identifier: BSD-3-Clause
> -# Copyright(c) 2022 PANTHEON.tech s.r.o.
> +# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
>   # Copyright(c) 2022 University of New Hampshire
> +
> +"""Libraries and utilities for running DPDK Test Suite (DTS).
> +
> +The various modules in the DTS framework offer:
> +
> +* Connections to nodes, both interactive and non-interactive,
> +* A straightforward way to add support for different operating systems of remote nodes,
> +* Test suite setup, execution and teardown, along with test case setup, execution and teardown,
> +* Pre-test suite setup and post-test suite teardown.
> +"""
> diff --git a/dts/framework/exception.py b/dts/framework/exception.py
> index 7489c03570..ee1562c672 100644
> --- a/dts/framework/exception.py
> +++ b/dts/framework/exception.py
> @@ -3,8 +3,10 @@
>   # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
>   # Copyright(c) 2022-2023 University of New Hampshire
>   
> -"""
> -User-defined exceptions used across the framework.
> +"""DTS exceptions.
> +
> +The exceptions all have different severities expressed as an integer.
> +The highest severity of all raised exception is used as the exit code of DTS.

all raised exception*s*

>   """
>   
>   from enum import IntEnum, unique
> @@ -13,59 +15,79 @@
>   
>   @unique
>   class ErrorSeverity(IntEnum):
> -    """
> -    The severity of errors that occur during DTS execution.
> +    """The severity of errors that occur during DTS execution.
> +
>       All exceptions are caught and the most severe error is used as return code.
>       """
>   
> +    #:
>       NO_ERR = 0
> +    #:
>       GENERIC_ERR = 1
> +    #:
>       CONFIG_ERR = 2
> +    #:
>       REMOTE_CMD_EXEC_ERR = 3
> +    #:
>       SSH_ERR = 4
> +    #:
>       DPDK_BUILD_ERR = 10
> +    #:
>       TESTCASE_VERIFY_ERR = 20
> +    #:
>       BLOCKING_TESTSUITE_ERR = 25
>   
>   
>   class DTSError(Exception):
> -    """
> -    The base exception from which all DTS exceptions are derived.
> -    Stores error severity.
> +    """The base exception from which all DTS exceptions are subclassed.
> +
> +    Do not use this exception, only use subclassed exceptions.
>       """
>   
> +    #:
>       severity: ClassVar[ErrorSeverity] = ErrorSeverity.GENERIC_ERR
>   
>   
>   class SSHTimeoutError(DTSError):
> -    """
> -    Command execution timeout.
> -    """
> +    """The SSH execution of a command timed out."""
>   
> +    #:
>       severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
>       _command: str
>   
>       def __init__(self, command: str):
> +        """Define the meaning of the first argument.
> +
> +        Args:
> +            command: The executed command.
> +        """
>           self._command = command
>   
>       def __str__(self) -> str:
> -        return f"TIMEOUT on {self._command}"
> +        """Add some context to the string representation."""
> +        return f"{self._command} execution timed out."
>   
>   
>   class SSHConnectionError(DTSError):
> -    """
> -    SSH connection error.
> -    """
> +    """An unsuccessful SSH connection."""
>   
> +    #:
>       severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
>       _host: str
>       _errors: list[str]
>   
>       def __init__(self, host: str, errors: list[str] | None = None):
> +        """Define the meaning of the first two arguments.
> +
> +        Args:
> +            host: The hostname to which we're trying to connect.
> +            errors: Any errors that occurred during the connection attempt.
> +        """
>           self._host = host
>           self._errors = [] if errors is None else errors
>   
>       def __str__(self) -> str:
> +        """Include the errors in the string representation."""
>           message = f"Error trying to connect with {self._host}."
>           if self._errors:
>               message += f" Errors encountered while retrying: {', '.join(self._errors)}"
> @@ -74,43 +96,53 @@ def __str__(self) -> str:
>   
>   
>   class SSHSessionDeadError(DTSError):
> -    """
> -    SSH session is not alive.
> -    It can no longer be used.
> -    """
> +    """The SSH session is no longer alive."""
>   
> +    #:
>       severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
>       _host: str
>   
>       def __init__(self, host: str):
> +        """Define the meaning of the first argument.
> +
> +        Args:
> +            host: The hostname of the disconnected node.
> +        """
>           self._host = host
>   
>       def __str__(self) -> str:
> -        return f"SSH session with {self._host} has died"
> +        """Add some context to the string representation."""
> +        return f"SSH session with {self._host} has died."
>   
>   
>   class ConfigurationError(DTSError):
> -    """
> -    Raised when an invalid configuration is encountered.
> -    """
> +    """An invalid configuration."""
>   
> +    #:
>       severity: ClassVar[ErrorSeverity] = ErrorSeverity.CONFIG_ERR
>   
>   
>   class RemoteCommandExecutionError(DTSError):
> -    """
> -    Raised when a command executed on a Node returns a non-zero exit status.
> -    """
> +    """An unsuccessful execution of a remote command."""
>   
> +    #:
>       severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
> +    #: The executed command.
>       command: str
>       _command_return_code: int
>   
>       def __init__(self, command: str, command_return_code: int):
> +        """Define the meaning of the first two arguments.
> +
> +        Args:
> +            command: The executed command.
> +            command_return_code: The return code of the executed command.
> +        """
>           self.command = command
>           self._command_return_code = command_return_code
>   
>       def __str__(self) -> str:
> +        """Include both the command and return code in the string representation."""
>           return (
>               f"Command {self.command} returned a non-zero exit code: "
>               f"{self._command_return_code}"
> @@ -118,35 +150,41 @@ def __str__(self) -> str:
>   
>   
>   class RemoteDirectoryExistsError(DTSError):
> -    """
> -    Raised when a remote directory to be created already exists.
> -    """
> +    """A directory that exists on a remote node."""
>   
> +    #:
>       severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
>   
>   
>   class DPDKBuildError(DTSError):
> -    """
> -    Raised when DPDK build fails for any reason.
> -    """
> +    """A DPDK build failure."""
>   
> +    #:
>       severity: ClassVar[ErrorSeverity] = ErrorSeverity.DPDK_BUILD_ERR
>   
>   
>   class TestCaseVerifyError(DTSError):
> -    """
> -    Used in test cases to verify the expected behavior.
> -    """
> +    """A test case failure."""
>   
> +    #:
>       severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR
>   
>   
>   class BlockingTestSuiteError(DTSError):
> +    """A failure in a blocking test suite."""
> +
> +    #:
>       severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR
>       _suite_name: str
>   
>       def __init__(self, suite_name: str) -> None:
> +        """Define the meaning of the first argument.
> +
> +        Args:
> +            suite_name: The blocking test suite.
> +        """
>           self._suite_name = suite_name
>   
>       def __str__(self) -> str:
> +        """Add some context to the string representation."""
>           return f"Blocking suite {self._suite_name} failed."
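
As a footnote, the "highest severity becomes the exit code" rule from the
module docstring boils down to something like this sketch (assumed names):

    from enum import IntEnum

    class ErrorSeverity(IntEnum):
        NO_ERR = 0
        GENERIC_ERR = 1
        SSH_ERR = 4

    def exit_code(errors: list[Exception]) -> int:
        # IntEnum members compare as ints, so max() yields the highest severity
        return int(max(
            (getattr(e, "severity", ErrorSeverity.GENERIC_ERR) for e in errors),
            default=ErrorSeverity.NO_ERR,
        ))

    print(exit_code([]))  # 0 when nothing was raised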


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 06/21] dts: logger and utils docstring update
  2023-11-15 13:09               ` [PATCH v7 06/21] dts: logger and utils " Juraj Linkeš
@ 2023-11-20 16:23                 ` Yoan Picchi
  2023-11-20 16:36                   ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Yoan Picchi @ 2023-11-20 16:23 UTC (permalink / raw)
  To: Juraj Linkeš,
	thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek
  Cc: dev

On 11/15/23 13:09, Juraj Linkeš wrote:
> Format according to the Google format and PEP257, with slight
> deviations.
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>   dts/framework/logger.py | 72 ++++++++++++++++++++++-----------
>   dts/framework/utils.py  | 88 +++++++++++++++++++++++++++++------------
>   2 files changed, 113 insertions(+), 47 deletions(-)
> 
> diff --git a/dts/framework/logger.py b/dts/framework/logger.py
> index bb2991e994..d3eb75a4e4 100644
> --- a/dts/framework/logger.py
> +++ b/dts/framework/logger.py
> @@ -3,9 +3,9 @@
>   # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
>   # Copyright(c) 2022-2023 University of New Hampshire
>   
> -"""
> -DTS logger module with several log level. DTS framework and TestSuite logs
> -are saved in different log files.
> +"""DTS logger module.
> +
> +DTS framework and TestSuite logs are saved in different log files.
>   """
>   
>   import logging
> @@ -18,19 +18,21 @@
>   stream_fmt = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
>   
>   
> -class LoggerDictType(TypedDict):
> -    logger: "DTSLOG"
> -    name: str
> -    node: str
> -
> +class DTSLOG(logging.LoggerAdapter):
> +    """DTS logger adapter class for framework and testsuites.
>   
> -# List for saving all using loggers
> -Loggers: list[LoggerDictType] = []
> +    The :option:`--verbose` command line argument and the :envvar:`DTS_VERBOSE` environment
> +    variable control the verbosity of output. If enabled, all messages will be emitted to the
> +    console.
>   
> +    The :option:`--output` command line argument and the :envvar:`DTS_OUTPUT_DIR` environment
> +    variable modify the directory where the logs will be stored.
>   
> -class DTSLOG(logging.LoggerAdapter):
> -    """
> -    DTS log class for framework and testsuite.
> +    Attributes:
> +        node: The additional identifier. Currently unused.
> +        sh: The handler which emits logs to console.
> +        fh: The handler which emits logs to a file.
> +        verbose_fh: Just as fh, but logs with a different, more verbose, format.
>       """
>   
>       _logger: logging.Logger
> @@ -40,6 +42,15 @@ class DTSLOG(logging.LoggerAdapter):
>       verbose_fh: logging.FileHandler
>   
>       def __init__(self, logger: logging.Logger, node: str = "suite"):
> +        """Extend the constructor with additional handlers.
> +
> +        One handler logs to the console, the other one to a file, with either a regular or verbose
> +        format.
> +
> +        Args:
> +            logger: The logger from which to create the logger adapter.
> +            node: An additional identifier. Currently unused.
> +        """
>           self._logger = logger
>           # 1 means log everything, this will be used by file handlers if their level
>           # is not set
> @@ -92,26 +103,43 @@ def __init__(self, logger: logging.Logger, node: str = "suite"):
>           super(DTSLOG, self).__init__(self._logger, dict(node=self.node))
>   
>       def logger_exit(self) -> None:
> -        """
> -        Remove stream handler and logfile handler.
> -        """
> +        """Remove the stream handler and the logfile handler."""
>           for handler in (self.sh, self.fh, self.verbose_fh):
>               handler.flush()
>               self._logger.removeHandler(handler)
>   
>   
> +class _LoggerDictType(TypedDict):
> +    logger: DTSLOG
> +    name: str
> +    node: str
> +
> +
> +# List for saving all loggers in use
> +_Loggers: list[_LoggerDictType] = []
> +
> +
>   def getLogger(name: str, node: str = "suite") -> DTSLOG:
> +    """Get DTS logger adapter identified by name and node.
> +
> +    An existing logger will be return if one with the exact name and node already exists.

An existing logger will be return*ed*

> +    A new one will be created and stored otherwise.
> +
> +    Args:
> +        name: The name of the logger.
> +        node: An additional identifier for the logger.
> +
> +    Returns:
> +        A logger uniquely identified by both name and node.
>       """
> -    Get logger handler and if there's no handler for specified Node will create one.
> -    """
> -    global Loggers
> +    global _Loggers
>       # return saved logger
> -    logger: LoggerDictType
> -    for logger in Loggers:
> +    logger: _LoggerDictType
> +    for logger in _Loggers:
>           if logger["name"] == name and logger["node"] == node:
>               return logger["logger"]
>   
>       # return new logger
>       dts_logger: DTSLOG = DTSLOG(logging.getLogger(name), node)
> -    Loggers.append({"logger": dts_logger, "name": name, "node": node})
> +    _Loggers.append({"logger": dts_logger, "name": name, "node": node})
>       return dts_logger
> diff --git a/dts/framework/utils.py b/dts/framework/utils.py
> index f0c916471c..5016e3be10 100644
> --- a/dts/framework/utils.py
> +++ b/dts/framework/utils.py
> @@ -3,6 +3,16 @@
>   # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
>   # Copyright(c) 2022-2023 University of New Hampshire
>   
> +"""Various utility classes and functions.
> +
> +These are used in multiple modules across the framework. They're here because
> +they provide some non-specific functionality, greatly simplify imports or just don't
> +fit elsewhere.
> +
> +Attributes:
> +    REGEX_FOR_PCI_ADDRESS: The regex representing a PCI address, e.g. ``0000:00:08.0``.
> +"""
> +
>   import atexit
>   import json
>   import os
> @@ -19,12 +29,20 @@
>   
>   
>   def expand_range(range_str: str) -> list[int]:
> -    """
> -    Process range string into a list of integers. There are two possible formats:
> -    n - a single integer
> -    n-m - a range of integers
> +    """Process `range_str` into a list of integers.
> +
> +    There are two possible formats of `range_str`:
> +
> +        * ``n`` - a single integer,
> +        * ``n-m`` - a range of integers.
>   
> -    The returned range includes both n and m. Empty string returns an empty list.
> +    The returned range includes both ``n`` and ``m``. Empty string returns an empty list.
> +
> +    Args:
> +        range_str: The range to expand.
> +
> +    Returns:
> +        All the numbers from the range.
>       """
>       expanded_range: list[int] = []
>       if range_str:
> @@ -39,6 +57,14 @@ def expand_range(range_str: str) -> list[int]:
>   
>   
>   def get_packet_summaries(packets: list[Packet]) -> str:
> +    """Format a string summary from `packets`.
> +
> +    Args:
> +        packets: The packets to format.
> +
> +    Returns:
> +        The summary of `packets`.
> +    """
>       if len(packets) == 1:
>           packet_summaries = packets[0].summary()
>       else:
> @@ -49,6 +75,8 @@ def get_packet_summaries(packets: list[Packet]) -> str:
>   
>   
>   class StrEnum(Enum):
> +    """Enum with members stored as strings."""
> +
>       @staticmethod
>       def _generate_next_value_(
>           name: str, start: int, count: int, last_values: object
> @@ -56,22 +84,29 @@ def _generate_next_value_(
>           return name
>   
>       def __str__(self) -> str:
> +        """The string representation is the name of the member."""
>           return self.name
>   
>   
>   class MesonArgs(object):
> -    """
> -    Aggregate the arguments needed to build DPDK:
> -    default_library: Default library type, Meson allows "shared", "static" and "both".
> -               Defaults to None, in which case the argument won't be used.
> -    Keyword arguments: The arguments found in meson_options.txt in root DPDK directory.
> -               Do not use -D with them, for example:
> -               meson_args = MesonArgs(enable_kmods=True).
> -    """
> +    """Aggregate the arguments needed to build DPDK."""
>   
>       _default_library: str
>   
>       def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
> +        """Initialize the meson arguments.
> +
> +        Args:
> +            default_library: The default library type; Meson supports ``shared``, ``static`` and
> +                ``both``. Defaults to :data:`None`, in which case the argument won't be used.
> +            dpdk_args: The arguments found in ``meson_options.txt`` in root DPDK directory.
> +                Do not use ``-D`` with them.
> +
> +        Example:
> +            ::
> +
> +                meson_args = MesonArgs(enable_kmods=True)
> +        """
>           self._default_library = (
>               f"--default-library={default_library}" if default_library else ""
>           )
> @@ -83,6 +118,7 @@ def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
>           )
>   
>       def __str__(self) -> str:
> +        """The actual args."""
>           return " ".join(f"{self._default_library} {self._dpdk_args}".split())
>   
>   
> @@ -104,24 +140,14 @@ class _TarCompressionFormat(StrEnum):
>   
>   
>   class DPDKGitTarball(object):
> -    """Create a compressed tarball of DPDK from the repository.
> -
> -    The DPDK version is specified with git object git_ref.
> -    The tarball will be compressed with _TarCompressionFormat,
> -    which must be supported by the DTS execution environment.
> -    The resulting tarball will be put into output_dir.
> +    """Compressed tarball of DPDK from the repository.
>   
> -    The class supports the os.PathLike protocol,
> +    The class supports the :class:`os.PathLike` protocol,
>       which is used to get the Path of the tarball::
>   
>           from pathlib import Path
>           tarball = DPDKGitTarball("HEAD", "output")
>           tarball_path = Path(tarball)
> -
> -    Arguments:
> -        git_ref: A git commit ID, tag ID or tree ID.
> -        output_dir: The directory where to put the resulting tarball.
> -        tar_compression_format: The compression format to use.
>       """
>   
>       _git_ref: str
> @@ -136,6 +162,17 @@ def __init__(
>           output_dir: str,
>           tar_compression_format: _TarCompressionFormat = _TarCompressionFormat.xz,
>       ):
> +        """Create the tarball during initialization.
> +
> +        The DPDK version is specified with `git_ref`. The tarball will be compressed with
> +        `tar_compression_format`, which must be supported by the DTS execution environment.
> +        The resulting tarball will be put into `output_dir`.
> +
> +        Args:
> +            git_ref: A git commit ID, tag ID or tree ID.
> +            output_dir: The directory where to put the resulting tarball.
> +            tar_compression_format: The compression format to use.
> +        """
>           self._git_ref = git_ref
>           self._tar_compression_format = tar_compression_format
>   
> @@ -204,4 +241,5 @@ def _delete_tarball(self) -> None:
>               os.remove(self._tarball_path)
>   
>       def __fspath__(self) -> str:
> +        """The os.PathLike protocol implementation."""
>           return str(self._tarball_path)
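
As an aside, the documented semantics of expand_range (the diff above only
shows the function's beginning) boil down to something like this minimal
sketch; it is not the patch's actual implementation:

    def expand_range(range_str: str) -> list[int]:
        # "" -> [], "5" -> [5], "0-3" -> [0, 1, 2, 3] (both ends inclusive)
        if not range_str:
            return []
        bounds = range_str.split("-")
        # A single integer is a range with equal lower and upper bounds.
        return list(range(int(bounds[0]), int(bounds[-1]) + 1))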


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 08/21] dts: test suite docstring update
  2023-11-16 22:16                 ` Jeremy Spewock
@ 2023-11-20 16:25                   ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-20 16:25 UTC (permalink / raw)
  To: Jeremy Spewock
  Cc: thomas, Honnappa.Nagarahalli, probb, paul.szczepanek, yoan.picchi, dev

On Thu, Nov 16, 2023 at 11:16 PM Jeremy Spewock <jspewock@iol.unh.edu> wrote:
>
>
>
> On Wed, Nov 15, 2023 at 8:12 AM Juraj Linkeš <juraj.linkes@pantheon.tech> wrote:
>>
>> Format according to the Google format and PEP257, with slight
>> deviations.
>>
>> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
>> ---
>>  dts/framework/test_suite.py | 223 +++++++++++++++++++++++++++---------
>>  1 file changed, 168 insertions(+), 55 deletions(-)
>>
>> diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
>> index d53553bf34..9e5251ffc6 100644
>> --- a/dts/framework/test_suite.py
>> +++ b/dts/framework/test_suite.py
>> @@ -2,8 +2,19 @@
>>  # Copyright(c) 2010-2014 Intel Corporation
>>  # Copyright(c) 2023 PANTHEON.tech s.r.o.
>>
>> -"""
>> -Base class for creating DTS test cases.
>> +"""Features common to all test suites.
>> +
>> +The module defines the :class:`TestSuite` class which doesn't contain any test cases, and as such
>> +must be extended by subclasses which add test cases. The :class:`TestSuite` contains the basics
>> +needed by subclasses:
>> +
>> +    * Test suite and test case execution flow,
>> +    * Testbed (SUT, TG) configuration,
>> +    * Packet sending and verification,
>> +    * Test case verification.
>> +
>> +The module also defines a function, :func:`get_test_suites`,
>> +for gathering test suites from a Python module.
>>  """
>>
>>  import importlib
>> @@ -31,25 +42,44 @@
>>
>>
>>  class TestSuite(object):
>> -    """
>> -    The base TestSuite class provides methods for handling basic flow of a test suite:
>> -    * test case filtering and collection
>> -    * test suite setup/cleanup
>> -    * test setup/cleanup
>> -    * test case execution
>> -    * error handling and results storage
>> -    Test cases are implemented by derived classes. Test cases are all methods
>> -    starting with test_, further divided into performance test cases
>> -    (starting with test_perf_) and functional test cases (all other test cases).
>> -    By default, all test cases will be executed. A list of testcase str names
>> -    may be specified in conf.yaml or on the command line
>> -    to filter which test cases to run.
>> -    The methods named [set_up|tear_down]_[suite|test_case] should be overridden
>> -    in derived classes if the appropriate suite/test case fixtures are needed.
>> +    """The base class with methods for handling the basic flow of a test suite.
>> +
>> +        * Test case filtering and collection,
>> +        * Test suite setup/cleanup,
>> +        * Test setup/cleanup,
>> +        * Test case execution,
>> +        * Error handling and results storage.
>> +
>> +    Test cases are implemented by subclasses. Test cases are all methods starting with ``test_``,
>> +    further divided into performance test cases (starting with ``test_perf_``)
>> +    and functional test cases (all other test cases).
>> +
>> +    By default, all test cases will be executed. A list of test case names may be specified
>> +    in the YAML test run configuration file, in the :option:`--test-cases` command line argument
>> +    or in the :envvar:`DTS_TESTCASES` environment variable to filter which test cases to run.
>> +    The union of both lists will be used. Any unknown test cases from the latter lists
>> +    will be silently ignored.
>> +
>> +    If the :option:`--re-run` command line argument or the :envvar:`DTS_RERUN` environment variable
>> +    is set, in case of a test case failure, the test case will be executed again until it passes
>> +    or it fails that many times in addition to the first failure.
>> +
>> +    The methods named ``[set_up|tear_down]_[suite|test_case]`` should be overridden in subclasses
>> +    if the appropriate test suite/test case fixtures are needed.
>> +
>> +    The test suite is aware of the testbed (the SUT and TG) it's running on. From this, it can
>> +    properly choose the IP addresses and other configuration that must be tailored to the testbed.
>> +
>> +    Attributes:
>> +        sut_node: The SUT node where the test suite is running.
>> +        tg_node: The TG node where the test suite is running.
>> +        is_blocking: Whether the test suite is blocking. A failure of a blocking test suite
>> +            will block the execution of all subsequent test suites in the current build target.
>>      """
>
>
> Should this attribute section instead be comments in the form "#:" because they are class variables instead of instance ones?
>

Yes and no. The first two are not class variables, but the last one
is, so I'll change is_blocking to ClassVar. Thankfully the resulting
generated docs look just fine, the instance variables are listed
first, then class variables.
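
In code, the distinction being discussed looks roughly like this (a minimal
sketch, assuming the #: autodoc convention used elsewhere in this series):

    from typing import ClassVar

    class TestSuite:
        #: Whether the test suite is blocking; Sphinx picks up this comment.
        is_blocking: ClassVar[bool] = False

        # Bare annotations without a value are instance attributes and are
        # documented in the class docstring's Attributes: section instead.
        sut_node: "SutNode"
        tg_node: "TGNode"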

>>
>>
>>      sut_node: SutNode
>> -    is_blocking = False
>> +    tg_node: TGNode
>> +    is_blocking: bool = False
>>      _logger: DTSLOG
>>      _test_cases_to_run: list[str]
>>      _func: bool
>> @@ -72,6 +102,19 @@ def __init__(
>>          func: bool,
>>          build_target_result: BuildTargetResult,
>>      ):
>> +        """Initialize the test suite testbed information and basic configuration.
>> +
>> +        Process what test cases to run, create the associated :class:`TestSuiteResult`,
>> +        find links between ports and set up default IP addresses to be used when configuring them.
>> +
>> +        Args:
>> +            sut_node: The SUT node where the test suite will run.
>> +            tg_node: The TG node where the test suite will run.
>> +            test_cases: The list of test cases to execute.
>> +                If empty, all test cases will be executed.
>> +            func: Whether to run functional tests.
>> +            build_target_result: The build target result this test suite is run in.
>> +        """
>>          self.sut_node = sut_node
>>          self.tg_node = tg_node
>>          self._logger = getLogger(self.__class__.__name__)
>> @@ -95,6 +138,7 @@ def __init__(
>>          self._tg_ip_address_ingress = ip_interface("192.168.101.3/24")
>>
>>      def _process_links(self) -> None:
>> +        """Construct links between SUT and TG ports."""
>>          for sut_port in self.sut_node.ports:
>>              for tg_port in self.tg_node.ports:
>>                  if (sut_port.identifier, sut_port.peer) == (
>> @@ -106,27 +150,42 @@ def _process_links(self) -> None:
>>                      )
>>
>>      def set_up_suite(self) -> None:
>> -        """
>> -        Set up test fixtures common to all test cases; this is done before
>> -        any test case is run.
>> +        """Set up test fixtures common to all test cases.
>> +
>> +        This is done before any test case has been run.
>>          """
>>
>>      def tear_down_suite(self) -> None:
>> -        """
>> -        Tear down the previously created test fixtures common to all test cases.
>> +        """Tear down the previously created test fixtures common to all test cases.
>> +
>> +        This is done after all tests have been run.
>>          """
>>
>>      def set_up_test_case(self) -> None:
>> -        """
>> -        Set up test fixtures before each test case.
>> +        """Set up test fixtures before each test case.
>> +
>> +        This is done before *each* test case.
>>          """
>>
>>      def tear_down_test_case(self) -> None:
>> -        """
>> -        Tear down the previously created test fixtures after each test case.
>> +        """Tear down the previously created test fixtures after each test case.
>> +
>> +        This is done after *each* test case.
>>          """
>>
>>      def configure_testbed_ipv4(self, restore: bool = False) -> None:
>> +        """Configure IPv4 addresses on all testbed ports.
>> +
>> +        The configured ports are:
>> +
>> +        * SUT ingress port,
>> +        * SUT egress port,
>> +        * TG ingress port,
>> +        * TG egress port.
>> +
>> +        Args:
>> +            restore: If :data:`True`, will remove the configuration instead.
>> +        """
>>          delete = True if restore else False
>>          enable = False if restore else True
>>          self._configure_ipv4_forwarding(enable)
>> @@ -153,11 +212,13 @@ def _configure_ipv4_forwarding(self, enable: bool) -> None:
>>      def send_packet_and_capture(
>>          self, packet: Packet, duration: float = 1
>>      ) -> list[Packet]:
>> -        """
>> -        Send a packet through the appropriate interface and
>> -        receive on the appropriate interface.
>> -        Modify the packet with l3/l2 addresses corresponding
>> -        to the testbed and desired traffic.
>> +        """Send and receive `packet` using the associated TG.
>> +
>> +        Send `packet` through the appropriate interface and receive on the appropriate interface.
>> +        Modify the packet with L3/L2 addresses corresponding to the testbed and desired traffic.
>> +
>> +        Returns:
>> +            A list of received packets.
>>          """
>>          packet = self._adjust_addresses(packet)
>>          return self.tg_node.send_packet_and_capture(
>> @@ -165,13 +226,25 @@ def send_packet_and_capture(
>>          )
>>
>>      def get_expected_packet(self, packet: Packet) -> Packet:
>> +        """Inject the proper L2/L3 addresses into `packet`.
>> +
>> +        Args:
>> +            packet: The packet to modify.
>> +
>> +        Returns:
>> +            `packet` with injected L2/L3 addresses.
>> +        """
>>          return self._adjust_addresses(packet, expected=True)
>>
>>      def _adjust_addresses(self, packet: Packet, expected: bool = False) -> Packet:
>> -        """
>> +        """L2 and L3 address additions in both directions.
>> +
>>          Assumptions:
>> -            Two links between SUT and TG, one link is TG -> SUT,
>> -            the other SUT -> TG.
>> +            Two links between SUT and TG, one link is TG -> SUT, the other SUT -> TG.
>> +
>> +        Args:
>> +            packet: The packet to modify.
>> +            expected: If :data:`True`, the direction is SUT -> TG, otherwise the direction is TG -> SUT.
>>          """
>>          if expected:
>>              # The packet enters the TG from SUT
>> @@ -197,6 +270,19 @@ def _adjust_addresses(self, packet: Packet, expected: bool = False) -> Packet:
>>          return Ether(packet.build())
>>
>>      def verify(self, condition: bool, failure_description: str) -> None:
>> +        """Verify `condition` and handle failures.
>> +
>> +        When `condition` is :data:`False`, raise an exception and log the last 10 commands
>> +        executed on both the SUT and TG.
>> +
>> +        Args:
>> +            condition: The condition to check.
>> +            failure_description: A short description of the failure
>> +                that will be stored in the raised exception.
>> +
>> +        Raises:
>> +            TestCaseVerifyError: `condition` is :data:`False`.
>> +        """
>>          if not condition:
>>              self._fail_test_case_verify(failure_description)
>>
>> @@ -216,6 +302,19 @@ def _fail_test_case_verify(self, failure_description: str) -> None:
>>      def verify_packets(
>>          self, expected_packet: Packet, received_packets: list[Packet]
>>      ) -> None:
>> +        """Verify that `expected_packet` has been received.
>> +
>> +        Go through `received_packets` and check that `expected_packet` is among them.
>> +        If not, raise an exception and log the last 10 commands
>> +        executed on both the SUT and TG.
>> +
>> +        Args:
>> +            expected_packet: The packet we're expecting to receive.
>> +            received_packets: The packets where we're looking for `expected_packet`.
>> +
>> +        Raises:
>> +            TestCaseVerifyError: `expected_packet` is not among `received_packets`.
>> +        """
>>          for received_packet in received_packets:
>>              if self._compare_packets(expected_packet, received_packet):
>>                  break
>> @@ -303,10 +402,14 @@ def _verify_l3_packet(self, received_packet: IP, expected_packet: IP) -> bool:
>>          return True
>>
>>      def run(self) -> None:
>> -        """
>> -        Setup, execute and teardown the whole suite.
>> -        Suite execution consists of running all test cases scheduled to be executed.
>> -        A test cast run consists of setup, execution and teardown of said test case.
>> +        """Set up, execute and tear down the whole suite.
>> +
>> +        Test suite execution consists of running all test cases scheduled to be executed.
>> +        A test case run consists of setup, execution and teardown of said test case.
>> +
>> +        Record the setup and the teardown and handle failures.
>> +
>> +        The list of scheduled test cases is constructed when creating the :class:`TestSuite` object.
>>          """
>>          test_suite_name = self.__class__.__name__
>>
>> @@ -338,9 +441,7 @@ def run(self) -> None:
>>                  raise BlockingTestSuiteError(test_suite_name)
>>
>>      def _execute_test_suite(self) -> None:
>> -        """
>> -        Execute all test cases scheduled to be executed in this suite.
>> -        """
>> +        """Execute all test cases scheduled to be executed in this suite."""
>>          if self._func:
>>              for test_case_method in self._get_functional_test_cases():
>>                  test_case_name = test_case_method.__name__
>> @@ -357,14 +458,18 @@ def _execute_test_suite(self) -> None:
>>                      self._run_test_case(test_case_method, test_case_result)
>>
>>      def _get_functional_test_cases(self) -> list[MethodType]:
>> -        """
>> -        Get all functional test cases.
>> +        """Get all functional test cases defined in this TestSuite.
>> +
>> +        Returns:
>> +            The list of functional test cases of this TestSuite.
>>          """
>>          return self._get_test_cases(r"test_(?!perf_)")
>>
>>      def _get_test_cases(self, test_case_regex: str) -> list[MethodType]:
>> -        """
>> -        Return a list of test cases matching test_case_regex.
>> +        """Return a list of test cases matching test_case_regex.
>> +
>> +        Returns:
>> +            The list of test cases matching test_case_regex of this TestSuite.
>>          """
>>          self._logger.debug(f"Searching for test cases in {self.__class__.__name__}.")
>>          filtered_test_cases = []
>> @@ -378,9 +483,7 @@ def _get_test_cases(self, test_case_regex: str) -> list[MethodType]:
>>          return filtered_test_cases
>>
>>      def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool:
>> -        """
>> -        Check whether the test case should be executed.
>> -        """
>> +        """Check whether the test case should be scheduled to be executed."""
>>          match = bool(re.match(test_case_regex, test_case_name))
>>          if self._test_cases_to_run:
>>              return match and test_case_name in self._test_cases_to_run
>> @@ -390,9 +493,9 @@ def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool
>>      def _run_test_case(
>>          self, test_case_method: MethodType, test_case_result: TestCaseResult
>>      ) -> None:
>> -        """
>> -        Setup, execute and teardown a test case in this suite.
>> -        Exceptions are caught and recorded in logs and results.
>> +        """Setup, execute and teardown a test case in this suite.
>> +
>> +        Record the result of the setup and the teardown and handle failures.
>>          """
>>          test_case_name = test_case_method.__name__
>>
>> @@ -427,9 +530,7 @@ def _run_test_case(
>>      def _execute_test_case(
>>          self, test_case_method: MethodType, test_case_result: TestCaseResult
>>      ) -> None:
>> -        """
>> -        Execute one test case and handle failures.
>> -        """
>> +        """Execute one test case, record the result and handle failures."""
>>          test_case_name = test_case_method.__name__
>>          try:
>>              self._logger.info(f"Starting test case execution: {test_case_name}")
>> @@ -452,6 +553,18 @@ def _execute_test_case(
>>
>>
>>  def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
>> +    r"""Find all :class:`TestSuite`\s in a Python module.
>> +
>> +    Args:
>> +        testsuite_module_path: The path to the Python module.
>> +
>> +    Returns:
>> +        The list of :class:`TestSuite`\s found within the Python module.
>> +
>> +    Raises:
>> +        ConfigurationError: The test suite module was not found.
>> +    """
>> +
>>      def is_test_suite(object: Any) -> bool:
>>          try:
>>              if issubclass(object, TestSuite) and object is not TestSuite:
>> --
>> 2.34.1
>>
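
A side note on the functional/performance split above: it hinges on the
negative lookahead regex passed to _get_test_cases. A self-contained check:

    import re

    def is_functional(name: str) -> bool:
        # Matches the "test_" prefix unless it's followed by "perf_".
        return bool(re.match(r"test_(?!perf_)", name))

    assert is_functional("test_hello_world")
    assert not is_functional("test_perf_throughput")
    assert not is_functional("set_up_suite")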

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 09/21] dts: test result docstring update
  2023-11-16 22:47                 ` Jeremy Spewock
@ 2023-11-20 16:33                   ` Juraj Linkeš
  2023-11-30 21:20                     ` Jeremy Spewock
  0 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-20 16:33 UTC (permalink / raw)
  To: Jeremy Spewock
  Cc: thomas, Honnappa.Nagarahalli, probb, paul.szczepanek, yoan.picchi, dev

On Thu, Nov 16, 2023 at 11:47 PM Jeremy Spewock <jspewock@iol.unh.edu> wrote:
>
> The only comments I had on this were a few places where I think attribute sections should be class variables instead. I tried to mark all of the places I saw it and it could be a difference where because of the way they are subclassed they might do it differently but I'm unsure.
>
> On Wed, Nov 15, 2023 at 8:12 AM Juraj Linkeš <juraj.linkes@pantheon.tech> wrote:
>>
>> Format according to the Google format and PEP257, with slight
>> deviations.
>>
>> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
>> ---
>>  dts/framework/test_result.py | 292 ++++++++++++++++++++++++++++-------
>>  1 file changed, 234 insertions(+), 58 deletions(-)
>>
>> diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
>> index 603e18872c..05e210f6e7 100644
>> --- a/dts/framework/test_result.py
>> +++ b/dts/framework/test_result.py
>> @@ -2,8 +2,25 @@
>>  # Copyright(c) 2023 PANTHEON.tech s.r.o.
>>  # Copyright(c) 2023 University of New Hampshire
>>
>> -"""
>> -Generic result container and reporters
>> +r"""Record and process DTS results.
>> +
>> +The results are recorded in a hierarchical manner:
>> +
>> +    * :class:`DTSResult` contains
>> +    * :class:`ExecutionResult` contains
>> +    * :class:`BuildTargetResult` contains
>> +    * :class:`TestSuiteResult` contains
>> +    * :class:`TestCaseResult`
>> +
>> +Each result may contain multiple lower level results, e.g. there are multiple
>> +:class:`TestSuiteResult`\s in a :class:`BuildTargetResult`.
>> +The results have common parts, such as setup and teardown results, captured in :class:`BaseResult`,
>> +which also defines some common behaviors in its methods.
>> +
>> +Each result class has its own idiosyncrasies which they implement in overridden methods.
>> +
>> +The :option:`--output` command line argument and the :envvar:`DTS_OUTPUT_DIR` environment
>> +variable modify the directory where the files with results will be stored.
>>  """
>>
>>  import os.path
>> @@ -26,26 +43,34 @@
>>
>>
>>  class Result(Enum):
>> -    """
>> -    An Enum defining the possible states that
>> -    a setup, a teardown or a test case may end up in.
>> -    """
>> +    """The possible states that a setup, a teardown or a test case may end up in."""
>>
>> +    #:
>>      PASS = auto()
>> +    #:
>>      FAIL = auto()
>> +    #:
>>      ERROR = auto()
>> +    #:
>>      SKIP = auto()
>>
>>      def __bool__(self) -> bool:
>> +        """Only PASS is True."""
>>          return self is self.PASS
>>
>>
>>  class FixtureResult(object):
>> -    """
>> -    A record that stored the result of a setup or a teardown.
>> -    The default is FAIL because immediately after creating the object
>> -    the setup of the corresponding stage will be executed, which also guarantees
>> -    the execution of teardown.
>> +    """A record that stores the result of a setup or a teardown.
>> +
>> +    FAIL is a sensible default since it prevents false positives
>> +    (which could happen if the default was PASS).
>> +
>> +    Preventing false positives or other false results is preferable since a failure
>> +    is most likely to be investigated (the other false results may not be investigated at all).
>> +
>> +    Attributes:
>> +        result: The associated result.
>> +        error: The error in case of a failure.
>>      """
>
>
> I think the items in the attributes section should instead be "#:" because they are class variables.
>

Making these class variables would make the value the same for all
instances, of which there are plenty. Why do you think these should be
class variables?
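
A toy example of the difference (not DTS code): a value assigned at class
level is shared, while a value assigned in __init__ is per instance.

    class Shared:
        errors: list = []  # class variable: the same list on every instance

    a, b = Shared(), Shared()
    a.errors.append("boom")
    assert b.errors == ["boom"]  # b sees a's append

    class PerInstance:
        def __init__(self) -> None:
            self.errors: list = []  # instance variable: a fresh list per object

    c, d = PerInstance(), PerInstance()
    c.errors.append("boom")
    assert d.errors == []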

>>
>>
>>      result: Result
>> @@ -56,21 +81,32 @@ def __init__(
>>          result: Result = Result.FAIL,
>>          error: Exception | None = None,
>>      ):
>> +        """Initialize the constructor with the fixture result and store a possible error.
>> +
>> +        Args:
>> +            result: The result to store.
>> +            error: The error which happened when a failure occurred.
>> +        """
>>          self.result = result
>>          self.error = error
>>
>>      def __bool__(self) -> bool:
>> +        """A wrapper around the stored :class:`Result`."""
>>          return bool(self.result)
>>
>>
>>  class Statistics(dict):
>> -    """
>> -    A helper class used to store the number of test cases by its result
>> -    along a few other basic information.
>> -    Using a dict provides a convenient way to format the data.
>> +    """How many test cases ended in which result state along some other basic information.
>> +
>> +    Subclassing :class:`dict` provides a convenient way to format the data.
>>      """
>>
>>      def __init__(self, dpdk_version: str | None):
>> +        """Extend the constructor with relevant keys.
>> +
>> +        Args:
>> +            dpdk_version: The version of tested DPDK.
>> +        """
>
>
> Should we maybe mark the "PASS RATE" and the "DPDK VERSION" as instance variables of the class?
>

This is a dict, so these won't work as instance variables, but it
makes sense to document these keys, so I'll add that.
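
For reference, a cut-down, self-contained sketch of the dict-key behavior
(the real pass-rate computation in the patch may differ slightly):

    from enum import Enum, auto

    class Result(Enum):
        PASS = auto()
        FAIL = auto()

    class Statistics(dict):
        def __init__(self, dpdk_version: str | None) -> None:
            super().__init__()
            for result in Result:
                self[result.name] = 0  # one counter per result state
            self["PASS RATE"] = 0.0
            self["DPDK VERSION"] = dpdk_version

        def __iadd__(self, other: Result) -> "Statistics":
            self[other.name] += 1
            counted = sum(self[result.name] for result in Result)
            self["PASS RATE"] = self[Result.PASS.name] * 100 / counted
            return self

    stats = Statistics("23.11")
    stats += Result.PASS
    stats += Result.FAIL
    assert stats["PASS RATE"] == 50.0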

>>
>>          super(Statistics, self).__init__()
>>          for result in Result:
>>              self[result.name] = 0
>> @@ -78,8 +114,17 @@ def __init__(self, dpdk_version: str | None):
>>          self["DPDK VERSION"] = dpdk_version
>>
>>      def __iadd__(self, other: Result) -> "Statistics":
>> -        """
>> -        Add a Result to the final count.
>> +        """Add a Result to the final count.
>> +
>> +        Example:
>> +            ::
>> +
>> +                stats: Statistics = Statistics(None)  # empty Statistics
>> +                stats += Result.PASS  # add a Result to `stats`
>> +
>> +        Args:
>> +            other: The Result to add to this statistics object.
>> +
>> +        Returns:
>> +            The modified statistics object.
>>          """
>>          self[other.name] += 1
>>          self["PASS RATE"] = (
>> @@ -90,9 +135,7 @@ def __iadd__(self, other: Result) -> "Statistics":
>>          return self
>>
>>      def __str__(self) -> str:
>> -        """
>> -        Provide a string representation of the data.
>> -        """
>> +        """Each line contains the formatted key = value pair."""
>>          stats_str = ""
>>          for key, value in self.items():
>>              stats_str += f"{key:<12} = {value}\n"
>> @@ -102,10 +145,16 @@ def __str__(self) -> str:
>>
>>
>>  class BaseResult(object):
>> -    """
>> -    The Base class for all results. Stores the results of
>> -    the setup and teardown portions of the corresponding stage
>> -    and a list of results from each inner stage in _inner_results.
>> +    """Common data and behavior of DTS results.
>> +
>> +    Stores the results of the setup and teardown portions of the corresponding stage.
>> +    The hierarchical nature of DTS results is captured recursively in an internal list.
>> +    A stage is each level in this particular hierarchy (pre-execution or the top-most level,
>> +    execution, build target, test suite and test case).
>> +
>> +    Attributes:
>> +        setup_result: The result of the setup of the particular stage.
>> +        teardown_result: The results of the teardown of the particular stage.
>>      """
>
>
> I think this might be another case of the attributes should be marked as class variables instead of instance variables.
>

This is the same as in FixtureResult. For example, there could be
multiple build targets with different results.

>>
>>
>>      setup_result: FixtureResult
>> @@ -113,15 +162,28 @@ class BaseResult(object):
>>      _inner_results: MutableSequence["BaseResult"]
>>
>>      def __init__(self):
>> +        """Initialize the constructor."""
>>          self.setup_result = FixtureResult()
>>          self.teardown_result = FixtureResult()
>>          self._inner_results = []
>>
>>      def update_setup(self, result: Result, error: Exception | None = None) -> None:
>> +        """Store the setup result.
>> +
>> +        Args:
>> +            result: The result of the setup.
>> +            error: The error that occurred in case of a failure.
>> +        """
>>          self.setup_result.result = result
>>          self.setup_result.error = error
>>
>>      def update_teardown(self, result: Result, error: Exception | None = None) -> None:
>> +        """Store the teardown result.
>> +
>> +        Args:
>> +            result: The result of the teardown.
>> +            error: The error that occurred in case of a failure.
>> +        """
>>          self.teardown_result.result = result
>>          self.teardown_result.error = error
>>
>> @@ -141,27 +203,55 @@ def _get_inner_errors(self) -> list[Exception]:
>>          ]
>>
>>      def get_errors(self) -> list[Exception]:
>> +        """Compile errors from the whole result hierarchy.
>> +
>> +        Returns:
>> +            The errors from setup, teardown and all errors found in the whole result hierarchy.
>> +        """
>>          return self._get_setup_teardown_errors() + self._get_inner_errors()
>>
>>      def add_stats(self, statistics: Statistics) -> None:
>> +        """Collate stats from the whole result hierarchy.
>> +
>> +        Args:
>> +            statistics: The :class:`Statistics` object where the stats will be collated.
>> +        """
>>          for inner_result in self._inner_results:
>>              inner_result.add_stats(statistics)
>>
>>
>>  class TestCaseResult(BaseResult, FixtureResult):
>> -    """
>> -    The test case specific result.
>> -    Stores the result of the actual test case.
>> -    Also stores the test case name.
>> +    r"""The test case specific result.
>> +
>> +    Stores the result of the actual test case. This is done by adding an extra superclass
>> +    in :class:`FixtureResult`. The setup and teardown results are :class:`FixtureResult`\s and
>> +    the class is itself a record of the test case.
>> +
>> +    Attributes:
>> +        test_case_name: The test case name.
>>      """
>>
>
> Another spot where I think this should have a class variable comment.
>
>>
>>      test_case_name: str
>>
>>      def __init__(self, test_case_name: str):
>> +        """Extend the constructor with `test_case_name`.
>> +
>> +        Args:
>> +            test_case_name: The test case's name.
>> +        """
>>          super(TestCaseResult, self).__init__()
>>          self.test_case_name = test_case_name
>>
>>      def update(self, result: Result, error: Exception | None = None) -> None:
>> +        """Update the test case result.
>> +
>> +        This updates the result of the test case itself and doesn't affect
>> +        the results of the setup and teardown steps in any way.
>> +
>> +        Args:
>> +            result: The result of the test case.
>> +            error: The error that occurred in case of a failure.
>> +        """
>>          self.result = result
>>          self.error = error
>>
>> @@ -171,38 +261,66 @@ def _get_inner_errors(self) -> list[Exception]:
>>          return []
>>
>>      def add_stats(self, statistics: Statistics) -> None:
>> +        r"""Add the test case result to statistics.
>> +
>> +        The base method goes through the hierarchy recursively and this method is here to stop
>> +        the recursion, as the :class:`TestCaseResult`\s are the leaves of the hierarchy tree.
>> +
>> +        Args:
>> +            statistics: The :class:`Statistics` object where the stats will be added.
>> +        """
>>          statistics += self.result
>>
>>      def __bool__(self) -> bool:
>> +        """The test case passed only if setup, teardown and the test case itself passed."""
>>          return (
>>              bool(self.setup_result) and bool(self.teardown_result) and bool(self.result)
>>          )
>>
>>
>>  class TestSuiteResult(BaseResult):
>> -    """
>> -    The test suite specific result.
>> -    The _inner_results list stores results of test cases in a given test suite.
>> -    Also stores the test suite name.
>> +    """The test suite specific result.
>> +
>> +    The internal list stores the results of all test cases in a given test suite.
>> +
>> +    Attributes:
>> +        suite_name: The test suite name.
>>      """
>>
>
> I think this should also be a class variable.
>
>
>>
>>      suite_name: str
>>
>>      def __init__(self, suite_name: str):
>> +        """Extend the constructor with `suite_name`.
>> +
>> +        Args:
>> +            suite_name: The test suite's name.
>> +        """
>>          super(TestSuiteResult, self).__init__()
>>          self.suite_name = suite_name
>>
>>      def add_test_case(self, test_case_name: str) -> TestCaseResult:
>> +        """Add and return the inner result (test case).
>> +
>> +        Returns:
>> +            The test case's result.
>> +        """
>>          test_case_result = TestCaseResult(test_case_name)
>>          self._inner_results.append(test_case_result)
>>          return test_case_result
>>
>>
>>  class BuildTargetResult(BaseResult):
>> -    """
>> -    The build target specific result.
>> -    The _inner_results list stores results of test suites in a given build target.
>> -    Also stores build target specifics, such as compiler used to build DPDK.
>> +    """The build target specific result.
>> +
>> +    The internal list stores the results of all test suites in a given build target.
>> +
>> +    Attributes:
>> +        arch: The DPDK build target architecture.
>> +        os: The DPDK build target operating system.
>> +        cpu: The DPDK build target CPU.
>> +        compiler: The DPDK build target compiler.
>> +        compiler_version: The DPDK build target compiler version.
>> +        dpdk_version: The built DPDK version.
>>      """
>
>
> I think this should be broken into class variables as well.
>
>>
>>
>>      arch: Architecture
>> @@ -213,6 +331,11 @@ class BuildTargetResult(BaseResult):
>>      dpdk_version: str | None
>>
>>      def __init__(self, build_target: BuildTargetConfiguration):
>> +        """Extend the constructor with the `build_target`'s build target config.
>> +
>> +        Args:
>> +            build_target: The build target's test run configuration.
>> +        """
>>          super(BuildTargetResult, self).__init__()
>>          self.arch = build_target.arch
>>          self.os = build_target.os
>> @@ -222,20 +345,35 @@ def __init__(self, build_target: BuildTargetConfiguration):
>>          self.dpdk_version = None
>>
>>      def add_build_target_info(self, versions: BuildTargetInfo) -> None:
>> +        """Add information about the build target gathered at runtime.
>> +
>> +        Args:
>> +            versions: The additional information.
>> +        """
>>          self.compiler_version = versions.compiler_version
>>          self.dpdk_version = versions.dpdk_version
>>
>>      def add_test_suite(self, test_suite_name: str) -> TestSuiteResult:
>> +        """Add and return the inner result (test suite).
>> +
>> +        Returns:
>> +            The test suite's result.
>> +        """
>>          test_suite_result = TestSuiteResult(test_suite_name)
>>          self._inner_results.append(test_suite_result)
>>          return test_suite_result
>>
>>
>>  class ExecutionResult(BaseResult):
>> -    """
>> -    The execution specific result.
>> -    The _inner_results list stores results of build targets in a given execution.
>> -    Also stores the SUT node configuration.
>> +    """The execution specific result.
>> +
>> +    The internal list stores the results of all build targets in a given execution.
>> +
>> +    Attributes:
>> +        sut_node: The SUT node used in the execution.
>> +        sut_os_name: The operating system of the SUT node.
>> +        sut_os_version: The operating system version of the SUT node.
>> +        sut_kernel_version: The operating system kernel version of the SUT node.
>>      """
>>
>
> I think these should be class variables as well.
>
>>
>>      sut_node: NodeConfiguration
>> @@ -244,36 +382,55 @@ class ExecutionResult(BaseResult):
>>      sut_kernel_version: str
>>
>>      def __init__(self, sut_node: NodeConfiguration):
>> +        """Extend the constructor with the `sut_node`'s config.
>> +
>> +        Args:
>> +            sut_node: The SUT node's test run configuration used in the execution.
>> +        """
>>          super(ExecutionResult, self).__init__()
>>          self.sut_node = sut_node
>>
>>      def add_build_target(
>>          self, build_target: BuildTargetConfiguration
>>      ) -> BuildTargetResult:
>> +        """Add and return the inner result (build target).
>> +
>> +        Args:
>> +            build_target: The build target's test run configuration.
>> +
>> +        Returns:
>> +            The build target's result.
>> +        """
>>          build_target_result = BuildTargetResult(build_target)
>>          self._inner_results.append(build_target_result)
>>          return build_target_result
>>
>>      def add_sut_info(self, sut_info: NodeInfo) -> None:
>> +        """Add SUT information gathered at runtime.
>> +
>> +        Args:
>> +            sut_info: The additional SUT node information.
>> +        """
>>          self.sut_os_name = sut_info.os_name
>>          self.sut_os_version = sut_info.os_version
>>          self.sut_kernel_version = sut_info.kernel_version
>>
>>
>>  class DTSResult(BaseResult):
>> -    """
>> -    Stores environment information and test results from a DTS run, which are:
>> -    * Execution level information, such as SUT and TG hardware.
>> -    * Build target level information, such as compiler, target OS and cpu.
>> -    * Test suite results.
>> -    * All errors that are caught and recorded during DTS execution.
>> +    """Stores environment information and test results from a DTS run.
>>
>> -    The information is stored in nested objects.
>> +        * Execution level information, such as testbed and the test suite list,
>> +        * Build target level information, such as compiler, target OS and cpu,
>> +        * Test suite and test case results,
>> +        * All errors that are caught and recorded during DTS execution.
>>
>> -    The class is capable of computing the return code used to exit DTS with
>> -    from the stored error.
>> +    The information is stored hierarchically. This is the first level of the hierarchy
>> +    and as such is where the data from the whole hierarchy is collated or processed.
>>
>> -    It also provides a brief statistical summary of passed/failed test cases.
>> +    The internal list stores the results of all executions.
>> +
>> +    Attributes:
>> +        dpdk_version: The DPDK version to record.
>>      """
>>
>
> I think this should be a class variable as well.
>

This is the only place where making this a class variable would work,
but I don't see a reason for it. An instance variable works just as
well.

>>
>>      dpdk_version: str | None
>> @@ -284,6 +441,11 @@ class DTSResult(BaseResult):
>>      _stats_filename: str
>>
>>      def __init__(self, logger: DTSLOG):
>> +        """Extend the constructor with top-level specifics.
>> +
>> +        Args:
>> +            logger: The logger instance the whole result will use.
>> +        """
>>          super(DTSResult, self).__init__()
>>          self.dpdk_version = None
>>          self._logger = logger
>> @@ -293,21 +455,33 @@ def __init__(self, logger: DTSLOG):
>>          self._stats_filename = os.path.join(SETTINGS.output_dir, "statistics.txt")
>>
>>      def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult:
>> +        """Add and return the inner result (execution).
>> +
>> +        Args:
>> +            sut_node: The SUT node's test run configuration.
>> +
>> +        Returns:
>> +            The execution's result.
>> +        """
>>          execution_result = ExecutionResult(sut_node)
>>          self._inner_results.append(execution_result)
>>          return execution_result
>>
>>      def add_error(self, error: Exception) -> None:
>> +        """Record an error that occurred outside any execution.
>> +
>> +        Args:
>> +            error: The exception to record.
>> +        """
>>          self._errors.append(error)
>>
>>      def process(self) -> None:
>> -        """
>> -        Process the data after a DTS run.
>> -        The data is added to nested objects during runtime and this parent object
>> -        is not updated at that time. This requires us to process the nested data
>> -        after it's all been gathered.
>> +        """Process the data after a whole DTS run.
>> +
>> +        The data is added to inner objects during runtime and this object is not updated
>> +        at that time. This requires us to process the inner data after it's all been gathered.
>>
>> -        The processing gathers all errors and the result statistics of test cases.
>> +        The processing gathers all errors and the statistics of test case results.
>>          """
>>          self._errors += self.get_errors()
>>          if self._errors and self._logger:
>> @@ -321,8 +495,10 @@ def process(self) -> None:
>>              stats_file.write(str(self._stats_result))
>>
>>      def get_return_code(self) -> int:
>> -        """
>> -        Go through all stored Exceptions and return the highest error code found.
>> +        """Go through all stored Exceptions and return the final DTS error code.
>> +
>> +        Returns:
>> +            The highest error code found.
>>          """
>>          for error in self._errors:
>>              error_return_code = ErrorSeverity.GENERIC_ERR
>> --
>> 2.34.1
>>
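
For anyone following along, the recursive collation described in these
docstrings amounts to roughly the following stripped-down model (not the
patch itself):

    class BaseResult:
        def __init__(self) -> None:
            self._inner_results: list["BaseResult"] = []

        def add_stats(self, statistics: dict[str, int]) -> None:
            # Walk down the hierarchy; only the leaves record anything.
            for inner_result in self._inner_results:
                inner_result.add_stats(statistics)

    class TestCaseResult(BaseResult):
        def __init__(self, result_name: str) -> None:
            super().__init__()
            self.result_name = result_name

        def add_stats(self, statistics: dict[str, int]) -> None:
            # Leaf of the tree: stop the recursion and count the result.
            statistics[self.result_name] = statistics.get(self.result_name, 0) + 1

    build_target = BaseResult()
    build_target._inner_results.append(TestCaseResult("PASS"))
    build_target._inner_results.append(TestCaseResult("FAIL"))
    stats: dict[str, int] = {}
    build_target.add_stats(stats)
    assert stats == {"PASS": 1, "FAIL": 1}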

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 04/21] dts: exceptions docstring update
  2023-11-20 16:22                 ` Yoan Picchi
@ 2023-11-20 16:35                   ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-20 16:35 UTC (permalink / raw)
  To: Yoan Picchi
  Cc: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek, dev

On Mon, Nov 20, 2023 at 5:22 PM Yoan Picchi <yoan.picchi@foss.arm.com> wrote:
>
> On 11/15/23 13:09, Juraj Linkeš wrote:
> > Format according to the Google format and PEP257, with slight
> > deviations.
> >
> > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > ---
> >   dts/framework/__init__.py  |  12 ++++-
> >   dts/framework/exception.py | 106 +++++++++++++++++++++++++------------
> >   2 files changed, 83 insertions(+), 35 deletions(-)
> >
> > diff --git a/dts/framework/__init__.py b/dts/framework/__init__.py
> > index d551ad4bf0..662e6ccad2 100644
> > --- a/dts/framework/__init__.py
> > +++ b/dts/framework/__init__.py
> > @@ -1,3 +1,13 @@
> >   # SPDX-License-Identifier: BSD-3-Clause
> > -# Copyright(c) 2022 PANTHEON.tech s.r.o.
> > +# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
> >   # Copyright(c) 2022 University of New Hampshire
> > +
> > +"""Libraries and utilities for running DPDK Test Suite (DTS).
> > +
> > +The various modules in the DTS framework offer:
> > +
> > +* Connections to nodes, both interactive and non-interactive,
> > +* A straightforward way to add support for different operating systems of remote nodes,
> > +* Test suite setup, execution and teardown, along with test case setup, execution and teardown,
> > +* Pre-test suite setup and post-test suite teardown.
> > +"""
> > diff --git a/dts/framework/exception.py b/dts/framework/exception.py
> > index 7489c03570..ee1562c672 100644
> > --- a/dts/framework/exception.py
> > +++ b/dts/framework/exception.py
> > @@ -3,8 +3,10 @@
> >   # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
> >   # Copyright(c) 2022-2023 University of New Hampshire
> >
> > -"""
> > -User-defined exceptions used across the framework.
> > +"""DTS exceptions.
> > +
> > +The exceptions all have different severities expressed as an integer.
> > +The highest severity of all raised exception is used as the exit code of DTS.
>
> all raised exception*s*
>

Ack, will fix.

> >   """
> >
> >   from enum import IntEnum, unique
> > @@ -13,59 +15,79 @@
> >
> >   @unique
> >   class ErrorSeverity(IntEnum):
> > -    """
> > -    The severity of errors that occur during DTS execution.
> > +    """The severity of errors that occur during DTS execution.
> > +
> >       All exceptions are caught and the most severe error is used as return code.
> >       """
> >
> > +    #:
> >       NO_ERR = 0
> > +    #:
> >       GENERIC_ERR = 1
> > +    #:
> >       CONFIG_ERR = 2
> > +    #:
> >       REMOTE_CMD_EXEC_ERR = 3
> > +    #:
> >       SSH_ERR = 4
> > +    #:
> >       DPDK_BUILD_ERR = 10
> > +    #:
> >       TESTCASE_VERIFY_ERR = 20
> > +    #:
> >       BLOCKING_TESTSUITE_ERR = 25
> >
> >
> >   class DTSError(Exception):
> > -    """
> > -    The base exception from which all DTS exceptions are derived.
> > -    Stores error severity.
> > +    """The base exception from which all DTS exceptions are subclassed.
> > +
> > +    Do not use this exception; only use subclassed exceptions.
> >       """
> >
> > +    #:
> >       severity: ClassVar[ErrorSeverity] = ErrorSeverity.GENERIC_ERR
> >
> >
> >   class SSHTimeoutError(DTSError):
> > -    """
> > -    Command execution timeout.
> > -    """
> > +    """The SSH execution of a command timed out."""
> >
> > +    #:
> >       severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
> >       _command: str
> >
> >       def __init__(self, command: str):
> > +        """Define the meaning of the first argument.
> > +
> > +        Args:
> > +            command: The executed command.
> > +        """
> >           self._command = command
> >
> >       def __str__(self) -> str:
> > -        return f"TIMEOUT on {self._command}"
> > +        """Add some context to the string representation."""
> > +        return f"{self._command} execution timed out."
> >
> >
> >   class SSHConnectionError(DTSError):
> > -    """
> > -    SSH connection error.
> > -    """
> > +    """An unsuccessful SSH connection."""
> >
> > +    #:
> >       severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
> >       _host: str
> >       _errors: list[str]
> >
> >       def __init__(self, host: str, errors: list[str] | None = None):
> > +        """Define the meaning of the first two arguments.
> > +
> > +        Args:
> > +            host: The hostname to which we're trying to connect.
> > +            errors: Any errors that occurred during the connection attempt.
> > +        """
> >           self._host = host
> >           self._errors = [] if errors is None else errors
> >
> >       def __str__(self) -> str:
> > +        """Include the errors in the string representation."""
> >           message = f"Error trying to connect with {self._host}."
> >           if self._errors:
> >               message += f" Errors encountered while retrying: {', '.join(self._errors)}"
> > @@ -74,43 +96,53 @@ def __str__(self) -> str:
> >
> >
> >   class SSHSessionDeadError(DTSError):
> > -    """
> > -    SSH session is not alive.
> > -    It can no longer be used.
> > -    """
> > +    """The SSH session is no longer alive."""
> >
> > +    #:
> >       severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
> >       _host: str
> >
> >       def __init__(self, host: str):
> > +        """Define the meaning of the first argument.
> > +
> > +        Args:
> > +            host: The hostname of the disconnected node.
> > +        """
> >           self._host = host
> >
> >       def __str__(self) -> str:
> > -        return f"SSH session with {self._host} has died"
> > +        """Add some context to the string representation."""
> > +        return f"SSH session with {self._host} has died."
> >
> >
> >   class ConfigurationError(DTSError):
> > -    """
> > -    Raised when an invalid configuration is encountered.
> > -    """
> > +    """An invalid configuration."""
> >
> > +    #:
> >       severity: ClassVar[ErrorSeverity] = ErrorSeverity.CONFIG_ERR
> >
> >
> >   class RemoteCommandExecutionError(DTSError):
> > -    """
> > -    Raised when a command executed on a Node returns a non-zero exit status.
> > -    """
> > +    """An unsuccessful execution of a remote command."""
> >
> > +    #:
> >       severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
> > +    #: The executed command.
> >       command: str
> >       _command_return_code: int
> >
> >       def __init__(self, command: str, command_return_code: int):
> > +        """Define the meaning of the first two arguments.
> > +
> > +        Args:
> > +            command: The executed command.
> > +            command_return_code: The return code of the executed command.
> > +        """
> >           self.command = command
> >           self._command_return_code = command_return_code
> >
> >       def __str__(self) -> str:
> > +        """Include both the command and return code in the string representation."""
> >           return (
> >               f"Command {self.command} returned a non-zero exit code: "
> >               f"{self._command_return_code}"
> > @@ -118,35 +150,41 @@ def __str__(self) -> str:
> >
> >
> >   class RemoteDirectoryExistsError(DTSError):
> > -    """
> > -    Raised when a remote directory to be created already exists.
> > -    """
> > +    """A directory that exists on a remote node."""
> >
> > +    #:
> >       severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
> >
> >
> >   class DPDKBuildError(DTSError):
> > -    """
> > -    Raised when DPDK build fails for any reason.
> > -    """
> > +    """A DPDK build failure."""
> >
> > +    #:
> >       severity: ClassVar[ErrorSeverity] = ErrorSeverity.DPDK_BUILD_ERR
> >
> >
> >   class TestCaseVerifyError(DTSError):
> > -    """
> > -    Used in test cases to verify the expected behavior.
> > -    """
> > +    """A test case failure."""
> >
> > +    #:
> >       severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR
> >
> >
> >   class BlockingTestSuiteError(DTSError):
> > +    """A failure in a blocking test suite."""
> > +
> > +    #:
> >       severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR
> >       _suite_name: str
> >
> >       def __init__(self, suite_name: str) -> None:
> > +        """Define the meaning of the first argument.
> > +
> > +        Args:
> > +            suite_name: The blocking test suite.
> > +        """
> >           self._suite_name = suite_name
> >
> >       def __str__(self) -> str:
> > +        """Add some context to the string representation."""
> >           return f"Blocking suite {self._suite_name} failed."
>
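
To make the severity-to-exit-code mechanics concrete, a minimal sketch with
values from the patch; the max() loop is a paraphrase of what
DTSResult.get_return_code does, not the actual code:

    from enum import IntEnum, unique
    from typing import ClassVar

    @unique
    class ErrorSeverity(IntEnum):
        NO_ERR = 0
        GENERIC_ERR = 1
        SSH_ERR = 4
        TESTCASE_VERIFY_ERR = 20

    class DTSError(Exception):
        #: Each subclass overrides this with its own severity.
        severity: ClassVar[ErrorSeverity] = ErrorSeverity.GENERIC_ERR

    class SSHTimeoutError(DTSError):
        severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR

    # IntEnum members compare as plain integers, so the most severe
    # error caught during a run determines the exit code.
    errors = [DTSError(), SSHTimeoutError("fake command")]
    exit_code = int(max(error.severity for error in errors))
    assert exit_code == ErrorSeverity.SSH_ERR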

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 06/21] dts: logger and utils docstring update
  2023-11-20 16:23                 ` Yoan Picchi
@ 2023-11-20 16:36                   ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-20 16:36 UTC (permalink / raw)
  To: Yoan Picchi
  Cc: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek, dev

On Mon, Nov 20, 2023 at 5:23 PM Yoan Picchi <yoan.picchi@foss.arm.com> wrote:
>
> On 11/15/23 13:09, Juraj Linkeš wrote:
> > Format according to the Google format and PEP257, with slight
> > deviations.
> >
> > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > ---
> >   dts/framework/logger.py | 72 ++++++++++++++++++++++-----------
> >   dts/framework/utils.py  | 88 +++++++++++++++++++++++++++++------------
> >   2 files changed, 113 insertions(+), 47 deletions(-)
> >
> > diff --git a/dts/framework/logger.py b/dts/framework/logger.py
> > index bb2991e994..d3eb75a4e4 100644
> > --- a/dts/framework/logger.py
> > +++ b/dts/framework/logger.py
> > @@ -3,9 +3,9 @@
> >   # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
> >   # Copyright(c) 2022-2023 University of New Hampshire
> >
> > -"""
> > -DTS logger module with several log level. DTS framework and TestSuite logs
> > -are saved in different log files.
> > +"""DTS logger module.
> > +
> > +DTS framework and TestSuite logs are saved in different log files.
> >   """
> >
> >   import logging
> > @@ -18,19 +18,21 @@
> >   stream_fmt = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
> >
> >
> > -class LoggerDictType(TypedDict):
> > -    logger: "DTSLOG"
> > -    name: str
> > -    node: str
> > -
> > +class DTSLOG(logging.LoggerAdapter):
> > +    """DTS logger adapter class for framework and testsuites.
> >
> > -# List for saving all using loggers
> > -Loggers: list[LoggerDictType] = []
> > +    The :option:`--verbose` command line argument and the :envvar:`DTS_VERBOSE` environment
> > +    variable control the verbosity of output. If enabled, all messages will be emitted to the
> > +    console.
> >
> > +    The :option:`--output` command line argument and the :envvar:`DTS_OUTPUT_DIR` environment
> > +    variable modify the directory where the logs will be stored.
> >
> > -class DTSLOG(logging.LoggerAdapter):
> > -    """
> > -    DTS log class for framework and testsuite.
> > +    Attributes:
> > +        node: The additional identifier. Currently unused.
> > +        sh: The handler which emits logs to console.
> > +        fh: The handler which emits logs to a file.
> > +        verbose_fh: Just as fh, but logs with a different, more verbose, format.
> >       """
> >
> >       _logger: logging.Logger
> > @@ -40,6 +42,15 @@ class DTSLOG(logging.LoggerAdapter):
> >       verbose_fh: logging.FileHandler
> >
> >       def __init__(self, logger: logging.Logger, node: str = "suite"):
> > +        """Extend the constructor with additional handlers.
> > +
> > +        One handler logs to the console, the others to a file, with either a regular or verbose
> > +        format.
> > +
> > +        Args:
> > +            logger: The logger from which to create the logger adapter.
> > +            node: An additional identifier. Currently unused.
> > +        """
> >           self._logger = logger
> >           # 1 means log everything, this will be used by file handlers if their level
> >           # is not set
> > @@ -92,26 +103,43 @@ def __init__(self, logger: logging.Logger, node: str = "suite"):
> >           super(DTSLOG, self).__init__(self._logger, dict(node=self.node))
> >
> >       def logger_exit(self) -> None:
> > -        """
> > -        Remove stream handler and logfile handler.
> > -        """
> > +        """Remove the stream handler and the logfile handler."""
> >           for handler in (self.sh, self.fh, self.verbose_fh):
> >               handler.flush()
> >               self._logger.removeHandler(handler)
> >
> >
> > +class _LoggerDictType(TypedDict):
> > +    logger: DTSLOG
> > +    name: str
> > +    node: str
> > +
> > +
> > +# List for saving all loggers in use
> > +_Loggers: list[_LoggerDictType] = []
> > +
> > +
> >   def getLogger(name: str, node: str = "suite") -> DTSLOG:
> > +    """Get DTS logger adapter identified by name and node.
> > +
> > +    An existing logger will be return if one with the exact name and node already exists.
>
> An existing logger will be return*ed*
>

Ack, will fix.

> > +    A new one will be created and stored otherwise.
> > +
> > +    Args:
> > +        name: The name of the logger.
> > +        node: An additional identifier for the logger.
> > +
> > +    Returns:
> > +        A logger uniquely identified by both name and node.
> >       """
> > -    Get logger handler and if there's no handler for specified Node will create one.
> > -    """
> > -    global Loggers
> > +    global _Loggers
> >       # return saved logger
> > -    logger: LoggerDictType
> > -    for logger in Loggers:
> > +    logger: _LoggerDictType
> > +    for logger in _Loggers:
> >           if logger["name"] == name and logger["node"] == node:
> >               return logger["logger"]
> >
> >       # return new logger
> >       dts_logger: DTSLOG = DTSLOG(logging.getLogger(name), node)
> > -    Loggers.append({"logger": dts_logger, "name": name, "node": node})
> > +    _Loggers.append({"logger": dts_logger, "name": name, "node": node})
> >       return dts_logger
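
The list-of-dicts lookup above is linear; the same cache semantics can be sketched with a dict keyed by (name, node). This is a hypothetical simplification for illustration, not the patch's implementation:

    import logging

    _loggers: dict[tuple[str, str], logging.LoggerAdapter] = {}


    def get_logger(name: str, node: str = "suite") -> logging.LoggerAdapter:
        # Return the cached adapter if one with the same name and node exists,
        # otherwise create, store and return a new one.
        key = (name, node)
        if key not in _loggers:
            _loggers[key] = logging.LoggerAdapter(logging.getLogger(name), {"node": node})
        return _loggers[key]


    assert get_logger("dts") is get_logger("dts")
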
> > diff --git a/dts/framework/utils.py b/dts/framework/utils.py
> > index f0c916471c..5016e3be10 100644
> > --- a/dts/framework/utils.py
> > +++ b/dts/framework/utils.py
> > @@ -3,6 +3,16 @@
> >   # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
> >   # Copyright(c) 2022-2023 University of New Hampshire
> >
> > +"""Various utility classes and functions.
> > +
> > +These are used in multiple modules across the framework. They're here because
> > +they provide some non-specific functionality, greatly simplify imports or just don't
> > +fit elsewhere.
> > +
> > +Attributes:
> > +    REGEX_FOR_PCI_ADDRESS: The regex representing a PCI address, e.g. ``0000:00:08.0``.
> > +"""
> > +
> >   import atexit
> >   import json
> >   import os
> > @@ -19,12 +29,20 @@
> >
> >
> >   def expand_range(range_str: str) -> list[int]:
> > -    """
> > -    Process range string into a list of integers. There are two possible formats:
> > -    n - a single integer
> > -    n-m - a range of integers
> > +    """Process `range_str` into a list of integers.
> > +
> > +    There are two possible formats of `range_str`:
> > +
> > +        * ``n`` - a single integer,
> > +        * ``n-m`` - a range of integers.
> >
> > -    The returned range includes both n and m. Empty string returns an empty list.
> > +    The returned range includes both ``n`` and ``m``. Empty string returns an empty list.
> > +
> > +    Args:
> > +        range_str: The range to expand.
> > +
> > +    Returns:
> > +        All the numbers from the range.
> >       """
> >       expanded_range: list[int] = []
> >       if range_str:
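
A runnable sketch of the documented behaviour, with the real function's validation of malformed ranges omitted:

    def expand_range(range_str: str) -> list[int]:
        # "1-3" -> [1, 2, 3]; "7" -> [7]; "" -> []
        expanded_range: list[int] = []
        if range_str:
            boundaries = range_str.split("-")
            # A single integer yields identical boundaries; both ends are included.
            expanded_range.extend(range(int(boundaries[0]), int(boundaries[-1]) + 1))
        return expanded_range


    assert expand_range("1-3") == [1, 2, 3]
    assert expand_range("") == []
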
> > @@ -39,6 +57,14 @@ def expand_range(range_str: str) -> list[int]:
> >
> >
> >   def get_packet_summaries(packets: list[Packet]) -> str:
> > +    """Format a string summary from `packets`.
> > +
> > +    Args:
> > +        packets: The packets to format.
> > +
> > +    Returns:
> > +        The summary of `packets`.
> > +    """
> >       if len(packets) == 1:
> >           packet_summaries = packets[0].summary()
> >       else:
> > @@ -49,6 +75,8 @@ def get_packet_summaries(packets: list[Packet]) -> str:
> >
> >
> >   class StrEnum(Enum):
> > +    """Enum with members stored as strings."""
> > +
> >       @staticmethod
> >       def _generate_next_value_(
> >           name: str, start: int, count: int, last_values: object
> > @@ -56,22 +84,29 @@ def _generate_next_value_(
> >           return name
> >
> >       def __str__(self) -> str:
> > +        """The string representation is the name of the member."""
> >           return self.name
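
The `_generate_next_value_` override is what makes `auto()` produce the member's name as its value, so members serialize as plain strings. A compact demonstration (the `Compiler` members are just examples):

    from enum import Enum, auto


    class StrEnum(Enum):
        @staticmethod
        def _generate_next_value_(name, start, count, last_values):
            # auto() uses the member's name as its value.
            return name

        def __str__(self) -> str:
            return self.name


    class Compiler(StrEnum):
        gcc = auto()
        clang = auto()


    assert Compiler.gcc.value == "gcc"
    assert str(Compiler.clang) == "clang"
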
> >
> >
> >   class MesonArgs(object):
> > -    """
> > -    Aggregate the arguments needed to build DPDK:
> > -    default_library: Default library type, Meson allows "shared", "static" and "both".
> > -               Defaults to None, in which case the argument won't be used.
> > -    Keyword arguments: The arguments found in meson_options.txt in root DPDK directory.
> > -               Do not use -D with them, for example:
> > -               meson_args = MesonArgs(enable_kmods=True).
> > -    """
> > +    """Aggregate the arguments needed to build DPDK."""
> >
> >       _default_library: str
> >
> >       def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
> > +        """Initialize the meson arguments.
> > +
> > +        Args:
> > +            default_library: The default library type, Meson supports ``shared``, ``static`` and
> > +                ``both``. Defaults to :data:`None`, in which case the argument won't be used.
> > +            dpdk_args: The arguments found in ``meson_options.txt`` in root DPDK directory.
> > +                Do not use ``-D`` with them.
> > +
> > +        Example:
> > +            ::
> > +
> > +                meson_args = MesonArgs(enable_kmods=True).
> > +        """
> >           self._default_library = (
> >               f"--default-library={default_library}" if default_library else ""
> >           )
> > @@ -83,6 +118,7 @@ def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
> >           )
> >
> >       def __str__(self) -> str:
> > +        """The actual args."""
> >           return " ".join(f"{self._default_library} {self._dpdk_args}".split())
> >
> >
> > @@ -104,24 +140,14 @@ class _TarCompressionFormat(StrEnum):
> >
> >
> >   class DPDKGitTarball(object):
> > -    """Create a compressed tarball of DPDK from the repository.
> > -
> > -    The DPDK version is specified with git object git_ref.
> > -    The tarball will be compressed with _TarCompressionFormat,
> > -    which must be supported by the DTS execution environment.
> > -    The resulting tarball will be put into output_dir.
> > +    """Compressed tarball of DPDK from the repository.
> >
> > -    The class supports the os.PathLike protocol,
> > +    The class supports the :class:`os.PathLike` protocol,
> >       which is used to get the Path of the tarball::
> >
> >           from pathlib import Path
> >           tarball = DPDKGitTarball("HEAD", "output")
> >           tarball_path = Path(tarball)
> > -
> > -    Arguments:
> > -        git_ref: A git commit ID, tag ID or tree ID.
> > -        output_dir: The directory where to put the resulting tarball.
> > -        tar_compression_format: The compression format to use.
> >       """
> >
> >       _git_ref: str
> > @@ -136,6 +162,17 @@ def __init__(
> >           output_dir: str,
> >           tar_compression_format: _TarCompressionFormat = _TarCompressionFormat.xz,
> >       ):
> > +        """Create the tarball during initialization.
> > +
> > +        The DPDK version is specified with `git_ref`. The tarball will be compressed with
> > +        `tar_compression_format`, which must be supported by the DTS execution environment.
> > +        The resulting tarball will be put into `output_dir`.
> > +
> > +        Args:
> > +            git_ref: A git commit ID, tag ID or tree ID.
> > +            output_dir: The directory where to put the resulting tarball.
> > +            tar_compression_format: The compression format to use.
> > +        """
> >           self._git_ref = git_ref
> >           self._tar_compression_format = tar_compression_format
> >
> > @@ -204,4 +241,5 @@ def _delete_tarball(self) -> None:
> >               os.remove(self._tarball_path)
> >
> >       def __fspath__(self) -> str:
> > +        """The os.PathLike protocol implementation."""
> >           return str(self._tarball_path)
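
Implementing `__fspath__` is all the `os.PathLike` protocol asks for; `Path()`, `open()` and `os.fspath()` then accept the object directly. A stripped-down illustration with a hypothetical `Tarball` class:

    import os
    from pathlib import Path


    class Tarball:
        def __init__(self, path: str) -> None:
            self._tarball_path = path

        def __fspath__(self) -> str:
            return str(self._tarball_path)


    tarball = Tarball("dpdk-tarball.tar.xz")
    assert os.fspath(tarball) == "dpdk-tarball.tar.xz"
    assert Path(tarball) == Path("dpdk-tarball.tar.xz")
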
>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 07/21] dts: dts runner and main docstring update
  2023-11-15 13:09               ` [PATCH v7 07/21] dts: dts runner and main " Juraj Linkeš
  2023-11-16 21:51                 ` Jeremy Spewock
@ 2023-11-20 17:43                 ` Yoan Picchi
  2023-11-21  9:10                   ` Juraj Linkeš
  1 sibling, 1 reply; 255+ messages in thread
From: Yoan Picchi @ 2023-11-20 17:43 UTC (permalink / raw)
  To: Juraj Linkeš,
	thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek
  Cc: dev

On 11/15/23 13:09, Juraj Linkeš wrote:
> Format according to the Google format and PEP257, with slight
> deviations.
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>   dts/framework/dts.py | 128 ++++++++++++++++++++++++++++++++++++-------
>   dts/main.py          |   8 ++-
>   2 files changed, 112 insertions(+), 24 deletions(-)
> 
> diff --git a/dts/framework/dts.py b/dts/framework/dts.py
> index 4c7fb0c40a..331fed7dc4 100644
> --- a/dts/framework/dts.py
> +++ b/dts/framework/dts.py
> @@ -3,6 +3,33 @@
>   # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
>   # Copyright(c) 2022-2023 University of New Hampshire
>   
> +r"""Test suite runner module.

Is the r before the docstring intended?

> +
> +A DTS run is split into stages:
> +
> +    #. Execution stage,
> +    #. Build target stage,
> +    #. Test suite stage,
> +    #. Test case stage.
> +
> +The module is responsible for running tests on testbeds defined in the test run configuration.
> +Each setup or teardown of each stage is recorded in a :class:`~framework.test_result.DTSResult` or
> +one of its subclasses. The test case results are also recorded.
> +
> +If an error occurs, the current stage is aborted, the error is recorded and the run continues in
> +the next iteration of the same stage. The return code is the highest `severity` of all
> +:class:`~.framework.exception.DTSError`\s.

Is the . before the classname intended? considering the previous one 
doesn't have one. (I've not yet built the doc to check if it affects the
rendered doc)

> +
> +Example:
> +    An error occurs in a build target setup. The current build target is aborted and the run
> +    continues with the next build target. If the errored build target was the last one in the given
> +    execution, the next execution begins.
> +
> +Attributes:
> +    dts_logger: The logger instance used in this module.
> +    result: The top level result used in the module.
> +"""
> +
>   import sys
>   
>   from .config import (
> @@ -23,9 +50,38 @@
>   
>   
>   def run_all() -> None:
> -    """
> -    The main process of DTS. Runs all build targets in all executions from the main
> -    config file.
> +    """Run all build targets in all executions from the test run configuration.
> +
> +    Before running test suites, executions and build targets are first set up.
> +    The executions and build targets defined in the test run configuration are iterated over.
> +    The executions define which tests to run and where to run them and build targets define
> +    the DPDK build setup.
> +
> +    The test suites are set up for each execution/build target tuple and each scheduled
> +    test case within the test suite is set up, executed and torn down. After all test cases
> +    have been executed, the test suite is torn down and the next build target will be tested.
> +
> +    All the nested steps look like this:
> +
> +        #. Execution setup
> +
> +            #. Build target setup
> +
> +                #. Test suite setup
> +
> +                    #. Test case setup
> +                    #. Test case logic
> +                    #. Test case teardown
> +
> +                #. Test suite teardown
> +
> +            #. Build target teardown
> +
> +        #. Execution teardown
> +
> +    The test cases are filtered according to the specification in the test run configuration and
> +    the :option:`--test-cases` command line argument or
> +    the :envvar:`DTS_TESTCASES` environment variable.
>       """
>       global dts_logger
>       global result
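
Structurally, the four stages are four nested loops with setup and teardown wrapped around each level. A bare-bones sketch with hypothetical types, leaving out error handling and result recording:

    from dataclasses import dataclass, field


    @dataclass
    class Execution:
        build_targets: list[str] = field(default_factory=list)
        test_suites: dict[str, list[str]] = field(default_factory=dict)


    def run_all(executions: list[Execution]) -> None:
        for execution in executions:                      # execution stage
            for build_target in execution.build_targets:  # build target stage
                for suite, cases in execution.test_suites.items():  # test suite stage
                    for case in cases:                    # test case stage
                        print(f"{build_target}/{suite}: running {case}")
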
> @@ -87,6 +143,8 @@ def run_all() -> None:
>   
>   
>   def _check_dts_python_version() -> None:
> +    """Check the required Python version - v3.10."""
> +
>       def RED(text: str) -> str:
>           return f"\u001B[31;1m{str(text)}\u001B[0m"
>   
> @@ -111,9 +169,16 @@ def _run_execution(
>       execution: ExecutionConfiguration,
>       result: DTSResult,
>   ) -> None:
> -    """
> -    Run the given execution. This involves running the execution setup as well as
> -    running all build targets in the given execution.
> +    """Run the given execution.
> +
> +    This involves running the execution setup as well as running all build targets
> +    in the given execution. After that, execution teardown is run.
> +
> +    Args:
> +        sut_node: The execution's SUT node.
> +        tg_node: The execution's TG node.
> +        execution: An execution's test run configuration.
> +        result: The top level result object.
>       """
>       dts_logger.info(
>           f"Running execution with SUT '{execution.system_under_test_node.name}'."
> @@ -150,8 +215,18 @@ def _run_build_target(
>       execution: ExecutionConfiguration,
>       execution_result: ExecutionResult,
>   ) -> None:
> -    """
> -    Run the given build target.
> +    """Run the given build target.
> +
> +    This involves running the build target setup as well as running all test suites
> +    in the given execution the build target is defined in.
> +    After that, build target teardown is run.
> +
> +    Args:
> +        sut_node: The execution's SUT node.
> +        tg_node: The execution's TG node.
> +        build_target: A build target's test run configuration.
> +        execution: The build target's execution's test run configuration.
> +        execution_result: The execution level result object associated with the execution.
>       """
>       dts_logger.info(f"Running build target '{build_target.name}'.")
>       build_target_result = execution_result.add_build_target(build_target)
> @@ -183,10 +258,17 @@ def _run_all_suites(
>       execution: ExecutionConfiguration,
>       build_target_result: BuildTargetResult,
>   ) -> None:
> -    """
> -    Use the given build_target to run execution's test suites
> -    with possibly only a subset of test cases.
> -    If no subset is specified, run all test cases.
> +    """Run the execution's (possibly a subset) test suites using the current build_target.
> +
> +    The function assumes the build target we're testing has already been built on the SUT node.
> +    The current build target thus corresponds to the current DPDK build present on the SUT node.
> +
> +    Args:
> +        sut_node: The execution's SUT node.
> +        tg_node: The execution's TG node.
> +        execution: The execution's test run configuration associated with the current build target.
> +        build_target_result: The build target level result object associated
> +            with the current build target.
>       """
>       end_build_target = False
>       if not execution.skip_smoke_tests:
> @@ -215,16 +297,22 @@ def _run_single_suite(
>       build_target_result: BuildTargetResult,
>       test_suite_config: TestSuiteConfig,
>   ) -> None:
> -    """Runs a single test suite.
> +    """Run all test suite in a single test suite module.
> +
> +    The function assumes the build target we're testing has already been built on the SUT node.
> +    The current build target thus corresponds to the current DPDK build present on the SUT node.
>   
>       Args:
> -        sut_node: Node to run tests on.
> -        execution: Execution the test case belongs to.
> -        build_target_result: Build target configuration test case is run on
> -        test_suite_config: Test suite configuration
> +        sut_node: The execution's SUT node.
> +        tg_node: The execution's TG node.
> +        execution: The execution's test run configuration associated with the current build target.
> +        build_target_result: The build target level result object associated
> +            with the current build target.
> +        test_suite_config: Test suite test run configuration specifying the test suite module
> +            and possibly a subset of test cases of test suites in that module.
>   
>       Raises:
> -        BlockingTestSuiteError: If a test suite that was marked as blocking fails.
> +        BlockingTestSuiteError: If a blocking test suite fails.
>       """
>       try:
>           full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}"
> @@ -248,9 +336,7 @@ def _run_single_suite(
>   
>   
>   def _exit_dts() -> None:
> -    """
> -    Process all errors and exit with the proper exit code.
> -    """
> +    """Process all errors and exit with the proper exit code."""
>       result.process()
>   
>       if dts_logger:
> diff --git a/dts/main.py b/dts/main.py
> index 5d4714b0c3..f703615d11 100755
> --- a/dts/main.py
> +++ b/dts/main.py
> @@ -4,9 +4,7 @@
>   # Copyright(c) 2022 PANTHEON.tech s.r.o.
>   # Copyright(c) 2022 University of New Hampshire
>   
> -"""
> -A test framework for testing DPDK.
> -"""
> +"""The DTS executable."""
>   
>   import logging
>   
> @@ -17,6 +15,10 @@ def main() -> None:
>       """Set DTS settings, then run DTS.
>   
>       The DTS settings are taken from the command line arguments and the environment variables.
> +    The settings object is stored in the module-level variable settings.SETTINGS which the entire
> +    framework uses. After importing the module (or the variable), any changes to the variable are
> +    not going to be reflected without a re-import. This means that the SETTINGS variable must
> +    be modified before the settings module is imported anywhere else in the framework.
>       """
>       settings.SETTINGS = settings.get_settings()
>       from framework import dts
  Nit: copyright notice update in main
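
On the docstring above: the import-order constraint is the reason `from framework import dts` happens inside `main()` rather than at the top of the file. The pattern, spelled out:

    # Assign the module-level settings first...
    from framework import settings

    settings.SETTINGS = settings.get_settings()

    # ...and only then import modules that read settings.SETTINGS, so they
    # see the populated value rather than the import-time default.
    from framework import dts

    dts.run_all()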

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 07/21] dts: dts runner and main docstring update
  2023-11-20 17:43                 ` Yoan Picchi
@ 2023-11-21  9:10                   ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-21  9:10 UTC (permalink / raw)
  To: Yoan Picchi
  Cc: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek, dev

On Mon, Nov 20, 2023 at 6:43 PM Yoan Picchi <yoan.picchi@foss.arm.com> wrote:
>
> On 11/15/23 13:09, Juraj Linkeš wrote:
> > Format according to the Google format and PEP257, with slight
> > deviations.
> >
> > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > ---
> >   dts/framework/dts.py | 128 ++++++++++++++++++++++++++++++++++++-------
> >   dts/main.py          |   8 ++-
> >   2 files changed, 112 insertions(+), 24 deletions(-)
> >
> > diff --git a/dts/framework/dts.py b/dts/framework/dts.py
> > index 4c7fb0c40a..331fed7dc4 100644
> > --- a/dts/framework/dts.py
> > +++ b/dts/framework/dts.py
> > @@ -3,6 +3,33 @@
> >   # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
> >   # Copyright(c) 2022-2023 University of New Hampshire
> >
> > +r"""Test suite runner module.
>
> Is the r before the docstring intended?
>

Yes, this is because of :class:`~.framework.exception.DTSError`\s. Any
alphabetical characters after backticks must be escaped for Sphinx to
interpret the string correctly and the way to do that is to make the
string raw (with r before the string).
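
A quick illustration: without the `r` prefix, `\s` is an invalid escape sequence, so Python emits a DeprecationWarning (and a valid escape such as `\n` would silently change the text Sphinx sees):

    doc = r"""The highest `severity` of all :class:`DTSError`\s."""
    assert "\\s" in doc  # the backslash reaches Sphinx verbatim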

> > +
> > +A DTS run is split into stages:
> > +
> > +    #. Execution stage,
> > +    #. Build target stage,
> > +    #. Test suite stage,
> > +    #. Test case stage.
> > +
> > +The module is responsible for running tests on testbeds defined in the test run configuration.
> > +Each setup or teardown of each stage is recorded in a :class:`~framework.test_result.DTSResult` or
> > +one of its subclasses. The test case results are also recorded.
> > +
> > +If an error occurs, the current stage is aborted, the error is recorded and the run continues in
> > +the next iteration of the same stage. The return code is the highest `severity` of all
> > +:class:`~.framework.exception.DTSError`\s.
>
> Is the . before the classname intended? considering the previous one
> > doesn't have one. (I've not yet built the doc to check if it affects the
> rendered doc)
>

Good catch. Not only is the dot suspect, but I looked at all
references starting with the framework dir and the ones that refer to
files in the same directory don't have to specify the full path if
starting with a dot, such as:
~framework.test_result.DTSResult -> ~.test_result.DTSResult
~.framework.exception.DTSError -> ~.exception.DTSError

test_result and exception are in the same dir as dts.py, so the above
work. I'll make these changes in other files as well.

> > +
> > +Example:
> > +    An error occurs in a build target setup. The current build target is aborted and the run
> > +    continues with the next build target. If the errored build target was the last one in the given
> > +    execution, the next execution begins.
> > +
> > +Attributes:
> > +    dts_logger: The logger instance used in this module.
> > +    result: The top level result used in the module.
> > +"""
> > +
> >   import sys
> >
> >   from .config import (
> > @@ -23,9 +50,38 @@
> >
> >
> >   def run_all() -> None:
> > -    """
> > -    The main process of DTS. Runs all build targets in all executions from the main
> > -    config file.
> > +    """Run all build targets in all executions from the test run configuration.
> > +
> > +    Before running test suites, executions and build targets are first set up.
> > +    The executions and build targets defined in the test run configuration are iterated over.
> > +    The executions define which tests to run and where to run them and build targets define
> > +    the DPDK build setup.
> > +
> > +    The test suites are set up for each execution/build target tuple and each scheduled
> > +    test case within the test suite is set up, executed and torn down. After all test cases
> > +    have been executed, the test suite is torn down and the next build target will be tested.
> > +
> > +    All the nested steps look like this:
> > +
> > +        #. Execution setup
> > +
> > +            #. Build target setup
> > +
> > +                #. Test suite setup
> > +
> > +                    #. Test case setup
> > +                    #. Test case logic
> > +                    #. Test case teardown
> > +
> > +                #. Test suite teardown
> > +
> > +            #. Build target teardown
> > +
> > +        #. Execution teardown
> > +
> > +    The test cases are filtered according to the specification in the test run configuration and
> > +    the :option:`--test-cases` command line argument or
> > +    the :envvar:`DTS_TESTCASES` environment variable.
> >       """
> >       global dts_logger
> >       global result
> > @@ -87,6 +143,8 @@ def run_all() -> None:
> >
> >
> >   def _check_dts_python_version() -> None:
> > +    """Check the required Python version - v3.10."""
> > +
> >       def RED(text: str) -> str:
> >           return f"\u001B[31;1m{str(text)}\u001B[0m"
> >
> > @@ -111,9 +169,16 @@ def _run_execution(
> >       execution: ExecutionConfiguration,
> >       result: DTSResult,
> >   ) -> None:
> > -    """
> > -    Run the given execution. This involves running the execution setup as well as
> > -    running all build targets in the given execution.
> > +    """Run the given execution.
> > +
> > +    This involves running the execution setup as well as running all build targets
> > +    in the given execution. After that, execution teardown is run.
> > +
> > +    Args:
> > +        sut_node: The execution's SUT node.
> > +        tg_node: The execution's TG node.
> > +        execution: An execution's test run configuration.
> > +        result: The top level result object.
> >       """
> >       dts_logger.info(
> >           f"Running execution with SUT '{execution.system_under_test_node.name}'."
> > @@ -150,8 +215,18 @@ def _run_build_target(
> >       execution: ExecutionConfiguration,
> >       execution_result: ExecutionResult,
> >   ) -> None:
> > -    """
> > -    Run the given build target.
> > +    """Run the given build target.
> > +
> > +    This involves running the build target setup as well as running all test suites
> > +    in the given execution the build target is defined in.
> > +    After that, build target teardown is run.
> > +
> > +    Args:
> > +        sut_node: The execution's SUT node.
> > +        tg_node: The execution's TG node.
> > +        build_target: A build target's test run configuration.
> > +        execution: The build target's execution's test run configuration.
> > +        execution_result: The execution level result object associated with the execution.
> >       """
> >       dts_logger.info(f"Running build target '{build_target.name}'.")
> >       build_target_result = execution_result.add_build_target(build_target)
> > @@ -183,10 +258,17 @@ def _run_all_suites(
> >       execution: ExecutionConfiguration,
> >       build_target_result: BuildTargetResult,
> >   ) -> None:
> > -    """
> > -    Use the given build_target to run execution's test suites
> > -    with possibly only a subset of test cases.
> > -    If no subset is specified, run all test cases.
> > +    """Run the execution's (possibly a subset) test suites using the current build_target.
> > +
> > +    The function assumes the build target we're testing has already been built on the SUT node.
> > +    The current build target thus corresponds to the current DPDK build present on the SUT node.
> > +
> > +    Args:
> > +        sut_node: The execution's SUT node.
> > +        tg_node: The execution's TG node.
> > +        execution: The execution's test run configuration associated with the current build target.
> > +        build_target_result: The build target level result object associated
> > +            with the current build target.
> >       """
> >       end_build_target = False
> >       if not execution.skip_smoke_tests:
> > @@ -215,16 +297,22 @@ def _run_single_suite(
> >       build_target_result: BuildTargetResult,
> >       test_suite_config: TestSuiteConfig,
> >   ) -> None:
> > -    """Runs a single test suite.
> > +    """Run all test suite in a single test suite module.
> > +
> > +    The function assumes the build target we're testing has already been built on the SUT node.
> > +    The current build target thus corresponds to the current DPDK build present on the SUT node.
> >
> >       Args:
> > -        sut_node: Node to run tests on.
> > -        execution: Execution the test case belongs to.
> > -        build_target_result: Build target configuration test case is run on
> > -        test_suite_config: Test suite configuration
> > +        sut_node: The execution's SUT node.
> > +        tg_node: The execution's TG node.
> > +        execution: The execution's test run configuration associated with the current build target.
> > +        build_target_result: The build target level result object associated
> > +            with the current build target.
> > +        test_suite_config: Test suite test run configuration specifying the test suite module
> > +            and possibly a subset of test cases of test suites in that module.
> >
> >       Raises:
> > -        BlockingTestSuiteError: If a test suite that was marked as blocking fails.
> > +        BlockingTestSuiteError: If a blocking test suite fails.
> >       """
> >       try:
> >           full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}"
> > @@ -248,9 +336,7 @@ def _run_single_suite(
> >
> >
> >   def _exit_dts() -> None:
> > -    """
> > -    Process all errors and exit with the proper exit code.
> > -    """
> > +    """Process all errors and exit with the proper exit code."""
> >       result.process()
> >
> >       if dts_logger:
> > diff --git a/dts/main.py b/dts/main.py
> > index 5d4714b0c3..f703615d11 100755
> > --- a/dts/main.py
> > +++ b/dts/main.py
> > @@ -4,9 +4,7 @@
> >   # Copyright(c) 2022 PANTHEON.tech s.r.o.
> >   # Copyright(c) 2022 University of New Hampshire
> >
> > -"""
> > -A test framework for testing DPDK.
> > -"""
> > +"""The DTS executable."""
> >
> >   import logging
> >
> > @@ -17,6 +15,10 @@ def main() -> None:
> >       """Set DTS settings, then run DTS.
> >
> >       The DTS settings are taken from the command line arguments and the environment variables.
> > +    The settings object is stored in the module-level variable settings.SETTINGS which the entire
> > +    framework uses. After importing the module (or the variable), any changes to the variable are
> > +    not going to be reflected without a re-import. This means that the SETTINGS variable must
> > +    be modified before the settings module is imported anywhere else in the framework.
> >       """
> >       settings.SETTINGS = settings.get_settings()
> >       from framework import dts
>   Nit: copyright notice update in main

Nice catch again. Thanks.

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 10/21] dts: config docstring update
  2023-11-15 13:09               ` [PATCH v7 10/21] dts: config " Juraj Linkeš
@ 2023-11-21 15:08                 ` Yoan Picchi
  2023-11-22 10:42                   ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Yoan Picchi @ 2023-11-21 15:08 UTC (permalink / raw)
  To: Juraj Linkeš,
	thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek
  Cc: dev

On 11/15/23 13:09, Juraj Linkeš wrote:
> Format according to the Google format and PEP257, with slight
> deviations.
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>   dts/framework/config/__init__.py | 371 ++++++++++++++++++++++++++-----
>   dts/framework/config/types.py    | 132 +++++++++++
>   2 files changed, 446 insertions(+), 57 deletions(-)
>   create mode 100644 dts/framework/config/types.py
> 
> diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
> index 2044c82611..0aa149a53d 100644
> --- a/dts/framework/config/__init__.py
> +++ b/dts/framework/config/__init__.py
> @@ -3,8 +3,34 @@
>   # Copyright(c) 2022-2023 University of New Hampshire
>   # Copyright(c) 2023 PANTHEON.tech s.r.o.
>   
> -"""
> -Yaml config parsing methods
> +"""Testbed configuration and test suite specification.
> +
> +This package offers classes that hold real-time information about the testbed and the test run
> +configuration describing the tested testbed, as well as a loader function, :func:`load_config`, which loads
> +the YAML test run configuration file
> +and validates it according to :download:`the schema <conf_yaml_schema.json>`.
> +
> +The YAML test run configuration file is parsed into a dictionary, parts of which are used throughout
> +this package. The allowed keys and types inside this dictionary are defined in
> +the :doc:`types <framework.config.types>` module.
> +
> +The test run configuration has two main sections:
> +
> +    * The :class:`ExecutionConfiguration` which defines what tests are going to be run
> +      and how DPDK will be built. It also references the testbed where these tests and DPDK
> +      are going to be run,
> +    * The nodes of the testbed are defined in the other section,
> +      a :class:`list` of :class:`NodeConfiguration` objects.
> +
> +The real-time information about the testbed is supposed to be gathered at runtime.
> +
> +The classes defined in this package make heavy use of :mod:`dataclasses`.
> +All of them use slots and are frozen:
> +
> +    * Slots enables some optimizations, by pre-allocating space for the defined
> +      attributes in the underlying data structure,
> +    * Frozen makes the object immutable. This enables further optimizations,
> +      and makes it thread safe should we every want to move in that direction.

every -> ever ?

>   """
>   
>   import json
> @@ -12,11 +38,20 @@
>   import pathlib
>   from dataclasses import dataclass
>   from enum import auto, unique
> -from typing import Any, TypedDict, Union
> +from typing import Union
>   
>   import warlock  # type: ignore[import]
>   import yaml
>   
> +from framework.config.types import (
> +    BuildTargetConfigDict,
> +    ConfigurationDict,
> +    ExecutionConfigDict,
> +    NodeConfigDict,
> +    PortConfigDict,
> +    TestSuiteConfigDict,
> +    TrafficGeneratorConfigDict,
> +)
>   from framework.exception import ConfigurationError
>   from framework.settings import SETTINGS
>   from framework.utils import StrEnum
> @@ -24,55 +59,97 @@
>   
>   @unique
>   class Architecture(StrEnum):
> +    r"""The supported architectures of :class:`~framework.testbed_model.node.Node`\s."""
> +
> +    #:
>       i686 = auto()
> +    #:
>       x86_64 = auto()
> +    #:
>       x86_32 = auto()
> +    #:
>       arm64 = auto()
> +    #:
>       ppc64le = auto()
>   
>   
>   @unique
>   class OS(StrEnum):
> +    r"""The supported operating systems of :class:`~framework.testbed_model.node.Node`\s."""
> +
> +    #:
>       linux = auto()
> +    #:
>       freebsd = auto()
> +    #:
>       windows = auto()
>   
>   
>   @unique
>   class CPUType(StrEnum):
> +    r"""The supported CPUs of :class:`~framework.testbed_model.node.Node`\s."""
> +
> +    #:
>       native = auto()
> +    #:
>       armv8a = auto()
> +    #:
>       dpaa2 = auto()
> +    #:
>       thunderx = auto()
> +    #:
>       xgene1 = auto()
>   
>   
>   @unique
>   class Compiler(StrEnum):
> +    r"""The supported compilers of :class:`~framework.testbed_model.node.Node`\s."""
> +
> +    #:
>       gcc = auto()
> +    #:
>       clang = auto()
> +    #:
>       icc = auto()
> +    #:
>       msvc = auto()
>   
>   
>   @unique
>   class TrafficGeneratorType(StrEnum):
> +    """The supported traffic generators."""
> +
> +    #:
>       SCAPY = auto()
>   
>   
> -# Slots enables some optimizations, by pre-allocating space for the defined
> -# attributes in the underlying data structure.
> -#
> -# Frozen makes the object immutable. This enables further optimizations,
> -# and makes it thread safe should we every want to move in that direction.
>   @dataclass(slots=True, frozen=True)
>   class HugepageConfiguration:
> +    r"""The hugepage configuration of :class:`~framework.testbed_model.node.Node`\s.
> +
> +    Attributes:
> +        amount: The number of hugepages.
> +        force_first_numa: If :data:`True`, the hugepages will be configured on the first NUMA node.
> +    """
> +
>       amount: int
>       force_first_numa: bool
>   
>   
>   @dataclass(slots=True, frozen=True)
>   class PortConfig:
> +    r"""The port configuration of :class:`~framework.testbed_model.node.Node`\s.
> +
> +    Attributes:
> +        node: The :class:`~framework.testbed_model.node.Node` where this port exists.
> +        pci: The PCI address of the port.
> +        os_driver_for_dpdk: The operating system driver name for use with DPDK.
> +        os_driver: The operating system driver name when the operating system controls the port.
> +        peer_node: The :class:`~framework.testbed_model.node.Node` of the port
> +            connected to this port.
> +        peer_pci: The PCI address of the port connected to this port.
> +    """
> +
>       node: str
>       pci: str
>       os_driver_for_dpdk: str
> @@ -81,18 +158,44 @@ class PortConfig:
>       peer_pci: str
>   
>       @staticmethod
> -    def from_dict(node: str, d: dict) -> "PortConfig":
> +    def from_dict(node: str, d: PortConfigDict) -> "PortConfig":
> +        """A convenience method that creates the object from fewer inputs.
> +
> +        Args:
> +            node: The node where this port exists.
> +            d: The configuration dictionary.
> +
> +        Returns:
> +            The port configuration instance.
> +        """
>           return PortConfig(node=node, **d)
>   
>   
>   @dataclass(slots=True, frozen=True)
>   class TrafficGeneratorConfig:
> +    """The configuration of traffic generators.
> +
> +    The class will be expanded when more configuration is needed.
> +
> +    Attributes:
> +        traffic_generator_type: The type of the traffic generator.
> +    """
> +
>       traffic_generator_type: TrafficGeneratorType
>   
>       @staticmethod
> -    def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig":
> -        # This looks useless now, but is designed to allow expansion to traffic
> -        # generators that require more configuration later.
> +    def from_dict(d: TrafficGeneratorConfigDict) -> "ScapyTrafficGeneratorConfig":
> +        """A convenience method that produces traffic generator config of the proper type.
> +
> +        Args:
> +            d: The configuration dictionary.
> +
> +        Returns:
> +            The traffic generator configuration instance.
> +
> +        Raises:
> +            ConfigurationError: An unknown traffic generator type was encountered.
> +        """
>           match TrafficGeneratorType(d["type"]):
>               case TrafficGeneratorType.SCAPY:
>                   return ScapyTrafficGeneratorConfig(
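
The ``match`` statement is the extension point for future generator types. The same shape in a self-contained sketch, with plain strings standing in for the enum and config classes:

    def make_traffic_generator_config(d: dict) -> dict:
        # Dispatch on the "type" key; each new generator type gets its own case.
        match d["type"]:
            case "SCAPY":
                return {"traffic_generator_type": "SCAPY"}
            case unknown:
                raise ValueError(f"Unknown traffic generator type: {unknown}")


    assert make_traffic_generator_config({"type": "SCAPY"}) == {
        "traffic_generator_type": "SCAPY"
    }
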
> @@ -106,11 +209,31 @@ def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig":
>   
>   @dataclass(slots=True, frozen=True)
>   class ScapyTrafficGeneratorConfig(TrafficGeneratorConfig):
> +    """Scapy traffic generator specific configuration."""
> +
>       pass
>   
>   
>   @dataclass(slots=True, frozen=True)
>   class NodeConfiguration:
> +    r"""The configuration of :class:`~framework.testbed_model.node.Node`\s.
> +
> +    Attributes:
> +        name: The name of the :class:`~framework.testbed_model.node.Node`.
> +        hostname: The hostname of the :class:`~framework.testbed_model.node.Node`.
> +            Can be an IP or a domain name.
> +        user: The name of the user used to connect to
> +            the :class:`~framework.testbed_model.node.Node`.
> +        password: The password of the user. The use of passwords is heavily discouraged.
> +            Please use keys instead.
> +        arch: The architecture of the :class:`~framework.testbed_model.node.Node`.
> +        os: The operating system of the :class:`~framework.testbed_model.node.Node`.
> +        lcores: A comma delimited list of logical cores to use when running DPDK.
> +        use_first_core: If :data:`True`, the first logical core won't be used.
> +        hugepages: An optional hugepage configuration.
> +        ports: The ports that can be used in testing.
> +    """
> +
>       name: str
>       hostname: str
>       user: str
> @@ -123,57 +246,91 @@ class NodeConfiguration:
>       ports: list[PortConfig]
>   
>       @staticmethod
> -    def from_dict(d: dict) -> Union["SutNodeConfiguration", "TGNodeConfiguration"]:
> -        hugepage_config = d.get("hugepages")
> -        if hugepage_config:
> -            if "force_first_numa" not in hugepage_config:
> -                hugepage_config["force_first_numa"] = False
> -            hugepage_config = HugepageConfiguration(**hugepage_config)
> -
> -        common_config = {
> -            "name": d["name"],
> -            "hostname": d["hostname"],
> -            "user": d["user"],
> -            "password": d.get("password"),
> -            "arch": Architecture(d["arch"]),
> -            "os": OS(d["os"]),
> -            "lcores": d.get("lcores", "1"),
> -            "use_first_core": d.get("use_first_core", False),
> -            "hugepages": hugepage_config,
> -            "ports": [PortConfig.from_dict(d["name"], port) for port in d["ports"]],
> -        }
> -
> +    def from_dict(
> +        d: NodeConfigDict,
> +    ) -> Union["SutNodeConfiguration", "TGNodeConfiguration"]:
> +        """A convenience method that processes the inputs before creating a specialized instance.
> +
> +        Args:
> +            d: The configuration dictionary.
> +
> +        Returns:
> +            Either an SUT or TG configuration instance.
> +        """
> +        hugepage_config = None
> +        if "hugepages" in d:
> +            hugepage_config_dict = d["hugepages"]
> +            if "force_first_numa" not in hugepage_config_dict:
> +                hugepage_config_dict["force_first_numa"] = False
> +            hugepage_config = HugepageConfiguration(**hugepage_config_dict)
> +
> +        # The calls here contain duplicated code which is here because Mypy doesn't
> +        # properly support dictionary unpacking with TypedDicts
>           if "traffic_generator" in d:
>               return TGNodeConfiguration(
> +                name=d["name"],
> +                hostname=d["hostname"],
> +                user=d["user"],
> +                password=d.get("password"),
> +                arch=Architecture(d["arch"]),
> +                os=OS(d["os"]),
> +                lcores=d.get("lcores", "1"),
> +                use_first_core=d.get("use_first_core", False),
> +                hugepages=hugepage_config,
> +                ports=[PortConfig.from_dict(d["name"], port) for port in d["ports"]],
>                   traffic_generator=TrafficGeneratorConfig.from_dict(
>                       d["traffic_generator"]
>                   ),
> -                **common_config,
>               )
>           else:
>               return SutNodeConfiguration(
> -                memory_channels=d.get("memory_channels", 1), **common_config
> +                name=d["name"],
> +                hostname=d["hostname"],
> +                user=d["user"],
> +                password=d.get("password"),
> +                arch=Architecture(d["arch"]),
> +                os=OS(d["os"]),
> +                lcores=d.get("lcores", "1"),
> +                use_first_core=d.get("use_first_core", False),
> +                hugepages=hugepage_config,
> +                ports=[PortConfig.from_dict(d["name"], port) for port in d["ports"]],
> +                memory_channels=d.get("memory_channels", 1),
>               )
>   
>   
>   @dataclass(slots=True, frozen=True)
>   class SutNodeConfiguration(NodeConfiguration):
> +    """:class:`~framework.testbed_model.sut_node.SutNode` specific configuration.
> +
> +    Attributes:
> +        memory_channels: The number of memory channels to use when running DPDK.
> +    """
> +
>       memory_channels: int
>   
>   
>   @dataclass(slots=True, frozen=True)
>   class TGNodeConfiguration(NodeConfiguration):
> +    """:class:`~framework.testbed_model.tg_node.TGNode` specific configuration.
> +
> +    Attributes:
> +        traffic_generator: The configuration of the traffic generator present on the TG node.
> +    """
> +
>       traffic_generator: ScapyTrafficGeneratorConfig
>   
>   
>   @dataclass(slots=True, frozen=True)
>   class NodeInfo:
> -    """Class to hold important versions within the node.
> -
> -    This class, unlike the NodeConfiguration class, cannot be generated at the start.
> -    This is because we need to initialize a connection with the node before we can
> -    collect the information needed in this class. Therefore, it cannot be a part of
> -    the configuration class above.
> +    """Supplemental node information.
> +
> +    Attributes:
> +        os_name: The name of the running operating system of
> +            the :class:`~framework.testbed_model.node.Node`.
> +        os_version: The version of the running operating system of
> +            the :class:`~framework.testbed_model.node.Node`.
> +        kernel_version: The kernel version of the running operating system of
> +            the :class:`~framework.testbed_model.node.Node`.
>       """
>   
>       os_name: str
> @@ -183,6 +340,20 @@ class NodeInfo:
>   
>   @dataclass(slots=True, frozen=True)
>   class BuildTargetConfiguration:
> +    """DPDK build configuration.
> +
> +    The configuration used for building DPDK.
> +
> +    Attributes:
> +        arch: The target architecture to build for.
> +        os: The target os to build for.
> +        cpu: The target CPU to build for.
> +        compiler: The compiler executable to use.
> +        compiler_wrapper: This string will be put in front of the compiler when
> +            executing the build. Useful for adding wrapper commands, such as ``ccache``.
> +        name: The name of the compiler.
> +    """
> +
>       arch: Architecture
>       os: OS
>       cpu: CPUType
> @@ -191,7 +362,18 @@ class BuildTargetConfiguration:
>       name: str
>   
>       @staticmethod
> -    def from_dict(d: dict) -> "BuildTargetConfiguration":
> +    def from_dict(d: BuildTargetConfigDict) -> "BuildTargetConfiguration":
> +        r"""A convenience method that processes the inputs before creating an instance.
> +
> +        `arch`, `os`, `cpu` and `compiler` are converted to :class:`Enum`\s and
> +        `name` is constructed from `arch`, `os`, `cpu` and `compiler`.
> +
> +        Args:
> +            d: The configuration dictionary.
> +
> +        Returns:
> +            The build target configuration instance.
> +        """
>           return BuildTargetConfiguration(
>               arch=Architecture(d["arch"]),
>               os=OS(d["os"]),
> @@ -204,23 +386,29 @@ def from_dict(d: dict) -> "BuildTargetConfiguration":
>   
>   @dataclass(slots=True, frozen=True)
>   class BuildTargetInfo:
> -    """Class to hold important versions within the build target.
> +    """Various versions and other information about a build target.
>   
> -    This is very similar to the NodeInfo class, it just instead holds information
> -    for the build target.
> +    Attributes:
> +        dpdk_version: The DPDK version that was built.
> +        compiler_version: The version of the compiler used to build DPDK.
>       """
>   
>       dpdk_version: str
>       compiler_version: str
>   
>   
> -class TestSuiteConfigDict(TypedDict):
> -    suite: str
> -    cases: list[str]
> -
> -
>   @dataclass(slots=True, frozen=True)
>   class TestSuiteConfig:
> +    """Test suite configuration.
> +
> +    Information about a single test suite to be executed.
> +
> +    Attributes:
> +        test_suite: The name of the test suite module without the starting ``TestSuite_``.
> +        test_cases: The names of test cases from this test suite to execute.
> +            If empty, all test cases will be executed.
> +    """
> +
>       test_suite: str
>       test_cases: list[str]
>   
> @@ -228,6 +416,14 @@ class TestSuiteConfig:
>       def from_dict(
>           entry: str | TestSuiteConfigDict,
>       ) -> "TestSuiteConfig":
> +        """Create an instance from two different types.
> +
> +        Args:
> +            entry: Either a suite name or a dictionary containing the config.
> +
> +        Returns:
> +            The test suite configuration instance.
> +        """
>           if isinstance(entry, str):
>               return TestSuiteConfig(test_suite=entry, test_cases=[])
>           elif isinstance(entry, dict):
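
A self-contained sketch of the two accepted input shapes; note the dictionary branch here assumes `cases` may be omitted, which may differ from the elided code:

    from dataclasses import dataclass


    @dataclass(frozen=True)
    class TestSuiteConfig:
        test_suite: str
        test_cases: list[str]

        @staticmethod
        def from_dict(entry) -> "TestSuiteConfig":
            # A bare suite name runs every test case; a dict may narrow the subset.
            if isinstance(entry, str):
                return TestSuiteConfig(test_suite=entry, test_cases=[])
            return TestSuiteConfig(
                test_suite=entry["suite"], test_cases=entry.get("cases", [])
            )


    assert TestSuiteConfig.from_dict("hello_world").test_cases == []
    assert TestSuiteConfig.from_dict(
        {"suite": "hello_world", "cases": ["hello_world_single_core"]}
    ).test_cases == ["hello_world_single_core"]
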
> @@ -238,19 +434,49 @@ def from_dict(
>   
>   @dataclass(slots=True, frozen=True)
>   class ExecutionConfiguration:
> +    """The configuration of an execution.
> +
> +    The configuration contains testbed information, what tests to execute
> +    and with what DPDK build.
> +
> +    Attributes:
> +        build_targets: A list of DPDK builds to test.
> +        perf: Whether to run performance tests.
> +        func: Whether to run functional tests.
> +        skip_smoke_tests: Whether to skip smoke tests.
> +        test_suites: The names of test suites and/or test cases to execute.
> +        system_under_test_node: The SUT node to use in this execution.
> +        traffic_generator_node: The TG node to use in this execution.
> +        vdevs: The names of virtual devices to test.
> +    """
> +
>       build_targets: list[BuildTargetConfiguration]
>       perf: bool
>       func: bool
> +    skip_smoke_tests: bool
>       test_suites: list[TestSuiteConfig]
>       system_under_test_node: SutNodeConfiguration
>       traffic_generator_node: TGNodeConfiguration
>       vdevs: list[str]
> -    skip_smoke_tests: bool
>   
>       @staticmethod
>       def from_dict(
> -        d: dict, node_map: dict[str, Union[SutNodeConfiguration | TGNodeConfiguration]]
> +        d: ExecutionConfigDict,
> +        node_map: dict[str, Union[SutNodeConfiguration | TGNodeConfiguration]],
>       ) -> "ExecutionConfiguration":
> +        """A convenience method that processes the inputs before creating an instance.
> +
> +        The build target and the test suite config is transformed into their respective objects.

is -> are

> +        SUT and TG configuration are taken from `node_map`. The other (:class:`bool`) attributes are

configuration*s*

> +        just stored.
> +
> +        Args:
> +            d: The configuration dictionary.
> +            node_map: A dictionary mapping node names to their config objects.
> +
> +        Returns:
> +            The execution configuration instance.
> +        """
>           build_targets: list[BuildTargetConfiguration] = list(
>               map(BuildTargetConfiguration.from_dict, d["build_targets"])
>           )
> @@ -291,10 +517,31 @@ def from_dict(
>   
>   @dataclass(slots=True, frozen=True)
>   class Configuration:
> +    """DTS testbed and test configuration.
> +
> +    The node configuration is not stored in this object. Rather, all used node configurations
> +    are stored inside the execution configuration where the nodes are actually used.
> +
> +    Attributes:
> +        executions: Execution configurations.
> +    """
> +
>       executions: list[ExecutionConfiguration]
>   
>       @staticmethod
> -    def from_dict(d: dict) -> "Configuration":
> +    def from_dict(d: ConfigurationDict) -> "Configuration":
> +        """A convenience method that processes the inputs before creating an instance.
> +
> +        Build target and test suite config is transformed into their respective objects.

is -> are

> +        SUT and TG configuration are taken from `node_map`. The other (:class:`bool`) attributes are

configuration*s*

> +        just stored.
> +
> +        Args:
> +            d: The configuration dictionary.
> +
> +        Returns:
> +            The whole configuration instance.
> +        """
>           nodes: list[Union[SutNodeConfiguration | TGNodeConfiguration]] = list(
>               map(NodeConfiguration.from_dict, d["nodes"])
>           )
> @@ -313,9 +560,17 @@ def from_dict(d: dict) -> "Configuration":
>   
>   
>   def load_config() -> Configuration:
> -    """
> -    Loads the configuration file and the configuration file schema,
> -    validates the configuration file, and creates a configuration object.
> +    """Load DTS test run configuration from a file.
> +
> +    Load the YAML test run configuration file
> +    and :download:`the configuration file schema <conf_yaml_schema.json>`,
> +    validate the test run configuration file, and create a test run configuration object.
> +
> +    The YAML test run configuration file is specified in the :option:`--config-file` command line
> +    argument or the :envvar:`DTS_CFG_FILE` environment variable.
> +
> +    Returns:
> +        The parsed test run configuration.
>       """
>       with open(SETTINGS.config_file_path, "r") as f:
>           config_data = yaml.safe_load(f)
> @@ -326,6 +581,8 @@ def load_config() -> Configuration:
>   
>       with open(schema_path, "r") as f:
>           schema = json.load(f)
> -    config: dict[str, Any] = warlock.model_factory(schema, name="_Config")(config_data)
> -    config_obj: Configuration = Configuration.from_dict(dict(config))
> +    config = warlock.model_factory(schema, name="_Config")(config_data)
> +    config_obj: Configuration = Configuration.from_dict(
> +        dict(config)  # type: ignore[arg-type]
> +    )
>       return config_obj
> diff --git a/dts/framework/config/types.py b/dts/framework/config/types.py
> new file mode 100644
> index 0000000000..1927910d88
> --- /dev/null
> +++ b/dts/framework/config/types.py
> @@ -0,0 +1,132 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2023 PANTHEON.tech s.r.o.
> +
> +"""Configuration dictionary contents specification.
> +
> +These type definitions serve as documentation of the configuration dictionary contents.
> +
> +The definitions use the built-in :class:`~typing.TypedDict` construct.
> +"""
> +
> +from typing import TypedDict
> +
> +
> +class PortConfigDict(TypedDict):
> +    """Allowed keys and values."""
> +
> +    #:
> +    pci: str
> +    #:
> +    os_driver_for_dpdk: str
> +    #:
> +    os_driver: str
> +    #:
> +    peer_node: str
> +    #:
> +    peer_pci: str
> +
> +
> +class TrafficGeneratorConfigDict(TypedDict):
> +    """Allowed keys and values."""
> +
> +    #:
> +    type: str
> +
> +
> +class HugepageConfigurationDict(TypedDict):
> +    """Allowed keys and values."""
> +
> +    #:
> +    amount: int
> +    #:
> +    force_first_numa: bool
> +
> +
> +class NodeConfigDict(TypedDict):
> +    """Allowed keys and values."""
> +
> +    #:
> +    hugepages: HugepageConfigurationDict
> +    #:
> +    name: str
> +    #:
> +    hostname: str
> +    #:
> +    user: str
> +    #:
> +    password: str
> +    #:
> +    arch: str
> +    #:
> +    os: str
> +    #:
> +    lcores: str
> +    #:
> +    use_first_core: bool
> +    #:
> +    ports: list[PortConfigDict]
> +    #:
> +    memory_channels: int
> +    #:
> +    traffic_generator: TrafficGeneratorConfigDict
> +
> +
> +class BuildTargetConfigDict(TypedDict):
> +    """Allowed keys and values."""
> +
> +    #:
> +    arch: str
> +    #:
> +    os: str
> +    #:
> +    cpu: str
> +    #:
> +    compiler: str
> +    #:
> +    compiler_wrapper: str
> +
> +
> +class TestSuiteConfigDict(TypedDict):
> +    """Allowed keys and values."""
> +
> +    #:
> +    suite: str
> +    #:
> +    cases: list[str]
> +
> +
> +class ExecutionSUTConfigDict(TypedDict):
> +    """Allowed keys and values."""
> +
> +    #:
> +    node_name: str
> +    #:
> +    vdevs: list[str]
> +
> +
> +class ExecutionConfigDict(TypedDict):
> +    """Allowed keys and values."""
> +
> +    #:
> +    build_targets: list[BuildTargetConfigDict]
> +    #:
> +    perf: bool
> +    #:
> +    func: bool
> +    #:
> +    skip_smoke_tests: bool
> +    #:
> +    test_suites: TestSuiteConfigDict
> +    #:
> +    system_under_test_node: ExecutionSUTConfigDict
> +    #:
> +    traffic_generator_node: str
> +
> +
> +class ConfigurationDict(TypedDict):
> +    """Allowed keys and values."""
> +
> +    #:
> +    nodes: list[NodeConfigDict]
> +    #:
> +    executions: list[ExecutionConfigDict]
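
A TypedDict is erased at runtime (the object is an ordinary dict), while type checkers validate the keys and value types, which is the whole point of this module. For example (the port values are made up):

    from typing import TypedDict


    class PortConfigDict(TypedDict):
        pci: str
        os_driver_for_dpdk: str
        os_driver: str
        peer_node: str
        peer_pci: str


    # Mypy flags missing or misspelled keys and wrong value types here;
    # at runtime this is just a dict, as the isinstance check shows.
    port: PortConfigDict = {
        "pci": "0000:00:08.0",
        "os_driver_for_dpdk": "vfio-pci",
        "os_driver": "i40e",
        "peer_node": "TG 1",
        "peer_pci": "0000:00:08.1",
    }
    assert isinstance(port, dict)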


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 11/21] dts: remote session docstring update
  2023-11-15 13:09               ` [PATCH v7 11/21] dts: remote session " Juraj Linkeš
@ 2023-11-21 15:36                 ` Yoan Picchi
  2023-11-22 11:13                   ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Yoan Picchi @ 2023-11-21 15:36 UTC (permalink / raw)
  To: Juraj Linkeš,
	thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek
  Cc: dev

On 11/15/23 13:09, Juraj Linkeš wrote:
> Format according to the Google format and PEP257, with slight
> deviations.
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>   dts/framework/remote_session/__init__.py      |  39 +++++-
>   .../remote_session/remote_session.py          | 128 +++++++++++++-----
>   dts/framework/remote_session/ssh_session.py   |  16 +--
>   3 files changed, 135 insertions(+), 48 deletions(-)
> 
> diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
> index 5e7ddb2b05..51a01d6b5e 100644
> --- a/dts/framework/remote_session/__init__.py
> +++ b/dts/framework/remote_session/__init__.py
> @@ -2,12 +2,14 @@
>   # Copyright(c) 2023 PANTHEON.tech s.r.o.
>   # Copyright(c) 2023 University of New Hampshire
>   
> -"""
> -The package provides modules for managing remote connections to a remote host (node),
> -differentiated by OS.
> -The package provides a factory function, create_session, that returns the appropriate
> -remote connection based on the passed configuration. The differences are in the
> -underlying transport protocol (e.g. SSH) and remote OS (e.g. Linux).
> +"""Remote interactive and non-interactive sessions.
> +
> +This package provides modules for managing remote connections to a remote host (node).
> +
> +The non-interactive sessions send commands and return their output and exit code.
> +
> +The interactive sessions open an interactive shell which is continuously open,
> +allowing it to send and receive data within that particular shell.
>   """
>   
>   # pylama:ignore=W0611
> @@ -26,10 +28,35 @@
>   def create_remote_session(
>       node_config: NodeConfiguration, name: str, logger: DTSLOG
>   ) -> RemoteSession:
> +    """Factory for non-interactive remote sessions.
> +
> +    The function returns an SSH session, but will be extended if support
> +    for other protocols is added.
> +
> +    Args:
> +        node_config: The test run configuration of the node to connect to.
> +        name: The name of the session.
> +        logger: The logger instance this session will use.
> +
> +    Returns:
> +        The SSH remote session.
> +    """
>       return SSHSession(node_config, name, logger)
>   
>   
>   def create_interactive_session(
>       node_config: NodeConfiguration, logger: DTSLOG
>   ) -> InteractiveRemoteSession:
> +    """Factory for interactive remote sessions.
> +
> +    The function returns an interactive SSH session, but will be extended if support
> +    for other protocols is added.
> +
> +    Args:
> +        node_config: The test run configuration of the node to connect to.
> +        logger: The logger instance this session will use.
> +
> +    Returns:
> +        The interactive SSH remote session.
> +    """
>       return InteractiveRemoteSession(node_config, logger)
> diff --git a/dts/framework/remote_session/remote_session.py b/dts/framework/remote_session/remote_session.py
> index 0647d93de4..629c2d7b9c 100644
> --- a/dts/framework/remote_session/remote_session.py
> +++ b/dts/framework/remote_session/remote_session.py
> @@ -3,6 +3,13 @@
>   # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
>   # Copyright(c) 2022-2023 University of New Hampshire
>   
> +"""Base remote session.
> +
> +This module contains the abstract base class for remote sessions and defines
> +the structure of the result of a command execution.
> +"""
> +
> +
>   import dataclasses
>   from abc import ABC, abstractmethod
>   from pathlib import PurePath
> @@ -15,8 +22,14 @@
>   
>   @dataclasses.dataclass(slots=True, frozen=True)
>   class CommandResult:
> -    """
> -    The result of remote execution of a command.
> +    """The result of remote execution of a command.
> +
> +    Attributes:
> +        name: The name of the session that executed the command.
> +        command: The executed command.
> +        stdout: The standard output the command produced.
> +        stderr: The standard error output the command produced.
> +        return_code: The return code the command exited with.
>       """
>   
>       name: str
> @@ -26,6 +39,7 @@ class CommandResult:
>       return_code: int
>   
>       def __str__(self) -> str:
> +        """Format the command outputs."""
>           return (
>               f"stdout: '{self.stdout}'\n"
>               f"stderr: '{self.stderr}'\n"
> @@ -34,13 +48,24 @@ def __str__(self) -> str:
>   
>   
>   class RemoteSession(ABC):
> -    """
> -    The base class for defining which methods must be implemented in order to connect
> -    to a remote host (node) and maintain a remote session. The derived classes are
> -    supposed to implement/use some underlying transport protocol (e.g. SSH) to
> -    implement the methods. On top of that, it provides some basic services common to
> -    all derived classes, such as keeping history and logging what's being executed
> -    on the remote node.
> +    """Non-interactive remote session.
> +
> +    The abstract methods must be implemented in order to connect to a remote host (node)
> +    and maintain a remote session.
> +    The subclasses must use (or implement) some underlying transport protocol (e.g. SSH)
> +    to implement the methods. On top of that, it provides some basic services common to all
> +    subclasses, such as keeping history and logging what's being executed on the remote node.
> +
> +    Attributes:
> +        name: The name of the session.
> +        hostname: The node's hostname. Could be an IP (possibly with port, separated by a colon)
> +            or a domain name.
> +        ip: The IP address of the node or a domain name, whichever was used in `hostname`.
> +        port: The port of the node, if given in `hostname`.
> +        username: The username used in the connection.
> +        password: The password used in the connection. Most frequently empty,
> +            as the use of passwords is discouraged.
> +        history: The executed commands during this session.
>       """
>   
>       name: str
> @@ -59,6 +84,16 @@ def __init__(
>           session_name: str,
>           logger: DTSLOG,
>       ):
> +        """Connect to the node during initialization.
> +
> +        Args:
> +            node_config: The test run configuration of the node to connect to.
> +            session_name: The name of the session.
> +            logger: The logger instance this session will use.
> +
> +        Raises:
> +            SSHConnectionError: If the connection to the node was not successful.
> +        """
>           self._node_config = node_config
>   
>           self.name = session_name
> @@ -79,8 +114,13 @@ def __init__(
>   
>       @abstractmethod
>       def _connect(self) -> None:
> -        """
> -        Create connection to assigned node.
> +        """Create a connection to the node.
> +
> +        The implementation must assign the established session to self.session.
> +
> +        The implementation must except all exceptions and convert them to an SSHConnectionError.
> +
> +        The implementation may optionally implement retry attempts.
>           """
>   
>       def send_command(
> @@ -90,11 +130,24 @@ def send_command(
>           verify: bool = False,
>           env: dict | None = None,
>       ) -> CommandResult:
> -        """
> -        Send a command to the connected node using optional env vars
> -        and return CommandResult.
> -        If verify is True, check the return code of the executed command
> -        and raise a RemoteCommandExecutionError if the command failed.
> +        """Send `command` to the connected node.
> +
> +        The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT`
> +        environment variable configure the timeout of command execution.
> +
> +        Args:
> +            command: The command to execute.
> +            timeout: Wait at most this long in seconds to execute `command`.
> +            verify: If :data:`True`, will check the exit code of `command`.
> +            env: A dictionary with environment variables to be used with `command` execution.
> +
> +        Raises:
> +            SSHSessionDeadError: If the session isn't alive when sending `command`.
> +            SSHTimeoutError: If `command` execution timed out.
> +            RemoteCommandExecutionError: If verify is :data:`True` and `command` execution failed.
> +
> +        Returns:
> +            The output of the command along with the return code.
>           """
>           self._logger.info(
>               f"Sending: '{command}'" + (f" with env vars: '{env}'" if env else "")
> @@ -115,29 +168,36 @@ def send_command(
>       def _send_command(
>           self, command: str, timeout: float, env: dict | None
>       ) -> CommandResult:
> -        """
> -        Use the underlying protocol to execute the command using optional env vars
> -        and return CommandResult.
> +        """Send a command to the connected node.
> +
> +        The implementation must execute the command remotely with `env` environment variables
> +        and return the result.
> +
> +        The implementation must except all exceptions and raise an SSHSessionDeadError if
> +        the session is not alive and an SSHTimeoutError if the command execution times out.

3 way "and". Needs comas or splitting the sentence.

>           """
>   
>       def close(self, force: bool = False) -> None:
> -        """
> -        Close the remote session and free all used resources.
> +        """Close the remote session and free all used resources.
> +
> +        Args:
> +            force: Force the closure of the connection. This may not clean up all resources.
>           """
>           self._logger.logger_exit()
>           self._close(force)
>   
>       @abstractmethod
>       def _close(self, force: bool = False) -> None:
> -        """
> -        Execute protocol specific steps needed to close the session properly.
> +        """Protocol specific steps needed to close the session properly.
> +
> +        Args:
> +            force: Force the closure of the connection. This may not clean up all resources.
> +                This doesn't have to be implemented in the overloaded method.
>           """
>   
>       @abstractmethod
>       def is_alive(self) -> bool:
> -        """
> -        Check whether the remote session is still responding.
> -        """
> +        """Check whether the remote session is still responding."""
>   
>       @abstractmethod
>       def copy_from(
> @@ -147,12 +207,12 @@ def copy_from(
>       ) -> None:
>           """Copy a file from the remote Node to the local filesystem.
>   
> -        Copy source_file from the remote Node associated with this remote
> -        session to destination_file on the local filesystem.
> +        Copy `source_file` from the remote Node associated with this remote session
> +        to `destination_file` on the local filesystem.
>   
>           Args:
> -            source_file: the file on the remote Node.
> -            destination_file: a file or directory path on the local filesystem.
> +            source_file: The file on the remote Node.
> +            destination_file: A file or directory path on the local filesystem.
>           """
>   
>       @abstractmethod
> @@ -163,10 +223,10 @@ def copy_to(
>       ) -> None:
>           """Copy a file from local filesystem to the remote Node.
>   
> -        Copy source_file from local filesystem to destination_file
> -        on the remote Node associated with this remote session.
> +        Copy `source_file` from local filesystem to `destination_file` on the remote Node
> +        associated with this remote session.
>   
>           Args:
> -            source_file: the file on the local filesystem.
> -            destination_file: a file or directory path on the remote Node.
> +            source_file: The file on the local filesystem.
> +            destination_file: A file or directory path on the remote Node.
>           """
> diff --git a/dts/framework/remote_session/ssh_session.py b/dts/framework/remote_session/ssh_session.py
> index cee11d14d6..7186490a9a 100644
> --- a/dts/framework/remote_session/ssh_session.py
> +++ b/dts/framework/remote_session/ssh_session.py
> @@ -1,6 +1,8 @@
>   # SPDX-License-Identifier: BSD-3-Clause
>   # Copyright(c) 2023 PANTHEON.tech s.r.o.
>   
> +"""SSH session remote session."""

Is the double "session" intended?

> +
>   import socket
>   import traceback
>   from pathlib import PurePath
> @@ -26,13 +28,8 @@
>   class SSHSession(RemoteSession):
>       """A persistent SSH connection to a remote Node.
>   
> -    The connection is implemented with the Fabric Python library.
> -
> -    Args:
> -        node_config: The configuration of the Node to connect to.
> -        session_name: The name of the session.
> -        logger: The logger used for logging.
> -            This should be passed from the parent OSSession.
> +    The connection is implemented with
> +    `the Fabric Python library <https://docs.fabfile.org/en/latest/>`_.
>   
>       Attributes:
>           session: The underlying Fabric SSH connection.
> @@ -80,6 +77,7 @@ def _connect(self) -> None:
>               raise SSHConnectionError(self.hostname, errors)
>   
>       def is_alive(self) -> bool:
> +        """Overrides :meth:`~.remote_session.RemoteSession.is_alive`."""
>           return self.session.is_connected
>   
>       def _send_command(
> @@ -89,7 +87,7 @@ def _send_command(
>   
>           Args:
>               command: The command to execute.
> -            timeout: Wait at most this many seconds for the execution to complete.
> +            timeout: Wait at most this long in seconds to execute the command.

Is the timeout actually for starting the command's execution, rather than for
waiting for it to complete?

>               env: Extra environment variables that will be used in command execution.
>   
>           Raises:
> @@ -118,6 +116,7 @@ def copy_from(
>           source_file: str | PurePath,
>           destination_file: str | PurePath,
>       ) -> None:
> +        """Overrides :meth:`~.remote_session.RemoteSession.copy_from`."""
>           self.session.get(str(destination_file), str(source_file))
>   
>       def copy_to(
> @@ -125,6 +124,7 @@ def copy_to(
>           source_file: str | PurePath,
>           destination_file: str | PurePath,
>       ) -> None:
> +        """Overrides :meth:`~.remote_session.RemoteSession.copy_to`."""
>           self.session.put(str(source_file), str(destination_file))
>   
>       def _close(self, force: bool = False) -> None:
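
As a usage sketch of the API documented above (the factory and method names are
from the patch; node_config, logger and the session name are assumed or
invented):

    from framework.remote_session import create_remote_session

    session = create_remote_session(node_config, "sut_main_session", logger)
    # verify=True raises RemoteCommandExecutionError on a non-zero exit code
    result = session.send_command("uname -a", timeout=15, verify=True)
    print(result.stdout)
    session.close()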



* Re: [PATCH v7 19/21] dts: base traffic generators docstring update
  2023-11-15 13:09               ` [PATCH v7 19/21] dts: base traffic generators " Juraj Linkeš
@ 2023-11-21 16:20                 ` Yoan Picchi
  2023-11-22 11:38                   ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Yoan Picchi @ 2023-11-21 16:20 UTC (permalink / raw)
  To: Juraj Linkeš,
	thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek
  Cc: dev

On 11/15/23 13:09, Juraj Linkeš wrote:
> Format according to the Google format and PEP257, with slight
> deviations.
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>   .../traffic_generator/__init__.py             | 22 ++++++++-
>   .../capturing_traffic_generator.py            | 46 +++++++++++--------
>   .../traffic_generator/traffic_generator.py    | 33 +++++++------
>   3 files changed, 68 insertions(+), 33 deletions(-)
> 
> diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
> index 11bfa1ee0f..51cca77da4 100644
> --- a/dts/framework/testbed_model/traffic_generator/__init__.py
> +++ b/dts/framework/testbed_model/traffic_generator/__init__.py
> @@ -1,6 +1,19 @@
>   # SPDX-License-Identifier: BSD-3-Clause
>   # Copyright(c) 2023 PANTHEON.tech s.r.o.
>   
> +"""DTS traffic generators.
> +
> +A traffic generator is capable of generating traffic and then monitoring the returning traffic.
> +A traffic generator may just count the number of received packets
> +and it may additionally capture individual packets.

This sentence reads oddly. Shouldn't it be "or" here? There's also no need
for such an early line break.

> +
> +A traffic generator may be software running on generic hardware or it could be specialized hardware.
> +
> +The traffic generators that only count the number of received packets are suitable only for
> +performance testing. In functional testing, we need to be able to dissect each arrived packet
> +and a capturing traffic generator is required.
> +"""
> +
>   from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType
>   from framework.exception import ConfigurationError
>   from framework.testbed_model.node import Node
> @@ -12,8 +25,15 @@
>   def create_traffic_generator(
>       tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig
>   ) -> CapturingTrafficGenerator:
> -    """A factory function for creating traffic generator object from user config."""
> +    """The factory function for creating traffic generator objects from the test run configuration.
> +
> +    Args:
> +        tg_node: The traffic generator node where the created traffic generator will be running.
> +        traffic_generator_config: The traffic generator config.
>   
> +    Returns:
> +        A traffic generator capable of capturing received packets.
> +    """
>       match traffic_generator_config.traffic_generator_type:
>           case TrafficGeneratorType.SCAPY:
>               return ScapyTrafficGenerator(tg_node, traffic_generator_config)
> diff --git a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
> index e521211ef0..b0a43ad003 100644
> --- a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
> +++ b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
> @@ -23,19 +23,22 @@
>   
>   
>   def _get_default_capture_name() -> str:
> -    """
> -    This is the function used for the default implementation of capture names.
> -    """
>       return str(uuid.uuid4())
>   
>   
>   class CapturingTrafficGenerator(TrafficGenerator):
>       """Capture packets after sending traffic.
>   
> -    A mixin interface which enables a packet generator to declare that it can capture
> +    The intermediary interface which enables a packet generator to declare that it can capture
>       packets and return them to the user.
>   
> +    Similarly to
> +    :class:`~framework.testbed_model.traffic_generator.traffic_generator.TrafficGenerator`,
> +    this class exposes the public methods specific to capturing traffic generators and defines
> +    a private method that must implement the traffic generation and capturing logic in subclasses.
> +
>       The methods of capturing traffic generators obey the following workflow:
> +
>           1. send packets
>           2. capture packets
>           3. write the capture to a .pcap file
> @@ -44,6 +47,7 @@ class CapturingTrafficGenerator(TrafficGenerator):
>   
>       @property
>       def is_capturing(self) -> bool:
> +        """This traffic generator can capture traffic."""
>           return True
>   
>       def send_packet_and_capture(
> @@ -54,11 +58,12 @@ def send_packet_and_capture(
>           duration: float,
>           capture_name: str = _get_default_capture_name(),
>       ) -> list[Packet]:
> -        """Send a packet, return received traffic.
> +        """Send `packet` and capture received traffic.
> +
> +        Send `packet` on `send_port` and then return all traffic captured
> +        on `receive_port` for the given `duration`.
>   
> -        Send a packet on the send_port and then return all traffic captured
> -        on the receive_port for the given duration. Also record the captured traffic
> -        in a pcap file.
> +        The captured traffic is recorded in the `capture_name`.pcap file.
>   
>           Args:
>               packet: The packet to send.
> @@ -68,7 +73,7 @@ def send_packet_and_capture(
>               capture_name: The name of the .pcap file where to store the capture.
>   
>           Returns:
> -             A list of received packets. May be empty if no packets are captured.
> +             The received packets. May be empty if no packets are captured.
>           """
>           return self.send_packets_and_capture(
>               [packet], send_port, receive_port, duration, capture_name
> @@ -82,11 +87,14 @@ def send_packets_and_capture(
>           duration: float,
>           capture_name: str = _get_default_capture_name(),
>       ) -> list[Packet]:
> -        """Send packets, return received traffic.
> +        """Send `packets` and capture received traffic.
>   
> -        Send packets on the send_port and then return all traffic captured
> -        on the receive_port for the given duration. Also record the captured traffic
> -        in a pcap file.
> +        Send `packets` on `send_port` and then return all traffic captured
> +        on `receive_port` for the given `duration`.
> +
> +        The captured traffic is recorded in the `capture_name`.pcap file. The target directory
> +        can be configured with the :option:`--output-dir` command line argument or
> +        the :envvar:`DTS_OUTPUT_DIR` environment variable.
>   
>           Args:
>               packets: The packets to send.
> @@ -96,7 +104,7 @@ def send_packets_and_capture(
>               capture_name: The name of the .pcap file where to store the capture.
>   
>           Returns:
> -             A list of received packets. May be empty if no packets are captured.
> +             The received packets. May be empty if no packets are captured.
>           """
>           self._logger.debug(get_packet_summaries(packets))
>           self._logger.debug(
> @@ -124,10 +132,12 @@ def _send_packets_and_capture(
>           receive_port: Port,
>           duration: float,
>       ) -> list[Packet]:
> -        """
> -        The extended classes must implement this method which
> -        sends packets on send_port and receives packets on the receive_port
> -        for the specified duration. It must be able to handle no received packets.
> +        """The implementation of :method:`send_packets_and_capture`.
> +
> +        The subclasses must implement this method which sends `packets` on `send_port`
> +        and receives packets on `receive_port` for the specified `duration`.
> +
> +        It must be able to handle no received packets.

This sentence feels odd too. Maybe "It must be able to handle receiving 
no packets."

>           """
>   
>       def _write_capture_from_packets(
> diff --git a/dts/framework/testbed_model/traffic_generator/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
> index ea7c3963da..ed396c6a2f 100644
> --- a/dts/framework/testbed_model/traffic_generator/traffic_generator.py
> +++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
> @@ -22,7 +22,8 @@
>   class TrafficGenerator(ABC):
>       """The base traffic generator.
>   
> -    Defines the few basic methods that each traffic generator must implement.
> +    Exposes the common public methods of all traffic generators and defines private methods
> +    that must implement the traffic generation logic in subclasses.
>       """
>   
>       _config: TrafficGeneratorConfig
> @@ -30,6 +31,12 @@ class TrafficGenerator(ABC):
>       _logger: DTSLOG
>   
>       def __init__(self, tg_node: Node, config: TrafficGeneratorConfig):
> +        """Initialize the traffic generator.
> +
> +        Args:
> +            tg_node: The traffic generator node where the created traffic generator will be running.
> +            config: The traffic generator's test run configuration.
> +        """
>           self._config = config
>           self._tg_node = tg_node
>           self._logger = getLogger(
> @@ -37,9 +44,9 @@ def __init__(self, tg_node: Node, config: TrafficGeneratorConfig):
>           )
>   
>       def send_packet(self, packet: Packet, port: Port) -> None:
> -        """Send a packet and block until it is fully sent.
> +        """Send `packet` and block until it is fully sent.
>   
> -        What fully sent means is defined by the traffic generator.
> +        Send `packet` on `port`, then wait until `packet` is fully sent.
>   
>           Args:
>               packet: The packet to send.
> @@ -48,9 +55,9 @@ def send_packet(self, packet: Packet, port: Port) -> None:
>           self.send_packets([packet], port)
>   
>       def send_packets(self, packets: list[Packet], port: Port) -> None:
> -        """Send packets and block until they are fully sent.
> +        """Send `packets` and block until they are fully sent.
>   
> -        What fully sent means is defined by the traffic generator.
> +        Send `packets` on `port`, then wait until `packets` are fully sent.
>   
>           Args:
>               packets: The packets to send.
> @@ -62,19 +69,17 @@ def send_packets(self, packets: list[Packet], port: Port) -> None:
>   
>       @abstractmethod
>       def _send_packets(self, packets: list[Packet], port: Port) -> None:
> -        """
> -        The extended classes must implement this method which
> -        sends packets on send_port. The method should block until all packets
> -        are fully sent.
> +        """The implementation of :method:`send_packets`.
> +
> +        The subclasses must implement this method which sends `packets` on `port`.
> +        The method should block until all `packets` are fully sent.
> +
> +        What full sent means is defined by the traffic generator.

full -> fully

>           """
>   
>       @property
>       def is_capturing(self) -> bool:
> -        """Whether this traffic generator can capture traffic.
> -
> -        Returns:
> -            True if the traffic generator can capture traffic, False otherwise.
> -        """
> +        """This traffic generator can't capture traffic."""
>           return False
>   
>       @abstractmethod
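
To make the public/private split concrete, a minimal sketch of a subclass (only
the TrafficGenerator names are from the patch; the rest is invented, and any
other abstract members are omitted):

    class DummyTrafficGenerator(TrafficGenerator):
        """Log instead of sending; illustrates the template-method split."""

        def _send_packets(self, packets: list[Packet], port: Port) -> None:
            # A real subclass hands the packets to the underlying tool here
            # and blocks until they are fully sent.
            self._logger.info(f"Would send {len(packets)} packets.")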



* Re: [PATCH v7 20/21] dts: scapy tg docstring update
  2023-11-15 13:09               ` [PATCH v7 20/21] dts: scapy tg " Juraj Linkeš
@ 2023-11-21 16:33                 ` Yoan Picchi
  2023-11-22 13:18                   ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Yoan Picchi @ 2023-11-21 16:33 UTC (permalink / raw)
  To: Juraj Linkeš,
	thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek
  Cc: dev

On 11/15/23 13:09, Juraj Linkeš wrote:
> Format according to the Google format and PEP257, with slight
> deviations.
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>   .../testbed_model/traffic_generator/scapy.py  | 91 +++++++++++--------
>   1 file changed, 54 insertions(+), 37 deletions(-)
> 
> diff --git a/dts/framework/testbed_model/traffic_generator/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py
> index 51864b6e6b..ed4f879925 100644
> --- a/dts/framework/testbed_model/traffic_generator/scapy.py
> +++ b/dts/framework/testbed_model/traffic_generator/scapy.py
> @@ -2,14 +2,15 @@
>   # Copyright(c) 2022 University of New Hampshire
>   # Copyright(c) 2023 PANTHEON.tech s.r.o.
>   
> -"""Scapy traffic generator.
> +"""The Scapy traffic generator.
>   
> -Traffic generator used for functional testing, implemented using the Scapy library.
> +A traffic generator used for functional testing, implemented with
> +`the Scapy library <https://scapy.readthedocs.io/en/latest/>`_.
>   The traffic generator uses an XML-RPC server to run Scapy on the remote TG node.
>   
> -The XML-RPC server runs in an interactive remote SSH session running Python console,
> -where we start the server. The communication with the server is facilitated with
> -a local server proxy.
> +The traffic generator uses the :mod:`xmlrpc.server` module to run an XML-RPC server
> +in an interactive remote Python SSH session. The communication with the server is facilitated
> +with a local server proxy from the :mod:`xmlrpc.client` module.
>   """
>   
>   import inspect
> @@ -69,20 +70,20 @@ def scapy_send_packets_and_capture(
>       recv_iface: str,
>       duration: float,
>   ) -> list[bytes]:
> -    """RPC function to send and capture packets.
> +    """The RPC function to send and capture packets.
>   
> -    The function is meant to be executed on the remote TG node.
> +    The function is meant to be executed on the remote TG node via the server proxy.
>   
>       Args:
>           xmlrpc_packets: The packets to send. These need to be converted to
> -            xmlrpc.client.Binary before sending to the remote server.
> +            :class:`~xmlrpc.client.Binary` objects before sending to the remote server.

The string is not raw and there's no \s. As per your explanation a few
commits earlier, might the tilde cause an issue here?
Looking around, I see this also happens several times here and in the
previous commit.
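
For reference, the raw prefix only matters when the docstring itself contains
a backslash; a bare tilde is unaffected (illustrative sketch):

    "~xmlrpc.client.Binary"   # no backslash, so a raw prefix changes nothing
    "LogicalCore\s"           # \s is an invalid escape; newer Pythons warn
    r"LogicalCore\s"          # the raw string keeps the backslash for Sphinx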

>           send_iface: The logical name of the egress interface.
>           recv_iface: The logical name of the ingress interface.
>           duration: Capture for this amount of time, in seconds.
>   
>       Returns:
>           A list of bytes. Each item in the list represents one packet, which needs
> -            to be converted back upon transfer from the remote node.
> +        to be converted back upon transfer from the remote node.
>       """
>       scapy_packets = [scapy.all.Packet(packet.data) for packet in xmlrpc_packets]
>       sniffer = scapy.all.AsyncSniffer(
> @@ -98,19 +99,15 @@ def scapy_send_packets_and_capture(
>   def scapy_send_packets(
>       xmlrpc_packets: list[xmlrpc.client.Binary], send_iface: str
>   ) -> None:
> -    """RPC function to send packets.
> +    """The RPC function to send packets.
>   
> -    The function is meant to be executed on the remote TG node.
> -    It doesn't return anything, only sends packets.
> +    The function is meant to be executed on the remote TG node via the server proxy.
> +    It only sends `xmlrpc_packets`, without capturing them.
>   
>       Args:
>           xmlrpc_packets: The packets to send. These need to be converted to
> -            xmlrpc.client.Binary before sending to the remote server.
> +            :class:`~xmlrpc.client.Binary` objects before sending to the remote server.
>           send_iface: The logical name of the egress interface.
> -
> -    Returns:
> -        A list of bytes. Each item in the list represents one packet, which needs
> -            to be converted back upon transfer from the remote node.
>       """
>       scapy_packets = [scapy.all.Packet(packet.data) for packet in xmlrpc_packets]
>       scapy.all.sendp(scapy_packets, iface=send_iface, realtime=True, verbose=True)
> @@ -130,11 +127,19 @@ def scapy_send_packets(
>   
>   
>   class QuittableXMLRPCServer(SimpleXMLRPCServer):
> -    """Basic XML-RPC server that may be extended
> -    by functions serializable by the marshal module.
> +    r"""Basic XML-RPC server.

But you have a raw string here, and I don't see why it's needed.

> +
> +    The server may be augmented by functions serializable by the :mod:`marshal` module.
>       """
>   
>       def __init__(self, *args, **kwargs):
> +        """Extend the XML-RPC server initialization.
> +
> +        Args:
> +            args: The positional arguments that will be passed to the superclass's constructor.
> +            kwargs: The keyword arguments that will be passed to the superclass's constructor.
> +                The `allow_none` argument will be set to :data:`True`.
> +        """
>           kwargs["allow_none"] = True
>           super().__init__(*args, **kwargs)
>           self.register_introspection_functions()
> @@ -142,13 +147,12 @@ def __init__(self, *args, **kwargs):
>           self.register_function(self.add_rpc_function)
>   
>       def quit(self) -> None:
> +        """Quit the server."""
>           self._BaseServer__shutdown_request = True
>           return None
>   
>       def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> None:
> -        """Add a function to the server.
> -
> -        This is meant to be executed remotely.
> +        """Add a function to the server from the local server proxy.
>   
>           Args:
>                 name: The name of the function.
> @@ -159,6 +163,11 @@ def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> N
>           self.register_function(function)
>   
>       def serve_forever(self, poll_interval: float = 0.5) -> None:
> +        """Extend the superclass method with an additional print.
> +
> +        Once executed in the local server proxy, the print gives us a clear string to expect
> +        when starting the server. The print means the function was executed on the XML-RPC server.
> +        """
>           print("XMLRPC OK")
>           super().serve_forever(poll_interval)
>   
> @@ -166,19 +175,12 @@ def serve_forever(self, poll_interval: float = 0.5) -> None:
>   class ScapyTrafficGenerator(CapturingTrafficGenerator):
>       """Provides access to scapy functions via an RPC interface.
>   
> -    The traffic generator first starts an XML-RPC on the remote TG node.
> -    Then it populates the server with functions which use the Scapy library
> -    to send/receive traffic.
> -
> -    Any packets sent to the remote server are first converted to bytes.
> -    They are received as xmlrpc.client.Binary objects on the server side.
> -    When the server sends the packets back, they are also received as
> -    xmlrpc.client.Binary object on the client side, are converted back to Scapy
> -    packets and only then returned from the methods.
> +    The class extends the base with remote execution of scapy functions.
>   
> -    Arguments:
> -        tg_node: The node where the traffic generator resides.
> -        config: The user configuration of the traffic generator.
> +    Any packets sent to the remote server are first converted to bytes. They are received as
> +    :class:`~xmlrpc.client.Binary` objects on the server side. When the server sends the packets
> +    back, they are also received as :class:`~xmlrpc.client.Binary` objects on the client side, are
> +    converted back to :class:`scapy.packet.Packet` objects and only then returned from the methods.
>   
>       Attributes:
>           session: The exclusive interactive remote session created by the Scapy
> @@ -192,6 +194,22 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator):
>       _config: ScapyTrafficGeneratorConfig
>   
>       def __init__(self, tg_node: Node, config: ScapyTrafficGeneratorConfig):
> +        """Extend the constructor with Scapy TG specifics.
> +
> +        The traffic generator first starts an XML-RPC server on the remote `tg_node`.
> +        Then it populates the server with functions which use the Scapy library
> +        to send/receive traffic:
> +
> +            * :func:`scapy_send_packets_and_capture`
> +            * :func:`scapy_send_packets`
> +
> +        To enable verbose logging from the xmlrpc client, use the :option:`--verbose`
> +        command line argument or the :envvar:`DTS_VERBOSE` environment variable.
> +
> +        Args:
> +            tg_node: The node where the traffic generator resides.
> +            config: The traffic generator's test run configuration.
> +        """
>           super().__init__(tg_node, config)
>   
>           assert (
> @@ -237,10 +255,8 @@ def _start_xmlrpc_server_in_remote_python(self, listen_port: int) -> None:
>               [line for line in src.splitlines() if not line.isspace() and line != ""]
>           )
>   
> -        spacing = "\n" * 4
> -
>           # execute it in the python terminal
> -        self.session.send_command(spacing + src + spacing)
> +        self.session.send_command(src + "\n")
>           self.session.send_command(
>               f"server = QuittableXMLRPCServer(('0.0.0.0', {listen_port}));"
>               f"server.serve_forever()",
> @@ -274,6 +290,7 @@ def _send_packets_and_capture(
>           return scapy_packets
>   
>       def close(self) -> None:
> +        """Close the traffic generator."""
>           try:
>               self.rpc_server_proxy.quit()
>           except ConnectionRefusedError:
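
To picture the client side described in the module docstring, a minimal sketch
(the address and interface names are invented and scapy_packets is assumed to
exist; the RPC function name is from the patch):

    import xmlrpc.client

    import scapy.all

    proxy = xmlrpc.client.ServerProxy("http://192.0.2.10:8000")
    binary_packets = [xmlrpc.client.Binary(bytes(p)) for p in scapy_packets]
    # capture for 5 seconds; each returned item is an xmlrpc.client.Binary
    received = proxy.scapy_send_packets_and_capture(binary_packets, "ens1", "ens2", 5.0)
    packets = [scapy.all.Packet(item.data) for item in received]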



* Re: [PATCH v7 14/21] dts: cpu docstring update
  2023-11-15 13:09               ` [PATCH v7 14/21] dts: cpu " Juraj Linkeš
@ 2023-11-21 17:45                 ` Yoan Picchi
  2023-11-22 11:18                   ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Yoan Picchi @ 2023-11-21 17:45 UTC (permalink / raw)
  To: Juraj Linkeš,
	thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek
  Cc: dev

On 11/15/23 13:09, Juraj Linkeš wrote:
> Format according to the Google format and PEP257, with slight
> deviations.
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>   dts/framework/testbed_model/cpu.py | 196 +++++++++++++++++++++--------
>   1 file changed, 144 insertions(+), 52 deletions(-)
> 
> diff --git a/dts/framework/testbed_model/cpu.py b/dts/framework/testbed_model/cpu.py
> index 8fe785dfe4..4edeb4a7c2 100644
> --- a/dts/framework/testbed_model/cpu.py
> +++ b/dts/framework/testbed_model/cpu.py
> @@ -1,6 +1,22 @@
>   # SPDX-License-Identifier: BSD-3-Clause
>   # Copyright(c) 2023 PANTHEON.tech s.r.o.
>   
> +"""CPU core representation and filtering.
> +
> +This module provides a unified representation of logical CPU cores along
> +with filtering capabilities.
> +
> +When symmetric multiprocessing (SMP or multithreading) is enabled on a server,
> +the physical CPU cores are split into logical CPU cores with different IDs.
> +
> +:class:`LogicalCoreCountFilter` filters by the number of logical cores. It's possible to specify
> +the socket from which to filter the number of logical cores. It's also possible to not use all
> +logical CPU cores from each physical core (e.g. only the first logical core of each physical core).
> +
> +:class:`LogicalCoreListFilter` filters by logical core IDs. This mostly checks that
> +the logical cores are actually present on the server.
> +"""
> +
>   import dataclasses
>   from abc import ABC, abstractmethod
>   from collections.abc import Iterable, ValuesView
> @@ -11,9 +27,17 @@
>   
>   @dataclass(slots=True, frozen=True)
>   class LogicalCore(object):
> -    """
> -    Representation of a CPU core. A physical core is represented in OS
> -    by multiple logical cores (lcores) if CPU multithreading is enabled.
> +    """Representation of a logical CPU core.
> +
> +    A physical core is represented in OS by multiple logical cores (lcores)
> +    if CPU multithreading is enabled. When multithreading is disabled, their IDs are the same.
> +
> +    Attributes:
> +        lcore: The logical core ID of a CPU core. It's the same as `core` with
> +            disabled multithreading.
> +        core: The physical core ID of a CPU core.
> +        socket: The physical socket ID where the CPU resides.
> +        node: The NUMA node ID where the CPU resides.
>       """
>   
>       lcore: int
> @@ -22,27 +46,36 @@ class LogicalCore(object):
>       node: int
>   
>       def __int__(self) -> int:
> +        """The CPU is best represented by the logical core, as that's what we configure in EAL."""
>           return self.lcore
>   
>   
>   class LogicalCoreList(object):
> -    """
> -    Convert these options into a list of logical core ids.
> -    lcore_list=[LogicalCore1, LogicalCore2] - a list of LogicalCores
> -    lcore_list=[0,1,2,3] - a list of int indices
> -    lcore_list=['0','1','2-3'] - a list of str indices; ranges are supported
> -    lcore_list='0,1,2-3' - a comma delimited str of indices; ranges are supported
> -
> -    The class creates a unified format used across the framework and allows
> -    the user to use either a str representation (using str(instance) or directly
> -    in f-strings) or a list representation (by accessing instance.lcore_list).
> -    Empty lcore_list is allowed.
> +    r"""A unified way to store :class:`LogicalCore`\s.
> +
> +    Create a unified format used across the framework and allow the user to use
> +    either a :class:`str` representation (using ``str(instance)`` or directly in f-strings)
> +    or a :class:`list` representation (by accessing the `lcore_list` property,
> +    which stores logical core IDs).
>       """
>   
>       _lcore_list: list[int]
>       _lcore_str: str
>   
>       def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str):
> +        """Process `lcore_list`, then sort.
> +
> +        There are four supported logical core list formats::
> +
> +            lcore_list=[LogicalCore1, LogicalCore2]  # a list of LogicalCores
> +            lcore_list=[0,1,2,3]        # a list of int indices
> +            lcore_list=['0','1','2-3']  # a list of str indices; ranges are supported
> +            lcore_list='0,1,2-3'        # a comma delimited str of indices; ranges are supported
> +
> +        Args:
> +            lcore_list: Various ways to represent multiple logical cores.
> +                Empty `lcore_list` is allowed.
> +        """
>           self._lcore_list = []
>           if isinstance(lcore_list, str):
>               lcore_list = lcore_list.split(",")
> @@ -60,6 +93,7 @@ def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str):
>   
>       @property
>       def lcore_list(self) -> list[int]:
> +        """The logical core IDs."""
>           return self._lcore_list
>   
>       def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]:
> @@ -89,28 +123,30 @@ def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]:
>           return formatted_core_list
>   
>       def __str__(self) -> str:
> +        """The consecutive ranges of logical core IDs."""
>           return self._lcore_str
>   
>   
>   @dataclasses.dataclass(slots=True, frozen=True)
>   class LogicalCoreCount(object):
> -    """
> -    Define the number of logical cores to use.
> -    If sockets is not None, socket_count is ignored.
> -    """
> +    """Define the number of logical cores per physical cores per sockets."""
>   
> +    #: Use this many logical cores per each physical core.
>       lcores_per_core: int = 1
> +    #: Use this many physical cores per each socket.
>       cores_per_socket: int = 2
> +    #: Use this many sockets.
>       socket_count: int = 1
> +    #: Use exactly these sockets. This takes precedence over `socket_count`,
> +    #: so when `sockets` is not :data:`None`, `socket_count` is ignored.
>       sockets: list[int] | None = None
>   
>   
>   class LogicalCoreFilter(ABC):
> -    """
> -    Filter according to the input filter specifier. Each filter needs to be
> -    implemented in a derived class.
> -    This class only implements operations common to all filters, such as sorting
> -    the list to be filtered beforehand.
> +    """Common filtering class.
> +
> +    Each filter needs to be implemented in a subclass. This base class sorts the list of cores
> +    and defines the filtering method, which must be implemented by subclasses.
>       """
>   
>       _filter_specifier: LogicalCoreCount | LogicalCoreList
> @@ -122,6 +158,17 @@ def __init__(
>           filter_specifier: LogicalCoreCount | LogicalCoreList,
>           ascending: bool = True,
>       ):
> +        """Filter according to the input filter specifier.
> +
> +        The input `lcore_list` is copied and sorted by physical core before filtering.
> +        The list is copied so that the original is left intact.
> +
> +        Args:
> +            lcore_list: The logical CPU cores to filter.
> +            filter_specifier: Filter cores from `lcore_list` according to this filter.
> +            ascending: Sort cores in ascending order (lowest to highest IDs). If :data:`False`,
> +                sort in descending order.
> +        """
>           self._filter_specifier = filter_specifier
>   
>           # sorting by core is needed in case hyperthreading is enabled
> @@ -132,31 +179,45 @@ def __init__(
>   
>       @abstractmethod
>       def filter(self) -> list[LogicalCore]:
> -        """
> -        Use self._filter_specifier to filter self._lcores_to_filter
> -        and return the list of filtered LogicalCores.
> -        self._lcores_to_filter is a sorted copy of the original list,
> -        so it may be modified.
> +        r"""Filter the cores.
> +
> +        Use `self._filter_specifier` to filter `self._lcores_to_filter` and return
> +        the filtered :class:`LogicalCore`\s.
> +        `self._lcores_to_filter` is a sorted copy of the original list, so it may be modified.
> +
> +        Returns:
> +            The filtered cores.
>           """
>   
>   
>   class LogicalCoreCountFilter(LogicalCoreFilter):
> -    """
> +    """Filter cores by specified counts.
> +
>       Filter the input list of LogicalCores according to specified rules:
> -    Use cores from the specified number of sockets or from the specified socket ids.
> -    If sockets is specified, it takes precedence over socket_count.
> -    From each of those sockets, use only cores_per_socket of cores.
> -    And for each core, use lcores_per_core of logical cores. Hypertheading
> -    must be enabled for this to take effect.
> -    If ascending is True, use cores with the lowest numerical id first
> -    and continue in ascending order. If False, start with the highest
> -    id and continue in descending order. This ordering affects which
> -    sockets to consider first as well.
> +
> +        * The input `filter_specifier` is :class:`LogicalCoreCount`,
> +        * Use cores from the specified number of sockets or from the specified socket ids,
> +        * If `sockets` is specified, it takes precedence over `socket_count`,
> +        * From each of those sockets, use only `cores_per_socket` of cores,
> +        * And for each core, use `lcores_per_core` of logical cores. Hyperthreading
> +          must be enabled for this to take effect.
>       """
>   
>       _filter_specifier: LogicalCoreCount
>   
>       def filter(self) -> list[LogicalCore]:
> +        """Filter the cores according to :class:`LogicalCoreCount`.
> +
> +        Start by filtering the allowed sockets. The cores matching the allowed socket are returned.

allowed socket*s*

> +        The cores of each socket are stored in separate lists.
> +
> +        Then filter the allowed physical cores from those lists of cores per socket. When filtering
> +        physical cores, store the desired number of logical cores per physical core which then
> +        together constitute the final filtered list.
> +
> +        Returns:
> +            The filtered cores.
> +        """
>           sockets_to_filter = self._filter_sockets(self._lcores_to_filter)
>           filtered_lcores = []
>           for socket_to_filter in sockets_to_filter:
> @@ -166,24 +227,37 @@ def filter(self) -> list[LogicalCore]:
>       def _filter_sockets(
>           self, lcores_to_filter: Iterable[LogicalCore]
>       ) -> ValuesView[list[LogicalCore]]:
> -        """
> -        Remove all lcores that don't match the specified socket(s).
> -        If self._filter_specifier.sockets is not None, keep lcores from those sockets,
> -        otherwise keep lcores from the first
> -        self._filter_specifier.socket_count sockets.
> +        """Filter a list of cores per each allowed socket.
> +
> +        The sockets may be specified in two ways, either a number or a specific list of sockets.
> +        In case of a specific list, we just need to return the cores from those sockets.
> +        If filtering a number of cores, we need to go through all cores and note which sockets
> +        appear and only filter from the first n that appear.
> +
> +        Args:
> +            lcores_to_filter: The cores to filter. These must be sorted by the physical core.
> +
> +        Returns:
> +            A list of lists of logical CPU cores. Each list contains cores from one socket.
>           """
>           allowed_sockets: set[int] = set()
>           socket_count = self._filter_specifier.socket_count
>           if self._filter_specifier.sockets:
> +            # when sockets in filter is specified, the sockets are already set
>               socket_count = len(self._filter_specifier.sockets)
>               allowed_sockets = set(self._filter_specifier.sockets)
>   
> +        # filter socket_count sockets from all sockets by checking the socket of each CPU
>           filtered_lcores: dict[int, list[LogicalCore]] = {}
>           for lcore in lcores_to_filter:
>               if not self._filter_specifier.sockets:
> +                # this is when sockets is not set, so we do the actual filtering
> +                # when it is set, allowed_sockets is already defined and can't be changed
>                   if len(allowed_sockets) < socket_count:
> +                    # allowed_sockets is a set, so adding an existing socket won't re-add it
>                       allowed_sockets.add(lcore.socket)
>               if lcore.socket in allowed_sockets:
> +                # separate sockets per socket; this makes it easier in further processing

socket*s* per socket ?

>                   if lcore.socket in filtered_lcores:
>                       filtered_lcores[lcore.socket].append(lcore)
>                   else:
> @@ -200,12 +274,13 @@ def _filter_sockets(
>       def _filter_cores_from_socket(
>           self, lcores_to_filter: Iterable[LogicalCore]
>       ) -> list[LogicalCore]:
> -        """
> -        Keep only the first self._filter_specifier.cores_per_socket cores.
> -        In multithreaded environments, keep only
> -        the first self._filter_specifier.lcores_per_core lcores of those cores.
> -        """
> +        """Filter a list of cores from the given socket.
> +
> +        Go through the cores and note how many logical cores per physical core have been filtered.
>   
> +        Returns:
> +            The filtered logical CPU cores.
> +        """
>           # no need to use ordered dict, from Python3.7 the dict
>           # insertion order is preserved (LIFO).
>           lcore_count_per_core_map: dict[int, int] = {}
> @@ -248,15 +323,21 @@ def _filter_cores_from_socket(
>   
>   
>   class LogicalCoreListFilter(LogicalCoreFilter):
> -    """
> -    Filter the input list of Logical Cores according to the input list of
> -    lcore indices.
> -    An empty LogicalCoreList won't filter anything.
> +    """Filter the logical CPU cores by logical CPU core IDs.
> +
> +    This is a simple filter that looks at logical CPU IDs and only filters those that match.
> +
> +    The input filter is :class:`LogicalCoreList`. An empty LogicalCoreList won't filter anything.
>       """
>   
>       _filter_specifier: LogicalCoreList
>   
>       def filter(self) -> list[LogicalCore]:
> +        """Filter based on logical CPU core ID.
> +
> +        Returns:
> +            The filtered logical CPU cores.
> +        """
>           if not len(self._filter_specifier.lcore_list):
>               return self._lcores_to_filter
>   
> @@ -279,6 +360,17 @@ def lcore_filter(
>       filter_specifier: LogicalCoreCount | LogicalCoreList,
>       ascending: bool,
>   ) -> LogicalCoreFilter:
> +    """Factory for using the right filter with `filter_specifier`.
> +
> +    Args:
> +        core_list: The logical CPU cores to filter.
> +        filter_specifier: The filter to use.
> +        ascending: Sort cores in ascending order (lowest to highest IDs). If :data:`False`,
> +            sort in descending order.
> +
> +    Returns:
> +        The filter matching `filter_specifier`.
> +    """
>       if isinstance(filter_specifier, LogicalCoreList):
>           return LogicalCoreListFilter(core_list, filter_specifier, ascending)
>       elif isinstance(filter_specifier, LogicalCoreCount):
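
A usage sketch tying the filters together (lcores is assumed to come from the
node; the core IDs are invented):

    specifier = LogicalCoreList("0,1,2-3")  # int, str and LogicalCore lists also work
    filtered = lcore_filter(lcores, specifier, ascending=True).filter()
    print(str(LogicalCoreList(filtered)))   # consecutive ranges, e.g. "0-3"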



* Re: [PATCH v7 10/21] dts: config docstring update
  2023-11-21 15:08                 ` Yoan Picchi
@ 2023-11-22 10:42                   ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-22 10:42 UTC (permalink / raw)
  To: Yoan Picchi
  Cc: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek, dev

Thanks, Yoan, I'll make these changes in v8.

On Tue, Nov 21, 2023 at 4:08 PM Yoan Picchi <yoan.picchi@foss.arm.com> wrote:
>
> On 11/15/23 13:09, Juraj Linkeš wrote:
> > Format according to the Google format and PEP257, with slight
> > deviations.
> >
> > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > ---
> >   dts/framework/config/__init__.py | 371 ++++++++++++++++++++++++++-----
> >   dts/framework/config/types.py    | 132 +++++++++++
> >   2 files changed, 446 insertions(+), 57 deletions(-)
> >   create mode 100644 dts/framework/config/types.py
> >
> > diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
> > index 2044c82611..0aa149a53d 100644
> > --- a/dts/framework/config/__init__.py
> > +++ b/dts/framework/config/__init__.py
> > @@ -3,8 +3,34 @@
> >   # Copyright(c) 2022-2023 University of New Hampshire
> >   # Copyright(c) 2023 PANTHEON.tech s.r.o.
> >
> > -"""
> > -Yaml config parsing methods
> > +"""Testbed configuration and test suite specification.
> > +
> > +This package offers classes that hold real-time information about the testbed, hold test run
> > +configuration describing the tested testbed and a loader function, :func:`load_config`, which loads
> > +the YAML test run configuration file
> > +and validates it according to :download:`the schema <conf_yaml_schema.json>`.
> > +
> > +The YAML test run configuration file is parsed into a dictionary, parts of which are used throughout
> > +this package. The allowed keys and types inside this dictionary are defined in
> > +the :doc:`types <framework.config.types>` module.
> > +
> > +The test run configuration has two main sections:
> > +
> > +    * The :class:`ExecutionConfiguration` which defines what tests are going to be run
> > +      and how DPDK will be built. It also references the testbed where these tests and DPDK
> > +      are going to be run,
> > +    * The nodes of the testbed are defined in the other section,
> > +      a :class:`list` of :class:`NodeConfiguration` objects.
> > +
> > +The real-time information about the testbed is supposed to be gathered at runtime.
> > +
> > +The classes defined in this package make heavy use of :mod:`dataclasses`.
> > +All of them use slots and are frozen:
> > +
> > +    * Slots enables some optimizations, by pre-allocating space for the defined
> > +      attributes in the underlying data structure,
> > +    * Frozen makes the object immutable. This enables further optimizations,
> > +      and makes it thread safe should we every want to move in that direction.
>
> every -> ever ?
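
A minimal sketch of the slots/frozen behaviour described above (the class
mirrors HugepageConfiguration from the patch; the values are invented):

    from dataclasses import dataclass

    @dataclass(slots=True, frozen=True)
    class HugepageConfiguration:
        amount: int
        force_first_numa: bool

    cfg = HugepageConfiguration(amount=256, force_first_numa=False)
    # cfg.amount = 512 would raise dataclasses.FrozenInstanceError
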
>
> >   """
> >
> >   import json
> > @@ -12,11 +38,20 @@
> >   import pathlib
> >   from dataclasses import dataclass
> >   from enum import auto, unique
> > -from typing import Any, TypedDict, Union
> > +from typing import Union
> >
> >   import warlock  # type: ignore[import]
> >   import yaml
> >
> > +from framework.config.types import (
> > +    BuildTargetConfigDict,
> > +    ConfigurationDict,
> > +    ExecutionConfigDict,
> > +    NodeConfigDict,
> > +    PortConfigDict,
> > +    TestSuiteConfigDict,
> > +    TrafficGeneratorConfigDict,
> > +)
> >   from framework.exception import ConfigurationError
> >   from framework.settings import SETTINGS
> >   from framework.utils import StrEnum
> > @@ -24,55 +59,97 @@
> >
> >   @unique
> >   class Architecture(StrEnum):
> > +    r"""The supported architectures of :class:`~framework.testbed_model.node.Node`\s."""
> > +
> > +    #:
> >       i686 = auto()
> > +    #:
> >       x86_64 = auto()
> > +    #:
> >       x86_32 = auto()
> > +    #:
> >       arm64 = auto()
> > +    #:
> >       ppc64le = auto()
> >
> >
> >   @unique
> >   class OS(StrEnum):
> > +    r"""The supported operating systems of :class:`~framework.testbed_model.node.Node`\s."""
> > +
> > +    #:
> >       linux = auto()
> > +    #:
> >       freebsd = auto()
> > +    #:
> >       windows = auto()
> >
> >
> >   @unique
> >   class CPUType(StrEnum):
> > +    r"""The supported CPUs of :class:`~framework.testbed_model.node.Node`\s."""
> > +
> > +    #:
> >       native = auto()
> > +    #:
> >       armv8a = auto()
> > +    #:
> >       dpaa2 = auto()
> > +    #:
> >       thunderx = auto()
> > +    #:
> >       xgene1 = auto()
> >
> >
> >   @unique
> >   class Compiler(StrEnum):
> > +    r"""The supported compilers of :class:`~framework.testbed_model.node.Node`\s."""
> > +
> > +    #:
> >       gcc = auto()
> > +    #:
> >       clang = auto()
> > +    #:
> >       icc = auto()
> > +    #:
> >       msvc = auto()
> >
> >
> >   @unique
> >   class TrafficGeneratorType(StrEnum):
> > +    """The supported traffic generators."""
> > +
> > +    #:
> >       SCAPY = auto()
> >
> >
> > -# Slots enables some optimizations, by pre-allocating space for the defined
> > -# attributes in the underlying data structure.
> > -#
> > -# Frozen makes the object immutable. This enables further optimizations,
> > -# and makes it thread safe should we every want to move in that direction.
> >   @dataclass(slots=True, frozen=True)
> >   class HugepageConfiguration:
> > +    r"""The hugepage configuration of :class:`~framework.testbed_model.node.Node`\s.
> > +
> > +    Attributes:
> > +        amount: The number of hugepages.
> > +        force_first_numa: If :data:`True`, the hugepages will be configured on the first NUMA node.
> > +    """
> > +
> >       amount: int
> >       force_first_numa: bool
> >
> >
> >   @dataclass(slots=True, frozen=True)
> >   class PortConfig:
> > +    r"""The port configuration of :class:`~framework.testbed_model.node.Node`\s.
> > +
> > +    Attributes:
> > +        node: The :class:`~framework.testbed_model.node.Node` where this port exists.
> > +        pci: The PCI address of the port.
> > +        os_driver_for_dpdk: The operating system driver name for use with DPDK.
> > +        os_driver: The operating system driver name when the operating system controls the port.
> > +        peer_node: The :class:`~framework.testbed_model.node.Node` of the port
> > +            connected to this port.
> > +        peer_pci: The PCI address of the port connected to this port.
> > +    """
> > +
> >       node: str
> >       pci: str
> >       os_driver_for_dpdk: str
> > @@ -81,18 +158,44 @@ class PortConfig:
> >       peer_pci: str
> >
> >       @staticmethod
> > -    def from_dict(node: str, d: dict) -> "PortConfig":
> > +    def from_dict(node: str, d: PortConfigDict) -> "PortConfig":
> > +        """A convenience method that creates the object from fewer inputs.
> > +
> > +        Args:
> > +            node: The node where this port exists.
> > +            d: The configuration dictionary.
> > +
> > +        Returns:
> > +            The port configuration instance.
> > +        """
> >           return PortConfig(node=node, **d)
> >
> >
> >   @dataclass(slots=True, frozen=True)
> >   class TrafficGeneratorConfig:
> > +    """The configuration of traffic generators.
> > +
> > +    The class will be expanded when more configuration is needed.
> > +
> > +    Attributes:
> > +        traffic_generator_type: The type of the traffic generator.
> > +    """
> > +
> >       traffic_generator_type: TrafficGeneratorType
> >
> >       @staticmethod
> > -    def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig":
> > -        # This looks useless now, but is designed to allow expansion to traffic
> > -        # generators that require more configuration later.
> > +    def from_dict(d: TrafficGeneratorConfigDict) -> "ScapyTrafficGeneratorConfig":
> > +        """A convenience method that produces traffic generator config of the proper type.
> > +
> > +        Args:
> > +            d: The configuration dictionary.
> > +
> > +        Returns:
> > +            The traffic generator configuration instance.
> > +
> > +        Raises:
> > +            ConfigurationError: An unknown traffic generator type was encountered.
> > +        """
> >           match TrafficGeneratorType(d["type"]):
> >               case TrafficGeneratorType.SCAPY:
> >                   return ScapyTrafficGeneratorConfig(
> > @@ -106,11 +209,31 @@ def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig":
> >
> >   @dataclass(slots=True, frozen=True)
> >   class ScapyTrafficGeneratorConfig(TrafficGeneratorConfig):
> > +    """Scapy traffic generator specific configuration."""
> > +
> >       pass
> >
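
To make the intended expansion concrete, a hypothetical sketch of adding a
second generator type; TrafficGeneratorType.TREX and TrexTrafficGeneratorConfig
are invented here and do not exist in the patch:

    @staticmethod
    def from_dict(d: TrafficGeneratorConfigDict) -> "TrafficGeneratorConfig":
        match TrafficGeneratorType(d["type"]):  # unknown strings fail here
            case TrafficGeneratorType.SCAPY:
                return ScapyTrafficGeneratorConfig(
                    traffic_generator_type=TrafficGeneratorType.SCAPY
                )
            case TrafficGeneratorType.TREX:  # hypothetical new member
                return TrexTrafficGeneratorConfig(
                    traffic_generator_type=TrafficGeneratorType.TREX
                )
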
> >
> >   @dataclass(slots=True, frozen=True)
> >   class NodeConfiguration:
> > +    r"""The configuration of :class:`~framework.testbed_model.node.Node`\s.
> > +
> > +    Attributes:
> > +        name: The name of the :class:`~framework.testbed_model.node.Node`.
> > +        hostname: The hostname of the :class:`~framework.testbed_model.node.Node`.
> > +            Can be an IP or a domain name.
> > +        user: The name of the user used to connect to
> > +            the :class:`~framework.testbed_model.node.Node`.
> > +        password: The password of the user. The use of passwords is heavily discouraged.
> > +            Please use keys instead.
> > +        arch: The architecture of the :class:`~framework.testbed_model.node.Node`.
> > +        os: The operating system of the :class:`~framework.testbed_model.node.Node`.
> > +        lcores: A comma delimited list of logical cores to use when running DPDK.
> > +        use_first_core: If :data:`True`, the first logical core won't be used.
> > +        hugepages: An optional hugepage configuration.
> > +        ports: The ports that can be used in testing.
> > +    """
> > +
> >       name: str
> >       hostname: str
> >       user: str
> > @@ -123,57 +246,91 @@ class NodeConfiguration:
> >       ports: list[PortConfig]
> >
> >       @staticmethod
> > -    def from_dict(d: dict) -> Union["SutNodeConfiguration", "TGNodeConfiguration"]:
> > -        hugepage_config = d.get("hugepages")
> > -        if hugepage_config:
> > -            if "force_first_numa" not in hugepage_config:
> > -                hugepage_config["force_first_numa"] = False
> > -            hugepage_config = HugepageConfiguration(**hugepage_config)
> > -
> > -        common_config = {
> > -            "name": d["name"],
> > -            "hostname": d["hostname"],
> > -            "user": d["user"],
> > -            "password": d.get("password"),
> > -            "arch": Architecture(d["arch"]),
> > -            "os": OS(d["os"]),
> > -            "lcores": d.get("lcores", "1"),
> > -            "use_first_core": d.get("use_first_core", False),
> > -            "hugepages": hugepage_config,
> > -            "ports": [PortConfig.from_dict(d["name"], port) for port in d["ports"]],
> > -        }
> > -
> > +    def from_dict(
> > +        d: NodeConfigDict,
> > +    ) -> Union["SutNodeConfiguration", "TGNodeConfiguration"]:
> > +        """A convenience method that processes the inputs before creating a specialized instance.
> > +
> > +        Args:
> > +            d: The configuration dictionary.
> > +
> > +        Returns:
> > +            Either an SUT or TG configuration instance.
> > +        """
> > +        hugepage_config = None
> > +        if "hugepages" in d:
> > +            hugepage_config_dict = d["hugepages"]
> > +            if "force_first_numa" not in hugepage_config_dict:
> > +                hugepage_config_dict["force_first_numa"] = False
> > +            hugepage_config = HugepageConfiguration(**hugepage_config_dict)
> > +
> > +        # The calls here contain duplicated code which is here because Mypy doesn't
> > +        # properly support dictionary unpacking with TypedDicts
> >           if "traffic_generator" in d:
> >               return TGNodeConfiguration(
> > +                name=d["name"],
> > +                hostname=d["hostname"],
> > +                user=d["user"],
> > +                password=d.get("password"),
> > +                arch=Architecture(d["arch"]),
> > +                os=OS(d["os"]),
> > +                lcores=d.get("lcores", "1"),
> > +                use_first_core=d.get("use_first_core", False),
> > +                hugepages=hugepage_config,
> > +                ports=[PortConfig.from_dict(d["name"], port) for port in d["ports"]],
> >                   traffic_generator=TrafficGeneratorConfig.from_dict(
> >                       d["traffic_generator"]
> >                   ),
> > -                **common_config,
> >               )
> >           else:
> >               return SutNodeConfiguration(
> > -                memory_channels=d.get("memory_channels", 1), **common_config
> > +                name=d["name"],
> > +                hostname=d["hostname"],
> > +                user=d["user"],
> > +                password=d.get("password"),
> > +                arch=Architecture(d["arch"]),
> > +                os=OS(d["os"]),
> > +                lcores=d.get("lcores", "1"),
> > +                use_first_core=d.get("use_first_core", False),
> > +                hugepages=hugepage_config,
> > +                ports=[PortConfig.from_dict(d["name"], port) for port in d["ports"]],
> > +                memory_channels=d.get("memory_channels", 1),
> >               )
> >
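
To illustrate the Mypy limitation behind the duplication, a minimal standalone
sketch (names and values invented):

    from typing import TypedDict

    class _Common(TypedDict):
        name: str
        hostname: str

    def _make(d: _Common) -> None:
        common = {"name": d["name"], "hostname": d["hostname"]}
        # NodeConfiguration(**common) is where the check falls short: Mypy
        # cannot match a plain dict's keys and value types against the
        # constructor's parameters, so the calls above spell out every
        # keyword argument explicitly instead.
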
> >
> >   @dataclass(slots=True, frozen=True)
> >   class SutNodeConfiguration(NodeConfiguration):
> > +    """:class:`~framework.testbed_model.sut_node.SutNode` specific configuration.
> > +
> > +    Attributes:
> > +        memory_channels: The number of memory channels to use when running DPDK.
> > +    """
> > +
> >       memory_channels: int
> >
> >
> >   @dataclass(slots=True, frozen=True)
> >   class TGNodeConfiguration(NodeConfiguration):
> > +    """:class:`~framework.testbed_model.tg_node.TGNode` specific configuration.
> > +
> > +    Attributes:
> > +        traffic_generator: The configuration of the traffic generator present on the TG node.
> > +    """
> > +
> >       traffic_generator: ScapyTrafficGeneratorConfig
> >
> >
> >   @dataclass(slots=True, frozen=True)
> >   class NodeInfo:
> > -    """Class to hold important versions within the node.
> > -
> > -    This class, unlike the NodeConfiguration class, cannot be generated at the start.
> > -    This is because we need to initialize a connection with the node before we can
> > -    collect the information needed in this class. Therefore, it cannot be a part of
> > -    the configuration class above.
> > +    """Supplemental node information.
> > +
> > +    Attributes:
> > +        os_name: The name of the running operating system of
> > +            the :class:`~framework.testbed_model.node.Node`.
> > +        os_version: The version of the running operating system of
> > +            the :class:`~framework.testbed_model.node.Node`.
> > +        kernel_version: The kernel version of the running operating system of
> > +            the :class:`~framework.testbed_model.node.Node`.
> >       """
> >
> >       os_name: str
> > @@ -183,6 +340,20 @@ class NodeInfo:
> >
> >   @dataclass(slots=True, frozen=True)
> >   class BuildTargetConfiguration:
> > +    """DPDK build configuration.
> > +
> > +    The configuration used for building DPDK.
> > +
> > +    Attributes:
> > +        arch: The target architecture to build for.
> > +        os: The target OS to build for.
> > +        cpu: The target CPU to build for.
> > +        compiler: The compiler executable to use.
> > +        compiler_wrapper: This string will be put in front of the compiler when
> > +            executing the build. Useful for adding wrapper commands, such as ``ccache``.
> > +        name: The name of the build target, constructed from `arch`, `os`, `cpu` and `compiler`.
> > +    """
> > +
> >       arch: Architecture
> >       os: OS
> >       cpu: CPUType
> > @@ -191,7 +362,18 @@ class BuildTargetConfiguration:
> >       name: str
> >
> >       @staticmethod
> > -    def from_dict(d: dict) -> "BuildTargetConfiguration":
> > +    def from_dict(d: BuildTargetConfigDict) -> "BuildTargetConfiguration":
> > +        r"""A convenience method that processes the inputs before creating an instance.
> > +
> > +        `arch`, `os`, `cpu` and `compiler` are converted to :class:`Enum`\s and
> > +        `name` is constructed from `arch`, `os`, `cpu` and `compiler`.
> > +
> > +        Args:
> > +            d: The configuration dictionary.
> > +
> > +        Returns:
> > +            The build target configuration instance.
> > +        """
> >           return BuildTargetConfiguration(
> >               arch=Architecture(d["arch"]),
> >               os=OS(d["os"]),
> > @@ -204,23 +386,29 @@ def from_dict(d: dict) -> "BuildTargetConfiguration":
> >
> >   @dataclass(slots=True, frozen=True)
> >   class BuildTargetInfo:
> > -    """Class to hold important versions within the build target.
> > +    """Various versions and other information about a build target.
> >
> > -    This is very similar to the NodeInfo class, it just instead holds information
> > -    for the build target.
> > +    Attributes:
> > +        dpdk_version: The DPDK version that was built.
> > +        compiler_version: The version of the compiler used to build DPDK.
> >       """
> >
> >       dpdk_version: str
> >       compiler_version: str
> >
> >
> > -class TestSuiteConfigDict(TypedDict):
> > -    suite: str
> > -    cases: list[str]
> > -
> > -
> >   @dataclass(slots=True, frozen=True)
> >   class TestSuiteConfig:
> > +    """Test suite configuration.
> > +
> > +    Information about a single test suite to be executed.
> > +
> > +    Attributes:
> > +        test_suite: The name of the test suite module without the starting ``TestSuite_``.
> > +        test_cases: The names of test cases from this test suite to execute.
> > +            If empty, all test cases will be executed.
> > +    """
> > +
> >       test_suite: str
> >       test_cases: list[str]
> >
> > @@ -228,6 +416,14 @@ class TestSuiteConfig:
> >       def from_dict(
> >           entry: str | TestSuiteConfigDict,
> >       ) -> "TestSuiteConfig":
> > +        """Create an instance from two different types.
> > +
> > +        Args:
> > +            entry: Either a suite name or a dictionary containing the config.
> > +
> > +        Returns:
> > +            The test suite configuration instance.
> > +        """
> >           if isinstance(entry, str):
> >               return TestSuiteConfig(test_suite=entry, test_cases=[])
> >           elif isinstance(entry, dict):
> > @@ -238,19 +434,49 @@ def from_dict(
> >
> >   @dataclass(slots=True, frozen=True)
> >   class ExecutionConfiguration:
> > +    """The configuration of an execution.
> > +
> > +    The configuration contains testbed information, what tests to execute
> > +    and with what DPDK build.
> > +
> > +    Attributes:
> > +        build_targets: A list of DPDK builds to test.
> > +        perf: Whether to run performance tests.
> > +        func: Whether to run functional tests.
> > +        skip_smoke_tests: Whether to skip smoke tests.
> > +        test_suites: The names of test suites and/or test cases to execute.
> > +        system_under_test_node: The SUT node to use in this execution.
> > +        traffic_generator_node: The TG node to use in this execution.
> > +        vdevs: The names of virtual devices to test.
> > +    """
> > +
> >       build_targets: list[BuildTargetConfiguration]
> >       perf: bool
> >       func: bool
> > +    skip_smoke_tests: bool
> >       test_suites: list[TestSuiteConfig]
> >       system_under_test_node: SutNodeConfiguration
> >       traffic_generator_node: TGNodeConfiguration
> >       vdevs: list[str]
> > -    skip_smoke_tests: bool
> >
> >       @staticmethod
> >       def from_dict(
> > -        d: dict, node_map: dict[str, Union[SutNodeConfiguration | TGNodeConfiguration]]
> > +        d: ExecutionConfigDict,
> > +        node_map: dict[str, Union[SutNodeConfiguration | TGNodeConfiguration]],
> >       ) -> "ExecutionConfiguration":
> > +        """A convenience method that processes the inputs before creating an instance.
> > +
> > +        The build target and the test suite config is transformed into their respective objects.
>
> is -> are
>
> > +        SUT and TG configuration are taken from `node_map`. The other (:class:`bool`) attributes are
>
> configuration*s*
>
> > +        just stored.
> > +
> > +        Args:
> > +            d: The configuration dictionary.
> > +            node_map: A dictionary mapping node names to their config objects.
> > +
> > +        Returns:
> > +            The execution configuration instance.
> > +        """
> >           build_targets: list[BuildTargetConfiguration] = list(
> >               map(BuildTargetConfiguration.from_dict, d["build_targets"])
> >           )
> > @@ -291,10 +517,31 @@ def from_dict(
> >
> >   @dataclass(slots=True, frozen=True)
> >   class Configuration:
> > +    """DTS testbed and test configuration.
> > +
> > +    The node configuration is not stored in this object. Rather, all used node configurations
> > +    are stored inside the execution configuration where the nodes are actually used.
> > +
> > +    Attributes:
> > +        executions: Execution configurations.
> > +    """
> > +
> >       executions: list[ExecutionConfiguration]
> >
> >       @staticmethod
> > -    def from_dict(d: dict) -> "Configuration":
> > +    def from_dict(d: ConfigurationDict) -> "Configuration":
> > +        """A convenience method that processes the inputs before creating an instance.
> > +
> > +        Build target and test suite config is transformed into their respective objects.
>
> is -> are
>
> > +        SUT and TG configuration are taken from `node_map`. The other (:class:`bool`) attributes are
>
> configuration*s*
>
> > +        just stored.
> > +
> > +        Args:
> > +            d: The configuration dictionary.
> > +
> > +        Returns:
> > +            The whole configuration instance.
> > +        """
> >           nodes: list[Union[SutNodeConfiguration | TGNodeConfiguration]] = list(
> >               map(NodeConfiguration.from_dict, d["nodes"])
> >           )
> > @@ -313,9 +560,17 @@ def from_dict(d: dict) -> "Configuration":
> >
> >
> >   def load_config() -> Configuration:
> > -    """
> > -    Loads the configuration file and the configuration file schema,
> > -    validates the configuration file, and creates a configuration object.
> > +    """Load DTS test run configuration from a file.
> > +
> > +    Load the YAML test run configuration file
> > +    and :download:`the configuration file schema <conf_yaml_schema.json>`,
> > +    validate the test run configuration file, and create a test run configuration object.
> > +
> > +    The YAML test run configuration file is specified in the :option:`--config-file` command line
> > +    argument or the :envvar:`DTS_CFG_FILE` environment variable.
> > +
> > +    Returns:
> > +        The parsed test run configuration.
> >       """
> >       with open(SETTINGS.config_file_path, "r") as f:
> >           config_data = yaml.safe_load(f)
> > @@ -326,6 +581,8 @@ def load_config() -> Configuration:
> >
> >       with open(schema_path, "r") as f:
> >           schema = json.load(f)
> > -    config: dict[str, Any] = warlock.model_factory(schema, name="_Config")(config_data)
> > -    config_obj: Configuration = Configuration.from_dict(dict(config))
> > +    config = warlock.model_factory(schema, name="_Config")(config_data)
> > +    config_obj: Configuration = Configuration.from_dict(
> > +        dict(config)  # type: ignore[arg-type]
> > +    )
> >       return config_obj
> > diff --git a/dts/framework/config/types.py b/dts/framework/config/types.py
> > new file mode 100644
> > index 0000000000..1927910d88
> > --- /dev/null
> > +++ b/dts/framework/config/types.py
> > @@ -0,0 +1,132 @@
> > +# SPDX-License-Identifier: BSD-3-Clause
> > +# Copyright(c) 2023 PANTHEON.tech s.r.o.
> > +
> > +"""Configuration dictionary contents specification.
> > +
> > +These type definitions serve as documentation of the configuration dictionary contents.
> > +
> > +The definitions use the built-in :class:`~typing.TypedDict` construct.
> > +"""
> > +
> > +from typing import TypedDict
> > +
> > +
> > +class PortConfigDict(TypedDict):
> > +    """Allowed keys and values."""
> > +
> > +    #:
> > +    pci: str
> > +    #:
> > +    os_driver_for_dpdk: str
> > +    #:
> > +    os_driver: str
> > +    #:
> > +    peer_node: str
> > +    #:
> > +    peer_pci: str
> > +
> > +
> > +class TrafficGeneratorConfigDict(TypedDict):
> > +    """Allowed keys and values."""
> > +
> > +    #:
> > +    type: str
> > +
> > +
> > +class HugepageConfigurationDict(TypedDict):
> > +    """Allowed keys and values."""
> > +
> > +    #:
> > +    amount: int
> > +    #:
> > +    force_first_numa: bool
> > +
> > +
> > +class NodeConfigDict(TypedDict):
> > +    """Allowed keys and values."""
> > +
> > +    #:
> > +    hugepages: HugepageConfigurationDict
> > +    #:
> > +    name: str
> > +    #:
> > +    hostname: str
> > +    #:
> > +    user: str
> > +    #:
> > +    password: str
> > +    #:
> > +    arch: str
> > +    #:
> > +    os: str
> > +    #:
> > +    lcores: str
> > +    #:
> > +    use_first_core: bool
> > +    #:
> > +    ports: list[PortConfigDict]
> > +    #:
> > +    memory_channels: int
> > +    #:
> > +    traffic_generator: TrafficGeneratorConfigDict
> > +
> > +
> > +class BuildTargetConfigDict(TypedDict):
> > +    """Allowed keys and values."""
> > +
> > +    #:
> > +    arch: str
> > +    #:
> > +    os: str
> > +    #:
> > +    cpu: str
> > +    #:
> > +    compiler: str
> > +    #:
> > +    compiler_wrapper: str
> > +
> > +
> > +class TestSuiteConfigDict(TypedDict):
> > +    """Allowed keys and values."""
> > +
> > +    #:
> > +    suite: str
> > +    #:
> > +    cases: list[str]
> > +
> > +
> > +class ExecutionSUTConfigDict(TypedDict):
> > +    """Allowed keys and values."""
> > +
> > +    #:
> > +    node_name: str
> > +    #:
> > +    vdevs: list[str]
> > +
> > +
> > +class ExecutionConfigDict(TypedDict):
> > +    """Allowed keys and values."""
> > +
> > +    #:
> > +    build_targets: list[BuildTargetConfigDict]
> > +    #:
> > +    perf: bool
> > +    #:
> > +    func: bool
> > +    #:
> > +    skip_smoke_tests: bool
> > +    #:
> > +    test_suites: TestSuiteConfigDict
> > +    #:
> > +    system_under_test_node: ExecutionSUTConfigDict
> > +    #:
> > +    traffic_generator_node: str
> > +
> > +
> > +class ConfigurationDict(TypedDict):
> > +    """Allowed keys and values."""
> > +
> > +    #:
> > +    nodes: list[NodeConfigDict]
> > +    #:
> > +    executions: list[ExecutionConfigDict]
>
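
As a usage note, these TypedDicts are erased at runtime and only drive static
checking. A sketch with invented values (only the keys come from the
definitions above):

    from framework.config.types import PortConfigDict

    port: PortConfigDict = {
        "pci": "0000:00:08.0",
        "os_driver_for_dpdk": "vfio-pci",
        "os_driver": "i40e",
        "peer_node": "TG 1",
        "peer_pci": "0000:00:08.1",
    }
    # Mypy flags a missing key or a wrong value type here; at runtime the
    # TypedDict is just a plain dict with no validation of its own.
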


* Re: [PATCH v7 11/21] dts: remote session docstring update
  2023-11-21 15:36                 ` Yoan Picchi
@ 2023-11-22 11:13                   ` Juraj Linkeš
  2023-11-22 11:25                     ` Yoan Picchi
  0 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-22 11:13 UTC (permalink / raw)
  To: Yoan Picchi
  Cc: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek, dev

On Tue, Nov 21, 2023 at 4:36 PM Yoan Picchi <yoan.picchi@foss.arm.com> wrote:
>
> On 11/15/23 13:09, Juraj Linkeš wrote:
> > Format according to the Google format and PEP257, with slight
> > deviations.
> >
> > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > ---
> >   dts/framework/remote_session/__init__.py      |  39 +++++-
> >   .../remote_session/remote_session.py          | 128 +++++++++++++-----
> >   dts/framework/remote_session/ssh_session.py   |  16 +--
> >   3 files changed, 135 insertions(+), 48 deletions(-)
> >
> > diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
> > index 5e7ddb2b05..51a01d6b5e 100644
> > --- a/dts/framework/remote_session/__init__.py
> > +++ b/dts/framework/remote_session/__init__.py
> > @@ -2,12 +2,14 @@
> >   # Copyright(c) 2023 PANTHEON.tech s.r.o.
> >   # Copyright(c) 2023 University of New Hampshire
> >
> > -"""
> > -The package provides modules for managing remote connections to a remote host (node),
> > -differentiated by OS.
> > -The package provides a factory function, create_session, that returns the appropriate
> > -remote connection based on the passed configuration. The differences are in the
> > -underlying transport protocol (e.g. SSH) and remote OS (e.g. Linux).
> > +"""Remote interactive and non-interactive sessions.
> > +
> > +This package provides modules for managing remote connections to a remote host (node).
> > +
> > +The non-interactive sessions send commands and return their output and exit code.
> > +
> > +The interactive sessions open an interactive shell which is continuously open,
> > +allowing it to send and receive data within that particular shell.
> >   """
> >
> >   # pylama:ignore=W0611
> > @@ -26,10 +28,35 @@
> >   def create_remote_session(
> >       node_config: NodeConfiguration, name: str, logger: DTSLOG
> >   ) -> RemoteSession:
> > +    """Factory for non-interactive remote sessions.
> > +
> > +    The function returns an SSH session, but will be extended if support
> > +    for other protocols is added.
> > +
> > +    Args:
> > +        node_config: The test run configuration of the node to connect to.
> > +        name: The name of the session.
> > +        logger: The logger instance this session will use.
> > +
> > +    Returns:
> > +        The SSH remote session.
> > +    """
> >       return SSHSession(node_config, name, logger)
> >
> >
> >   def create_interactive_session(
> >       node_config: NodeConfiguration, logger: DTSLOG
> >   ) -> InteractiveRemoteSession:
> > +    """Factory for interactive remote sessions.
> > +
> > +    The function returns an interactive SSH session, but will be extended if support
> > +    for other protocols is added.
> > +
> > +    Args:
> > +        node_config: The test run configuration of the node to connect to.
> > +        logger: The logger instance this session will use.
> > +
> > +    Returns:
> > +        The interactive SSH remote session.
> > +    """
> >       return InteractiveRemoteSession(node_config, logger)
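
A usage sketch of the two factories, assuming a NodeConfiguration and a DTSLOG
instance are already at hand:

    session = create_remote_session(node_config, "sut_session", logger)
    result = session.send_command("uname -a", verify=True)  # raises on failure
    print(result)  # CommandResult formats stdout, stderr and return code
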
> > diff --git a/dts/framework/remote_session/remote_session.py b/dts/framework/remote_session/remote_session.py
> > index 0647d93de4..629c2d7b9c 100644
> > --- a/dts/framework/remote_session/remote_session.py
> > +++ b/dts/framework/remote_session/remote_session.py
> > @@ -3,6 +3,13 @@
> >   # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
> >   # Copyright(c) 2022-2023 University of New Hampshire
> >
> > +"""Base remote session.
> > +
> > +This module contains the abstract base class for remote sessions and defines
> > +the structure of the result of a command execution.
> > +"""
> > +
> > +
> >   import dataclasses
> >   from abc import ABC, abstractmethod
> >   from pathlib import PurePath
> > @@ -15,8 +22,14 @@
> >
> >   @dataclasses.dataclass(slots=True, frozen=True)
> >   class CommandResult:
> > -    """
> > -    The result of remote execution of a command.
> > +    """The result of remote execution of a command.
> > +
> > +    Attributes:
> > +        name: The name of the session that executed the command.
> > +        command: The executed command.
> > +        stdout: The standard output the command produced.
> > +        stderr: The standard error output the command produced.
> > +        return_code: The return code the command exited with.
> >       """
> >
> >       name: str
> > @@ -26,6 +39,7 @@ class CommandResult:
> >       return_code: int
> >
> >       def __str__(self) -> str:
> > +        """Format the command outputs."""
> >           return (
> >               f"stdout: '{self.stdout}'\n"
> >               f"stderr: '{self.stderr}'\n"
> > @@ -34,13 +48,24 @@ def __str__(self) -> str:
> >
> >
> >   class RemoteSession(ABC):
> > -    """
> > -    The base class for defining which methods must be implemented in order to connect
> > -    to a remote host (node) and maintain a remote session. The derived classes are
> > -    supposed to implement/use some underlying transport protocol (e.g. SSH) to
> > -    implement the methods. On top of that, it provides some basic services common to
> > -    all derived classes, such as keeping history and logging what's being executed
> > -    on the remote node.
> > +    """Non-interactive remote session.
> > +
> > +    The abstract methods must be implemented in order to connect to a remote host (node)
> > +    and maintain a remote session.
> > +    The subclasses must use (or implement) some underlying transport protocol (e.g. SSH)
> > +    to implement the methods. On top of that, it provides some basic services common to all
> > +    subclasses, such as keeping history and logging what's being executed on the remote node.
> > +
> > +    Attributes:
> > +        name: The name of the session.
> > +        hostname: The node's hostname. Could be an IP (possibly with port, separated by a colon)
> > +            or a domain name.
> > +        ip: The IP address of the node or a domain name, whichever was used in `hostname`.
> > +        port: The port of the node, if given in `hostname`.
> > +        username: The username used in the connection.
> > +        password: The password used in the connection. Most frequently empty,
> > +            as the use of passwords is discouraged.
> > +        history: The executed commands during this session.
> >       """
> >
> >       name: str
> > @@ -59,6 +84,16 @@ def __init__(
> >           session_name: str,
> >           logger: DTSLOG,
> >       ):
> > +        """Connect to the node during initialization.
> > +
> > +        Args:
> > +            node_config: The test run configuration of the node to connect to.
> > +            session_name: The name of the session.
> > +            logger: The logger instance this session will use.
> > +
> > +        Raises:
> > +            SSHConnectionError: If the connection to the node was not successful.
> > +        """
> >           self._node_config = node_config
> >
> >           self.name = session_name
> > @@ -79,8 +114,13 @@ def __init__(
> >
> >       @abstractmethod
> >       def _connect(self) -> None:
> > -        """
> > -        Create connection to assigned node.
> > +        """Create a connection to the node.
> > +
> > +        The implementation must assign the established session to self.session.
> > +
> > +        The implementation must except all exceptions and convert them to an SSHConnectionError.
> > +
> > +        The implementation may optionally implement retry attempts.
> >           """
> >
> >       def send_command(
> > @@ -90,11 +130,24 @@ def send_command(
> >           verify: bool = False,
> >           env: dict | None = None,
> >       ) -> CommandResult:
> > -        """
> > -        Send a command to the connected node using optional env vars
> > -        and return CommandResult.
> > -        If verify is True, check the return code of the executed command
> > -        and raise a RemoteCommandExecutionError if the command failed.
> > +        """Send `command` to the connected node.
> > +
> > +        The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT`
> > +        environment variable configure the timeout of command execution.
> > +
> > +        Args:
> > +            command: The command to execute.
> > +            timeout: Wait at most this long in seconds to execute `command`.
> > +            verify: If :data:`True`, will check the exit code of `command`.
> > +            env: A dictionary with environment variables to be used with `command` execution.
> > +
> > +        Raises:
> > +            SSHSessionDeadError: If the session isn't alive when sending `command`.
> > +            SSHTimeoutError: If `command` execution timed out.
> > +            RemoteCommandExecutionError: If verify is :data:`True` and `command` execution failed.
> > +
> > +        Returns:
> > +            The output of the command along with the return code.
> >           """
> >           self._logger.info(
> >               f"Sending: '{command}'" + (f" with env vars: '{env}'" if env else "")
> > @@ -115,29 +168,36 @@ def send_command(
> >       def _send_command(
> >           self, command: str, timeout: float, env: dict | None
> >       ) -> CommandResult:
> > -        """
> > -        Use the underlying protocol to execute the command using optional env vars
> > -        and return CommandResult.
> > +        """Send a command to the connected node.
> > +
> > +        The implementation must execute the command remotely with `env` environment variables
> > +        and return the result.
> > +
> > +        The implementation must except all exceptions and raise an SSHSessionDeadError if
> > +        the session is not alive and an SSHTimeoutError if the command execution times out.
>
> 3-way "and". Needs commas or splitting the sentence.
>

What about this?

The implementation must except all exceptions and raise:

    * SSHSessionDeadError if the session is not alive,
    * SSHTimeoutError if the command execution times out.
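
Sketched as code, that contract could look roughly like this in a subclass
(self.session, its run() method and the exceptions' constructor arguments are
stand-ins, not a specific library's API):

    def _send_command(self, command: str, timeout: float, env: dict | None) -> CommandResult:
        try:
            result = self.session.run(command, env=env, timeout=timeout)
        except TimeoutError as e:  # stand-in for the transport's timeout error
            raise SSHTimeoutError(command) from e
        except Exception as e:  # "except all exceptions"
            raise SSHSessionDeadError(self.hostname) from e
        return CommandResult(
            self.name, command, result.stdout, result.stderr, result.return_code
        )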


> >           """
> >
> >       def close(self, force: bool = False) -> None:
> > -        """
> > -        Close the remote session and free all used resources.
> > +        """Close the remote session and free all used resources.
> > +
> > +        Args:
> > +            force: Force the closure of the connection. This may not clean up all resources.
> >           """
> >           self._logger.logger_exit()
> >           self._close(force)
> >
> >       @abstractmethod
> >       def _close(self, force: bool = False) -> None:
> > -        """
> > -        Execute protocol specific steps needed to close the session properly.
> > +        """Protocol specific steps needed to close the session properly.
> > +
> > +        Args:
> > +            force: Force the closure of the connection. This may not clean up all resources.
> > +                This doesn't have to be implemented in the overloaded method.
> >           """
> >
> >       @abstractmethod
> >       def is_alive(self) -> bool:
> > -        """
> > -        Check whether the remote session is still responding.
> > -        """
> > +        """Check whether the remote session is still responding."""
> >
> >       @abstractmethod
> >       def copy_from(
> > @@ -147,12 +207,12 @@ def copy_from(
> >       ) -> None:
> >           """Copy a file from the remote Node to the local filesystem.
> >
> > -        Copy source_file from the remote Node associated with this remote
> > -        session to destination_file on the local filesystem.
> > +        Copy `source_file` from the remote Node associated with this remote session
> > +        to `destination_file` on the local filesystem.
> >
> >           Args:
> > -            source_file: the file on the remote Node.
> > -            destination_file: a file or directory path on the local filesystem.
> > +            source_file: The file on the remote Node.
> > +            destination_file: A file or directory path on the local filesystem.
> >           """
> >
> >       @abstractmethod
> > @@ -163,10 +223,10 @@ def copy_to(
> >       ) -> None:
> >           """Copy a file from local filesystem to the remote Node.
> >
> > -        Copy source_file from local filesystem to destination_file
> > -        on the remote Node associated with this remote session.
> > +        Copy `source_file` from local filesystem to `destination_file` on the remote Node
> > +        associated with this remote session.
> >
> >           Args:
> > -            source_file: the file on the local filesystem.
> > -            destination_file: a file or directory path on the remote Node.
> > +            source_file: The file on the local filesystem.
> > +            destination_file: A file or directory path on the remote Node.
> >           """
> > diff --git a/dts/framework/remote_session/ssh_session.py b/dts/framework/remote_session/ssh_session.py
> > index cee11d14d6..7186490a9a 100644
> > --- a/dts/framework/remote_session/ssh_session.py
> > +++ b/dts/framework/remote_session/ssh_session.py
> > @@ -1,6 +1,8 @@
> >   # SPDX-License-Identifier: BSD-3-Clause
> >   # Copyright(c) 2023 PANTHEON.tech s.r.o.
> >
> > +"""SSH session remote session."""
>
> Is the double "session" intended?
>

Not really, I'll remove the first occurrence.

> > +
> >   import socket
> >   import traceback
> >   from pathlib import PurePath
> > @@ -26,13 +28,8 @@
> >   class SSHSession(RemoteSession):
> >       """A persistent SSH connection to a remote Node.
> >
> > -    The connection is implemented with the Fabric Python library.
> > -
> > -    Args:
> > -        node_config: The configuration of the Node to connect to.
> > -        session_name: The name of the session.
> > -        logger: The logger used for logging.
> > -            This should be passed from the parent OSSession.
> > +    The connection is implemented with
> > +    `the Fabric Python library <https://docs.fabfile.org/en/latest/>`_.
> >
> >       Attributes:
> >           session: The underlying Fabric SSH connection.
> > @@ -80,6 +77,7 @@ def _connect(self) -> None:
> >               raise SSHConnectionError(self.hostname, errors)
> >
> >       def is_alive(self) -> bool:
> > +        """Overrides :meth:`~.remote_session.RemoteSession.is_alive`."""
> >           return self.session.is_connected
> >
> >       def _send_command(
> > @@ -89,7 +87,7 @@ def _send_command(
> >
> >           Args:
> >               command: The command to execute.
> > -            timeout: Wait at most this many seconds for the execution to complete.
> > +            timeout: Wait at most this long in seconds to execute the command.
>
> Is the timeout actually to start running the command and not to wait for
> it to be completed?
>

It is to wait for it to be completed. The wording is a bit confusing;
what about:

Wait at most this long in seconds for the command execution to complete.

I'll change this in all places where timeout is documented.

> >               env: Extra environment variables that will be used in command execution.
> >
> >           Raises:
> > @@ -118,6 +116,7 @@ def copy_from(
> >           source_file: str | PurePath,
> >           destination_file: str | PurePath,
> >       ) -> None:
> > +        """Overrides :meth:`~.remote_session.RemoteSession.copy_from`."""
> >           self.session.get(str(destination_file), str(source_file))
> >
> >       def copy_to(
> > @@ -125,6 +124,7 @@ def copy_to(
> >           source_file: str | PurePath,
> >           destination_file: str | PurePath,
> >       ) -> None:
> > +        """Overrides :meth:`~.remote_session.RemoteSession.copy_to`."""
> >           self.session.put(str(source_file), str(destination_file))
> >
> >       def _close(self, force: bool = False) -> None:
>


* Re: [PATCH v7 14/21] dts: cpu docstring update
  2023-11-21 17:45                 ` Yoan Picchi
@ 2023-11-22 11:18                   ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-22 11:18 UTC (permalink / raw)
  To: Yoan Picchi
  Cc: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek, dev

On Tue, Nov 21, 2023 at 6:45 PM Yoan Picchi <yoan.picchi@foss.arm.com> wrote:
>
> On 11/15/23 13:09, Juraj Linkeš wrote:
> > Format according to the Google format and PEP257, with slight
> > deviations.
> >
> > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > ---
> >   dts/framework/testbed_model/cpu.py | 196 +++++++++++++++++++++--------
> >   1 file changed, 144 insertions(+), 52 deletions(-)
> >
> > diff --git a/dts/framework/testbed_model/cpu.py b/dts/framework/testbed_model/cpu.py
> > index 8fe785dfe4..4edeb4a7c2 100644
> > --- a/dts/framework/testbed_model/cpu.py
> > +++ b/dts/framework/testbed_model/cpu.py
> > @@ -1,6 +1,22 @@
> >   # SPDX-License-Identifier: BSD-3-Clause
> >   # Copyright(c) 2023 PANTHEON.tech s.r.o.
> >
> > +"""CPU core representation and filtering.
> > +
> > +This module provides a unified representation of logical CPU cores along
> > +with filtering capabilities.
> > +
> > +When symmetric multiprocessing (SMP or multithreading) is enabled on a server,
> > +the physical CPU cores are split into logical CPU cores with different IDs.
> > +
> > +:class:`LogicalCoreCountFilter` filters by the number of logical cores. It's possible to specify
> > +the socket from which to filter the number of logical cores. It's also possible to not use all
> > +logical CPU cores from each physical core (e.g. only the first logical core of each physical core).
> > +
> > +:class:`LogicalCoreListFilter` filters by logical core IDs. This mostly checks that
> > +the logical cores are actually present on the server.
> > +"""
> > +
> >   import dataclasses
> >   from abc import ABC, abstractmethod
> >   from collections.abc import Iterable, ValuesView
> > @@ -11,9 +27,17 @@
> >
> >   @dataclass(slots=True, frozen=True)
> >   class LogicalCore(object):
> > -    """
> > -    Representation of a CPU core. A physical core is represented in OS
> > -    by multiple logical cores (lcores) if CPU multithreading is enabled.
> > +    """Representation of a logical CPU core.
> > +
> > +    A physical core is represented in the OS by multiple logical cores (lcores)
> > +    if CPU multithreading is enabled. When multithreading is disabled, their IDs are the same.
> > +
> > +    Attributes:
> > +        lcore: The logical core ID of a CPU core. It's the same as `core` with
> > +            disabled multithreading.
> > +        core: The physical core ID of a CPU core.
> > +        socket: The physical socket ID where the CPU resides.
> > +        node: The NUMA node ID where the CPU resides.
> >       """
> >
> >       lcore: int
> > @@ -22,27 +46,36 @@ class LogicalCore(object):
> >       node: int
> >
> >       def __int__(self) -> int:
> > +        """The CPU is best represented by the logical core, as that's what we configure in EAL."""
> >           return self.lcore
> >
> >
> >   class LogicalCoreList(object):
> > -    """
> > -    Convert these options into a list of logical core ids.
> > -    lcore_list=[LogicalCore1, LogicalCore2] - a list of LogicalCores
> > -    lcore_list=[0,1,2,3] - a list of int indices
> > -    lcore_list=['0','1','2-3'] - a list of str indices; ranges are supported
> > -    lcore_list='0,1,2-3' - a comma delimited str of indices; ranges are supported
> > -
> > -    The class creates a unified format used across the framework and allows
> > -    the user to use either a str representation (using str(instance) or directly
> > -    in f-strings) or a list representation (by accessing instance.lcore_list).
> > -    Empty lcore_list is allowed.
> > +    r"""A unified way to store :class:`LogicalCore`\s.
> > +
> > +    Create a unified format used across the framework and allow the user to use
> > +    either a :class:`str` representation (using ``str(instance)`` or directly in f-strings)
> > +    or a :class:`list` representation (by accessing the `lcore_list` property,
> > +    which stores logical core IDs).
> >       """
> >
> >       _lcore_list: list[int]
> >       _lcore_str: str
> >
> >       def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str):
> > +        """Process `lcore_list`, then sort.
> > +
> > +        There are four supported logical core list formats::
> > +
> > +            lcore_list=[LogicalCore1, LogicalCore2]  # a list of LogicalCores
> > +            lcore_list=[0,1,2,3]        # a list of int indices
> > +            lcore_list=['0','1','2-3']  # a list of str indices; ranges are supported
> > +            lcore_list='0,1,2-3'        # a comma delimited str of indices; ranges are supported
> > +
> > +        Args:
> > +            lcore_list: Various ways to represent multiple logical cores.
> > +                Empty `lcore_list` is allowed.
> > +        """
> >           self._lcore_list = []
> >           if isinstance(lcore_list, str):
> >               lcore_list = lcore_list.split(",")
> > @@ -60,6 +93,7 @@ def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str):
> >
> >       @property
> >       def lcore_list(self) -> list[int]:
> > +        """The logical core IDs."""
> >           return self._lcore_list
> >
> >       def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]:
> > @@ -89,28 +123,30 @@ def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]:
> >           return formatted_core_list
> >
> >       def __str__(self) -> str:
> > +        """The consecutive ranges of logical core IDs."""
> >           return self._lcore_str
> >
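
For reference, a usage sketch of the four input formats; the commented results
are inferred from the docstrings rather than verified against the
implementation:

    from framework.testbed_model.cpu import LogicalCoreList

    LogicalCoreList("0,1,2-3").lcore_list    # [0, 1, 2, 3]
    str(LogicalCoreList([0, 1, 2, 3]))       # consecutive IDs joined as ranges
    LogicalCoreList(["0", "1", "2-3"])       # str indices, equivalent
    LogicalCoreList([])                      # an empty list is allowed
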
> >
> >   @dataclasses.dataclass(slots=True, frozen=True)
> >   class LogicalCoreCount(object):
> > -    """
> > -    Define the number of logical cores to use.
> > -    If sockets is not None, socket_count is ignored.
> > -    """
> > +    """Define the number of logical cores per physical cores per sockets."""
> >
> > +    #: Use this many logical cores per each physical core.
> >       lcores_per_core: int = 1
> > +    #: Use this many physical cores per each socket.
> >       cores_per_socket: int = 2
> > +    #: Use this many sockets.
> >       socket_count: int = 1
> > +    #: Use exactly these sockets. This takes precedence over `socket_count`,
> > +    #: so when `sockets` is not :data:`None`, `socket_count` is ignored.
> >       sockets: list[int] | None = None
> >
> >
> >   class LogicalCoreFilter(ABC):
> > -    """
> > -    Filter according to the input filter specifier. Each filter needs to be
> > -    implemented in a derived class.
> > -    This class only implements operations common to all filters, such as sorting
> > -    the list to be filtered beforehand.
> > +    """Common filtering class.
> > +
> > +    Each filter needs to be implemented in a subclass. This base class sorts the list of cores
> > +    and defines the filtering method, which must be implemented by subclasses.
> >       """
> >
> >       _filter_specifier: LogicalCoreCount | LogicalCoreList
> > @@ -122,6 +158,17 @@ def __init__(
> >           filter_specifier: LogicalCoreCount | LogicalCoreList,
> >           ascending: bool = True,
> >       ):
> > +        """Filter according to the input filter specifier.
> > +
> > +        The input `lcore_list` is copied and sorted by physical core before filtering.
> > +        The list is copied so that the original is left intact.
> > +
> > +        Args:
> > +            lcore_list: The logical CPU cores to filter.
> > +            filter_specifier: Filter cores from `lcore_list` according to this filter.
> > +            ascending: Sort cores in ascending order (lowest to highest IDs). If :data:`False`,
> > +                sort in descending order.
> > +        """
> >           self._filter_specifier = filter_specifier
> >
> >           # sorting by core is needed in case hyperthreading is enabled
> > @@ -132,31 +179,45 @@ def __init__(
> >
> >       @abstractmethod
> >       def filter(self) -> list[LogicalCore]:
> > -        """
> > -        Use self._filter_specifier to filter self._lcores_to_filter
> > -        and return the list of filtered LogicalCores.
> > -        self._lcores_to_filter is a sorted copy of the original list,
> > -        so it may be modified.
> > +        r"""Filter the cores.
> > +
> > +        Use `self._filter_specifier` to filter `self._lcores_to_filter` and return
> > +        the filtered :class:`LogicalCore`\s.
> > +        `self._lcores_to_filter` is a sorted copy of the original list, so it may be modified.
> > +
> > +        Returns:
> > +            The filtered cores.
> >           """
> >
> >
> >   class LogicalCoreCountFilter(LogicalCoreFilter):
> > -    """
> > +    """Filter cores by specified counts.
> > +
> >       Filter the input list of LogicalCores according to specified rules:
> > -    Use cores from the specified number of sockets or from the specified socket ids.
> > -    If sockets is specified, it takes precedence over socket_count.
> > -    From each of those sockets, use only cores_per_socket of cores.
> > -    And for each core, use lcores_per_core of logical cores. Hypertheading
> > -    must be enabled for this to take effect.
> > -    If ascending is True, use cores with the lowest numerical id first
> > -    and continue in ascending order. If False, start with the highest
> > -    id and continue in descending order. This ordering affects which
> > -    sockets to consider first as well.
> > +
> > +        * The input `filter_specifier` is :class:`LogicalCoreCount`,
> > +        * Use cores from the specified number of sockets or from the specified socket ids,
> > +        * If `sockets` is specified, it takes precedence over `socket_count`,
> > +        * From each of those sockets, use only `cores_per_socket` of cores,
> > +        * And for each core, use `lcores_per_core` of logical cores. Hyperthreading
> > +          must be enabled for this to take effect.
> >       """
> >
> >       _filter_specifier: LogicalCoreCount
> >
> >       def filter(self) -> list[LogicalCore]:
> > +        """Filter the cores according to :class:`LogicalCoreCount`.
> > +
> > +        Start by filtering the allowed sockets. The cores matching the allowed socket are returned.
>
> allowed socket*s*
>

Ack.

> > +        The cores of each socket are stored in separate lists.
> > +
> > +        Then filter the allowed physical cores from those lists of cores per socket. When filtering
> > +        physical cores, store the desired number of logical cores per physical core which then
> > +        together constitute the final filtered list.
> > +
> > +        Returns:
> > +            The filtered cores.
> > +        """
> >           sockets_to_filter = self._filter_sockets(self._lcores_to_filter)
> >           filtered_lcores = []
> >           for socket_to_filter in sockets_to_filter:
> > @@ -166,24 +227,37 @@ def filter(self) -> list[LogicalCore]:
> >       def _filter_sockets(
> >           self, lcores_to_filter: Iterable[LogicalCore]
> >       ) -> ValuesView[list[LogicalCore]]:
> > -        """
> > -        Remove all lcores that don't match the specified socket(s).
> > -        If self._filter_specifier.sockets is not None, keep lcores from those sockets,
> > -        otherwise keep lcores from the first
> > -        self._filter_specifier.socket_count sockets.
> > +        """Filter a list of cores per each allowed socket.
> > +
> > +        The sockets may be specified in two ways, either a number or a specific list of sockets.
> > +        In case of a specific list, we just need to return the cores from those sockets.
> > +        If filtering by a number of sockets, we need to go through all cores and note which
> > +        sockets appear and only filter from the first n sockets that appear.
> > +
> > +        Args:
> > +            lcores_to_filter: The cores to filter. These must be sorted by the physical core.
> > +
> > +        Returns:
> > +            A list of lists of logical CPU cores. Each list contains cores from one socket.
> >           """
> >           allowed_sockets: set[int] = set()
> >           socket_count = self._filter_specifier.socket_count
> >           if self._filter_specifier.sockets:
> > +            # when sockets in filter is specified, the sockets are already set
> >               socket_count = len(self._filter_specifier.sockets)
> >               allowed_sockets = set(self._filter_specifier.sockets)
> >
> > +        # filter socket_count sockets from all sockets by checking the socket of each CPU
> >           filtered_lcores: dict[int, list[LogicalCore]] = {}
> >           for lcore in lcores_to_filter:
> >               if not self._filter_specifier.sockets:
> > +                # this is when sockets is not set, so we do the actual filtering
> > +                # when it is set, allowed_sockets is already defined and can't be changed
> >                   if len(allowed_sockets) < socket_count:
> > +                    # allowed_sockets is a set, so adding an existing socket won't re-add it
> >                       allowed_sockets.add(lcore.socket)
> >               if lcore.socket in allowed_sockets:
> > +                # separate sockets per socket; this makes it easier in further processing
>
> socket*s* per socket ?
>

Good catch, this should be "separate lcores into sockets".

> >                   if lcore.socket in filtered_lcores:
> >                       filtered_lcores[lcore.socket].append(lcore)
> >                   else:
> > @@ -200,12 +274,13 @@ def _filter_sockets(
> >       def _filter_cores_from_socket(
> >           self, lcores_to_filter: Iterable[LogicalCore]
> >       ) -> list[LogicalCore]:
> > -        """
> > -        Keep only the first self._filter_specifier.cores_per_socket cores.
> > -        In multithreaded environments, keep only
> > -        the first self._filter_specifier.lcores_per_core lcores of those cores.
> > -        """
> > +        """Filter a list of cores from the given socket.
> > +
> > +        Go through the cores and note how many logical cores per physical core have been filtered.
> >
> > +        Returns:
> > +            The filtered logical CPU cores.
> > +        """
> >           # no need to use ordered dict, from Python3.7 the dict
> >           # insertion order is preserved (LIFO).
> >           lcore_count_per_core_map: dict[int, int] = {}
> > @@ -248,15 +323,21 @@ def _filter_cores_from_socket(
> >
> >
> >   class LogicalCoreListFilter(LogicalCoreFilter):
> > -    """
> > -    Filter the input list of Logical Cores according to the input list of
> > -    lcore indices.
> > -    An empty LogicalCoreList won't filter anything.
> > +    """Filter the logical CPU cores by logical CPU core IDs.
> > +
> > +    This is a simple filter that looks at logical CPU IDs and only filters those that match.
> > +
> > +    The input filter is :class:`LogicalCoreList`. An empty LogicalCoreList won't filter anything.
> >       """
> >
> >       _filter_specifier: LogicalCoreList
> >
> >       def filter(self) -> list[LogicalCore]:
> > +        """Filter based on logical CPU core ID.
> > +
> > +        Returns:
> > +            The filtered logical CPU cores.
> > +        """
> >           if not len(self._filter_specifier.lcore_list):
> >               return self._lcores_to_filter
> >
> > @@ -279,6 +360,17 @@ def lcore_filter(
> >       filter_specifier: LogicalCoreCount | LogicalCoreList,
> >       ascending: bool,
> >   ) -> LogicalCoreFilter:
> > +    """Factory for using the right filter with `filter_specifier`.
> > +
> > +    Args:
> > +        core_list: The logical CPU cores to filter.
> > +        filter_specifier: The filter to use.
> > +        ascending: Sort cores in ascending order (lowest to highest IDs). If :data:`False`,
> > +            sort in descending order.
> > +
> > +    Returns:
> > +        The filter matching `filter_specifier`.
> > +    """
> >       if isinstance(filter_specifier, LogicalCoreList):
> >           return LogicalCoreListFilter(core_list, filter_specifier, ascending)
> >       elif isinstance(filter_specifier, LogicalCoreCount):
>
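
Putting the filter pieces together, a hedged usage sketch; node_lcores stands
in for a list of LogicalCore objects gathered from the node:

    from framework.testbed_model.cpu import (
        LogicalCoreCount,
        LogicalCoreList,
        lcore_filter,
    )

    # Two physical cores on one socket, one logical core per physical core:
    spec = LogicalCoreCount(lcores_per_core=1, cores_per_socket=2, socket_count=1)
    filtered = lcore_filter(node_lcores, spec, ascending=True).filter()

    # Or pin to explicit logical core IDs instead:
    filtered = lcore_filter(
        node_lcores, LogicalCoreList("0,2"), ascending=True
    ).filter()
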


* Re: [PATCH v7 11/21] dts: remote session docstring update
  2023-11-22 11:13                   ` Juraj Linkeš
@ 2023-11-22 11:25                     ` Yoan Picchi
  0 siblings, 0 replies; 255+ messages in thread
From: Yoan Picchi @ 2023-11-22 11:25 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek, dev

On 11/22/23 11:13, Juraj Linkeš wrote:
> On Tue, Nov 21, 2023 at 4:36 PM Yoan Picchi <yoan.picchi@foss.arm.com> wrote:
>>
>> On 11/15/23 13:09, Juraj Linkeš wrote:
>>> Format according to the Google format and PEP257, with slight
>>> deviations.
>>>
>>> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
>>> ---
>>>    dts/framework/remote_session/__init__.py      |  39 +++++-
>>>    .../remote_session/remote_session.py          | 128 +++++++++++++-----
>>>    dts/framework/remote_session/ssh_session.py   |  16 +--
>>>    3 files changed, 135 insertions(+), 48 deletions(-)
>>>
>>> diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
>>> index 5e7ddb2b05..51a01d6b5e 100644
>>> --- a/dts/framework/remote_session/__init__.py
>>> +++ b/dts/framework/remote_session/__init__.py
>>> @@ -2,12 +2,14 @@
>>>    # Copyright(c) 2023 PANTHEON.tech s.r.o.
>>>    # Copyright(c) 2023 University of New Hampshire
>>>
>>> -"""
>>> -The package provides modules for managing remote connections to a remote host (node),
>>> -differentiated by OS.
>>> -The package provides a factory function, create_session, that returns the appropriate
>>> -remote connection based on the passed configuration. The differences are in the
>>> -underlying transport protocol (e.g. SSH) and remote OS (e.g. Linux).
>>> +"""Remote interactive and non-interactive sessions.
>>> +
>>> +This package provides modules for managing remote connections to a remote host (node).
>>> +
>>> +The non-interactive sessions send commands and return their output and exit code.
>>> +
>>> +The interactive sessions open an interactive shell which is continuously open,
>>> +allowing it to send and receive data within that particular shell.
>>>    """
>>>
>>>    # pylama:ignore=W0611
>>> @@ -26,10 +28,35 @@
>>>    def create_remote_session(
>>>        node_config: NodeConfiguration, name: str, logger: DTSLOG
>>>    ) -> RemoteSession:
>>> +    """Factory for non-interactive remote sessions.
>>> +
>>> +    The function returns an SSH session, but will be extended if support
>>> +    for other protocols is added.
>>> +
>>> +    Args:
>>> +        node_config: The test run configuration of the node to connect to.
>>> +        name: The name of the session.
>>> +        logger: The logger instance this session will use.
>>> +
>>> +    Returns:
>>> +        The SSH remote session.
>>> +    """
>>>        return SSHSession(node_config, name, logger)
>>>
>>>
>>>    def create_interactive_session(
>>>        node_config: NodeConfiguration, logger: DTSLOG
>>>    ) -> InteractiveRemoteSession:
>>> +    """Factory for interactive remote sessions.
>>> +
>>> +    The function returns an interactive SSH session, but will be extended if support
>>> +    for other protocols is added.
>>> +
>>> +    Args:
>>> +        node_config: The test run configuration of the node to connect to.
>>> +        logger: The logger instance this session will use.
>>> +
>>> +    Returns:
>>> +        The interactive SSH remote session.
>>> +    """
>>>        return InteractiveRemoteSession(node_config, logger)
>>> diff --git a/dts/framework/remote_session/remote_session.py b/dts/framework/remote_session/remote_session.py
>>> index 0647d93de4..629c2d7b9c 100644
>>> --- a/dts/framework/remote_session/remote_session.py
>>> +++ b/dts/framework/remote_session/remote_session.py
>>> @@ -3,6 +3,13 @@
>>>    # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
>>>    # Copyright(c) 2022-2023 University of New Hampshire
>>>
>>> +"""Base remote session.
>>> +
>>> +This module contains the abstract base class for remote sessions and defines
>>> +the structure of the result of a command execution.
>>> +"""
>>> +
>>> +
>>>    import dataclasses
>>>    from abc import ABC, abstractmethod
>>>    from pathlib import PurePath
>>> @@ -15,8 +22,14 @@
>>>
>>>    @dataclasses.dataclass(slots=True, frozen=True)
>>>    class CommandResult:
>>> -    """
>>> -    The result of remote execution of a command.
>>> +    """The result of remote execution of a command.
>>> +
>>> +    Attributes:
>>> +        name: The name of the session that executed the command.
>>> +        command: The executed command.
>>> +        stdout: The standard output the command produced.
>>> +        stderr: The standard error output the command produced.
>>> +        return_code: The return code the command exited with.
>>>        """
>>>
>>>        name: str
>>> @@ -26,6 +39,7 @@ class CommandResult:
>>>        return_code: int
>>>
>>>        def __str__(self) -> str:
>>> +        """Format the command outputs."""
>>>            return (
>>>                f"stdout: '{self.stdout}'\n"
>>>                f"stderr: '{self.stderr}'\n"
>>> @@ -34,13 +48,24 @@ def __str__(self) -> str:
>>>
>>>
>>>    class RemoteSession(ABC):
>>> -    """
>>> -    The base class for defining which methods must be implemented in order to connect
>>> -    to a remote host (node) and maintain a remote session. The derived classes are
>>> -    supposed to implement/use some underlying transport protocol (e.g. SSH) to
>>> -    implement the methods. On top of that, it provides some basic services common to
>>> -    all derived classes, such as keeping history and logging what's being executed
>>> -    on the remote node.
>>> +    """Non-interactive remote session.
>>> +
>>> +    The abstract methods must be implemented in order to connect to a remote host (node)
>>> +    and maintain a remote session.
>>> +    The subclasses must use (or implement) some underlying transport protocol (e.g. SSH)
>>> +    to implement the methods. On top of that, it provides some basic services common to all
>>> +    subclasses, such as keeping history and logging what's being executed on the remote node.
>>> +
>>> +    Attributes:
>>> +        name: The name of the session.
>>> +        hostname: The node's hostname. Could be an IP (possibly with port, separated by a colon)
>>> +            or a domain name.
>>> +        ip: The IP address of the node or a domain name, whichever was used in `hostname`.
>>> +        port: The port of the node, if given in `hostname`.
>>> +        username: The username used in the connection.
>>> +        password: The password used in the connection. Most frequently empty,
>>> +            as the use of passwords is discouraged.
>>> +        history: The executed commands during this session.
>>>        """
>>>
>>>        name: str
>>> @@ -59,6 +84,16 @@ def __init__(
>>>            session_name: str,
>>>            logger: DTSLOG,
>>>        ):
>>> +        """Connect to the node during initialization.
>>> +
>>> +        Args:
>>> +            node_config: The test run configuration of the node to connect to.
>>> +            session_name: The name of the session.
>>> +            logger: The logger instance this session will use.
>>> +
>>> +        Raises:
>>> +            SSHConnectionError: If the connection to the node was not successful.
>>> +        """
>>>            self._node_config = node_config
>>>
>>>            self.name = session_name
>>> @@ -79,8 +114,13 @@ def __init__(
>>>
>>>        @abstractmethod
>>>        def _connect(self) -> None:
>>> -        """
>>> -        Create connection to assigned node.
>>> +        """Create a connection to the node.
>>> +
>>> +        The implementation must assign the established session to self.session.
>>> +
>>> +        The implementation must except all exceptions and convert them to an SSHConnectionError.
>>> +
>>> +        The implementation may optionally implement retry attempts.
>>>            """
>>>
>>>        def send_command(
>>> @@ -90,11 +130,24 @@ def send_command(
>>>            verify: bool = False,
>>>            env: dict | None = None,
>>>        ) -> CommandResult:
>>> -        """
>>> -        Send a command to the connected node using optional env vars
>>> -        and return CommandResult.
>>> -        If verify is True, check the return code of the executed command
>>> -        and raise a RemoteCommandExecutionError if the command failed.
>>> +        """Send `command` to the connected node.
>>> +
>>> +        The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT`
>>> +        environment variable configure the timeout of command execution.
>>> +
>>> +        Args:
>>> +            command: The command to execute.
>>> +            timeout: Wait at most this long in seconds to execute `command`.
>>> +            verify: If :data:`True`, will check the exit code of `command`.
>>> +            env: A dictionary with environment variables to be used with `command` execution.
>>> +
>>> +        Raises:
>>> +            SSHSessionDeadError: If the session isn't alive when sending `command`.
>>> +            SSHTimeoutError: If `command` execution timed out.
>>> +            RemoteCommandExecutionError: If verify is :data:`True` and `command` execution failed.
>>> +
>>> +        Returns:
>>> +            The output of the command along with the return code.
>>>            """
>>>            self._logger.info(
>>>                f"Sending: '{command}'" + (f" with env vars: '{env}'" if env else "")
>>> @@ -115,29 +168,36 @@ def send_command(
>>>        def _send_command(
>>>            self, command: str, timeout: float, env: dict | None
>>>        ) -> CommandResult:
>>> -        """
>>> -        Use the underlying protocol to execute the command using optional env vars
>>> -        and return CommandResult.
>>> +        """Send a command to the connected node.
>>> +
>>> +        The implementation must execute the command remotely with `env` environment variables
>>> +        and return the result.
>>> +
>>> +        The implementation must except all exceptions and raise an SSHSessionDeadError if
>>> +        the session is not alive and an SSHTimeoutError if the command execution times out.
>>
>> 3 way "and". Needs comas or splitting the sentence.
>>
> 
> What about this?
> 
> The implementation must except all exceptions and raise:
> 
>      * SSHSessionDeadError if the session is not alive,
>      * SSHTimeoutError if the command execution times out.
> 

Sounds good.
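
Spelled out as code, the reworded contract might look like this in a
concrete subclass (a sketch under assumptions: the exception classes are
stubbed here, and self._transport is a hypothetical handle standing in for
the real protocol object):

    class SSHSessionDeadError(Exception):
        """Stub of the DTS exception for a dead session."""

    class SSHTimeoutError(Exception):
        """Stub of the DTS exception for a timed-out command."""

    def _send_command(self, command: str, timeout: float, env: dict | None):
        try:
            # Hypothetical transport call; the real one depends on the protocol.
            return self._transport.run(command, timeout=timeout, env=env)
        except TimeoutError as e:
            raise SSHTimeoutError(command) from e
        except Exception as e:
            # Anything else means the session can no longer be used.
            raise SSHSessionDeadError(self.hostname) from e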

> 
>>>            """
>>>
>>>        def close(self, force: bool = False) -> None:
>>> -        """
>>> -        Close the remote session and free all used resources.
>>> +        """Close the remote session and free all used resources.
>>> +
>>> +        Args:
>>> +            force: Force the closure of the connection. This may not clean up all resources.
>>>            """
>>>            self._logger.logger_exit()
>>>            self._close(force)
>>>
>>>        @abstractmethod
>>>        def _close(self, force: bool = False) -> None:
>>> -        """
>>> -        Execute protocol specific steps needed to close the session properly.
>>> +        """Protocol specific steps needed to close the session properly.
>>> +
>>> +        Args:
>>> +            force: Force the closure of the connection. This may not clean up all resources.
>>> +                This doesn't have to be implemented in the overloaded method.
>>>            """
>>>
>>>        @abstractmethod
>>>        def is_alive(self) -> bool:
>>> -        """
>>> -        Check whether the remote session is still responding.
>>> -        """
>>> +        """Check whether the remote session is still responding."""
>>>
>>>        @abstractmethod
>>>        def copy_from(
>>> @@ -147,12 +207,12 @@ def copy_from(
>>>        ) -> None:
>>>            """Copy a file from the remote Node to the local filesystem.
>>>
>>> -        Copy source_file from the remote Node associated with this remote
>>> -        session to destination_file on the local filesystem.
>>> +        Copy `source_file` from the remote Node associated with this remote session
>>> +        to `destination_file` on the local filesystem.
>>>
>>>            Args:
>>> -            source_file: the file on the remote Node.
>>> -            destination_file: a file or directory path on the local filesystem.
>>> +            source_file: The file on the remote Node.
>>> +            destination_file: A file or directory path on the local filesystem.
>>>            """
>>>
>>>        @abstractmethod
>>> @@ -163,10 +223,10 @@ def copy_to(
>>>        ) -> None:
>>>            """Copy a file from local filesystem to the remote Node.
>>>
>>> -        Copy source_file from local filesystem to destination_file
>>> -        on the remote Node associated with this remote session.
>>> +        Copy `source_file` from local filesystem to `destination_file` on the remote Node
>>> +        associated with this remote session.
>>>
>>>            Args:
>>> -            source_file: the file on the local filesystem.
>>> -            destination_file: a file or directory path on the remote Node.
>>> +            source_file: The file on the local filesystem.
>>> +            destination_file: A file or directory path on the remote Node.
>>>            """
>>> diff --git a/dts/framework/remote_session/ssh_session.py b/dts/framework/remote_session/ssh_session.py
>>> index cee11d14d6..7186490a9a 100644
>>> --- a/dts/framework/remote_session/ssh_session.py
>>> +++ b/dts/framework/remote_session/ssh_session.py
>>> @@ -1,6 +1,8 @@
>>>    # SPDX-License-Identifier: BSD-3-Clause
>>>    # Copyright(c) 2023 PANTHEON.tech s.r.o.
>>>
>>> +"""SSH session remote session."""
>>
>> Is the double "session" intended?
>>
> 
> Not really, I'll remove the first occurence.
> 
>>> +
>>>    import socket
>>>    import traceback
>>>    from pathlib import PurePath
>>> @@ -26,13 +28,8 @@
>>>    class SSHSession(RemoteSession):
>>>        """A persistent SSH connection to a remote Node.
>>>
>>> -    The connection is implemented with the Fabric Python library.
>>> -
>>> -    Args:
>>> -        node_config: The configuration of the Node to connect to.
>>> -        session_name: The name of the session.
>>> -        logger: The logger used for logging.
>>> -            This should be passed from the parent OSSession.
>>> +    The connection is implemented with
>>> +    `the Fabric Python library <https://docs.fabfile.org/en/latest/>`_.
>>>
>>>        Attributes:
>>>            session: The underlying Fabric SSH connection.
>>> @@ -80,6 +77,7 @@ def _connect(self) -> None:
>>>                raise SSHConnectionError(self.hostname, errors)
>>>
>>>        def is_alive(self) -> bool:
>>> +        """Overrides :meth:`~.remote_session.RemoteSession.is_alive`."""
>>>            return self.session.is_connected
>>>
>>>        def _send_command(
>>> @@ -89,7 +87,7 @@ def _send_command(
>>>
>>>            Args:
>>>                command: The command to execute.
>>> -            timeout: Wait at most this many seconds for the execution to complete.
>>> +            timeout: Wait at most this long in seconds to execute the command.
>>
>> Is the timeout actually to start running the command and not to wait for
>> it to be completed?
>>
> 
> It is to wait for it to be completed. The wording is a bit confusing,
> what about:
> 
> Wait at most this long in seconds for the command execution to complete.
> 
> I'll change this in all places where timeout is documented.

Sounds good. I think I've seen this confusing wording 3 times so far, but I
can't quite remember which files it was in.
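
For comparison, wait-for-completion timeout semantics are also what the
Python standard library uses, which may help pin down the agreed wording:

    import subprocess

    try:
        # Wait at most 5 seconds for the whole command to complete,
        # not merely for it to start executing.
        subprocess.run(["sleep", "10"], timeout=5)
    except subprocess.TimeoutExpired:
        print("command did not complete within the timeout")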

> 
>>>                env: Extra environment variables that will be used in command execution.
>>>
>>>            Raises:
>>> @@ -118,6 +116,7 @@ def copy_from(
>>>            source_file: str | PurePath,
>>>            destination_file: str | PurePath,
>>>        ) -> None:
>>> +        """Overrides :meth:`~.remote_session.RemoteSession.copy_from`."""
>>>            self.session.get(str(destination_file), str(source_file))
>>>
>>>        def copy_to(
>>> @@ -125,6 +124,7 @@ def copy_to(
>>>            source_file: str | PurePath,
>>>            destination_file: str | PurePath,
>>>        ) -> None:
>>> +        """Overrides :meth:`~.remote_session.RemoteSession.copy_to`."""
>>>            self.session.put(str(source_file), str(destination_file))
>>>
>>>        def _close(self, force: bool = False) -> None:
>>



* Re: [PATCH v7 19/21] dts: base traffic generators docstring update
  2023-11-21 16:20                 ` Yoan Picchi
@ 2023-11-22 11:38                   ` Juraj Linkeš
  2023-11-22 11:56                     ` Yoan Picchi
  0 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-22 11:38 UTC (permalink / raw)
  To: Yoan Picchi
  Cc: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek, dev

On Tue, Nov 21, 2023 at 5:20 PM Yoan Picchi <yoan.picchi@foss.arm.com> wrote:
>
> On 11/15/23 13:09, Juraj Linkeš wrote:
> > Format according to the Google format and PEP257, with slight
> > deviations.
> >
> > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > ---
> >   .../traffic_generator/__init__.py             | 22 ++++++++-
> >   .../capturing_traffic_generator.py            | 46 +++++++++++--------
> >   .../traffic_generator/traffic_generator.py    | 33 +++++++------
> >   3 files changed, 68 insertions(+), 33 deletions(-)
> >
> > diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
> > index 11bfa1ee0f..51cca77da4 100644
> > --- a/dts/framework/testbed_model/traffic_generator/__init__.py
> > +++ b/dts/framework/testbed_model/traffic_generator/__init__.py
> > @@ -1,6 +1,19 @@
> >   # SPDX-License-Identifier: BSD-3-Clause
> >   # Copyright(c) 2023 PANTHEON.tech s.r.o.
> >
> > +"""DTS traffic generators.
> > +
> > +A traffic generator is capable of generating traffic and then monitor returning traffic.
> > +A traffic generator may just count the number of received packets
> > +and it may additionally capture individual packets.
>
> The sentence feels odd. Isn't it supposed to be "or" here? And there's
> no need for such an early line break.
>

There are two mays, so there probably should be an or. But I'd like to
reword it to this:

All traffic generators count the number of received packets, and they
may additionally capture individual packets.

What do you think?

> > +
> > +A traffic generator may be software running on generic hardware, or it may be specialized hardware.
> > +
> > +The traffic generators that only count the number of received packets are suitable only for
> > +performance testing. In functional testing, we need to be able to dissect each arriving packet,
> > +and a capturing traffic generator is required.
> > +"""
> > +
> >   from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType
> >   from framework.exception import ConfigurationError
> >   from framework.testbed_model.node import Node
> > @@ -12,8 +25,15 @@
> >   def create_traffic_generator(
> >       tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig
> >   ) -> CapturingTrafficGenerator:
> > -    """A factory function for creating traffic generator object from user config."""
> > +    """The factory function for creating traffic generator objects from the test run configuration.
> > +
> > +    Args:
> > +        tg_node: The traffic generator node where the created traffic generator will be running.
> > +        traffic_generator_config: The traffic generator config.
> >
> > +    Returns:
> > +        A traffic generator capable of capturing received packets.
> > +    """
> >       match traffic_generator_config.traffic_generator_type:
> >           case TrafficGeneratorType.SCAPY:
> >               return ScapyTrafficGenerator(tg_node, traffic_generator_config)
> > diff --git a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
> > index e521211ef0..b0a43ad003 100644
> > --- a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
> > +++ b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
> > @@ -23,19 +23,22 @@
> >
> >
> >   def _get_default_capture_name() -> str:
> > -    """
> > -    This is the function used for the default implementation of capture names.
> > -    """
> >       return str(uuid.uuid4())
> >
> >
> >   class CapturingTrafficGenerator(TrafficGenerator):
> >       """Capture packets after sending traffic.
> >
> > -    A mixin interface which enables a packet generator to declare that it can capture
> > +    The intermediary interface which enables a packet generator to declare that it can capture
> >       packets and return them to the user.
> >
> > +    Similarly to
> > +    :class:`~framework.testbed_model.traffic_generator.traffic_generator.TrafficGenerator`,
> > +    this class exposes the public methods specific to capturing traffic generators and defines
> > +    a private method that subclasses must implement with the traffic generation and capturing logic.
> > +
> >       The methods of capturing traffic generators obey the following workflow:
> > +
> >           1. send packets
> >           2. capture packets
> >           3. write the capture to a .pcap file
> > @@ -44,6 +47,7 @@ class CapturingTrafficGenerator(TrafficGenerator):
> >
> >       @property
> >       def is_capturing(self) -> bool:
> > +        """This traffic generator can capture traffic."""
> >           return True
> >
> >       def send_packet_and_capture(
> > @@ -54,11 +58,12 @@ def send_packet_and_capture(
> >           duration: float,
> >           capture_name: str = _get_default_capture_name(),
> >       ) -> list[Packet]:
> > -        """Send a packet, return received traffic.
> > +        """Send `packet` and capture received traffic.
> > +
> > +        Send `packet` on `send_port` and then return all traffic captured
> > +        on `receive_port` for the given `duration`.
> >
> > -        Send a packet on the send_port and then return all traffic captured
> > -        on the receive_port for the given duration. Also record the captured traffic
> > -        in a pcap file.
> > +        The captured traffic is recorded in the `capture_name`.pcap file.
> >
> >           Args:
> >               packet: The packet to send.
> > @@ -68,7 +73,7 @@ def send_packet_and_capture(
> >               capture_name: The name of the .pcap file where to store the capture.
> >
> >           Returns:
> > -             A list of received packets. May be empty if no packets are captured.
> > +             The received packets. May be empty if no packets are captured.
> >           """
> >           return self.send_packets_and_capture(
> >               [packet], send_port, receive_port, duration, capture_name
> > @@ -82,11 +87,14 @@ def send_packets_and_capture(
> >           duration: float,
> >           capture_name: str = _get_default_capture_name(),
> >       ) -> list[Packet]:
> > -        """Send packets, return received traffic.
> > +        """Send `packets` and capture received traffic.
> >
> > -        Send packets on the send_port and then return all traffic captured
> > -        on the receive_port for the given duration. Also record the captured traffic
> > -        in a pcap file.
> > +        Send `packets` on `send_port` and then return all traffic captured
> > +        on `receive_port` for the given `duration`.
> > +
> > +        The captured traffic is recorded in the `capture_name`.pcap file. The target directory
> > +        can be configured with the :option:`--output-dir` command line argument or
> > +        the :envvar:`DTS_OUTPUT_DIR` environment variable.
> >
> >           Args:
> >               packets: The packets to send.
> > @@ -96,7 +104,7 @@ def send_packets_and_capture(
> >               capture_name: The name of the .pcap file where to store the capture.
> >
> >           Returns:
> > -             A list of received packets. May be empty if no packets are captured.
> > +             The received packets. May be empty if no packets are captured.
> >           """
> >           self._logger.debug(get_packet_summaries(packets))
> >           self._logger.debug(
> > @@ -124,10 +132,12 @@ def _send_packets_and_capture(
> >           receive_port: Port,
> >           duration: float,
> >       ) -> list[Packet]:
> > -        """
> > -        The extended classes must implement this method which
> > -        sends packets on send_port and receives packets on the receive_port
> > -        for the specified duration. It must be able to handle no received packets.
> > +        """The implementation of :method:`send_packets_and_capture`.
> > +
> > +        The subclasses must implement this method which sends `packets` on `send_port`
> > +        and receives packets on `receive_port` for the specified `duration`.
> > +
> > +        It must be able to handle no received packets.
>
> This sentence feels odd too. Maybe "It must be able to handle receiving
> no packets."
>

Right, your suggestion is better.

> >           """
> >
> >       def _write_capture_from_packets(
> > diff --git a/dts/framework/testbed_model/traffic_generator/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
> > index ea7c3963da..ed396c6a2f 100644
> > --- a/dts/framework/testbed_model/traffic_generator/traffic_generator.py
> > +++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
> > @@ -22,7 +22,8 @@
> >   class TrafficGenerator(ABC):
> >       """The base traffic generator.
> >
> > -    Defines the few basic methods that each traffic generator must implement.
> > +    Exposes the common public methods of all traffic generators and defines private methods
> > +    that subclasses must implement with the traffic generation logic.
> >       """
> >
> >       _config: TrafficGeneratorConfig
> > @@ -30,6 +31,12 @@ class TrafficGenerator(ABC):
> >       _logger: DTSLOG
> >
> >       def __init__(self, tg_node: Node, config: TrafficGeneratorConfig):
> > +        """Initialize the traffic generator.
> > +
> > +        Args:
> > +            tg_node: The traffic generator node where the created traffic generator will be running.
> > +            config: The traffic generator's test run configuration.
> > +        """
> >           self._config = config
> >           self._tg_node = tg_node
> >           self._logger = getLogger(
> > @@ -37,9 +44,9 @@ def __init__(self, tg_node: Node, config: TrafficGeneratorConfig):
> >           )
> >
> >       def send_packet(self, packet: Packet, port: Port) -> None:
> > -        """Send a packet and block until it is fully sent.
> > +        """Send `packet` and block until it is fully sent.
> >
> > -        What fully sent means is defined by the traffic generator.
> > +        Send `packet` on `port`, then wait until `packet` is fully sent.
> >
> >           Args:
> >               packet: The packet to send.
> > @@ -48,9 +55,9 @@ def send_packet(self, packet: Packet, port: Port) -> None:
> >           self.send_packets([packet], port)
> >
> >       def send_packets(self, packets: list[Packet], port: Port) -> None:
> > -        """Send packets and block until they are fully sent.
> > +        """Send `packets` and block until they are fully sent.
> >
> > -        What fully sent means is defined by the traffic generator.
> > +        Send `packets` on `port`, then wait until `packets` are fully sent.
> >
> >           Args:
> >               packets: The packets to send.
> > @@ -62,19 +69,17 @@ def send_packets(self, packets: list[Packet], port: Port) -> None:
> >
> >       @abstractmethod
> >       def _send_packets(self, packets: list[Packet], port: Port) -> None:
> > -        """
> > -        The extended classes must implement this method which
> > -        sends packets on send_port. The method should block until all packets
> > -        are fully sent.
> > +        """The implementation of :method:`send_packets`.
> > +
> > +        The subclasses must implement this method which sends `packets` on `port`.
> > +        The method should block until all `packets` are fully sent.
> > +
> > +        What full sent means is defined by the traffic generator.
>
> full -> fully
>
> >           """
> >
> >       @property
> >       def is_capturing(self) -> bool:
> > -        """Whether this traffic generator can capture traffic.
> > -
> > -        Returns:
> > -            True if the traffic generator can capture traffic, False otherwise.
> > -        """
> > +        """This traffic generator can't capture traffic."""
> >           return False
> >
> >       @abstractmethod
>
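
The public/private split described in these docstrings is the
template-method pattern: the base class owns the common bookkeeping, and
subclasses fill in the transmission. A compressed sketch (with str standing
in for the real Port type):

    from abc import ABC, abstractmethod

    class TrafficGenerator(ABC):
        def send_packets(self, packets: list, port: str) -> None:
            # Common work (logging, accounting) lives in the public method.
            print(f"sending {len(packets)} packets on {port}")
            self._send_packets(packets, port)

        @abstractmethod
        def _send_packets(self, packets: list, port: str) -> None:
            """Subclasses implement the actual transmission here."""

    class PrintingGenerator(TrafficGenerator):
        def _send_packets(self, packets: list, port: str) -> None:
            for packet in packets:
                print("would transmit:", packet)

    PrintingGenerator().send_packets(["pkt1", "pkt2"], "eth0")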


* Re: [PATCH v7 15/21] dts: os session docstring update
  2023-11-15 13:09               ` [PATCH v7 15/21] dts: os session " Juraj Linkeš
@ 2023-11-22 11:50                 ` Yoan Picchi
  2023-11-22 13:27                   ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Yoan Picchi @ 2023-11-22 11:50 UTC (permalink / raw)
  To: Juraj Linkeš,
	thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek
  Cc: dev

On 11/15/23 13:09, Juraj Linkeš wrote:
> Format according to the Google format and PEP257, with slight
> deviations.
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>   dts/framework/testbed_model/os_session.py | 275 ++++++++++++++++------
>   1 file changed, 208 insertions(+), 67 deletions(-)
> 
> diff --git a/dts/framework/testbed_model/os_session.py b/dts/framework/testbed_model/os_session.py
> index 76e595a518..72b9193a61 100644
> --- a/dts/framework/testbed_model/os_session.py
> +++ b/dts/framework/testbed_model/os_session.py
> @@ -2,6 +2,29 @@
>   # Copyright(c) 2023 PANTHEON.tech s.r.o.
>   # Copyright(c) 2023 University of New Hampshire
>   
> +"""OS-aware remote session.
> +
> +DPDK supports multiple operating systems, meaning it can run on any of them. This module
> +defines the common API that OS-unaware layers use and translates the API into
> +OS-aware calls/utility usage.
> +
> +Note:
> +    Running commands with administrative privileges requires OS awareness. This is the only layer
> +    that's aware of OS differences, so this is where non-privileged commands get converted
> +    to privileged commands.
> +
> +Example:
> +    A user wishes to remove a directory on
> +    a remote :class:`~framework.testbed_model.sut_node.SutNode`.
> +    The :class:`~framework.testbed_model.sut_node.SutNode` object isn't aware what OS the node
> +    is running - it delegates the OS translation logic
> +    to :attr:`~framework.testbed_model.node.Node.main_session`. The SUT node calls
> +    :meth:`~OSSession.remove_remote_dir` with a generic, OS-unaware path and
> +    the :attr:`~framework.testbed_model.node.Node.main_session` translates that
> +    to ``rm -rf`` if the node's OS is Linux and other commands for other OSs.
> +    It also translates the path to match the underlying OS.
> +"""
> +
>   from abc import ABC, abstractmethod
>   from collections.abc import Iterable
>   from ipaddress import IPv4Interface, IPv6Interface
> @@ -28,10 +51,16 @@
>   
>   
>   class OSSession(ABC):
> -    """
> -    The OS classes create a DTS node remote session and implement OS specific
> +    """OS-unaware to OS-aware translation API definition.
> +
> +    The OSSession classes create a remote session to a DTS node and implement OS specific
>       behavior. There are a few control methods implemented by the base class; the rest need
> -    to be implemented by derived classes.
> +    to be implemented by subclasses.
> +
> +    Attributes:
> +        name: The name of the session.
> +        remote_session: The remote session maintaining the connection to the node.
> +        interactive_session: The interactive remote session maintaining the connection to the node.
>       """
>   
>       _config: NodeConfiguration
> @@ -46,6 +75,15 @@ def __init__(
>           name: str,
>           logger: DTSLOG,
>       ):
> +        """Initialize the OS-aware session.
> +
> +        Connect to the node right away and also create an interactive remote session.
> +
> +        Args:
> +            node_config: The test run configuration of the node to connect to.
> +            name: The name of the session.
> +            logger: The logger instance this session will use.
> +        """
>           self._config = node_config
>           self.name = name
>           self._logger = logger
> @@ -53,15 +91,15 @@ def __init__(
>           self.interactive_session = create_interactive_session(node_config, logger)
>   
>       def close(self, force: bool = False) -> None:
> -        """
> -        Close the remote session.
> +        """Close the underlying remote session.
> +
> +        Args:
> +            force: Force the closure of the connection.
>           """
>           self.remote_session.close(force)
>   
>       def is_alive(self) -> bool:
> -        """
> -        Check whether the remote session is still responding.
> -        """
> +        """Check whether the underlying remote session is still responding."""
>           return self.remote_session.is_alive()
>   
>       def send_command(
> @@ -72,10 +110,23 @@ def send_command(
>           verify: bool = False,
>           env: dict | None = None,
>       ) -> CommandResult:
> -        """
> -        An all-purpose API in case the command to be executed is already
> -        OS-agnostic, such as when the path to the executed command has been
> -        constructed beforehand.
> +        """An all-purpose API for OS-agnostic commands.
> +
> +        This can be used to execute a portable command that runs the same way
> +        on all operating systems, such as Python.
> +
> +        The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT`
> +        environment variable configure the timeout of command execution.
> +
> +        Args:
> +            command: The command to execute.
> +            timeout: Wait at most this long in seconds to execute the command.

confusing start/end of execution

> +            privileged: Whether to run the command with administrative privileges.
> +            verify: If :data:`True`, will check the exit code of the command.
> +            env: A dictionary with environment variables to be used with the command execution.
> +
> +        Raises:
> +            RemoteCommandExecutionError: If verify is :data:`True` and the command failed.
>           """
>           if privileged:
>               command = self._get_privileged_command(command)
> @@ -89,8 +140,20 @@ def create_interactive_shell(
>           privileged: bool,
>           app_args: str,
>       ) -> InteractiveShellType:
> -        """
> -        See "create_interactive_shell" in SutNode
> +        """Factory for interactive session handlers.
> +
> +        Instantiate `shell_cls` according to the remote OS specifics.
> +
> +        Args:
> +            shell_cls: The class of the shell.
> +            timeout: Timeout for reading output from the SSH channel. If you are
> +                reading from the buffer and don't receive any data within the timeout,
> +                it will throw an error.
> +            privileged: Whether to run the shell with administrative privileges.
> +            app_args: The arguments to be passed to the application.
> +
> +        Returns:
> +            An instance of the desired interactive application shell.
>           """
>           return shell_cls(
>               self.interactive_session.session,
> @@ -114,27 +177,42 @@ def _get_privileged_command(command: str) -> str:
>   
>       @abstractmethod
>       def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePath:
> -        """
> -        Try to find DPDK remote dir in remote_dir.
> +        """Try to find DPDK directory in `remote_dir`.
> +
> +        The directory is the one which is created after the extraction of the tarball. The files
> +        are usually extracted into a directory starting with ``dpdk-``.
> +
> +        Returns:
> +            The absolute path of the DPDK remote directory, empty path if not found.
>           """
>   
>       @abstractmethod
>       def get_remote_tmp_dir(self) -> PurePath:
> -        """
> -        Get the path of the temporary directory of the remote OS.
> +        """Get the path of the temporary directory of the remote OS.
> +
> +        Returns:
> +            The absolute path of the temporary directory.
>           """
>   
>       @abstractmethod
>       def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
> -        """
> -        Create extra environment variables needed for the target architecture. Get
> -        information from the node if needed.
> +        """Create extra environment variables needed for the target architecture.
> +
> +        Different architectures may require different configuration, such as setting 32-bit CFLAGS.
> +
> +        Returns:
> +            A dictionary of environment variables, with variable names as keys.
>           """
>   
>       @abstractmethod
>       def join_remote_path(self, *args: str | PurePath) -> PurePath:
> -        """
> -        Join path parts using the path separator that fits the remote OS.
> +        """Join path parts using the path separator that fits the remote OS.
> +
> +        Args:
> +            args: Any number of paths to join.
> +
> +        Returns:
> +            The resulting joined path.
>           """
>   
>       @abstractmethod
> @@ -143,13 +221,13 @@ def copy_from(
>           source_file: str | PurePath,
>           destination_file: str | PurePath,
>       ) -> None:
> -        """Copy a file from the remote Node to the local filesystem.
> +        """Copy a file from the remote node to the local filesystem.
>   
> -        Copy source_file from the remote Node associated with this remote
> -        session to destination_file on the local filesystem.
> +        Copy `source_file` from the remote node associated with this remote
> +        session to `destination_file` on the local filesystem.
>   
>           Args:
> -            source_file: the file on the remote Node.
> +            source_file: the file on the remote node.
>               destination_file: a file or directory path on the local filesystem.
>           """
>   
> @@ -159,14 +237,14 @@ def copy_to(
>           source_file: str | PurePath,
>           destination_file: str | PurePath,
>       ) -> None:
> -        """Copy a file from local filesystem to the remote Node.
> +        """Copy a file from local filesystem to the remote node.
>   
> -        Copy source_file from local filesystem to destination_file
> -        on the remote Node associated with this remote session.
> +        Copy `source_file` from local filesystem to `destination_file`
> +        on the remote node associated with this remote session.
>   
>           Args:
>               source_file: the file on the local filesystem.
> -            destination_file: a file or directory path on the remote Node.
> +            destination_file: a file or directory path on the remote node.
>           """
>   
>       @abstractmethod
> @@ -176,8 +254,12 @@ def remove_remote_dir(
>           recursive: bool = True,
>           force: bool = True,
>       ) -> None:
> -        """
> -        Remove remote directory, by default remove recursively and forcefully.
> +        """Remove remote directory, by default remove recursively and forcefully.
> +
> +        Args:
> +            remote_dir_path: The path of the directory to remove.
> +            recursive: If :data:`True`, also remove all contents inside the directory.
> +            force: If :data:`True`, ignore all warnings and try to remove at all costs.
>           """
>   
>       @abstractmethod
> @@ -186,9 +268,12 @@ def extract_remote_tarball(
>           remote_tarball_path: str | PurePath,
>           expected_dir: str | PurePath | None = None,
>       ) -> None:
> -        """
> -        Extract remote tarball in place. If expected_dir is a non-empty string, check
> -        whether the dir exists after extracting the archive.
> +        """Extract remote tarball in its remote directory.
> +
> +        Args:
> +            remote_tarball_path: The path of the tarball on the remote node.
> +            expected_dir: If non-empty, check whether `expected_dir` exists after extracting
> +                the archive.
>           """
>   
>       @abstractmethod
> @@ -201,69 +286,119 @@ def build_dpdk(
>           rebuild: bool = False,
>           timeout: float = SETTINGS.compile_timeout,
>       ) -> None:
> -        """
> -        Build DPDK in the input dir with specified environment variables and meson
> -        arguments.
> +        """Build DPDK on the remote node.
> +
> +        An extracted DPDK tarball must be present on the node. The build consists of two steps::
> +
> +            meson setup <meson args> remote_dpdk_dir remote_dpdk_build_dir
> +            ninja -C remote_dpdk_build_dir
> +
> +        The :option:`--compile-timeout` command line argument and the :envvar:`DTS_COMPILE_TIMEOUT`
> +        environment variable configure the timeout of DPDK build.
> +
> +        Args:
> +            env_vars: Use these environment variables when building DPDK.
> +            meson_args: Use these meson arguments when building DPDK.
> +            remote_dpdk_dir: The directory on the remote node where DPDK will be built.
> +            remote_dpdk_build_dir: The target build directory on the remote node.
> +            rebuild: If :data:`True`, do a subsequent build with ``meson configure`` instead
> +                of ``meson setup``.
> +            timeout: Wait at most this long in seconds for the build to execute.

confusing start/end of execution

>           """
>   
>       @abstractmethod
>       def get_dpdk_version(self, version_path: str | PurePath) -> str:
> -        """
> -        Inspect DPDK version on the remote node from version_path.
> +        """Inspect the DPDK version on the remote node.
> +
> +        Args:
> +            version_path: The path to the VERSION file containing the DPDK version.
> +
> +        Returns:
> +            The DPDK version.
>           """
>   
>       @abstractmethod
>       def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
> -        """
> -        Compose a list of LogicalCores present on the remote node.
> -        If use_first_core is False, the first physical core won't be used.
> +        r"""Get the list of :class:`~framework.testbed_model.cpu.LogicalCore`\s on the remote node.
> +
> +        Args:
> +            use_first_core: If :data:`False`, the first physical core won't be used.
> +
> +        Returns:
> +            The logical cores present on the node.
>           """
>   
>       @abstractmethod
>       def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
> -        """
> -        Kill and cleanup all DPDK apps identified by dpdk_prefix_list. If
> -        dpdk_prefix_list is empty, attempt to find running DPDK apps to kill and clean.
> +        """Kill and cleanup all DPDK apps.
> +
> +        Args:
> +            dpdk_prefix_list: Kill all apps identified by `dpdk_prefix_list`.
> +                If `dpdk_prefix_list` is empty, attempt to find running DPDK apps to kill and clean.
>           """
>   
>       @abstractmethod
>       def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
> -        """
> -        Get the DPDK file prefix that will be used when running DPDK apps.
> +        """Make OS-specific modification to the DPDK file prefix.
> +
> +        Args:
> +           dpdk_prefix: The OS-unaware file prefix.
> +
> +        Returns:
> +            The OS-specific file prefix.
>           """
>   
>       @abstractmethod
> -    def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
> -        """
> -        Get the node's Hugepage Size, configure the specified amount of hugepages
> +    def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
> +        """Configure hugepages on the node.
> +
> +        Get the node's Hugepage Size, configure the specified count of hugepages
>           if needed and mount the hugepages if needed.
> -        If force_first_numa is True, configure hugepages just on the first socket.
> +
> +        Args:
> +            hugepage_count: Configure this many hugepages.
> +            force_first_numa:  If :data:`True`, configure hugepages just on the first socket.

force *numa* configures the first *socket* ?
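
For reference, the global vs. first-socket distinction behind
force_first_numa maps onto two different sysfs locations on Linux; a sketch
(the 2048kB page size and node0 are just examples):

    from pathlib import Path

    def set_hugepages(count: int, force_first_numa: bool) -> None:
        if force_first_numa:
            # Reserve pages only on the first NUMA node (socket).
            path = Path("/sys/devices/system/node/node0/hugepages"
                        "/hugepages-2048kB/nr_hugepages")
        else:
            # Global pool; the kernel spreads pages across nodes.
            path = Path("/sys/kernel/mm/hugepages"
                        "/hugepages-2048kB/nr_hugepages")
        path.write_text(f"{count}\n")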

>           """
>   
>       @abstractmethod
>       def get_compiler_version(self, compiler_name: str) -> str:
> -        """
> -        Get installed version of compiler used for DPDK
> +        """Get installed version of compiler used for DPDK.
> +
> +        Args:
> +            compiler_name: The name of the compiler executable.
> +
> +        Returns:
> +            The compiler's version.
>           """
>   
>       @abstractmethod
>       def get_node_info(self) -> NodeInfo:
> -        """
> -        Collect information about the node
> +        """Collect additional information about the node.
> +
> +        Returns:
> +            Node information.
>           """
>   
>       @abstractmethod
>       def update_ports(self, ports: list[Port]) -> None:
> -        """
> -        Get additional information about ports:
> -            Logical name (e.g. enp7s0) if applicable
> -            Mac address
> +        """Get additional information about ports from the operating system and update them.
> +
> +        The additional information is:
> +
> +            * Logical name (e.g. ``enp7s0``) if applicable,
> +            * Mac address.
> +
> +        Args:
> +            ports: The ports to update.
>           """
>   
>       @abstractmethod
>       def configure_port_state(self, port: Port, enable: bool) -> None:
> -        """
> -        Enable/disable port.
> +        """Enable/disable `port` in the operating system.
> +
> +        Args:
> +            port: The port to configure.
> +            enable: If :data:`True`, enable the port, otherwise shut it down.
>           """
>   
>       @abstractmethod
> @@ -273,12 +408,18 @@ def configure_port_ip_address(
>           port: Port,
>           delete: bool,
>       ) -> None:
> -        """
> -        Configure (add or delete) an IP address of the input port.
> +        """Configure an IP address on `port` in the operating system.
> +
> +        Args:
> +            address: The address to configure.
> +            port: The port to configure.
> +            delete: If :data:`True`, remove the IP address, otherwise configure it.
>           """
>   
>       @abstractmethod
>       def configure_ipv4_forwarding(self, enable: bool) -> None:
> -        """
> -        Enable IPv4 forwarding in the underlying OS.
> +        """Enable IPv4 forwarding in the operating system.
> +
> +        Args:
> +            enable: If :data:`True`, enable the forwarding, otherwise disable it.
>           """



* Re: [PATCH v7 19/21] dts: base traffic generators docstring update
  2023-11-22 11:38                   ` Juraj Linkeš
@ 2023-11-22 11:56                     ` Yoan Picchi
  2023-11-22 13:11                       ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Yoan Picchi @ 2023-11-22 11:56 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek, dev

On 11/22/23 11:38, Juraj Linkeš wrote:
> On Tue, Nov 21, 2023 at 5:20 PM Yoan Picchi <yoan.picchi@foss.arm.com> wrote:
>>
>> On 11/15/23 13:09, Juraj Linkeš wrote:
>>> Format according to the Google format and PEP257, with slight
>>> deviations.
>>>
>>> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
>>> ---
>>>    .../traffic_generator/__init__.py             | 22 ++++++++-
>>>    .../capturing_traffic_generator.py            | 46 +++++++++++--------
>>>    .../traffic_generator/traffic_generator.py    | 33 +++++++------
>>>    3 files changed, 68 insertions(+), 33 deletions(-)
>>>
>>> diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
>>> index 11bfa1ee0f..51cca77da4 100644
>>> --- a/dts/framework/testbed_model/traffic_generator/__init__.py
>>> +++ b/dts/framework/testbed_model/traffic_generator/__init__.py
>>> @@ -1,6 +1,19 @@
>>>    # SPDX-License-Identifier: BSD-3-Clause
>>>    # Copyright(c) 2023 PANTHEON.tech s.r.o.
>>>
>>> +"""DTS traffic generators.
>>> +
>>> +A traffic generator is capable of generating traffic and then monitor returning traffic.
>>> +A traffic generator may just count the number of received packets
>>> +and it may additionally capture individual packets.
>>
>> The sentence feels odd. Isn't it supposed to be "or" here? And there's
>> no need for such an early line break.
>>
> 
> There are two mays, so there probably should be an or. But I'd like to
> reword it to this:
> 
> All traffic generators count the number of received packets, and they
> may additionally capture individual packets.
> 
> What do you think?

I think it's better with the new sentence. But I think it'd be even 
better to split into two sentences to highlight the must/may:
All traffic generators must count the number of received packets. Some
may additionally capture individual packets.
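
That must/may split is exactly what the is_capturing property encodes; a
compressed sketch of the two overrides quoted in this patch:

    class TrafficGenerator:
        @property
        def is_capturing(self) -> bool:
            # Every generator counts packets; by default none captures them.
            return False

    class CapturingTrafficGenerator(TrafficGenerator):
        @property
        def is_capturing(self) -> bool:
            # This branch of the hierarchy additionally captures packets.
            return True

    print(TrafficGenerator().is_capturing)           # False
    print(CapturingTrafficGenerator().is_capturing)  # True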

> 
>>> +
>>> +A traffic generator may be software running on generic hardware, or it may be specialized hardware.
>>> +
>>> +The traffic generators that only count the number of received packets are suitable only for
>>> +performance testing. In functional testing, we need to be able to dissect each arriving packet,
>>> +and a capturing traffic generator is required.
>>> +"""
>>> +
>>>    from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType
>>>    from framework.exception import ConfigurationError
>>>    from framework.testbed_model.node import Node
>>> @@ -12,8 +25,15 @@
>>>    def create_traffic_generator(
>>>        tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig
>>>    ) -> CapturingTrafficGenerator:
>>> -    """A factory function for creating traffic generator object from user config."""
>>> +    """The factory function for creating traffic generator objects from the test run configuration.
>>> +
>>> +    Args:
>>> +        tg_node: The traffic generator node where the created traffic generator will be running.
>>> +        traffic_generator_config: The traffic generator config.
>>>
>>> +    Returns:
>>> +        A traffic generator capable of capturing received packets.
>>> +    """
>>>        match traffic_generator_config.traffic_generator_type:
>>>            case TrafficGeneratorType.SCAPY:
>>>                return ScapyTrafficGenerator(tg_node, traffic_generator_config)
>>> diff --git a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
>>> index e521211ef0..b0a43ad003 100644
>>> --- a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
>>> +++ b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
>>> @@ -23,19 +23,22 @@
>>>
>>>
>>>    def _get_default_capture_name() -> str:
>>> -    """
>>> -    This is the function used for the default implementation of capture names.
>>> -    """
>>>        return str(uuid.uuid4())
>>>
>>>
>>>    class CapturingTrafficGenerator(TrafficGenerator):
>>>        """Capture packets after sending traffic.
>>>
>>> -    A mixin interface which enables a packet generator to declare that it can capture
>>> +    The intermediary interface which enables a packet generator to declare that it can capture
>>>        packets and return them to the user.
>>>
>>> +    Similarly to
>>> +    :class:`~framework.testbed_model.traffic_generator.traffic_generator.TrafficGenerator`,
>>> +    this class exposes the public methods specific to capturing traffic generators and defines
>>> +    a private method that subclasses must implement with the traffic generation and capturing logic.
>>> +
>>>        The methods of capturing traffic generators obey the following workflow:
>>> +
>>>            1. send packets
>>>            2. capture packets
>>>            3. write the capture to a .pcap file
>>> @@ -44,6 +47,7 @@ class CapturingTrafficGenerator(TrafficGenerator):
>>>
>>>        @property
>>>        def is_capturing(self) -> bool:
>>> +        """This traffic generator can capture traffic."""
>>>            return True
>>>
>>>        def send_packet_and_capture(
>>> @@ -54,11 +58,12 @@ def send_packet_and_capture(
>>>            duration: float,
>>>            capture_name: str = _get_default_capture_name(),
>>>        ) -> list[Packet]:
>>> -        """Send a packet, return received traffic.
>>> +        """Send `packet` and capture received traffic.
>>> +
>>> +        Send `packet` on `send_port` and then return all traffic captured
>>> +        on `receive_port` for the given `duration`.
>>>
>>> -        Send a packet on the send_port and then return all traffic captured
>>> -        on the receive_port for the given duration. Also record the captured traffic
>>> -        in a pcap file.
>>> +        The captured traffic is recorded in the `capture_name`.pcap file.
>>>
>>>            Args:
>>>                packet: The packet to send.
>>> @@ -68,7 +73,7 @@ def send_packet_and_capture(
>>>                capture_name: The name of the .pcap file where to store the capture.
>>>
>>>            Returns:
>>> -             A list of received packets. May be empty if no packets are captured.
>>> +             The received packets. May be empty if no packets are captured.
>>>            """
>>>            return self.send_packets_and_capture(
>>>                [packet], send_port, receive_port, duration, capture_name
>>> @@ -82,11 +87,14 @@ def send_packets_and_capture(
>>>            duration: float,
>>>            capture_name: str = _get_default_capture_name(),
>>>        ) -> list[Packet]:
>>> -        """Send packets, return received traffic.
>>> +        """Send `packets` and capture received traffic.
>>>
>>> -        Send packets on the send_port and then return all traffic captured
>>> -        on the receive_port for the given duration. Also record the captured traffic
>>> -        in a pcap file.
>>> +        Send `packets` on `send_port` and then return all traffic captured
>>> +        on `receive_port` for the given `duration`.
>>> +
>>> +        The captured traffic is recorded in the `capture_name`.pcap file. The target directory
>>> +        can be configured with the :option:`--output-dir` command line argument or
>>> +        the :envvar:`DTS_OUTPUT_DIR` environment variable.
>>>
>>>            Args:
>>>                packets: The packets to send.
>>> @@ -96,7 +104,7 @@ def send_packets_and_capture(
>>>                capture_name: The name of the .pcap file where to store the capture.
>>>
>>>            Returns:
>>> -             A list of received packets. May be empty if no packets are captured.
>>> +             The received packets. May be empty if no packets are captured.
>>>            """
>>>            self._logger.debug(get_packet_summaries(packets))
>>>            self._logger.debug(
>>> @@ -124,10 +132,12 @@ def _send_packets_and_capture(
>>>            receive_port: Port,
>>>            duration: float,
>>>        ) -> list[Packet]:
>>> -        """
>>> -        The extended classes must implement this method which
>>> -        sends packets on send_port and receives packets on the receive_port
>>> -        for the specified duration. It must be able to handle no received packets.
>>> +        """The implementation of :method:`send_packets_and_capture`.
>>> +
>>> +        The subclasses must implement this method which sends `packets` on `send_port`
>>> +        and receives packets on `receive_port` for the specified `duration`.
>>> +
>>> +        It must be able to handle no received packets.
>>
>> This sentence feels odd too. Maybe "It must be able to handle receiving
>> no packets."
>>
> 
> Right, your suggestion is better.
> 
>>>            """
>>>
>>>        def _write_capture_from_packets(
>>> diff --git a/dts/framework/testbed_model/traffic_generator/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
>>> index ea7c3963da..ed396c6a2f 100644
>>> --- a/dts/framework/testbed_model/traffic_generator/traffic_generator.py
>>> +++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
>>> @@ -22,7 +22,8 @@
>>>    class TrafficGenerator(ABC):
>>>        """The base traffic generator.
>>>
>>> -    Defines the few basic methods that each traffic generator must implement.
>>> +    Exposes the common public methods of all traffic generators and defines private methods
>>> +    that subclasses must implement with the traffic generation logic.
>>>        """
>>>
>>>        _config: TrafficGeneratorConfig
>>> @@ -30,6 +31,12 @@ class TrafficGenerator(ABC):
>>>        _logger: DTSLOG
>>>
>>>        def __init__(self, tg_node: Node, config: TrafficGeneratorConfig):
>>> +        """Initialize the traffic generator.
>>> +
>>> +        Args:
>>> +            tg_node: The traffic generator node where the created traffic generator will be running.
>>> +            config: The traffic generator's test run configuration.
>>> +        """
>>>            self._config = config
>>>            self._tg_node = tg_node
>>>            self._logger = getLogger(
>>> @@ -37,9 +44,9 @@ def __init__(self, tg_node: Node, config: TrafficGeneratorConfig):
>>>            )
>>>
>>>        def send_packet(self, packet: Packet, port: Port) -> None:
>>> -        """Send a packet and block until it is fully sent.
>>> +        """Send `packet` and block until it is fully sent.
>>>
>>> -        What fully sent means is defined by the traffic generator.
>>> +        Send `packet` on `port`, then wait until `packet` is fully sent.
>>>
>>>            Args:
>>>                packet: The packet to send.
>>> @@ -48,9 +55,9 @@ def send_packet(self, packet: Packet, port: Port) -> None:
>>>            self.send_packets([packet], port)
>>>
>>>        def send_packets(self, packets: list[Packet], port: Port) -> None:
>>> -        """Send packets and block until they are fully sent.
>>> +        """Send `packets` and block until they are fully sent.
>>>
>>> -        What fully sent means is defined by the traffic generator.
>>> +        Send `packets` on `port`, then wait until `packets` are fully sent.
>>>
>>>            Args:
>>>                packets: The packets to send.
>>> @@ -62,19 +69,17 @@ def send_packets(self, packets: list[Packet], port: Port) -> None:
>>>
>>>        @abstractmethod
>>>        def _send_packets(self, packets: list[Packet], port: Port) -> None:
>>> -        """
>>> -        The extended classes must implement this method which
>>> -        sends packets on send_port. The method should block until all packets
>>> -        are fully sent.
>>> +        """The implementation of :method:`send_packets`.
>>> +
>>> +        The subclasses must implement this method which sends `packets` on `port`.
>>> +        The method should block until all `packets` are fully sent.
>>> +
>>> +        What full sent means is defined by the traffic generator.
>>
>> full -> fully
>>
>>>            """
>>>
>>>        @property
>>>        def is_capturing(self) -> bool:
>>> -        """Whether this traffic generator can capture traffic.
>>> -
>>> -        Returns:
>>> -            True if the traffic generator can capture traffic, False otherwise.
>>> -        """
>>> +        """This traffic generator can't capture traffic."""
>>>            return False
>>>
>>>        @abstractmethod
>>


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 17/21] dts: node docstring update
  2023-11-15 13:09               ` [PATCH v7 17/21] dts: node " Juraj Linkeš
@ 2023-11-22 12:18                 ` Yoan Picchi
  2023-11-22 13:28                   ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Yoan Picchi @ 2023-11-22 12:18 UTC (permalink / raw)
  To: Juraj Linkeš,
	thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek
  Cc: dev

On 11/15/23 13:09, Juraj Linkeš wrote:
> Format according to the Google format and PEP257, with slight
> deviations.
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>   dts/framework/testbed_model/node.py | 191 +++++++++++++++++++---------
>   1 file changed, 131 insertions(+), 60 deletions(-)
> 
> diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
> index fa5b143cdd..f93b4acecd 100644
> --- a/dts/framework/testbed_model/node.py
> +++ b/dts/framework/testbed_model/node.py
> @@ -3,8 +3,13 @@
>   # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
>   # Copyright(c) 2022-2023 University of New Hampshire
>   
> -"""
> -A node is a generic host that DTS connects to and manages.
> +"""Common functionality for node management.
> +
> +A node is any host/server DTS connects to.
> +
> +The base class, :class:`Node`, provides functionality common to all nodes and is supposed
> +to be extended by subclasses with functionality specific to each node type.

functionality -> functionalities

> +The decorator :func:`Node.skip_setup` can be used without subclassing.
>   """
>   
>   from abc import ABC
> @@ -35,10 +40,22 @@
>   
>   
>   class Node(ABC):
> -    """
> -    Basic class for node management. This class implements methods that
> -    manage a node, such as information gathering (of CPU/PCI/NIC) and
> -    environment setup.
> +    """The base class for node management.
> +
> +    It shouldn't be instantiated, but rather subclassed.
> +    It implements common methods to manage any node:
> +
> +        * Connection to the node,
> +        * Hugepages setup.
> +
> +    Attributes:
> +        main_session: The primary OS-aware remote session used to communicate with the node.
> +        config: The node configuration.
> +        name: The name of the node.
> +        lcores: The list of logical cores that DTS can use on the node.
> +            It's derived from logical cores present on the node and the test run configuration.
> +        ports: The ports of this node specified in the test run configuration.
> +        virtual_devices: The virtual devices used on the node.
>       """
>   
>       main_session: OSSession
> @@ -52,6 +69,17 @@ class Node(ABC):
>       virtual_devices: list[VirtualDevice]
>   
>       def __init__(self, node_config: NodeConfiguration):
> +        """Connect to the node and gather info during initialization.
> +
> +        Extra gathered information:
> +
> +        * The list of available logical CPUs. This is then filtered by
> +          the ``lcores`` configuration in the YAML test run configuration file,
> +        * Information about ports from the YAML test run configuration file.
> +
> +        Args:
> +            node_config: The node's test run configuration.
> +        """
>           self.config = node_config
>           self.name = node_config.name
>           self._logger = getLogger(self.name)
> @@ -60,7 +88,7 @@ def __init__(self, node_config: NodeConfiguration):
>           self._logger.info(f"Connected to node: {self.name}")
>   
>           self._get_remote_cpus()
> -        # filter the node lcores according to user config
> +        # filter the node lcores according to the test run configuration
>           self.lcores = LogicalCoreListFilter(
>               self.lcores, LogicalCoreList(self.config.lcores)
>           ).filter()
> @@ -76,9 +104,14 @@ def _init_ports(self) -> None:
>               self.configure_port_state(port)
>   
>       def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
> -        """
> -        Perform the execution setup that will be done for each execution
> -        this node is part of.
> +        """Execution setup steps.
> +
> +        Configure hugepages and call :meth:`_set_up_execution` where
> +        the rest of the configuration steps (if any) are implemented.
> +
> +        Args:
> +            execution_config: The execution test run configuration according to which
> +                the setup steps will be taken.
>           """
>           self._setup_hugepages()
>           self._set_up_execution(execution_config)
> @@ -87,58 +120,74 @@ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
>               self.virtual_devices.append(VirtualDevice(vdev))
>   
>       def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
> -        """
> -        This method exists to be optionally overwritten by derived classes and
> -        is not decorated so that the derived class doesn't have to use the decorator.
> +        """Optional additional execution setup steps for subclasses.
> +
> +        Subclasses should override this if they need to add additional execution setup steps.
>           """
>   
>       def tear_down_execution(self) -> None:
> -        """
> -        Perform the execution teardown that will be done after each execution
> -        this node is part of concludes.
> +        """Execution teardown steps.
> +
> +        There are currently no execution teardown steps common to all DTS node types.
>           """
>           self.virtual_devices = []
>           self._tear_down_execution()
>   
>       def _tear_down_execution(self) -> None:
> -        """
> -        This method exists to be optionally overwritten by derived classes and
> -        is not decorated so that the derived class doesn't have to use the decorator.
> +        """Optional additional execution teardown steps for subclasses.
> +
> +        Subclasses should override this if they need to add additional execution teardown steps.
>           """
>   
>       def set_up_build_target(
>           self, build_target_config: BuildTargetConfiguration
>       ) -> None:
> -        """
> -        Perform the build target setup that will be done for each build target
> -        tested on this node.
> +        """Build target setup steps.
> +
> +        There are currently no build target setup steps common to all DTS node types.
> +
> +        Args:
> +            build_target_config: The build target test run configuration according to which
> +                the setup steps will be taken.
>           """
>           self._set_up_build_target(build_target_config)
>   
>       def _set_up_build_target(
>           self, build_target_config: BuildTargetConfiguration
>       ) -> None:
> -        """
> -        This method exists to be optionally overwritten by derived classes and
> -        is not decorated so that the derived class doesn't have to use the decorator.
> +        """Optional additional build target setup steps for subclasses.
> +
> +        Subclasses should override this if they need to add additional build target setup steps.
>           """
>   
>       def tear_down_build_target(self) -> None:
> -        """
> -        Perform the build target teardown that will be done after each build target
> -        tested on this node.
> +        """Build target teardown steps.
> +
> +        There are currently no build target teardown steps common to all DTS node types.
>           """
>           self._tear_down_build_target()
>   
>       def _tear_down_build_target(self) -> None:
> -        """
> -        This method exists to be optionally overwritten by derived classes and
> -        is not decorated so that the derived class doesn't have to use the decorator.
> +        """Optional additional build target teardown steps for subclasses.
> +
> +        Subclasses should override this if they need to add additional build target teardown steps.
>           """
>   
>       def create_session(self, name: str) -> OSSession:
> -        """
> -        Create and return a new OSSession tailored to the remote OS.
> +        """Create and return a new OS-aware remote session.
> +
> +        The returned session won't be used by the node creating it. The session must be used by
> +        the caller. The session will be maintained for the entire lifecycle of the node object,
> +        at the end of which the session will be cleaned up automatically.
> +
> +        Note:
> +            Any number of these supplementary sessions may be created.
> +
> +        Args:
> +            name: The name of the session.
> +
> +        Returns:
> +            A new OS-aware remote session.
>           """
>           session_name = f"{self.name} {name}"
>           connection = create_session(
> @@ -156,19 +205,19 @@ def create_interactive_shell(
>           privileged: bool = False,
>           app_args: str = "",
>       ) -> InteractiveShellType:
> -        """Create a handler for an interactive session.
> +        """Factory for interactive session handlers.
>   
> -        Instantiate shell_cls according to the remote OS specifics.
> +        Instantiate `shell_cls` according to the remote OS specifics.
>   
>           Args:
>               shell_cls: The class of the shell.
> -            timeout: Timeout for reading output from the SSH channel. If you are
> -                reading from the buffer and don't receive any data within the timeout
> -                it will throw an error.
> +            timeout: Timeout for reading output from the SSH channel. If you are reading from
> +                the buffer and don't receive any data within the timeout it will throw an error.
>               privileged: Whether to run the shell with administrative privileges.
>               app_args: The arguments to be passed to the application.
> +
>           Returns:
> -            Instance of the desired interactive application.
> +            An instance of the desired interactive application shell.
>           """
>           if not shell_cls.dpdk_app:
>               shell_cls.path = self.main_session.join_remote_path(shell_cls.path)
> @@ -185,14 +234,22 @@ def filter_lcores(
>           filter_specifier: LogicalCoreCount | LogicalCoreList,
>           ascending: bool = True,
>       ) -> list[LogicalCore]:
> -        """
> -        Filter the LogicalCores found on the Node according to
> -        a LogicalCoreCount or a LogicalCoreList.
> +        """Filter the node's logical cores that DTS can use.
> +
> +        Logical cores that DTS can use are the ones that are present on the node, but filtered
> +        according to the test run configuration. The `filter_specifier` will filter cores from
> +        those logical cores.
> +
> +        Args:
> +            filter_specifier: Two different filters can be used, one that specifies the number
> +                of logical cores per core, cores per socket and the number of sockets,
> +                and another one that specifies a logical core list.
> +            ascending: If :data:`True`, use cores with the lowest numerical id first and continue
> +                in ascending order. If :data:`False`, start with the highest id and continue
> +                in descending order. This ordering affects which sockets to consider first as well.
>   
> -        If ascending is True, use cores with the lowest numerical id first
> -        and continue in ascending order. If False, start with the highest
> -        id and continue in descending order. This ordering affects which
> -        sockets to consider first as well.
> +        Returns:
> +            The filtered logical cores.
>           """
>           self._logger.debug(f"Filtering {filter_specifier} from {self.lcores}.")
>           return lcore_filter(
> @@ -202,17 +259,14 @@ def filter_lcores(
>           ).filter()
>   
>       def _get_remote_cpus(self) -> None:
> -        """
> -        Scan CPUs in the remote OS and store a list of LogicalCores.
> -        """
> +        """Scan CPUs in the remote OS and store a list of LogicalCores."""
>           self._logger.info("Getting CPU information.")
>           self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
>   
>       def _setup_hugepages(self) -> None:
> -        """
> -        Setup hugepages on the Node. Different architectures can supply different
> -        amounts of memory for hugepages and numa-based hugepage allocation may need
> -        to be considered.
> +        """Setup hugepages on the node.
> +
> +        Configure the hugepages only if they're specified in the node's test run configuration.
>           """
>           if self.config.hugepages:
>               self.main_session.setup_hugepages(
> @@ -220,8 +274,11 @@ def _setup_hugepages(self) -> None:
>               )
>   
>       def configure_port_state(self, port: Port, enable: bool = True) -> None:
> -        """
> -        Enable/disable port.
> +        """Enable/disable `port`.
> +
> +        Args:
> +            port: The port to enable/disable.
> +            enable: :data:`True` to enable, :data:`False` to disable.
>           """
>           self.main_session.configure_port_state(port, enable)
>   
> @@ -231,15 +288,17 @@ def configure_port_ip_address(
>           port: Port,
>           delete: bool = False,
>       ) -> None:
> -        """
> -        Configure the IP address of a port on this node.
> +        """Add an IP address to `port` on this node.
> +
> +        Args:
> +            address: The IP address with mask in CIDR format. Can be either IPv4 or IPv6.
> +            port: The port to which to add the address.
> +            delete: If :data:`True`, will delete the address from the port instead of adding it.
>           """
>           self.main_session.configure_port_ip_address(address, port, delete)
>   
>       def close(self) -> None:
> -        """
> -        Close all connections and free other resources.
> -        """
> +        """Close all connections and free other resources."""
>           if self.main_session:
>               self.main_session.close()
>           for session in self._other_sessions:
> @@ -248,6 +307,11 @@ def close(self) -> None:
>   
>       @staticmethod
>       def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
> +        """Skip the decorated function.
> +
> +        The :option:`--skip-setup` command line argument and the :envvar:`DTS_SKIP_SETUP`
> +        environment variable enable the decorator.
> +        """
>           if SETTINGS.skip_setup:
>               return lambda *args: None
>           else:
> @@ -257,6 +321,13 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
>   def create_session(
>       node_config: NodeConfiguration, name: str, logger: DTSLOG
>   ) -> OSSession:
> +    """Factory for OS-aware sessions.
> +
> +    Args:
> +        node_config: The test run configuration of the node to connect to.
> +        name: The name of the session.
> +        logger: The logger instance this session will use.
> +    """
>       match node_config.os:
>           case OS.linux:
>               return LinuxSession(node_config, name, logger)
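
One more aside on the module docstring: it says :func:`Node.skip_setup`
can be used without subclassing. For illustration, a minimal sketch of
that usage (the function name is made up):

    from framework.testbed_model.node import Node

    @Node.skip_setup
    def expensive_setup() -> None:
        # With --skip-setup or DTS_SKIP_SETUP set, skip_setup returns
        # a no-op lambda in place of this function, so the body never
        # runs.
        ...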


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 19/21] dts: base traffic generators docstring update
  2023-11-22 11:56                     ` Yoan Picchi
@ 2023-11-22 13:11                       ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-22 13:11 UTC (permalink / raw)
  To: Yoan Picchi
  Cc: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek, dev

On Wed, Nov 22, 2023 at 1:05 PM Yoan Picchi <yoan.picchi@foss.arm.com> wrote:
>
> On 11/22/23 11:38, Juraj Linkeš wrote:
> > On Tue, Nov 21, 2023 at 5:20 PM Yoan Picchi <yoan.picchi@foss.arm.com> wrote:
> >>
> >> On 11/15/23 13:09, Juraj Linkeš wrote:
> >>> Format according to the Google format and PEP257, with slight
> >>> deviations.
> >>>
> >>> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> >>> ---
> >>>    .../traffic_generator/__init__.py             | 22 ++++++++-
> >>>    .../capturing_traffic_generator.py            | 46 +++++++++++--------
> >>>    .../traffic_generator/traffic_generator.py    | 33 +++++++------
> >>>    3 files changed, 68 insertions(+), 33 deletions(-)
> >>>
> >>> diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
> >>> index 11bfa1ee0f..51cca77da4 100644
> >>> --- a/dts/framework/testbed_model/traffic_generator/__init__.py
> >>> +++ b/dts/framework/testbed_model/traffic_generator/__init__.py
> >>> @@ -1,6 +1,19 @@
> >>>    # SPDX-License-Identifier: BSD-3-Clause
> >>>    # Copyright(c) 2023 PANTHEON.tech s.r.o.
> >>>
> >>> +"""DTS traffic generators.
> >>> +
> >>> +A traffic generator is capable of generating traffic and then monitor returning traffic.
> >>> +A traffic generator may just count the number of received packets
> >>> +and it may additionally capture individual packets.
> >>
> >> The sentence feels odd. Isn't it supposed to be "or" here? And there's
> >> no need for such an early line break.
> >>
> >
> > There are two mays, so there probably should be an or. But I'd like to
> > reword it to this:
> >
> > All traffic generators count the number of received packets, and they
> > may additionally
> > capture individual packets.
> >
> > What do you think?
>
> I think it's better with the new sentence. But I think it'd be even
> better to split into two sentences to highlight the must/may:
> All traffic generators must count the number of received packets. Some
> may additionally capture individual packets.
>

I like this, I'll reword it.
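
While we're here, a minimal sketch of that must/may split in code
(illustrative only; the subclass name is made up and the import paths
are assumed from this series):

    from scapy.packet import Packet  # type: ignore[import]

    from framework.testbed_model.port import Port  # path assumed
    from framework.testbed_model.traffic_generator.traffic_generator import (
        TrafficGenerator,
    )

    class CountingTrafficGenerator(TrafficGenerator):
        """Hypothetical generator that only counts received packets."""

        def _send_packets(self, packets: list[Packet], port: Port) -> None:
            # Only the private send hook is implemented; is_capturing
            # keeps the base class default (False), so this generator
            # suits performance tests but not functional ones.
            ...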

> >
> >>> +
> >>> +A traffic generator may be software running on generic hardware or specialized hardware.
> >>> +
> >>> +The traffic generators that only count the number of received packets are suitable only for
> >>> +performance testing. In functional testing, we need to be able to dissect each arriving packet,
> >>> +so a capturing traffic generator is required.
> >>> +"""
> >>> +
> >>>    from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType
> >>>    from framework.exception import ConfigurationError
> >>>    from framework.testbed_model.node import Node
> >>> @@ -12,8 +25,15 @@
> >>>    def create_traffic_generator(
> >>>        tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig
> >>>    ) -> CapturingTrafficGenerator:
> >>> -    """A factory function for creating traffic generator object from user config."""
> >>> +    """The factory function for creating traffic generator objects from the test run configuration.
> >>> +
> >>> +    Args:
> >>> +        tg_node: The traffic generator node where the created traffic generator will be running.
> >>> +        traffic_generator_config: The traffic generator config.
> >>>
> >>> +    Returns:
> >>> +        A traffic generator capable of capturing received packets.
> >>> +    """
> >>>        match traffic_generator_config.traffic_generator_type:
> >>>            case TrafficGeneratorType.SCAPY:
> >>>                return ScapyTrafficGenerator(tg_node, traffic_generator_config)
> >>> diff --git a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
> >>> index e521211ef0..b0a43ad003 100644
> >>> --- a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
> >>> +++ b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
> >>> @@ -23,19 +23,22 @@
> >>>
> >>>
> >>>    def _get_default_capture_name() -> str:
> >>> -    """
> >>> -    This is the function used for the default implementation of capture names.
> >>> -    """
> >>>        return str(uuid.uuid4())
> >>>
> >>>
> >>>    class CapturingTrafficGenerator(TrafficGenerator):
> >>>        """Capture packets after sending traffic.
> >>>
> >>> -    A mixin interface which enables a packet generator to declare that it can capture
> >>> +    The intermediary interface which enables a packet generator to declare that it can capture
> >>>        packets and return them to the user.
> >>>
> >>> +    Similarly to
> >>> +    :class:`~framework.testbed_model.traffic_generator.traffic_generator.TrafficGenerator`,
> >>> +    this class exposes the public methods specific to capturing traffic generators and defines
> >>> +    a private method in which subclasses must implement the traffic generation and capturing logic.
> >>> +
> >>>        The methods of capturing traffic generators obey the following workflow:
> >>> +
> >>>            1. send packets
> >>>            2. capture packets
> >>>            3. write the capture to a .pcap file
> >>> @@ -44,6 +47,7 @@ class CapturingTrafficGenerator(TrafficGenerator):
> >>>
> >>>        @property
> >>>        def is_capturing(self) -> bool:
> >>> +        """This traffic generator can capture traffic."""
> >>>            return True
> >>>
> >>>        def send_packet_and_capture(
> >>> @@ -54,11 +58,12 @@ def send_packet_and_capture(
> >>>            duration: float,
> >>>            capture_name: str = _get_default_capture_name(),
> >>>        ) -> list[Packet]:
> >>> -        """Send a packet, return received traffic.
> >>> +        """Send `packet` and capture received traffic.
> >>> +
> >>> +        Send `packet` on `send_port` and then return all traffic captured
> >>> +        on `receive_port` for the given `duration`.
> >>>
> >>> -        Send a packet on the send_port and then return all traffic captured
> >>> -        on the receive_port for the given duration. Also record the captured traffic
> >>> -        in a pcap file.
> >>> +        The captured traffic is recorded in the `capture_name`.pcap file.
> >>>
> >>>            Args:
> >>>                packet: The packet to send.
> >>> @@ -68,7 +73,7 @@ def send_packet_and_capture(
> >>>                capture_name: The name of the .pcap file where to store the capture.
> >>>
> >>>            Returns:
> >>> -             A list of received packets. May be empty if no packets are captured.
> >>> +             The received packets. May be empty if no packets are captured.
> >>>            """
> >>>            return self.send_packets_and_capture(
> >>>                [packet], send_port, receive_port, duration, capture_name
> >>> @@ -82,11 +87,14 @@ def send_packets_and_capture(
> >>>            duration: float,
> >>>            capture_name: str = _get_default_capture_name(),
> >>>        ) -> list[Packet]:
> >>> -        """Send packets, return received traffic.
> >>> +        """Send `packets` and capture received traffic.
> >>>
> >>> -        Send packets on the send_port and then return all traffic captured
> >>> -        on the receive_port for the given duration. Also record the captured traffic
> >>> -        in a pcap file.
> >>> +        Send `packets` on `send_port` and then return all traffic captured
> >>> +        on `receive_port` for the given `duration`.
> >>> +
> >>> +        The captured traffic is recorded in the `capture_name`.pcap file. The target directory
> >>> +        can be configured with the :option:`--output-dir` command line argument or
> >>> +        the :envvar:`DTS_OUTPUT_DIR` environment variable.
> >>>
> >>>            Args:
> >>>                packets: The packets to send.
> >>> @@ -96,7 +104,7 @@ def send_packets_and_capture(
> >>>                capture_name: The name of the .pcap file where to store the capture.
> >>>
> >>>            Returns:
> >>> -             A list of received packets. May be empty if no packets are captured.
> >>> +             The received packets. May be empty if no packets are captured.
> >>>            """
> >>>            self._logger.debug(get_packet_summaries(packets))
> >>>            self._logger.debug(
> >>> @@ -124,10 +132,12 @@ def _send_packets_and_capture(
> >>>            receive_port: Port,
> >>>            duration: float,
> >>>        ) -> list[Packet]:
> >>> -        """
> >>> -        The extended classes must implement this method which
> >>> -        sends packets on send_port and receives packets on the receive_port
> >>> -        for the specified duration. It must be able to handle no received packets.
> >>> +        """The implementation of :method:`send_packets_and_capture`.
> >>> +
> >>> +        The subclasses must implement this method which sends `packets` on `send_port`
> >>> +        and receives packets on `receive_port` for the specified `duration`.
> >>> +
> >>> +        It must be able to handle no received packets.
> >>
> >> This sentence feels odd too. Maybe "It must be able to handle receiving
> >> no packets."
> >>
> >
> > Right, your suggestion is better.
> >
> >>>            """
> >>>
> >>>        def _write_capture_from_packets(
> >>> diff --git a/dts/framework/testbed_model/traffic_generator/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
> >>> index ea7c3963da..ed396c6a2f 100644
> >>> --- a/dts/framework/testbed_model/traffic_generator/traffic_generator.py
> >>> +++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
> >>> @@ -22,7 +22,8 @@
> >>>    class TrafficGenerator(ABC):
> >>>        """The base traffic generator.
> >>>
> >>> -    Defines the few basic methods that each traffic generator must implement.
> >>> +    Exposes the common public methods of all traffic generators and defines the private
> >>> +    methods in which subclasses must implement the traffic generation logic.
> >>>        """
> >>>
> >>>        _config: TrafficGeneratorConfig
> >>> @@ -30,6 +31,12 @@ class TrafficGenerator(ABC):
> >>>        _logger: DTSLOG
> >>>
> >>>        def __init__(self, tg_node: Node, config: TrafficGeneratorConfig):
> >>> +        """Initialize the traffic generator.
> >>> +
> >>> +        Args:
> >>> +            tg_node: The traffic generator node where the created traffic generator will be running.
> >>> +            config: The traffic generator's test run configuration.
> >>> +        """
> >>>            self._config = config
> >>>            self._tg_node = tg_node
> >>>            self._logger = getLogger(
> >>> @@ -37,9 +44,9 @@ def __init__(self, tg_node: Node, config: TrafficGeneratorConfig):
> >>>            )
> >>>
> >>>        def send_packet(self, packet: Packet, port: Port) -> None:
> >>> -        """Send a packet and block until it is fully sent.
> >>> +        """Send `packet` and block until it is fully sent.
> >>>
> >>> -        What fully sent means is defined by the traffic generator.
> >>> +        Send `packet` on `port`, then wait until `packet` is fully sent.
> >>>
> >>>            Args:
> >>>                packet: The packet to send.
> >>> @@ -48,9 +55,9 @@ def send_packet(self, packet: Packet, port: Port) -> None:
> >>>            self.send_packets([packet], port)
> >>>
> >>>        def send_packets(self, packets: list[Packet], port: Port) -> None:
> >>> -        """Send packets and block until they are fully sent.
> >>> +        """Send `packets` and block until they are fully sent.
> >>>
> >>> -        What fully sent means is defined by the traffic generator.
> >>> +        Send `packets` on `port`, then wait until `packets` are fully sent.
> >>>
> >>>            Args:
> >>>                packets: The packets to send.
> >>> @@ -62,19 +69,17 @@ def send_packets(self, packets: list[Packet], port: Port) -> None:
> >>>
> >>>        @abstractmethod
> >>>        def _send_packets(self, packets: list[Packet], port: Port) -> None:
> >>> -        """
> >>> -        The extended classes must implement this method which
> >>> -        sends packets on send_port. The method should block until all packets
> >>> -        are fully sent.
> >>> +        """The implementation of :method:`send_packets`.
> >>> +
> >>> +        The subclasses must implement this method which sends `packets` on `port`.
> >>> +        The method should block until all `packets` are fully sent.
> >>> +
> >>> +        What full sent means is defined by the traffic generator.
> >>
> >> full -> fully
> >>
> >>>            """
> >>>
> >>>        @property
> >>>        def is_capturing(self) -> bool:
> >>> -        """Whether this traffic generator can capture traffic.
> >>> -
> >>> -        Returns:
> >>> -            True if the traffic generator can capture traffic, False otherwise.
> >>> -        """
> >>> +        """This traffic generator can't capture traffic."""
> >>>            return False
> >>>
> >>>        @abstractmethod
> >>
>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 18/21] dts: sut and tg nodes docstring update
  2023-11-15 13:09               ` [PATCH v7 18/21] dts: sut and tg nodes " Juraj Linkeš
@ 2023-11-22 13:12                 ` Yoan Picchi
  2023-11-22 13:34                   ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Yoan Picchi @ 2023-11-22 13:12 UTC (permalink / raw)
  To: Juraj Linkeš,
	thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek
  Cc: dev

On 11/15/23 13:09, Juraj Linkeš wrote:
> Format according to the Google format and PEP257, with slight
> deviations.
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>   dts/framework/testbed_model/sut_node.py | 224 ++++++++++++++++--------
>   dts/framework/testbed_model/tg_node.py  |  42 +++--
>   2 files changed, 173 insertions(+), 93 deletions(-)
> 
> diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
> index 17deea06e2..123b16fee0 100644
> --- a/dts/framework/testbed_model/sut_node.py
> +++ b/dts/framework/testbed_model/sut_node.py
> @@ -3,6 +3,14 @@
>   # Copyright(c) 2023 PANTHEON.tech s.r.o.
>   # Copyright(c) 2023 University of New Hampshire
>   
> +"""System under test (DPDK + hardware) node.
> +
> +A system under test (SUT) is the combination of DPDK
> +and the hardware we're testing with DPDK (NICs, crypto and other devices).
> +An SUT node is where this SUT runs.
> +"""
> +
> +
>   import os
>   import tarfile
>   import time
> @@ -26,6 +34,11 @@
>   
>   
>   class EalParameters(object):
> +    """The environment abstraction layer parameters.
> +
> +    Converting an instance to a string produces the command-line EAL parameter string.
> +    """
> +
>       def __init__(
>           self,
>           lcore_list: LogicalCoreList,
> @@ -35,21 +48,23 @@ def __init__(
>           vdevs: list[VirtualDevice],
>           other_eal_param: str,
>       ):
> -        """
> -        Generate eal parameters character string;
> -        :param lcore_list: the list of logical cores to use.
> -        :param memory_channels: the number of memory channels to use.
> -        :param prefix: set file prefix string, eg:
> -                        prefix='vf'
> -        :param no_pci: switch of disable PCI bus eg:
> -                        no_pci=True
> -        :param vdevs: virtual device list, eg:
> -                        vdevs=[
> -                            VirtualDevice('net_ring0'),
> -                            VirtualDevice('net_ring1')
> -                        ]
> -        :param other_eal_param: user defined DPDK eal parameters, eg:
> -                        other_eal_param='--single-file-segments'
> +        """Initialize the parameters according to inputs.
> +
> +        Process the parameters into the format used on the command line.
> +
> +        Args:
> +            lcore_list: The list of logical cores to use.
> +            memory_channels: The number of memory channels to use.
> +            prefix: Set the file prefix string with which to start DPDK, e.g.: ``prefix='vf'``.
> +            no_pci: Switch to disable PCI bus e.g.: ``no_pci=True``.
> +            vdevs: Virtual devices, e.g.::
> +
> +                vdevs=[
> +                    VirtualDevice('net_ring0'),
> +                    VirtualDevice('net_ring1')
> +                ]
> +            other_eal_param: User-defined DPDK EAL parameters, e.g.:
> +                ``other_eal_param='--single-file-segments'``
>           """
>           self._lcore_list = f"-l {lcore_list}"
>           self._memory_channels = f"-n {memory_channels}"
> @@ -61,6 +76,7 @@ def __init__(
>           self._other_eal_param = other_eal_param
>   
>       def __str__(self) -> str:
> +        """Create the EAL string."""
>           return (
>               f"{self._lcore_list} "
>               f"{self._memory_channels} "
> @@ -72,11 +88,21 @@ def __str__(self) -> str:
>   
>   
>   class SutNode(Node):
> -    """
> -    A class for managing connections to the System under Test, providing
> -    methods that retrieve the necessary information about the node (such as
> -    CPU, memory and NIC details) and configuration capabilities.
> -    Another key capability is building DPDK according to given build target.
> +    """The system under test node.
> +
> +    The SUT node extends :class:`Node` with DPDK specific features:
> +
> +        * DPDK build,
> +        * Gathering of DPDK build info,
> +        * The running of DPDK apps, interactively or as a one-time execution,
> +        * DPDK apps cleanup.
> +
> +    The :option:`--tarball` command line argument and the :envvar:`DTS_DPDK_TARBALL`
> +    environment variable configure the path to the DPDK tarball
> +    or the git commit ID, tag ID or tree ID to test.

I just want to make sure: do we also use the --tarball option to set a git
commit ID instead of a tarball as the source?

> +
> +    Attributes:
> +        config: The SUT node configuration.
>       """
>   
>       config: SutNodeConfiguration
> @@ -94,6 +120,11 @@ class SutNode(Node):
>       _path_to_devbind_script: PurePath | None
>   
>       def __init__(self, node_config: SutNodeConfiguration):
> +        """Extend the constructor with SUT node specifics.
> +
> +        Args:
> +            node_config: The SUT node's test run configuration.
> +        """
>           super(SutNode, self).__init__(node_config)
>           self._dpdk_prefix_list = []
>           self._build_target_config = None
> @@ -113,6 +144,12 @@ def __init__(self, node_config: SutNodeConfiguration):
>   
>       @property
>       def _remote_dpdk_dir(self) -> PurePath:
> +        """The remote DPDK dir.
> +
> +        This internal property should be set after extracting the DPDK tarball. If it's not set,
> +        that implies the DPDK setup step has been skipped, in which case we can guess where
> +        a previous build was located.
> +        """
>           if self.__remote_dpdk_dir is None:
>               self.__remote_dpdk_dir = self._guess_dpdk_remote_dir()
>           return self.__remote_dpdk_dir
> @@ -123,6 +160,11 @@ def _remote_dpdk_dir(self, value: PurePath) -> None:
>   
>       @property
>       def remote_dpdk_build_dir(self) -> PurePath:
> +        """The remote DPDK build directory.
> +
> +        This is the directory where DPDK was built.
> +        We assume it was built in a subdirectory of the extracted tarball.
> +        """
>           if self._build_target_config:
>               return self.main_session.join_remote_path(
>                   self._remote_dpdk_dir, self._build_target_config.name
> @@ -132,6 +174,7 @@ def remote_dpdk_build_dir(self) -> PurePath:
>   
>       @property
>       def dpdk_version(self) -> str:
> +        """Last built DPDK version."""
>           if self._dpdk_version is None:
>               self._dpdk_version = self.main_session.get_dpdk_version(
>                   self._remote_dpdk_dir
> @@ -140,12 +183,14 @@ def dpdk_version(self) -> str:
>   
>       @property
>       def node_info(self) -> NodeInfo:
> +        """Additional node information."""
>           if self._node_info is None:
>               self._node_info = self.main_session.get_node_info()
>           return self._node_info
>   
>       @property
>       def compiler_version(self) -> str:
> +        """The node's compiler version."""
>           if self._compiler_version is None:
>               if self._build_target_config is not None:
>                   self._compiler_version = self.main_session.get_compiler_version(
> @@ -161,6 +206,7 @@ def compiler_version(self) -> str:
>   
>       @property
>       def path_to_devbind_script(self) -> PurePath:
> +        """The path to the dpdk-devbind.py script on the node."""
>           if self._path_to_devbind_script is None:
>               self._path_to_devbind_script = self.main_session.join_remote_path(
>                   self._remote_dpdk_dir, "usertools", "dpdk-devbind.py"
> @@ -168,6 +214,11 @@ def path_to_devbind_script(self) -> PurePath:
>           return self._path_to_devbind_script
>   
>       def get_build_target_info(self) -> BuildTargetInfo:
> +        """Get additional build target information.
> +
> +        Returns:
> +            The build target information.
> +        """
>           return BuildTargetInfo(
>               dpdk_version=self.dpdk_version, compiler_version=self.compiler_version
>           )
> @@ -178,8 +229,9 @@ def _guess_dpdk_remote_dir(self) -> PurePath:
>       def _set_up_build_target(
>           self, build_target_config: BuildTargetConfiguration
>       ) -> None:
> -        """
> -        Setup DPDK on the SUT node.
> +        """Setup DPDK on the SUT node.
> +
> +        Additional build target setup steps on top of those in :class:`Node`.
>           """
>           # we want to ensure that dpdk_version and compiler_version is reset for new
>           # build targets
> @@ -200,9 +252,7 @@ def _tear_down_build_target(self) -> None:
>       def _configure_build_target(
>           self, build_target_config: BuildTargetConfiguration
>       ) -> None:
> -        """
> -        Populate common environment variables and set build target config.
> -        """
> +        """Populate common environment variables and set build target config."""
>           self._env_vars = {}
>           self._build_target_config = build_target_config
>           self._env_vars.update(
> @@ -217,9 +267,7 @@ def _configure_build_target(
>   
>       @Node.skip_setup
>       def _copy_dpdk_tarball(self) -> None:
> -        """
> -        Copy to and extract DPDK tarball on the SUT node.
> -        """
> +        """Copy to and extract DPDK tarball on the SUT node."""
>           self._logger.info("Copying DPDK tarball to SUT.")
>           self.main_session.copy_to(SETTINGS.dpdk_tarball_path, self._remote_tmp_dir)
>   
> @@ -250,8 +298,9 @@ def _copy_dpdk_tarball(self) -> None:
>   
>       @Node.skip_setup
>       def _build_dpdk(self) -> None:
> -        """
> -        Build DPDK. Uses the already configured target. Assumes that the tarball has
> +        """Build DPDK.
> +
> +        Uses the already configured target. Assumes that the tarball has
>           already been copied to and extracted on the SUT node.
>           """
>           self.main_session.build_dpdk(
> @@ -262,15 +311,19 @@ def _build_dpdk(self) -> None:
>           )
>   
>       def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePath:
> -        """
> -        Build one or all DPDK apps. Requires DPDK to be already built on the SUT node.
> -        When app_name is 'all', build all example apps.
> -        When app_name is any other string, tries to build that example app.
> -        Return the directory path of the built app. If building all apps, return
> -        the path to the examples directory (where all apps reside).
> -        The meson_dpdk_args are keyword arguments
> -        found in meson_option.txt in root DPDK directory. Do not use -D with them,
> -        for example: enable_kmods=True.
> +        """Build one or all DPDK apps.
> +
> +        Requires DPDK to be already built on the SUT node.
> +
> +        Args:
> +            app_name: The name of the DPDK app to build.
> +                When `app_name` is ``all``, build all example apps.
> +            meson_dpdk_args: The arguments found in ``meson_options.txt`` in the root DPDK directory.
> +                Do not use ``-D`` with them.
> +
> +        Returns:
> +            The directory path of the built app. If building all apps, return
> +            the path to the examples directory (where all apps reside).
>           """
>           self.main_session.build_dpdk(
>               self._env_vars,
> @@ -291,9 +344,7 @@ def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePa
>           )
>   
>       def kill_cleanup_dpdk_apps(self) -> None:
> -        """
> -        Kill all dpdk applications on the SUT. Cleanup hugepages.
> -        """
> +        """Kill all dpdk applications on the SUT, then clean up hugepages."""
>           if self._dpdk_kill_session and self._dpdk_kill_session.is_alive():
>               # we can use the session if it exists and responds
>               self._dpdk_kill_session.kill_cleanup_dpdk_apps(self._dpdk_prefix_list)
> @@ -312,33 +363,34 @@ def create_eal_parameters(
>           vdevs: list[VirtualDevice] | None = None,
>           other_eal_param: str = "",
>       ) -> "EalParameters":
> -        """
> -        Generate eal parameters character string;
> -        :param lcore_filter_specifier: a number of lcores/cores/sockets to use
> -                        or a list of lcore ids to use.
> -                        The default will select one lcore for each of two cores
> -                        on one socket, in ascending order of core ids.
> -        :param ascending_cores: True, use cores with the lowest numerical id first
> -                        and continue in ascending order. If False, start with the
> -                        highest id and continue in descending order. This ordering
> -                        affects which sockets to consider first as well.
> -        :param prefix: set file prefix string, eg:
> -                        prefix='vf'
> -        :param append_prefix_timestamp: if True, will append a timestamp to
> -                        DPDK file prefix.
> -        :param no_pci: switch of disable PCI bus eg:
> -                        no_pci=True
> -        :param vdevs: virtual device list, eg:
> -                        vdevs=[
> -                            VirtualDevice('net_ring0'),
> -                            VirtualDevice('net_ring1')
> -                        ]
> -        :param other_eal_param: user defined DPDK eal parameters, eg:
> -                        other_eal_param='--single-file-segments'
> -        :return: eal param string, eg:
> -                '-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420';
> -        """
> +        """Compose the EAL parameters.
> +
> +        Process the list of cores and the DPDK prefix and pass that along with
> +        the rest of the arguments.
>   
> +        Args:
> +            lcore_filter_specifier: A number of lcores/cores/sockets to use
> +                or a list of lcore ids to use.
> +                The default will select one lcore for each of two cores
> +                on one socket, in ascending order of core ids.
> +            ascending_cores: Sort cores in ascending order (lowest to highest IDs).
> +                If :data:`False`, sort in descending order.
> +            prefix: Set the file prefix string with which to start DPDK, e.g.: ``prefix='vf'``.
> +            append_prefix_timestamp: If :data:`True`, will append a timestamp to DPDK file prefix.
> +            no_pci: Switch to disable PCI bus e.g.: ``no_pci=True``.
> +            vdevs: Virtual devices, e.g.::
> +
> +                vdevs=[
> +                    VirtualDevice('net_ring0'),
> +                    VirtualDevice('net_ring1')
> +                ]
> +            other_eal_param: User-defined DPDK EAL parameters, e.g.:
> +                ``other_eal_param='--single-file-segments'``.
> +
> +        Returns:
> +            An EAL param string, such as
> +            ``-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420``.
> +        """
>           lcore_list = LogicalCoreList(
>               self.filter_lcores(lcore_filter_specifier, ascending_cores)
>           )
> @@ -364,14 +416,29 @@ def create_eal_parameters(
>       def run_dpdk_app(
>           self, app_path: PurePath, eal_args: "EalParameters", timeout: float = 30
>       ) -> CommandResult:
> -        """
> -        Run DPDK application on the remote node.
> +        """Run DPDK application on the remote node.
> +
> +        The application is not run interactively - the command that starts the application
> +        is executed and then the call waits for it to finish execution.
> +
> +        Args:
> +            app_path: The remote path to the DPDK application.
> +            eal_args: EAL parameters to run the DPDK application with.
> +            timeout: Wait at most this long in seconds to execute the command.

confusing timeout
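
For what it's worth, an illustrative call (the app path and timeout
value are made up) would be:

    from pathlib import PurePath

    eal = sut_node.create_eal_parameters()
    # As I read it, timeout bounds the whole run of the app, i.e. how
    # long send_command waits for it to finish:
    result = sut_node.run_dpdk_app(
        PurePath("/tmp/dpdk/build/examples/dpdk-helloworld"), eal, timeout=60
    )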

> +
> +        Returns:
> +            The result of the DPDK app execution.
>           """
>           return self.main_session.send_command(
>               f"{app_path} {eal_args}", timeout, privileged=True, verify=True
>           )
>   
>       def configure_ipv4_forwarding(self, enable: bool) -> None:
> +        """Enable/disable IPv4 forwarding on the node.
> +
> +        Args:
> +            enable: If :data:`True`, enable the forwarding, otherwise disable it.
> +        """
>           self.main_session.configure_ipv4_forwarding(enable)
>   
>       def create_interactive_shell(
> @@ -381,9 +448,13 @@ def create_interactive_shell(
>           privileged: bool = False,
>           eal_parameters: EalParameters | str | None = None,
>       ) -> InteractiveShellType:
> -        """Factory method for creating a handler for an interactive session.
> +        """Extend the factory for interactive session handlers.
> +
> +        The extensions are SUT node specific:
>   
> -        Instantiate shell_cls according to the remote OS specifics.
> +            * The default for `eal_parameters`,
> +            * The interactive shell path `shell_cls.path` is prepended with the path to the remote
> +              DPDK build directory for DPDK apps.
>   
>           Args:
>               shell_cls: The class of the shell.
> @@ -393,9 +464,10 @@ def create_interactive_shell(
>               privileged: Whether to run the shell with administrative privileges.
>               eal_parameters: List of EAL parameters to use to launch the app. If this
>                   isn't provided or an empty string is passed, it will default to calling
> -                create_eal_parameters().
> +                :meth:`create_eal_parameters`.
> +
>           Returns:
> -            Instance of the desired interactive application.
> +            An instance of the desired interactive application shell.
>           """
>           if not eal_parameters:
>               eal_parameters = self.create_eal_parameters()
> @@ -414,8 +486,8 @@ def bind_ports_to_driver(self, for_dpdk: bool = True) -> None:
>           """Bind all ports on the SUT to a driver.
>   
>           Args:
> -            for_dpdk: Boolean that, when True, binds ports to os_driver_for_dpdk
> -            or, when False, binds to os_driver. Defaults to True.
> +            for_dpdk: If :data:`True`, binds ports to os_driver_for_dpdk.
> +                If :data:`False`, binds to os_driver.
>           """
>           for port in self.ports:
>               driver = port.os_driver_for_dpdk if for_dpdk else port.os_driver
> diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
> index 166eb8430e..69eb33ccb1 100644
> --- a/dts/framework/testbed_model/tg_node.py
> +++ b/dts/framework/testbed_model/tg_node.py
> @@ -5,13 +5,8 @@
>   
>   """Traffic generator node.
>   
> -This is the node where the traffic generator resides.
> -The distinction between a node and a traffic generator is as follows:
> -A node is a host that DTS connects to. It could be a baremetal server,
> -a VM or a container.
> -A traffic generator is software running on the node.
> -A traffic generator node is a node running a traffic generator.
> -A node can be a traffic generator node as well as system under test node.
> +A traffic generator (TG) generates traffic that's sent towards the SUT node.
> +A TG node is where the TG runs.
>   """
>   
>   from scapy.packet import Packet  # type: ignore[import]
> @@ -24,13 +19,16 @@
>   
>   
>   class TGNode(Node):
> -    """Manage connections to a node with a traffic generator.
> +    """The traffic generator node.
>   
> -    Apart from basic node management capabilities, the Traffic Generator node has
> -    specialized methods for handling the traffic generator running on it.
> +    The TG node extends :class:`Node` with TG specific features:
>   
> -    Arguments:
> -        node_config: The user configuration of the traffic generator node.
> +        * Traffic generator initialization,
> +        * The sending of traffic and receiving packets,
> +        * The sending of traffic without receiving packets.
> +
> +    Not all traffic generators are capable of capturing traffic, which is why there
> +    must be a way to send traffic without capturing it.
>   
>       Attributes:
>           traffic_generator: The traffic generator running on the node.
> @@ -39,6 +37,13 @@ class TGNode(Node):
>       traffic_generator: CapturingTrafficGenerator
>   
>       def __init__(self, node_config: TGNodeConfiguration):
> +        """Extend the constructor with TG node specifics.
> +
> +        Initialize the traffic generator on the TG node.
> +
> +        Args:
> +            node_config: The TG node's test run configuration.
> +        """
>           super(TGNode, self).__init__(node_config)
>           self.traffic_generator = create_traffic_generator(
>               self, node_config.traffic_generator
> @@ -52,17 +57,17 @@ def send_packet_and_capture(
>           receive_port: Port,
>           duration: float = 1,
>       ) -> list[Packet]:
> -        """Send a packet, return received traffic.
> +        """Send `packet`, return received traffic.
>   
> -        Send a packet on the send_port and then return all traffic captured
> -        on the receive_port for the given duration. Also record the captured traffic
> +        Send `packet` on `send_port` and then return all traffic captured
> +        on `receive_port` for the given duration. Also record the captured traffic
>           in a pcap file.
>   
>           Args:
>               packet: The packet to send.
>               send_port: The egress port on the TG node.
>               receive_port: The ingress port in the TG node.
> -            duration: Capture traffic for this amount of time after sending the packet.
> +            duration: Capture traffic for this amount of time after sending `packet`.
>   
>           Returns:
>                A list of received packets. May be empty if no packets are captured.
> @@ -72,6 +77,9 @@ def send_packet_and_capture(
>           )
>   
>       def close(self) -> None:
> -        """Free all resources used by the node"""
> +        """Free all resources used by the node.
> +
> +        This extends the superclass method with TG cleanup.
> +        """
>           self.traffic_generator.close()
>           super(TGNode, self).close()


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 20/21] dts: scapy tg docstring update
  2023-11-21 16:33                 ` Yoan Picchi
@ 2023-11-22 13:18                   ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-22 13:18 UTC (permalink / raw)
  To: Yoan Picchi
  Cc: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek, dev

On Tue, Nov 21, 2023 at 5:33 PM Yoan Picchi <yoan.picchi@foss.arm.com> wrote:
>
> On 11/15/23 13:09, Juraj Linkeš wrote:
> > Format according to the Google format and PEP257, with slight
> > deviations.
> >
> > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > ---
> >   .../testbed_model/traffic_generator/scapy.py  | 91 +++++++++++--------
> >   1 file changed, 54 insertions(+), 37 deletions(-)
> >
> > diff --git a/dts/framework/testbed_model/traffic_generator/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py
> > index 51864b6e6b..ed4f879925 100644
> > --- a/dts/framework/testbed_model/traffic_generator/scapy.py
> > +++ b/dts/framework/testbed_model/traffic_generator/scapy.py
> > @@ -2,14 +2,15 @@
> >   # Copyright(c) 2022 University of New Hampshire
> >   # Copyright(c) 2023 PANTHEON.tech s.r.o.
> >
> > -"""Scapy traffic generator.
> > +"""The Scapy traffic generator.
> >
> > -Traffic generator used for functional testing, implemented using the Scapy library.
> > +A traffic generator used for functional testing, implemented with
> > +`the Scapy library <https://scapy.readthedocs.io/en/latest/>`_.
> >   The traffic generator uses an XML-RPC server to run Scapy on the remote TG node.
> >
> > -The XML-RPC server runs in an interactive remote SSH session running Python console,
> > -where we start the server. The communication with the server is facilitated with
> > -a local server proxy.
> > +The traffic generator uses the :mod:`xmlrpc.server` module to run an XML-RPC server
> > +in an interactive remote Python SSH session. The communication with the server is facilitated
> > +with a local server proxy from the :mod:`xmlrpc.client` module.
> >   """
> >
> >   import inspect
> > @@ -69,20 +70,20 @@ def scapy_send_packets_and_capture(
> >       recv_iface: str,
> >       duration: float,
> >   ) -> list[bytes]:
> > -    """RPC function to send and capture packets.
> > +    """The RPC function to send and capture packets.
> >
> > -    The function is meant to be executed on the remote TG node.
> > +    The function is meant to be executed on the remote TG node via the server proxy.
> >
> >       Args:
> >           xmlrpc_packets: The packets to send. These need to be converted to
> > -            xmlrpc.client.Binary before sending to the remote server.
> > +            :class:`~xmlrpc.client.Binary` objects before sending to the remote server.
>
> The string is not raw and there's no \s. As per your explanation a few
> commits earlier, this might cause an issue with the tilde?
> Looking around, I see it also happens several times here and in the
> previous commit.
>

The issue is not with the tilde, but with backticks. When backticks
are followed by certain characters (I mentioned alphanumeric
characters, but there may be others), the character right after the
backtick must be escaped, and the raw string results in the right
escaping for Sphinx.
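
To make that concrete, a tiny illustration of the `\s` pattern you
mention (not a line from the patch):

    def example() -> None:
        r"""Send all `packet`\s on the port.

        The "s" right after the closing backtick has to be escaped for
        Sphinx, and the r-prefix keeps Python from consuming the
        backslash before Sphinx sees it.
        """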

> >           send_iface: The logical name of the egress interface.
> >           recv_iface: The logical name of the ingress interface.
> >           duration: Capture for this amount of time, in seconds.
> >
> >       Returns:
> >           A list of bytes. Each item in the list represents one packet, which needs
> > -            to be converted back upon transfer from the remote node.
> > +        to be converted back upon transfer from the remote node.
> >       """
> >       scapy_packets = [scapy.all.Packet(packet.data) for packet in xmlrpc_packets]
> >       sniffer = scapy.all.AsyncSniffer(
> > @@ -98,19 +99,15 @@ def scapy_send_packets_and_capture(
> >   def scapy_send_packets(
> >       xmlrpc_packets: list[xmlrpc.client.Binary], send_iface: str
> >   ) -> None:
> > -    """RPC function to send packets.
> > +    """The RPC function to send packets.
> >
> > -    The function is meant to be executed on the remote TG node.
> > -    It doesn't return anything, only sends packets.
> > +    The function is meant to be executed on the remote TG node via the server proxy.
> > +    It only sends `xmlrpc_packets`, without capturing them.
> >
> >       Args:
> >           xmlrpc_packets: The packets to send. These need to be converted to
> > -            xmlrpc.client.Binary before sending to the remote server.
> > +            :class:`~xmlrpc.client.Binary` objects before sending to the remote server.
> >           send_iface: The logical name of the egress interface.
> > -
> > -    Returns:
> > -        A list of bytes. Each item in the list represents one packet, which needs
> > -            to be converted back upon transfer from the remote node.
> >       """
> >       scapy_packets = [scapy.all.Packet(packet.data) for packet in xmlrpc_packets]
> >       scapy.all.sendp(scapy_packets, iface=send_iface, realtime=True, verbose=True)
> > @@ -130,11 +127,19 @@ def scapy_send_packets(
> >
> >
> >   class QuittableXMLRPCServer(SimpleXMLRPCServer):
> > -    """Basic XML-RPC server that may be extended
> > -    by functions serializable by the marshal module.
> > +    r"""Basic XML-RPC server.
>
> But you have a raw string here, and I don't see why it's needed.
>

There is no need here, I'll remove it. It's a remnant from a much
bigger docstring which caused issues when sending the code to the TG
node. I've talked to Jeremy and we'll fix it in a separate patch which
will introduce the full docstring.
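
For context, it's the function's code object that crosses the wire via
marshal, roughly like this (a minimal runnable sketch of what
add_rpc_function presumably does, not the exact framework code):

    import marshal
    import types
    import xmlrpc.client

    def ping() -> str:
        return "pong"

    # client side: serialize the code object into a Binary blob
    function_bytes = xmlrpc.client.Binary(marshal.dumps(ping.__code__))

    # server side: rebuild the function and register it under a name
    function = types.FunctionType(marshal.loads(function_bytes.data), globals(), "ping")
    assert function() == "pong"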

> > +
> > +    The server may be augmented by functions serializable by the :mod:`marshal` module.
> >       """
> >
> >       def __init__(self, *args, **kwargs):
> > +        """Extend the XML-RPC server initialization.
> > +
> > +        Args:
> > +            args: The positional arguments that will be passed to the superclass's constructor.
> > +            kwargs: The keyword arguments that will be passed to the superclass's constructor.
> > +                The `allow_none` argument will be set to :data:`True`.
> > +        """
> >           kwargs["allow_none"] = True
> >           super().__init__(*args, **kwargs)
> >           self.register_introspection_functions()
> > @@ -142,13 +147,12 @@ def __init__(self, *args, **kwargs):
> >           self.register_function(self.add_rpc_function)
> >
> >       def quit(self) -> None:
> > +        """Quit the server."""
> >           self._BaseServer__shutdown_request = True
> >           return None
> >
> >       def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> None:
> > -        """Add a function to the server.
> > -
> > -        This is meant to be executed remotely.
> > +        """Add a function to the server from the local server proxy.
> >
> >           Args:
> >                 name: The name of the function.
> > @@ -159,6 +163,11 @@ def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> N
> >           self.register_function(function)
> >
> >       def serve_forever(self, poll_interval: float = 0.5) -> None:
> > +        """Extend the superclass method with an additional print.
> > +
> > +        Once executed in the local server proxy, the print gives us a clear string to expect
> > +        when starting the server. The print means the function was executed on the XML-RPC server.
> > +        """
> >           print("XMLRPC OK")
> >           super().serve_forever(poll_interval)
> >
> > @@ -166,19 +175,12 @@ def serve_forever(self, poll_interval: float = 0.5) -> None:
> >   class ScapyTrafficGenerator(CapturingTrafficGenerator):
> >       """Provides access to scapy functions via an RPC interface.
> >
> > -    The traffic generator first starts an XML-RPC on the remote TG node.
> > -    Then it populates the server with functions which use the Scapy library
> > -    to send/receive traffic.
> > -
> > -    Any packets sent to the remote server are first converted to bytes.
> > -    They are received as xmlrpc.client.Binary objects on the server side.
> > -    When the server sends the packets back, they are also received as
> > -    xmlrpc.client.Binary object on the client side, are converted back to Scapy
> > -    packets and only then returned from the methods.
> > +    The class extends the base class with remote execution of Scapy functions.
> >
> > -    Arguments:
> > -        tg_node: The node where the traffic generator resides.
> > -        config: The user configuration of the traffic generator.
> > +    Any packets sent to the remote server are first converted to bytes. They are received as
> > +    :class:`~xmlrpc.client.Binary` objects on the server side. When the server sends the packets
> > +    back, they are also received as :class:`~xmlrpc.client.Binary` objects on the client side, are
> > +    converted back to :class:`scapy.packet.Packet` objects and only then returned from the methods.
> >
> >       Attributes:
> >           session: The exclusive interactive remote session created by the Scapy
> > @@ -192,6 +194,22 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator):
> >       _config: ScapyTrafficGeneratorConfig
> >
> >       def __init__(self, tg_node: Node, config: ScapyTrafficGeneratorConfig):
> > +        """Extend the constructor with Scapy TG specifics.
> > +
> > +        The traffic generator first starts an XML-RPC server on the remote `tg_node`.
> > +        Then it populates the server with functions which use the Scapy library
> > +        to send/receive traffic:
> > +
> > +            * :func:`scapy_send_packets_and_capture`
> > +            * :func:`scapy_send_packets`
> > +
> > +        To enable verbose logging from the xmlrpc client, use the :option:`--verbose`
> > +        command line argument or the :envvar:`DTS_VERBOSE` environment variable.
> > +
> > +        Args:
> > +            tg_node: The node where the traffic generator resides.
> > +            config: The traffic generator's test run configuration.
> > +        """
> >           super().__init__(tg_node, config)
> >
> >           assert (
> > @@ -237,10 +255,8 @@ def _start_xmlrpc_server_in_remote_python(self, listen_port: int) -> None:
> >               [line for line in src.splitlines() if not line.isspace() and line != ""]
> >           )
> >
> > -        spacing = "\n" * 4
> > -
> >           # execute it in the python terminal
> > -        self.session.send_command(spacing + src + spacing)
> > +        self.session.send_command(src + "\n")
> >           self.session.send_command(
> >               f"server = QuittableXMLRPCServer(('0.0.0.0', {listen_port}));"
> >               f"server.serve_forever()",
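
The client side of this then talks to the server through
xmlrpc.client.ServerProxy, along these lines (a sketch; the address,
port and interface name are made up):

    import xmlrpc.client

    proxy = xmlrpc.client.ServerProxy("http://192.168.0.2:8000")
    packets = [xmlrpc.client.Binary(b"\x00" * 64)]  # packets travel as Binary blobs
    proxy.scapy_send_packets(packets, "eth1")
    proxy.quit()
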
> > @@ -274,6 +290,7 @@ def _send_packets_and_capture(
> >           return scapy_packets
> >
> >       def close(self) -> None:
> > +        """Close the traffic generator."""
> >           try:
> >               self.rpc_server_proxy.quit()
> >           except ConnectionRefusedError:
>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 16/21] dts: posix and linux sessions docstring update
  2023-11-15 13:09               ` [PATCH v7 16/21] dts: posix and linux sessions " Juraj Linkeš
@ 2023-11-22 13:24                 ` Yoan Picchi
  2023-11-22 13:35                   ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Yoan Picchi @ 2023-11-22 13:24 UTC (permalink / raw)
  To: Juraj Linkeš,
	thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek
  Cc: dev

On 11/15/23 13:09, Juraj Linkeš wrote:
> Format according to the Google format and PEP257, with slight
> deviations.
> 
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>   dts/framework/testbed_model/linux_session.py | 63 ++++++++++-----
>   dts/framework/testbed_model/posix_session.py | 81 +++++++++++++++++---
>   2 files changed, 113 insertions(+), 31 deletions(-)
> 
> diff --git a/dts/framework/testbed_model/linux_session.py b/dts/framework/testbed_model/linux_session.py
> index f472bb8f0f..279954ff63 100644
> --- a/dts/framework/testbed_model/linux_session.py
> +++ b/dts/framework/testbed_model/linux_session.py
> @@ -2,6 +2,13 @@
>   # Copyright(c) 2023 PANTHEON.tech s.r.o.
>   # Copyright(c) 2023 University of New Hampshire
>   
> +"""Linux OS translator.
> +
> +Translate OS-unaware calls into Linux calls/utilities. Most Linux distributions are mostly
> +compliant with POSIX standards, so this module only implements the parts that aren't;
> +the common parts of mostly POSIX compliant distributions live in the intermediate POSIX module.
> +"""
> +
>   import json
>   from ipaddress import IPv4Interface, IPv6Interface
>   from typing import TypedDict, Union
> @@ -17,43 +24,51 @@
>   
>   
>   class LshwConfigurationOutput(TypedDict):
> +    """The relevant parts of ``lshw``'s ``configuration`` section."""
> +
> +    #:
>       link: str
>   
>   
>   class LshwOutput(TypedDict):
> -    """
> -    A model of the relevant information from json lshw output, e.g.:
> -    {
> -    ...
> -    "businfo" : "pci@0000:08:00.0",
> -    "logicalname" : "enp8s0",
> -    "version" : "00",
> -    "serial" : "52:54:00:59:e1:ac",
> -    ...
> -    "configuration" : {
> -      ...
> -      "link" : "yes",
> -      ...
> -    },
> -    ...
> +    """A model of the relevant information from ``lshw``'s json output.
> +
> +    For example::
> +
> +        {
> +        ...
> +        "businfo" : "pci@0000:08:00.0",
> +        "logicalname" : "enp8s0",
> +        "version" : "00",
> +        "serial" : "52:54:00:59:e1:ac",
> +        ...
> +        "configuration" : {
> +          ...
> +          "link" : "yes",
> +          ...
> +        },
> +        ...
> +        }
>       """
>   
> +    #:
>       businfo: str
> +    #:
>       logicalname: NotRequired[str]
> +    #:
>       serial: NotRequired[str]
> +    #:
>       configuration: LshwConfigurationOutput
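
As a side note for other reviewers: the bare #: comments are Sphinx
autodoc markers. They attach (here intentionally empty) documentation to
the attribute that follows, so the TypedDict keys appear in the
generated API docs. A non-empty variant would look like:

    #: Whether the port has a link.
    link: str
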
>   
>   
>   class LinuxSession(PosixSession):
> -    """
> -    The implementation of non-Posix compliant parts of Linux remote sessions.
> -    """
> +    """The implementation of non-Posix compliant parts of Linux."""
>   
>       @staticmethod
>       def _get_privileged_command(command: str) -> str:
>           return f"sudo -- sh -c '{command}'"
>   
>       def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
> +        """Overrides :meth:`~.os_session.OSSession.get_remote_cpus`."""
>           cpu_info = self.send_command("lscpu -p=CPU,CORE,SOCKET,NODE|grep -v \\#").stdout
>           lcores = []
>           for cpu_line in cpu_info.splitlines():
> @@ -65,18 +80,20 @@ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
>           return lcores
>   
>       def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
> +        """Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`."""
>           return dpdk_prefix
>   
> -    def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
> +    def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
> +        """Overrides :meth:`~.os_session.OSSession.setup_hugepages`."""
>           self._logger.info("Getting Hugepage information.")
>           hugepage_size = self._get_hugepage_size()
>           hugepages_total = self._get_hugepages_total()
>           self._numa_nodes = self._get_numa_nodes()
>   
> -        if force_first_numa or hugepages_total != hugepage_amount:
> +        if force_first_numa or hugepages_total != hugepage_count:
>               # when forcing numa, we need to clear existing hugepages regardless
>               # of size, so they can be moved to the first numa node
> -            self._configure_huge_pages(hugepage_amount, hugepage_size, force_first_numa)
> +            self._configure_huge_pages(hugepage_count, hugepage_size, force_first_numa)
>           else:
>               self._logger.info("Hugepages already configured.")
>           self._mount_huge_pages()
> @@ -140,6 +157,7 @@ def _configure_huge_pages(
>           )
>   
>       def update_ports(self, ports: list[Port]) -> None:
> +        """Overrides :meth:`~.os_session.OSSession.update_ports`."""
>           self._logger.debug("Gathering port info.")
>           for port in ports:
>               assert (
> @@ -178,6 +196,7 @@ def _update_port_attr(
>               )
>   
>       def configure_port_state(self, port: Port, enable: bool) -> None:
> +        """Overrides :meth:`~.os_session.OSSession.configure_port_state`."""
>           state = "up" if enable else "down"
>           self.send_command(
>               f"ip link set dev {port.logical_name} {state}", privileged=True
> @@ -189,6 +208,7 @@ def configure_port_ip_address(
>           port: Port,
>           delete: bool,
>       ) -> None:
> +        """Overrides :meth:`~.os_session.OSSession.configure_port_ip_address`."""
>           command = "del" if delete else "add"
>           self.send_command(
>               f"ip address {command} {address} dev {port.logical_name}",
> @@ -197,5 +217,6 @@ def configure_port_ip_address(
>           )
>   
>       def configure_ipv4_forwarding(self, enable: bool) -> None:
> +        """Overrides :meth:`~.os_session.OSSession.configure_ipv4_forwarding`."""
>           state = 1 if enable else 0
>           self.send_command(f"sysctl -w net.ipv4.ip_forward={state}", privileged=True)
> diff --git a/dts/framework/testbed_model/posix_session.py b/dts/framework/testbed_model/posix_session.py
> index 1d1d5b1b26..a4824aa274 100644
> --- a/dts/framework/testbed_model/posix_session.py
> +++ b/dts/framework/testbed_model/posix_session.py
> @@ -2,6 +2,15 @@
>   # Copyright(c) 2023 PANTHEON.tech s.r.o.
>   # Copyright(c) 2023 University of New Hampshire
>   
> +"""POSIX compliant OS translator.
> +
> +Translates OS-unaware calls into POSIX compliant calls/utilities. POSIX is a set of standards
> +for portability between Unix operating systems which not all Linux distributions
> +(or the tools most frequently bundled with said distributions) adhere to. Most Linux
> +distributions are mostly compliant, though.
> +This intermediate module implements the common parts of mostly POSIX compliant distributions.
> +"""
> +
>   import re
>   from collections.abc import Iterable
>   from pathlib import PurePath, PurePosixPath
> @@ -15,13 +24,21 @@
>   
>   
>   class PosixSession(OSSession):
> -    """
> -    An intermediary class implementing the Posix compliant parts of
> -    Linux and other OS remote sessions.
> -    """
> +    """An intermediary class implementing the POSIX standard."""
>   
>       @staticmethod
>       def combine_short_options(**opts: bool) -> str:
> +        """Combine shell options into one argument.
> +
> +        These are options such as ``-x``, ``-v``, ``-f`` which are combined into ``-xvf``.
> +
> +        Args:
> +            opts: The keys are option names (usually one letter) and the bool values indicate
> +                whether to include the option in the resulting argument.
> +
> +        Returns:
> +            The options combined into one argument.
> +        """
>           ret_opts = ""
>           for opt, include in opts.items():
>               if include:
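
To illustrate the intended usage (a hypothetical call, not from the
patch):

    opts = PosixSession.combine_short_options(r=True, f=True, v=False)
    # presumably " -rf", appended right after a command name,
    # e.g. f"rm{opts} {remote_dir_path}" further below
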
> @@ -33,17 +50,19 @@ def combine_short_options(**opts: bool) -> str:
>           return ret_opts
>   
>       def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePosixPath:
> +        """Overrides :meth:`~.os_session.OSSession.guess_dpdk_remote_dir`."""
>           remote_guess = self.join_remote_path(remote_dir, "dpdk-*")
>           result = self.send_command(f"ls -d {remote_guess} | tail -1")
>           return PurePosixPath(result.stdout)
>   
>       def get_remote_tmp_dir(self) -> PurePosixPath:
> +        """Overrides :meth:`~.os_session.OSSession.get_remote_tmp_dir`."""
>           return PurePosixPath("/tmp")
>   
>       def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
> -        """
> -        Create extra environment variables needed for i686 arch build. Get information
> -        from the node if needed.
> +        """Overrides :meth:`~.os_session.OSSession.get_dpdk_build_env_vars`.
> +
> +        Supported architecture: ``i686``.
>           """
>           env_vars = {}
>           if arch == Architecture.i686:
> @@ -63,6 +82,7 @@ def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
>           return env_vars
>   
>       def join_remote_path(self, *args: str | PurePath) -> PurePosixPath:
> +        """Overrides :meth:`~.os_session.OSSession.join_remote_path`."""
>           return PurePosixPath(*args)
>   
>       def copy_from(
> @@ -70,6 +90,7 @@ def copy_from(
>           source_file: str | PurePath,
>           destination_file: str | PurePath,
>       ) -> None:
> +        """Overrides :meth:`~.os_session.OSSession.copy_from`."""
>           self.remote_session.copy_from(source_file, destination_file)
>   
>       def copy_to(
> @@ -77,6 +98,7 @@ def copy_to(
>           source_file: str | PurePath,
>           destination_file: str | PurePath,
>       ) -> None:
> +        """Overrides :meth:`~.os_session.OSSession.copy_to`."""
>           self.remote_session.copy_to(source_file, destination_file)
>   
>       def remove_remote_dir(
> @@ -85,6 +107,7 @@ def remove_remote_dir(
>           recursive: bool = True,
>           force: bool = True,
>       ) -> None:
> +        """Overrides :meth:`~.os_session.OSSession.remove_remote_dir`."""
>           opts = PosixSession.combine_short_options(r=recursive, f=force)
>           self.send_command(f"rm{opts} {remote_dir_path}")
>   
> @@ -93,6 +116,7 @@ def extract_remote_tarball(
>           remote_tarball_path: str | PurePath,
>           expected_dir: str | PurePath | None = None,
>       ) -> None:
> +        """Overrides :meth:`~.os_session.OSSession.extract_remote_tarball`."""
>           self.send_command(
>               f"tar xfm {remote_tarball_path} "
>               f"-C {PurePosixPath(remote_tarball_path).parent}",
> @@ -110,6 +134,7 @@ def build_dpdk(
>           rebuild: bool = False,
>           timeout: float = SETTINGS.compile_timeout,
>       ) -> None:
> +        """Overrides :meth:`~.os_session.OSSession.build_dpdk`."""
>           try:
>               if rebuild:
>                   # reconfigure, then build
> @@ -140,12 +165,14 @@ def build_dpdk(
>               raise DPDKBuildError(f"DPDK build failed when doing '{e.command}'.")
>   
>       def get_dpdk_version(self, build_dir: str | PurePath) -> str:
> +        """Overrides :meth:`~.os_session.OSSession.get_dpdk_version`."""
>           out = self.send_command(
>               f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True
>           )
>           return out.stdout
>   
>       def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
> +        """Overrides :meth:`~.os_session.OSSession.kill_cleanup_dpdk_apps`."""
>           self._logger.info("Cleaning up DPDK apps.")
>           dpdk_runtime_dirs = self._get_dpdk_runtime_dirs(dpdk_prefix_list)
>           if dpdk_runtime_dirs:
> @@ -159,6 +186,14 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
>       def _get_dpdk_runtime_dirs(
>           self, dpdk_prefix_list: Iterable[str]
>       ) -> list[PurePosixPath]:
> +        """Find runtime directories DPDK apps are currently using.
> +
> +        Args:
> +            dpdk_prefix_list: The prefixes DPDK apps were started with.
> +
> +        Returns:
> +            The paths of DPDK apps' runtime dirs.
> +        """
>           prefix = PurePosixPath("/var", "run", "dpdk")
>           if not dpdk_prefix_list:
>               remote_prefixes = self._list_remote_dirs(prefix)
> @@ -170,9 +205,13 @@ def _get_dpdk_runtime_dirs(
>           return [PurePosixPath(prefix, dpdk_prefix) for dpdk_prefix in dpdk_prefix_list]
>   
>       def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None:
> -        """
> -        Return a list of directories of the remote_dir.
> -        If remote_path doesn't exist, return None.
> +        """Contents of remote_path.
> +
> +        Args:
> +            remote_path: List the contents of this path.
> +
> +        Returns:
> +            The contents of remote_path. If remote_path doesn't exist, return None.
>           """
>           out = self.send_command(
>               f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'"
> @@ -183,6 +222,17 @@ def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None:
>               return out.splitlines()
>   
>       def _get_dpdk_pids(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> list[int]:
> +        """Find PIDs of running DPDK apps.
> +
> +        Look at each "config" file found in dpdk_runtime_dirs and find the PIDs of the
> +        processes that opened those files.
> +
> +        Args:
> +            dpdk_runtime_dirs: The paths of DPDK apps' runtime dirs.
> +
> +        Returns:
> +            The PIDs of running DPDK apps.
> +        """
>           pids = []
>           pid_regex = r"p(\d+)"
>           for dpdk_runtime_dir in dpdk_runtime_dirs:
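
The p(\d+) regex presumably matches lsof-style output, where each
process ID comes out as a line like p1234 (lsof's -F p field format). A
quick illustration:

    import re

    re.findall(r"p(\d+)", "p1234\np5678")  # -> ['1234', '5678']
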
> @@ -203,6 +253,14 @@ def _remote_files_exists(self, remote_path: PurePath) -> bool:
>       def _check_dpdk_hugepages(
>           self, dpdk_runtime_dirs: Iterable[str | PurePath]
>       ) -> None:
> +        """Check there aren't any leftover hugepages.
> +
> +        If any hugegapes are found, emit a warning. The hugepages are investigated in the

hugegapes -> hugepages

> +        "hugepage_info" file of dpdk_runtime_dirs.
> +
> +        Args:
> +            dpdk_runtime_dirs: The paths of DPDK apps' runtime dirs.
> +        """
>           for dpdk_runtime_dir in dpdk_runtime_dirs:
>               hugepage_info = PurePosixPath(dpdk_runtime_dir, "hugepage_info")
>               if self._remote_files_exists(hugepage_info):
> @@ -220,9 +278,11 @@ def _remove_dpdk_runtime_dirs(
>               self.remove_remote_dir(dpdk_runtime_dir)
>   
>       def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
> +        """Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`."""
>           return ""
>   
>       def get_compiler_version(self, compiler_name: str) -> str:
> +        """Overrides :meth:`~.os_session.OSSession.get_compiler_version`."""
>           match compiler_name:
>               case "gcc":
>                   return self.send_command(
> @@ -240,6 +300,7 @@ def get_compiler_version(self, compiler_name: str) -> str:
>                   raise ValueError(f"Unknown compiler {compiler_name}")
>   
>       def get_node_info(self) -> NodeInfo:
> +        """Overrides :meth:`~.os_session.OSSession.get_node_info`."""
>           os_release_info = self.send_command(
>               "awk -F= '$1 ~ /^NAME$|^VERSION$/ {print $2}' /etc/os-release",
>               SETTINGS.timeout,


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 15/21] dts: os session docstring update
  2023-11-22 11:50                 ` Yoan Picchi
@ 2023-11-22 13:27                   ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-22 13:27 UTC (permalink / raw)
  To: Yoan Picchi
  Cc: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek, dev

On Wed, Nov 22, 2023 at 12:50 PM Yoan Picchi <yoan.picchi@foss.arm.com> wrote:
>
> On 11/15/23 13:09, Juraj Linkeš wrote:
> > Format according to the Google format and PEP257, with slight
> > deviations.
> >
> > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > ---
> >   dts/framework/testbed_model/os_session.py | 275 ++++++++++++++++------
> >   1 file changed, 208 insertions(+), 67 deletions(-)
> >
> > diff --git a/dts/framework/testbed_model/os_session.py b/dts/framework/testbed_model/os_session.py
> > index 76e595a518..72b9193a61 100644
> > --- a/dts/framework/testbed_model/os_session.py
> > +++ b/dts/framework/testbed_model/os_session.py
> > @@ -2,6 +2,29 @@
> >   # Copyright(c) 2023 PANTHEON.tech s.r.o.
> >   # Copyright(c) 2023 University of New Hampshire
> >
> > +"""OS-aware remote session.
> > +
> > +DPDK supports multiple different operating systems. This module defines the common API
> > +that OS-unaware layers use and translates that API into OS-aware calls/utility usage.
> > +
> > +Note:
> > +    Running commands with administrative privileges requires OS awareness. This is the only layer
> > +    that's aware of OS differences, so this is where non-privileged commands get converted
> > +    to privileged commands.
> > +
> > +Example:
> > +    A user wishes to remove a directory on
> > +    a remote :class:`~framework.testbed_model.sut_node.SutNode`.
> > +    The :class:`~framework.testbed_model.sut_node.SutNode` object isn't aware what OS the node
> > +    is running - it delegates the OS translation logic
> > +    to :attr:`~framework.testbed_model.node.Node.main_session`. The SUT node calls
> > +    :meth:`~OSSession.remove_remote_dir` with a generic, OS-unaware path and
> > +    the :attr:`~framework.testbed_model.node.Node.main_session` translates that
> > +    to ``rm -rf`` if the node's OS is Linux and other commands for other OSs.
> > +    It also translates the path to match the underlying OS.
> > +"""
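
The example in code form (a sketch; the path is made up):

    sut_node.main_session.remove_remote_dir("/tmp/dpdk-tmp")
    # on a Linux node this presumably ends up as: rm -rf /tmp/dpdk-tmp
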
> > +
> >   from abc import ABC, abstractmethod
> >   from collections.abc import Iterable
> >   from ipaddress import IPv4Interface, IPv6Interface
> > @@ -28,10 +51,16 @@
> >
> >
> >   class OSSession(ABC):
> > -    """
> > -    The OS classes create a DTS node remote session and implement OS specific
> > +    """OS-unaware to OS-aware translation API definition.
> > +
> > +    The OSSession classes create a remote session to a DTS node and implement OS specific
> >       behavior. There are a few control methods implemented by the base class; the rest need
> > -    to be implemented by derived classes.
> > +    to be implemented by subclasses.
> > +
> > +    Attributes:
> > +        name: The name of the session.
> > +        remote_session: The remote session maintaining the connection to the node.
> > +        interactive_session: The interactive remote session maintaining the connection to the node.
> >       """
> >
> >       _config: NodeConfiguration
> > @@ -46,6 +75,15 @@ def __init__(
> >           name: str,
> >           logger: DTSLOG,
> >       ):
> > +        """Initialize the OS-aware session.
> > +
> > +        Connect to the node right away and also create an interactive remote session.
> > +
> > +        Args:
> > +            node_config: The test run configuration of the node to connect to.
> > +            name: The name of the session.
> > +            logger: The logger instance this session will use.
> > +        """
> >           self._config = node_config
> >           self.name = name
> >           self._logger = logger
> > @@ -53,15 +91,15 @@ def __init__(
> >           self.interactive_session = create_interactive_session(node_config, logger)
> >
> >       def close(self, force: bool = False) -> None:
> > -        """
> > -        Close the remote session.
> > +        """Close the underlying remote session.
> > +
> > +        Args:
> > +            force: Force the closure of the connection.
> >           """
> >           self.remote_session.close(force)
> >
> >       def is_alive(self) -> bool:
> > -        """
> > -        Check whether the remote session is still responding.
> > -        """
> > +        """Check whether the underlying remote session is still responding."""
> >           return self.remote_session.is_alive()
> >
> >       def send_command(
> > @@ -72,10 +110,23 @@ def send_command(
> >           verify: bool = False,
> >           env: dict | None = None,
> >       ) -> CommandResult:
> > -        """
> > -        An all-purpose API in case the command to be executed is already
> > -        OS-agnostic, such as when the path to the executed command has been
> > -        constructed beforehand.
> > +        """An all-purpose API for OS-agnostic commands.
> > +
> > +        This can be used to execute a portable command that runs the same way
> > +        on all operating systems, such as Python.
> > +
> > +        The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT`
> > +        environment variable configure the timeout of command execution.
> > +
> > +        Args:
> > +            command: The command to execute.
> > +            timeout: Wait at most this long in seconds to execute the command.
>
> confusing start/end of execution
>

Ack.

> > +            privileged: Whether to run the command with administrative privileges.
> > +            verify: If :data:`True`, will check the exit code of the command.
> > +            env: A dictionary with environment variables to be used with the command execution.
> > +
> > +        Raises:
> > +            RemoteCommandExecutionError: If verify is :data:`True` and the command failed.
> >           """
> >           if privileged:
> >               command = self._get_privileged_command(command)
> > @@ -89,8 +140,20 @@ def create_interactive_shell(
> >           privileged: bool,
> >           app_args: str,
> >       ) -> InteractiveShellType:
> > -        """
> > -        See "create_interactive_shell" in SutNode
> > +        """Factory for interactive session handlers.
> > +
> > +        Instantiate `shell_cls` according to the remote OS specifics.
> > +
> > +        Args:
> > +            shell_cls: The class of the shell.
> > +            timeout: Timeout for reading output from the SSH channel. If you are
> > +                reading from the buffer and don't receive any data within the timeout
> > +                it will throw an error.
> > +            privileged: Whether to run the shell with administrative privileges.
> > +            app_args: The arguments to be passed to the application.
> > +
> > +        Returns:
> > +            An instance of the desired interactive application shell.
> >           """
> >           return shell_cls(
> >               self.interactive_session.session,
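
A usage sketch for reference, with TestPmdShell standing in for a
concrete handler class (the arguments are made up):

    testpmd = sut_node.main_session.create_interactive_shell(
        TestPmdShell, timeout=5, privileged=True, app_args="-l 0-3"
    )
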
> > @@ -114,27 +177,42 @@ def _get_privileged_command(command: str) -> str:
> >
> >       @abstractmethod
> >       def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePath:
> > -        """
> > -        Try to find DPDK remote dir in remote_dir.
> > +        """Try to find DPDK directory in `remote_dir`.
> > +
> > +        The directory is the one which is created after the extraction of the tarball. The files
> > +        are usually extracted into a directory starting with ``dpdk-``.
> > +
> > +        Returns:
> > +            The absolute path of the DPDK remote directory, empty path if not found.
> >           """
> >
> >       @abstractmethod
> >       def get_remote_tmp_dir(self) -> PurePath:
> > -        """
> > -        Get the path of the temporary directory of the remote OS.
> > +        """Get the path of the temporary directory of the remote OS.
> > +
> > +        Returns:
> > +            The absolute path of the temporary directory.
> >           """
> >
> >       @abstractmethod
> >       def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
> > -        """
> > -        Create extra environment variables needed for the target architecture. Get
> > -        information from the node if needed.
> > +        """Create extra environment variables needed for the target architecture.
> > +
> > +        Different architectures may require different configuration, such as setting 32-bit CFLAGS.
> > +
> > +        Returns:
> > +            A dictionary mapping environment variable names to their values.
> >           """
> >
> >       @abstractmethod
> >       def join_remote_path(self, *args: str | PurePath) -> PurePath:
> > -        """
> > -        Join path parts using the path separator that fits the remote OS.
> > +        """Join path parts using the path separator that fits the remote OS.
> > +
> > +        Args:
> > +            args: Any number of paths to join.
> > +
> > +        Returns:
> > +            The resulting joined path.
> >           """
> >
> >       @abstractmethod
> > @@ -143,13 +221,13 @@ def copy_from(
> >           source_file: str | PurePath,
> >           destination_file: str | PurePath,
> >       ) -> None:
> > -        """Copy a file from the remote Node to the local filesystem.
> > +        """Copy a file from the remote node to the local filesystem.
> >
> > -        Copy source_file from the remote Node associated with this remote
> > -        session to destination_file on the local filesystem.
> > +        Copy `source_file` from the remote node associated with this remote
> > +        session to `destination_file` on the local filesystem.
> >
> >           Args:
> > -            source_file: the file on the remote Node.
> > +            source_file: the file on the remote node.
> >               destination_file: a file or directory path on the local filesystem.
> >           """
> >
> > @@ -159,14 +237,14 @@ def copy_to(
> >           source_file: str | PurePath,
> >           destination_file: str | PurePath,
> >       ) -> None:
> > -        """Copy a file from local filesystem to the remote Node.
> > +        """Copy a file from local filesystem to the remote node.
> >
> > -        Copy source_file from local filesystem to destination_file
> > -        on the remote Node associated with this remote session.
> > +        Copy `source_file` from local filesystem to `destination_file`
> > +        on the remote node associated with this remote session.
> >
> >           Args:
> >               source_file: the file on the local filesystem.
> > -            destination_file: a file or directory path on the remote Node.
> > +            destination_file: a file or directory path on the remote node.
> >           """
> >
> >       @abstractmethod
> > @@ -176,8 +254,12 @@ def remove_remote_dir(
> >           recursive: bool = True,
> >           force: bool = True,
> >       ) -> None:
> > -        """
> > -        Remove remote directory, by default remove recursively and forcefully.
> > +        """Remove remote directory, by default remove recursively and forcefully.
> > +
> > +        Args:
> > +            remote_dir_path: The path of the directory to remove.
> > +            recursive: If :data:`True`, also remove all contents inside the directory.
> > +            force: If :data:`True`, ignore all warnings and try to remove at all costs.
> >           """
> >
> >       @abstractmethod
> > @@ -186,9 +268,12 @@ def extract_remote_tarball(
> >           remote_tarball_path: str | PurePath,
> >           expected_dir: str | PurePath | None = None,
> >       ) -> None:
> > -        """
> > -        Extract remote tarball in place. If expected_dir is a non-empty string, check
> > -        whether the dir exists after extracting the archive.
> > +        """Extract remote tarball in its remote directory.
> > +
> > +        Args:
> > +            remote_tarball_path: The path of the tarball on the remote node.
> > +            expected_dir: If non-empty, check whether `expected_dir` exists after extracting
> > +                the archive.
> >           """
> >
> >       @abstractmethod
> > @@ -201,69 +286,119 @@ def build_dpdk(
> >           rebuild: bool = False,
> >           timeout: float = SETTINGS.compile_timeout,
> >       ) -> None:
> > -        """
> > -        Build DPDK in the input dir with specified environment variables and meson
> > -        arguments.
> > +        """Build DPDK on the remote node.
> > +
> > +        An extracted DPDK tarball must be present on the node. The build consists of two steps::
> > +
> > +            meson setup <meson args> remote_dpdk_dir remote_dpdk_build_dir
> > +            ninja -C remote_dpdk_build_dir
> > +
> > +        The :option:`--compile-timeout` command line argument and the :envvar:`DTS_COMPILE_TIMEOUT`
> > +        environment variable configure the timeout of DPDK build.
> > +
> > +        Args:
> > +            env_vars: Use these environment variables when building DPDK.
> > +            meson_args: Use these meson arguments when building DPDK.
> > +            remote_dpdk_dir: The directory on the remote node where DPDK will be built.
> > +            remote_dpdk_build_dir: The target build directory on the remote node.
> > +            rebuild: If :data:`True`, do a subsequent build with ``meson configure`` instead
> > +                of ``meson setup``.
> > +            timeout: Wait at most this long in seconds for the build to execute.
>
> confusing start/end of execution
>

Ack.

> >           """
> >
> >       @abstractmethod
> >       def get_dpdk_version(self, version_path: str | PurePath) -> str:
> > -        """
> > -        Inspect DPDK version on the remote node from version_path.
> > +        """Inspect the DPDK version on the remote node.
> > +
> > +        Args:
> > +            version_path: The path to the VERSION file containing the DPDK version.
> > +
> > +        Returns:
> > +            The DPDK version.
> >           """
> >
> >       @abstractmethod
> >       def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
> > -        """
> > -        Compose a list of LogicalCores present on the remote node.
> > -        If use_first_core is False, the first physical core won't be used.
> > +        r"""Get the list of :class:`~framework.testbed_model.cpu.LogicalCore`\s on the remote node.
> > +
> > +        Args:
> > +            use_first_core: If :data:`False`, the first physical core won't be used.
> > +
> > +        Returns:
> > +            The logical cores present on the node.
> >           """
> >
> >       @abstractmethod
> >       def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
> > -        """
> > -        Kill and cleanup all DPDK apps identified by dpdk_prefix_list. If
> > -        dpdk_prefix_list is empty, attempt to find running DPDK apps to kill and clean.
> > +        """Kill and cleanup all DPDK apps.
> > +
> > +        Args:
> > +            dpdk_prefix_list: Kill all apps identified by `dpdk_prefix_list`.
> > +                If `dpdk_prefix_list` is empty, attempt to find running DPDK apps to kill and clean.
> >           """
> >
> >       @abstractmethod
> >       def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
> > -        """
> > -        Get the DPDK file prefix that will be used when running DPDK apps.
> > +        """Make OS-specific modification to the DPDK file prefix.
> > +
> > +        Args:
> > +           dpdk_prefix: The OS-unaware file prefix.
> > +
> > +        Returns:
> > +            The OS-specific file prefix.
> >           """
> >
> >       @abstractmethod
> > -    def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
> > -        """
> > -        Get the node's Hugepage Size, configure the specified amount of hugepages
> > +    def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
> > +        """Configure hugepages on the node.
> > +
> > +        Get the node's Hugepage Size, configure the specified count of hugepages
> >           if needed and mount the hugepages if needed.
> > -        If force_first_numa is True, configure hugepages just on the first socket.
> > +
> > +        Args:
> > +            hugepage_count: Configure this many hugepages.
> > +            force_first_numa:  If :data:`True`, configure hugepages just on the first socket.
>
> force *numa* configures the first *socket* ?
>

Good catch, should be numa node, not socket.

> >           """
> >
> >       @abstractmethod
> >       def get_compiler_version(self, compiler_name: str) -> str:
> > -        """
> > -        Get installed version of compiler used for DPDK
> > +        """Get installed version of compiler used for DPDK.
> > +
> > +        Args:
> > +            compiler_name: The name of the compiler executable.
> > +
> > +        Returns:
> > +            The compiler's version.
> >           """
> >
> >       @abstractmethod
> >       def get_node_info(self) -> NodeInfo:
> > -        """
> > -        Collect information about the node
> > +        """Collect additional information about the node.
> > +
> > +        Returns:
> > +            Node information.
> >           """
> >
> >       @abstractmethod
> >       def update_ports(self, ports: list[Port]) -> None:
> > -        """
> > -        Get additional information about ports:
> > -            Logical name (e.g. enp7s0) if applicable
> > -            Mac address
> > +        """Get additional information about ports from the operating system and update them.
> > +
> > +        The additional information is:
> > +
> > +            * Logical name (e.g. ``enp7s0``) if applicable,
> > +            * Mac address.
> > +
> > +        Args:
> > +            ports: The ports to update.
> >           """
> >
> >       @abstractmethod
> >       def configure_port_state(self, port: Port, enable: bool) -> None:
> > -        """
> > -        Enable/disable port.
> > +        """Enable/disable `port` in the operating system.
> > +
> > +        Args:
> > +            port: The port to configure.
> > +            enable: If :data:`True`, enable the port, otherwise shut it down.
> >           """
> >
> >       @abstractmethod
> > @@ -273,12 +408,18 @@ def configure_port_ip_address(
> >           port: Port,
> >           delete: bool,
> >       ) -> None:
> > -        """
> > -        Configure (add or delete) an IP address of the input port.
> > +        """Configure an IP address on `port` in the operating system.
> > +
> > +        Args:
> > +            address: The address to configure.
> > +            port: The port to configure.
> > +            delete: If :data:`True`, remove the IP address, otherwise configure it.
> >           """
> >
> >       @abstractmethod
> >       def configure_ipv4_forwarding(self, enable: bool) -> None:
> > -        """
> > -        Enable IPv4 forwarding in the underlying OS.
> > +        """Enable IPv4 forwarding in the operating system.
> > +
> > +        Args:
> > +            enable: If :data:`True`, enable the forwarding, otherwise disable it.
> >           """
>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 17/21] dts: node docstring update
  2023-11-22 12:18                 ` Yoan Picchi
@ 2023-11-22 13:28                   ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-22 13:28 UTC (permalink / raw)
  To: Yoan Picchi
  Cc: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek, dev

On Wed, Nov 22, 2023 at 1:18 PM Yoan Picchi <yoan.picchi@foss.arm.com> wrote:
>
> On 11/15/23 13:09, Juraj Linkeš wrote:
> > Format according to the Google format and PEP257, with slight
> > deviations.
> >
> > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > ---
> >   dts/framework/testbed_model/node.py | 191 +++++++++++++++++++---------
> >   1 file changed, 131 insertions(+), 60 deletions(-)
> >
> > diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
> > index fa5b143cdd..f93b4acecd 100644
> > --- a/dts/framework/testbed_model/node.py
> > +++ b/dts/framework/testbed_model/node.py
> > @@ -3,8 +3,13 @@
> >   # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
> >   # Copyright(c) 2022-2023 University of New Hampshire
> >
> > -"""
> > -A node is a generic host that DTS connects to and manages.
> > +"""Common functionality for node management.
> > +
> > +A node is any host/server DTS connects to.
> > +
> > +The base class, :class:`Node`, provides functionality common to all nodes and is supposed
> > +to be extended by subclasses with functionality specific to each node type.
>
> functionality -> functionalities
>

Ack.

> > +The decorator :func:`Node.skip_setup` can be used without subclassing.
> >   """
> >
> >   from abc import ABC
> > @@ -35,10 +40,22 @@
> >
> >
> >   class Node(ABC):
> > -    """
> > -    Basic class for node management. This class implements methods that
> > -    manage a node, such as information gathering (of CPU/PCI/NIC) and
> > -    environment setup.
> > +    """The base class for node management.
> > +
> > +    It shouldn't be instantiated, but rather subclassed.
> > +    It implements common methods to manage any node:
> > +
> > +        * Connection to the node,
> > +        * Hugepages setup.
> > +
> > +    Attributes:
> > +        main_session: The primary OS-aware remote session used to communicate with the node.
> > +        config: The node configuration.
> > +        name: The name of the node.
> > +        lcores: The list of logical cores that DTS can use on the node.
> > +            It's derived from logical cores present on the node and the test run configuration.
> > +        ports: The ports of this node specified in the test run configuration.
> > +        virtual_devices: The virtual devices used on the node.
> >       """
> >
> >       main_session: OSSession
> > @@ -52,6 +69,17 @@ class Node(ABC):
> >       virtual_devices: list[VirtualDevice]
> >
> >       def __init__(self, node_config: NodeConfiguration):
> > +        """Connect to the node and gather info during initialization.
> > +
> > +        Extra gathered information:
> > +
> > +        * The list of available logical CPUs. This is then filtered by
> > +          the ``lcores`` configuration in the YAML test run configuration file,
> > +        * Information about ports from the YAML test run configuration file.
> > +
> > +        Args:
> > +            node_config: The node's test run configuration.
> > +        """
> >           self.config = node_config
> >           self.name = node_config.name
> >           self._logger = getLogger(self.name)
> > @@ -60,7 +88,7 @@ def __init__(self, node_config: NodeConfiguration):
> >           self._logger.info(f"Connected to node: {self.name}")
> >
> >           self._get_remote_cpus()
> > -        # filter the node lcores according to user config
> > +        # filter the node lcores according to the test run configuration
> >           self.lcores = LogicalCoreListFilter(
> >               self.lcores, LogicalCoreList(self.config.lcores)
> >           ).filter()
> > @@ -76,9 +104,14 @@ def _init_ports(self) -> None:
> >               self.configure_port_state(port)
> >
> >       def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
> > -        """
> > -        Perform the execution setup that will be done for each execution
> > -        this node is part of.
> > +        """Execution setup steps.
> > +
> > +        Configure hugepages and call :meth:`_set_up_execution` where
> > +        the rest of the configuration steps (if any) are implemented.
> > +
> > +        Args:
> > +            execution_config: The execution test run configuration according to which
> > +                the setup steps will be taken.
> >           """
> >           self._setup_hugepages()
> >           self._set_up_execution(execution_config)
> > @@ -87,58 +120,74 @@ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
> >               self.virtual_devices.append(VirtualDevice(vdev))
> >
> >       def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
> > -        """
> > -        This method exists to be optionally overwritten by derived classes and
> > -        is not decorated so that the derived class doesn't have to use the decorator.
> > +        """Optional additional execution setup steps for subclasses.
> > +
> > +        Subclasses should override this if they need to add additional execution setup steps.
> >           """
> >
> >       def tear_down_execution(self) -> None:
> > -        """
> > -        Perform the execution teardown that will be done after each execution
> > -        this node is part of concludes.
> > +        """Execution teardown steps.
> > +
> > +        There are currently no execution teardown steps common to all DTS node types.
> >           """
> >           self.virtual_devices = []
> >           self._tear_down_execution()
> >
> >       def _tear_down_execution(self) -> None:
> > -        """
> > -        This method exists to be optionally overwritten by derived classes and
> > -        is not decorated so that the derived class doesn't have to use the decorator.
> > +        """Optional additional execution teardown steps for subclasses.
> > +
> > +        Subclasses should override this if they need to add additional execution teardown steps.
> >           """
> >
> >       def set_up_build_target(
> >           self, build_target_config: BuildTargetConfiguration
> >       ) -> None:
> > -        """
> > -        Perform the build target setup that will be done for each build target
> > -        tested on this node.
> > +        """Build target setup steps.
> > +
> > +        There are currently no build target setup steps common to all DTS node types.
> > +
> > +        Args:
> > +            build_target_config: The build target test run configuration according to which
> > +                the setup steps will be taken.
> >           """
> >           self._set_up_build_target(build_target_config)
> >
> >       def _set_up_build_target(
> >           self, build_target_config: BuildTargetConfiguration
> >       ) -> None:
> > -        """
> > -        This method exists to be optionally overwritten by derived classes and
> > -        is not decorated so that the derived class doesn't have to use the decorator.
> > +        """Optional additional build target setup steps for subclasses.
> > +
> > +        Subclasses should override this if they need to add additional build target setup steps.
> >           """
> >
> >       def tear_down_build_target(self) -> None:
> > -        """
> > -        Perform the build target teardown that will be done after each build target
> > -        tested on this node.
> > +        """Build target teardown steps.
> > +
> > +        There are currently no build target teardown steps common to all DTS node types.
> >           """
> >           self._tear_down_build_target()
> >
> >       def _tear_down_build_target(self) -> None:
> > -        """
> > -        This method exists to be optionally overwritten by derived classes and
> > -        is not decorated so that the derived class doesn't have to use the decorator.
> > +        """Optional additional build target teardown steps for subclasses.
> > +
> > +        Subclasses should override this if they need to add additional build target teardown steps.
> >           """
> >
> >       def create_session(self, name: str) -> OSSession:
> > -        """
> > -        Create and return a new OSSession tailored to the remote OS.
> > +        """Create and return a new OS-aware remote session.
> > +
> > +        The returned session won't be used by the node creating it. The session must be used by
> > +        the caller. The session will be maintained for the entire lifecycle of the node object,
> > +        at the end of which the session will be cleaned up automatically.
> > +
> > +        Note:
> > +            Any number of these supplementary sessions may be created.
> > +
> > +        Args:
> > +            name: The name of the session.
> > +
> > +        Returns:
> > +            A new OS-aware remote session.
> >           """
> >           session_name = f"{self.name} {name}"
> >           connection = create_session(
> > @@ -156,19 +205,19 @@ def create_interactive_shell(
> >           privileged: bool = False,
> >           app_args: str = "",
> >       ) -> InteractiveShellType:
> > -        """Create a handler for an interactive session.
> > +        """Factory for interactive session handlers.
> >
> > -        Instantiate shell_cls according to the remote OS specifics.
> > +        Instantiate `shell_cls` according to the remote OS specifics.
> >
> >           Args:
> >               shell_cls: The class of the shell.
> > -            timeout: Timeout for reading output from the SSH channel. If you are
> > -                reading from the buffer and don't receive any data within the timeout
> > -                it will throw an error.
> > +            timeout: Timeout for reading output from the SSH channel. If you are reading from
> > +                the buffer and don't receive any data within the timeout it will throw an error.
> >               privileged: Whether to run the shell with administrative privileges.
> >               app_args: The arguments to be passed to the application.
> > +
> >           Returns:
> > -            Instance of the desired interactive application.
> > +            An instance of the desired interactive application shell.
> >           """
> >           if not shell_cls.dpdk_app:
> >               shell_cls.path = self.main_session.join_remote_path(shell_cls.path)
> > @@ -185,14 +234,22 @@ def filter_lcores(
> >           filter_specifier: LogicalCoreCount | LogicalCoreList,
> >           ascending: bool = True,
> >       ) -> list[LogicalCore]:
> > -        """
> > -        Filter the LogicalCores found on the Node according to
> > -        a LogicalCoreCount or a LogicalCoreList.
> > +        """Filter the node's logical cores that DTS can use.
> > +
> > +        Logical cores that DTS can use are the ones that are present on the node, but filtered
> > +        according to the test run configuration. The `filter_specifier` will filter cores from
> > +        those logical cores.
> > +
> > +        Args:
> > +            filter_specifier: Two different filters can be used, one that specifies the number
> > +                of logical cores per core, cores per socket and the number of sockets,
> > +                and another one that specifies a logical core list.
> > +            ascending: If :data:`True`, use cores with the lowest numerical id first and continue
> > +                in ascending order. If :data:`False`, start with the highest id and continue
> > +                in descending order. This ordering affects which sockets to consider first as well.
> >
> > -        If ascending is True, use cores with the lowest numerical id first
> > -        and continue in ascending order. If False, start with the highest
> > -        id and continue in descending order. This ordering affects which
> > -        sockets to consider first as well.
> > +        Returns:
> > +            The filtered logical cores.
> >           """
> >           self._logger.debug(f"Filtering {filter_specifier} from {self.lcores}.")
> >           return lcore_filter(
> > @@ -202,17 +259,14 @@ def filter_lcores(
> >           ).filter()
> >
> >       def _get_remote_cpus(self) -> None:
> > -        """
> > -        Scan CPUs in the remote OS and store a list of LogicalCores.
> > -        """
> > +        """Scan CPUs in the remote OS and store a list of LogicalCores."""
> >           self._logger.info("Getting CPU information.")
> >           self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
> >
> >       def _setup_hugepages(self) -> None:
> > -        """
> > -        Setup hugepages on the Node. Different architectures can supply different
> > -        amounts of memory for hugepages and numa-based hugepage allocation may need
> > -        to be considered.
> > +        """Setup hugepages on the node.
> > +
> > +        Configure the hugepages only if they're specified in the node's test run configuration.
> >           """
> >           if self.config.hugepages:
> >               self.main_session.setup_hugepages(
> > @@ -220,8 +274,11 @@ def _setup_hugepages(self) -> None:
> >               )
> >
> >       def configure_port_state(self, port: Port, enable: bool = True) -> None:
> > -        """
> > -        Enable/disable port.
> > +        """Enable/disable `port`.
> > +
> > +        Args:
> > +            port: The port to enable/disable.
> > +            enable: :data:`True` to enable, :data:`False` to disable.
> >           """
> >           self.main_session.configure_port_state(port, enable)
> >
> > @@ -231,15 +288,17 @@ def configure_port_ip_address(
> >           port: Port,
> >           delete: bool = False,
> >       ) -> None:
> > -        """
> > -        Configure the IP address of a port on this node.
> > +        """Add an IP address to `port` on this node.
> > +
> > +        Args:
> > +            address: The IP address with mask in CIDR format. Can be either IPv4 or IPv6.
> > +            port: The port to which to add the address.
> > +            delete: If :data:`True`, will delete the address from the port instead of adding it.
> >           """
> >           self.main_session.configure_port_ip_address(address, port, delete)
> >
> >       def close(self) -> None:
> > -        """
> > -        Close all connections and free other resources.
> > -        """
> > +        """Close all connections and free other resources."""
> >           if self.main_session:
> >               self.main_session.close()
> >           for session in self._other_sessions:
> > @@ -248,6 +307,11 @@ def close(self) -> None:
> >
> >       @staticmethod
> >       def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
> > +        """Skip the decorated function.
> > +
> > +        The :option:`--skip-setup` command line argument and the :envvar:`DTS_SKIP_SETUP`
> > +        environment variable enable the decorator.
> > +        """
> >           if SETTINGS.skip_setup:
> >               return lambda *args: None
> >           else:
> > @@ -257,6 +321,13 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
> >   def create_session(
> >       node_config: NodeConfiguration, name: str, logger: DTSLOG
> >   ) -> OSSession:
> > +    """Factory for OS-aware sessions.
> > +
> > +    Args:
> > +        node_config: The test run configuration of the node to connect to.
> > +        name: The name of the session.
> > +        logger: The logger instance this session will use.
> > +    """
> >       match node_config.os:
> >           case OS.linux:
> >               return LinuxSession(node_config, name, logger)
>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 18/21] dts: sut and tg nodes docstring update
  2023-11-22 13:12                 ` Yoan Picchi
@ 2023-11-22 13:34                   ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-22 13:34 UTC (permalink / raw)
  To: Yoan Picchi
  Cc: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek, dev

On Wed, Nov 22, 2023 at 2:13 PM Yoan Picchi <yoan.picchi@foss.arm.com> wrote:
>
> On 11/15/23 13:09, Juraj Linkeš wrote:
> > Format according to the Google format and PEP257, with slight
> > deviations.
> >
> > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > ---
> >   dts/framework/testbed_model/sut_node.py | 224 ++++++++++++++++--------
> >   dts/framework/testbed_model/tg_node.py  |  42 +++--
> >   2 files changed, 173 insertions(+), 93 deletions(-)
> >
> > diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
> > index 17deea06e2..123b16fee0 100644
> > --- a/dts/framework/testbed_model/sut_node.py
> > +++ b/dts/framework/testbed_model/sut_node.py
> > @@ -3,6 +3,14 @@
> >   # Copyright(c) 2023 PANTHEON.tech s.r.o.
> >   # Copyright(c) 2023 University of New Hampshire
> >
> > +"""System under test (DPDK + hardware) node.
> > +
> > +A system under test (SUT) is the combination of DPDK
> > +and the hardware we're testing with DPDK (NICs, crypto and other devices).
> > +An SUT node is where this SUT runs.
> > +"""
> > +
> > +
> >   import os
> >   import tarfile
> >   import time
> > @@ -26,6 +34,11 @@
> >
> >
> >   class EalParameters(object):
> > +    """The environment abstraction layer parameters.
> > +
> > +    The string representation can be created by converting the instance to a string.
> > +    """
> > +
> >       def __init__(
> >           self,
> >           lcore_list: LogicalCoreList,
> > @@ -35,21 +48,23 @@ def __init__(
> >           vdevs: list[VirtualDevice],
> >           other_eal_param: str,
> >       ):
> > -        """
> > -        Generate eal parameters character string;
> > -        :param lcore_list: the list of logical cores to use.
> > -        :param memory_channels: the number of memory channels to use.
> > -        :param prefix: set file prefix string, eg:
> > -                        prefix='vf'
> > -        :param no_pci: switch of disable PCI bus eg:
> > -                        no_pci=True
> > -        :param vdevs: virtual device list, eg:
> > -                        vdevs=[
> > -                            VirtualDevice('net_ring0'),
> > -                            VirtualDevice('net_ring1')
> > -                        ]
> > -        :param other_eal_param: user defined DPDK eal parameters, eg:
> > -                        other_eal_param='--single-file-segments'
> > +        """Initialize the parameters according to inputs.
> > +
> > +        Process the parameters into the format used on the command line.
> > +
> > +        Args:
> > +            lcore_list: The list of logical cores to use.
> > +            memory_channels: The number of memory channels to use.
> > +            prefix: Set the file prefix string with which to start DPDK, e.g.: ``prefix='vf'``.
> > +            no_pci: Switch to disable PCI bus e.g.: ``no_pci=True``.
> > +            vdevs: Virtual devices, e.g.::
> > +
> > +                vdevs=[
> > +                    VirtualDevice('net_ring0'),
> > +                    VirtualDevice('net_ring1')
> > +                ]
> > +            other_eal_param: User-defined DPDK EAL parameters, e.g.:
> > +                ``other_eal_param='--single-file-segments'``
> >           """
> >           self._lcore_list = f"-l {lcore_list}"
> >           self._memory_channels = f"-n {memory_channels}"
> > @@ -61,6 +76,7 @@ def __init__(
> >           self._other_eal_param = other_eal_param
> >
> >       def __str__(self) -> str:
> > +        """Create the EAL string."""
> >           return (
> >               f"{self._lcore_list} "
> >               f"{self._memory_channels} "
> > @@ -72,11 +88,21 @@ def __str__(self) -> str:
> >
> >
> >   class SutNode(Node):
> > -    """
> > -    A class for managing connections to the System under Test, providing
> > -    methods that retrieve the necessary information about the node (such as
> > -    CPU, memory and NIC details) and configuration capabilities.
> > -    Another key capability is building DPDK according to given build target.
> > +    """The system under test node.
> > +
> > +    The SUT node extends :class:`Node` with DPDK specific features:
> > +
> > +        * DPDK build,
> > +        * Gathering of DPDK build info,
> > +        * The running of DPDK apps, interactively or one-time execution,
> > +        * DPDK apps cleanup.
> > +
> > +    The :option:`--tarball` command line argument and the :envvar:`DTS_DPDK_TARBALL`
> > +    environment variable configure the path to the DPDK tarball
> > +    or the git commit ID, tag ID or tree ID to test.
>
> I just want to make sure. We use the --tarball option also to set a git
> commit id instead of a tarball as the source?
>

Yes. The purpose of the option is to specify the DPDK to test and
there are different accepted formats for the option (a tarball or a
git commit id). There are actually three aliases for the option:
--tarball, --snapshot, --git-ref, but none of the aliases capture the
dichotomous nature of the option.
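
For illustration, with hypothetical values, all of these forms would
specify the DPDK to test:

  ./main.py --tarball dpdk.tar.xz
  ./main.py --snapshot dpdk.tar.xz
  ./main.py --git-ref v23.11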

> > +
> > +    Attributes:
> > +        config: The SUT node configuration.
> >       """
> >
> >       config: SutNodeConfiguration
> > @@ -94,6 +120,11 @@ class SutNode(Node):
> >       _path_to_devbind_script: PurePath | None
> >
> >       def __init__(self, node_config: SutNodeConfiguration):
> > +        """Extend the constructor with SUT node specifics.
> > +
> > +        Args:
> > +            node_config: The SUT node's test run configuration.
> > +        """
> >           super(SutNode, self).__init__(node_config)
> >           self._dpdk_prefix_list = []
> >           self._build_target_config = None
> > @@ -113,6 +144,12 @@ def __init__(self, node_config: SutNodeConfiguration):
> >
> >       @property
> >       def _remote_dpdk_dir(self) -> PurePath:
> > +        """The remote DPDK dir.
> > +
> > +        This internal property should be set after extracting the DPDK tarball. If it's not set,
> > +        that implies the DPDK setup step has been skipped, in which case we can guess where
> > +        a previous build was located.
> > +        """
> >           if self.__remote_dpdk_dir is None:
> >               self.__remote_dpdk_dir = self._guess_dpdk_remote_dir()
> >           return self.__remote_dpdk_dir
> > @@ -123,6 +160,11 @@ def _remote_dpdk_dir(self, value: PurePath) -> None:
> >
> >       @property
> >       def remote_dpdk_build_dir(self) -> PurePath:
> > +        """The remote DPDK build directory.
> > +
> > +        This is the directory where DPDK was built.
> > +        We assume it was built in a subdirectory of the extracted tarball.
> > +        """
> >           if self._build_target_config:
> >               return self.main_session.join_remote_path(
> >                   self._remote_dpdk_dir, self._build_target_config.name
> > @@ -132,6 +174,7 @@ def remote_dpdk_build_dir(self) -> PurePath:
> >
> >       @property
> >       def dpdk_version(self) -> str:
> > +        """Last built DPDK version."""
> >           if self._dpdk_version is None:
> >               self._dpdk_version = self.main_session.get_dpdk_version(
> >                   self._remote_dpdk_dir
> > @@ -140,12 +183,14 @@ def dpdk_version(self) -> str:
> >
> >       @property
> >       def node_info(self) -> NodeInfo:
> > +        """Additional node information."""
> >           if self._node_info is None:
> >               self._node_info = self.main_session.get_node_info()
> >           return self._node_info
> >
> >       @property
> >       def compiler_version(self) -> str:
> > +        """The node's compiler version."""
> >           if self._compiler_version is None:
> >               if self._build_target_config is not None:
> >                   self._compiler_version = self.main_session.get_compiler_version(
> > @@ -161,6 +206,7 @@ def compiler_version(self) -> str:
> >
> >       @property
> >       def path_to_devbind_script(self) -> PurePath:
> > +        """The path to the dpdk-devbind.py script on the node."""
> >           if self._path_to_devbind_script is None:
> >               self._path_to_devbind_script = self.main_session.join_remote_path(
> >                   self._remote_dpdk_dir, "usertools", "dpdk-devbind.py"
> > @@ -168,6 +214,11 @@ def path_to_devbind_script(self) -> PurePath:
> >           return self._path_to_devbind_script
> >
> >       def get_build_target_info(self) -> BuildTargetInfo:
> > +        """Get additional build target information.
> > +
> > +        Returns:
> > +            The build target information,
> > +        """
> >           return BuildTargetInfo(
> >               dpdk_version=self.dpdk_version, compiler_version=self.compiler_version
> >           )
> > @@ -178,8 +229,9 @@ def _guess_dpdk_remote_dir(self) -> PurePath:
> >       def _set_up_build_target(
> >           self, build_target_config: BuildTargetConfiguration
> >       ) -> None:
> > -        """
> > -        Setup DPDK on the SUT node.
> > +        """Setup DPDK on the SUT node.
> > +
> > +        Additional build target setup steps on top of those in :class:`Node`.
> >           """
> >           # we want to ensure that dpdk_version and compiler_version is reset for new
> >           # build targets
> > @@ -200,9 +252,7 @@ def _tear_down_build_target(self) -> None:
> >       def _configure_build_target(
> >           self, build_target_config: BuildTargetConfiguration
> >       ) -> None:
> > -        """
> > -        Populate common environment variables and set build target config.
> > -        """
> > +        """Populate common environment variables and set build target config."""
> >           self._env_vars = {}
> >           self._build_target_config = build_target_config
> >           self._env_vars.update(
> > @@ -217,9 +267,7 @@ def _configure_build_target(
> >
> >       @Node.skip_setup
> >       def _copy_dpdk_tarball(self) -> None:
> > -        """
> > -        Copy to and extract DPDK tarball on the SUT node.
> > -        """
> > +        """Copy to and extract DPDK tarball on the SUT node."""
> >           self._logger.info("Copying DPDK tarball to SUT.")
> >           self.main_session.copy_to(SETTINGS.dpdk_tarball_path, self._remote_tmp_dir)
> >
> > @@ -250,8 +298,9 @@ def _copy_dpdk_tarball(self) -> None:
> >
> >       @Node.skip_setup
> >       def _build_dpdk(self) -> None:
> > -        """
> > -        Build DPDK. Uses the already configured target. Assumes that the tarball has
> > +        """Build DPDK.
> > +
> > +        Uses the already configured target. Assumes that the tarball has
> >           already been copied to and extracted on the SUT node.
> >           """
> >           self.main_session.build_dpdk(
> > @@ -262,15 +311,19 @@ def _build_dpdk(self) -> None:
> >           )
> >
> >       def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePath:
> > -        """
> > -        Build one or all DPDK apps. Requires DPDK to be already built on the SUT node.
> > -        When app_name is 'all', build all example apps.
> > -        When app_name is any other string, tries to build that example app.
> > -        Return the directory path of the built app. If building all apps, return
> > -        the path to the examples directory (where all apps reside).
> > -        The meson_dpdk_args are keyword arguments
> > -        found in meson_option.txt in root DPDK directory. Do not use -D with them,
> > -        for example: enable_kmods=True.
> > +        """Build one or all DPDK apps.
> > +
> > +        Requires DPDK to be already built on the SUT node.
> > +
> > +        Args:
> > +            app_name: The name of the DPDK app to build.
> > +                When `app_name` is ``all``, build all example apps.
> > +            meson_dpdk_args: The arguments found in ``meson_options.txt`` in root DPDK directory.
> > +                Do not use ``-D`` with them.
> > +
> > +        Returns:
> > +            The directory path of the built app. If building all apps, return
> > +            the path to the examples directory (where all apps reside).
> >           """
> >           self.main_session.build_dpdk(
> >               self._env_vars,
> > @@ -291,9 +344,7 @@ def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePa
> >           )
> >
> >       def kill_cleanup_dpdk_apps(self) -> None:
> > -        """
> > -        Kill all dpdk applications on the SUT. Cleanup hugepages.
> > -        """
> > +        """Kill all dpdk applications on the SUT, then clean up hugepages."""
> >           if self._dpdk_kill_session and self._dpdk_kill_session.is_alive():
> >               # we can use the session if it exists and responds
> >               self._dpdk_kill_session.kill_cleanup_dpdk_apps(self._dpdk_prefix_list)
> > @@ -312,33 +363,34 @@ def create_eal_parameters(
> >           vdevs: list[VirtualDevice] | None = None,
> >           other_eal_param: str = "",
> >       ) -> "EalParameters":
> > -        """
> > -        Generate eal parameters character string;
> > -        :param lcore_filter_specifier: a number of lcores/cores/sockets to use
> > -                        or a list of lcore ids to use.
> > -                        The default will select one lcore for each of two cores
> > -                        on one socket, in ascending order of core ids.
> > -        :param ascending_cores: True, use cores with the lowest numerical id first
> > -                        and continue in ascending order. If False, start with the
> > -                        highest id and continue in descending order. This ordering
> > -                        affects which sockets to consider first as well.
> > -        :param prefix: set file prefix string, eg:
> > -                        prefix='vf'
> > -        :param append_prefix_timestamp: if True, will append a timestamp to
> > -                        DPDK file prefix.
> > -        :param no_pci: switch of disable PCI bus eg:
> > -                        no_pci=True
> > -        :param vdevs: virtual device list, eg:
> > -                        vdevs=[
> > -                            VirtualDevice('net_ring0'),
> > -                            VirtualDevice('net_ring1')
> > -                        ]
> > -        :param other_eal_param: user defined DPDK eal parameters, eg:
> > -                        other_eal_param='--single-file-segments'
> > -        :return: eal param string, eg:
> > -                '-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420';
> > -        """
> > +        """Compose the EAL parameters.
> > +
> > +        Process the list of cores and the DPDK prefix and pass that along with
> > +        the rest of the arguments.
> >
> > +        Args:
> > +            lcore_filter_specifier: A number of lcores/cores/sockets to use
> > +                or a list of lcore ids to use.
> > +                The default will select one lcore for each of two cores
> > +                on one socket, in ascending order of core ids.
> > +            ascending_cores: Sort cores in ascending order (lowest to highest IDs).
> > +                If :data:`False`, sort in descending order.
> > +            prefix: Set the file prefix string with which to start DPDK, e.g.: ``prefix='vf'``.
> > +            append_prefix_timestamp: If :data:`True`, will append a timestamp to DPDK file prefix.
> > +            no_pci: Switch to disable PCI bus e.g.: ``no_pci=True``.
> > +            vdevs: Virtual devices, e.g.::
> > +
> > +                vdevs=[
> > +                    VirtualDevice('net_ring0'),
> > +                    VirtualDevice('net_ring1')
> > +                ]
> > +            other_eal_param: User-defined DPDK EAL parameters, e.g.:
> > +                ``other_eal_param='--single-file-segments'``.
> > +
> > +        Returns:
> > +            An EAL param string, such as
> > +            ``-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420``.
> > +        """
> >           lcore_list = LogicalCoreList(
> >               self.filter_lcores(lcore_filter_specifier, ascending_cores)
> >           )
> > @@ -364,14 +416,29 @@ def create_eal_parameters(
> >       def run_dpdk_app(
> >           self, app_path: PurePath, eal_args: "EalParameters", timeout: float = 30
> >       ) -> CommandResult:
> > -        """
> > -        Run DPDK application on the remote node.
> > +        """Run DPDK application on the remote node.
> > +
> > +        The application is not run interactively - the command that starts the application
> > +        is executed and then the call waits for it to finish execution.
> > +
> > +        Args:
> > +            app_path: The remote path to the DPDK application.
> > +            eal_args: EAL parameters to run the DPDK application with.
> > +            timeout: Wait at most this long in seconds to execute the command.
>
> confusing timeout
>

Ack.
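
Something like this should remove the confusion (a suggestion, not
necessarily the final wording):

    timeout: Wait at most this long in seconds for the execution to end.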

> > +
> > +        Returns:
> > +            The result of the DPDK app execution.
> >           """
> >           return self.main_session.send_command(
> >               f"{app_path} {eal_args}", timeout, privileged=True, verify=True
> >           )
> >
> >       def configure_ipv4_forwarding(self, enable: bool) -> None:
> > +        """Enable/disable IPv4 forwarding on the node.
> > +
> > +        Args:
> > +            enable: If :data:`True`, enable the forwarding, otherwise disable it.
> > +        """
> >           self.main_session.configure_ipv4_forwarding(enable)
> >
> >       def create_interactive_shell(
> > @@ -381,9 +448,13 @@ def create_interactive_shell(
> >           privileged: bool = False,
> >           eal_parameters: EalParameters | str | None = None,
> >       ) -> InteractiveShellType:
> > -        """Factory method for creating a handler for an interactive session.
> > +        """Extend the factory for interactive session handlers.
> > +
> > +        The extensions are SUT node specific:
> >
> > -        Instantiate shell_cls according to the remote OS specifics.
> > +            * The default for `eal_parameters`,
> > +            * The interactive shell path `shell_cls.path` is prepended with the path to the
> > +              remote DPDK build directory for DPDK apps.
> >
> >           Args:
> >               shell_cls: The class of the shell.
> > @@ -393,9 +464,10 @@ def create_interactive_shell(
> >               privileged: Whether to run the shell with administrative privileges.
> >               eal_parameters: List of EAL parameters to use to launch the app. If this
> >                   isn't provided or an empty string is passed, it will default to calling
> > -                create_eal_parameters().
> > +                :meth:`create_eal_parameters`.
> > +
> >           Returns:
> > -            Instance of the desired interactive application.
> > +            An instance of the desired interactive application shell.
> >           """
> >           if not eal_parameters:
> >               eal_parameters = self.create_eal_parameters()
> > @@ -414,8 +486,8 @@ def bind_ports_to_driver(self, for_dpdk: bool = True) -> None:
> >           """Bind all ports on the SUT to a driver.
> >
> >           Args:
> > -            for_dpdk: Boolean that, when True, binds ports to os_driver_for_dpdk
> > -            or, when False, binds to os_driver. Defaults to True.
> > +            for_dpdk: If :data:`True`, binds ports to os_driver_for_dpdk.
> > +                If :data:`False`, binds to os_driver.
> >           """
> >           for port in self.ports:
> >               driver = port.os_driver_for_dpdk if for_dpdk else port.os_driver
> > diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
> > index 166eb8430e..69eb33ccb1 100644
> > --- a/dts/framework/testbed_model/tg_node.py
> > +++ b/dts/framework/testbed_model/tg_node.py
> > @@ -5,13 +5,8 @@
> >
> >   """Traffic generator node.
> >
> > -This is the node where the traffic generator resides.
> > -The distinction between a node and a traffic generator is as follows:
> > -A node is a host that DTS connects to. It could be a baremetal server,
> > -a VM or a container.
> > -A traffic generator is software running on the node.
> > -A traffic generator node is a node running a traffic generator.
> > -A node can be a traffic generator node as well as system under test node.
> > +A traffic generator (TG) generates traffic that's sent towards the SUT node.
> > +A TG node is where the TG runs.
> >   """
> >
> >   from scapy.packet import Packet  # type: ignore[import]
> > @@ -24,13 +19,16 @@
> >
> >
> >   class TGNode(Node):
> > -    """Manage connections to a node with a traffic generator.
> > +    """The traffic generator node.
> >
> > -    Apart from basic node management capabilities, the Traffic Generator node has
> > -    specialized methods for handling the traffic generator running on it.
> > +    The TG node extends :class:`Node` with TG specific features:
> >
> > -    Arguments:
> > -        node_config: The user configuration of the traffic generator node.
> > +        * Traffic generator initialization,
> > +        * The sending of traffic and receiving packets,
> > +        * The sending of traffic without receiving packets.
> > +
> > +    Not all traffic generators are capable of capturing traffic, which is why there
> > +    must be a way to send traffic without that.
> >
> >       Attributes:
> >           traffic_generator: The traffic generator running on the node.
> > @@ -39,6 +37,13 @@ class TGNode(Node):
> >       traffic_generator: CapturingTrafficGenerator
> >
> >       def __init__(self, node_config: TGNodeConfiguration):
> > +        """Extend the constructor with TG node specifics.
> > +
> > +        Initialize the traffic generator on the TG node.
> > +
> > +        Args:
> > +            node_config: The TG node's test run configuration.
> > +        """
> >           super(TGNode, self).__init__(node_config)
> >           self.traffic_generator = create_traffic_generator(
> >               self, node_config.traffic_generator
> > @@ -52,17 +57,17 @@ def send_packet_and_capture(
> >           receive_port: Port,
> >           duration: float = 1,
> >       ) -> list[Packet]:
> > -        """Send a packet, return received traffic.
> > +        """Send `packet`, return received traffic.
> >
> > -        Send a packet on the send_port and then return all traffic captured
> > -        on the receive_port for the given duration. Also record the captured traffic
> > +        Send `packet` on `send_port` and then return all traffic captured
> > +        on `receive_port` for the given duration. Also record the captured traffic
> >           in a pcap file.
> >
> >           Args:
> >               packet: The packet to send.
> >               send_port: The egress port on the TG node.
> >               receive_port: The ingress port in the TG node.
> > -            duration: Capture traffic for this amount of time after sending the packet.
> > +            duration: Capture traffic for this amount of time after sending `packet`.
> >
> >           Returns:
> >                A list of received packets. May be empty if no packets are captured.
> > @@ -72,6 +77,9 @@ def send_packet_and_capture(
> >           )
> >
> >       def close(self) -> None:
> > -        """Free all resources used by the node"""
> > +        """Free all resources used by the node.
> > +
> > +        This extends the superclass method with TG cleanup.
> > +        """
> >           self.traffic_generator.close()
> >           super(TGNode, self).close()
>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 16/21] dts: posix and linux sessions docstring update
  2023-11-22 13:24                 ` Yoan Picchi
@ 2023-11-22 13:35                   ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-22 13:35 UTC (permalink / raw)
  To: Yoan Picchi
  Cc: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek, dev

On Wed, Nov 22, 2023 at 2:24 PM Yoan Picchi <yoan.picchi@foss.arm.com> wrote:
>
> On 11/15/23 13:09, Juraj Linkeš wrote:
> > Format according to the Google format and PEP257, with slight
> > deviations.
> >
> > Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > ---
> >   dts/framework/testbed_model/linux_session.py | 63 ++++++++++-----
> >   dts/framework/testbed_model/posix_session.py | 81 +++++++++++++++++---
> >   2 files changed, 113 insertions(+), 31 deletions(-)
> >
> > diff --git a/dts/framework/testbed_model/linux_session.py b/dts/framework/testbed_model/linux_session.py
> > index f472bb8f0f..279954ff63 100644
> > --- a/dts/framework/testbed_model/linux_session.py
> > +++ b/dts/framework/testbed_model/linux_session.py
> > @@ -2,6 +2,13 @@
> >   # Copyright(c) 2023 PANTHEON.tech s.r.o.
> >   # Copyright(c) 2023 University of New Hampshire
> >
> > +"""Linux OS translator.
> > +
> > +Translate OS-unaware calls into Linux calls/utilities. Most Linux distributions are mostly
> > +compliant with POSIX standards, so this module implements only the parts that aren't
> > +covered by the POSIX intermediate module.
> > +"""
> > +
> >   import json
> >   from ipaddress import IPv4Interface, IPv6Interface
> >   from typing import TypedDict, Union
> > @@ -17,43 +24,51 @@
> >
> >
> >   class LshwConfigurationOutput(TypedDict):
> > +    """The relevant parts of ``lshw``'s ``configuration`` section."""
> > +
> > +    #:
> >       link: str
> >
> >
> >   class LshwOutput(TypedDict):
> > -    """
> > -    A model of the relevant information from json lshw output, e.g.:
> > -    {
> > -    ...
> > -    "businfo" : "pci@0000:08:00.0",
> > -    "logicalname" : "enp8s0",
> > -    "version" : "00",
> > -    "serial" : "52:54:00:59:e1:ac",
> > -    ...
> > -    "configuration" : {
> > -      ...
> > -      "link" : "yes",
> > -      ...
> > -    },
> > -    ...
> > +    """A model of the relevant information from ``lshw``'s json output.
> > +
> > +    e.g.::
> > +
> > +        {
> > +        ...
> > +        "businfo" : "pci@0000:08:00.0",
> > +        "logicalname" : "enp8s0",
> > +        "version" : "00",
> > +        "serial" : "52:54:00:59:e1:ac",
> > +        ...
> > +        "configuration" : {
> > +          ...
> > +          "link" : "yes",
> > +          ...
> > +        },
> > +        ...
> >       """
> >
> > +    #:
> >       businfo: str
> > +    #:
> >       logicalname: NotRequired[str]
> > +    #:
> >       serial: NotRequired[str]
> > +    #:
> >       configuration: LshwConfigurationOutput
> >
> >
> >   class LinuxSession(PosixSession):
> > -    """
> > -    The implementation of non-Posix compliant parts of Linux remote sessions.
> > -    """
> > +    """The implementation of non-Posix compliant parts of Linux."""
> >
> >       @staticmethod
> >       def _get_privileged_command(command: str) -> str:
> >           return f"sudo -- sh -c '{command}'"
> >
> >       def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
> > +        """Overrides :meth:`~.os_session.OSSession.get_remote_cpus`."""
> >           cpu_info = self.send_command("lscpu -p=CPU,CORE,SOCKET,NODE|grep -v \\#").stdout
> >           lcores = []
> >           for cpu_line in cpu_info.splitlines():
> > @@ -65,18 +80,20 @@ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
> >           return lcores
> >
> >       def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
> > +        """Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`."""
> >           return dpdk_prefix
> >
> > -    def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
> > +    def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
> > +        """Overrides :meth:`~.os_session.OSSession.setup_hugepages`."""
> >           self._logger.info("Getting Hugepage information.")
> >           hugepage_size = self._get_hugepage_size()
> >           hugepages_total = self._get_hugepages_total()
> >           self._numa_nodes = self._get_numa_nodes()
> >
> > -        if force_first_numa or hugepages_total != hugepage_amount:
> > +        if force_first_numa or hugepages_total != hugepage_count:
> >               # when forcing numa, we need to clear existing hugepages regardless
> >               # of size, so they can be moved to the first numa node
> > -            self._configure_huge_pages(hugepage_amount, hugepage_size, force_first_numa)
> > +            self._configure_huge_pages(hugepage_count, hugepage_size, force_first_numa)
> >           else:
> >               self._logger.info("Hugepages already configured.")
> >           self._mount_huge_pages()
> > @@ -140,6 +157,7 @@ def _configure_huge_pages(
> >           )
> >
> >       def update_ports(self, ports: list[Port]) -> None:
> > +        """Overrides :meth:`~.os_session.OSSession.update_ports`."""
> >           self._logger.debug("Gathering port info.")
> >           for port in ports:
> >               assert (
> > @@ -178,6 +196,7 @@ def _update_port_attr(
> >               )
> >
> >       def configure_port_state(self, port: Port, enable: bool) -> None:
> > +        """Overrides :meth:`~.os_session.OSSession.configure_port_state`."""
> >           state = "up" if enable else "down"
> >           self.send_command(
> >               f"ip link set dev {port.logical_name} {state}", privileged=True
> > @@ -189,6 +208,7 @@ def configure_port_ip_address(
> >           port: Port,
> >           delete: bool,
> >       ) -> None:
> > +        """Overrides :meth:`~.os_session.OSSession.configure_port_ip_address`."""
> >           command = "del" if delete else "add"
> >           self.send_command(
> >               f"ip address {command} {address} dev {port.logical_name}",
> > @@ -197,5 +217,6 @@ def configure_port_ip_address(
> >           )
> >
> >       def configure_ipv4_forwarding(self, enable: bool) -> None:
> > +        """Overrides :meth:`~.os_session.OSSession.configure_ipv4_forwarding`."""
> >           state = 1 if enable else 0
> >           self.send_command(f"sysctl -w net.ipv4.ip_forward={state}", privileged=True)
> > diff --git a/dts/framework/testbed_model/posix_session.py b/dts/framework/testbed_model/posix_session.py
> > index 1d1d5b1b26..a4824aa274 100644
> > --- a/dts/framework/testbed_model/posix_session.py
> > +++ b/dts/framework/testbed_model/posix_session.py
> > @@ -2,6 +2,15 @@
> >   # Copyright(c) 2023 PANTHEON.tech s.r.o.
> >   # Copyright(c) 2023 University of New Hampshire
> >
> > +"""POSIX compliant OS translator.
> > +
> > +Translates OS-unaware calls into POSIX compliant calls/utilities. POSIX is a set of standards
> > +for portability between Unix operating systems which not all Linux distributions
> > +(or the tools most frequently bundled with said distributions) adhere to. Most Linux
> > +distributions are mostly compliant though.
> > +This intermediate module implements the common parts of mostly POSIX compliant distributions.
> > +"""
> > +
> >   import re
> >   from collections.abc import Iterable
> >   from pathlib import PurePath, PurePosixPath
> > @@ -15,13 +24,21 @@
> >
> >
> >   class PosixSession(OSSession):
> > -    """
> > -    An intermediary class implementing the Posix compliant parts of
> > -    Linux and other OS remote sessions.
> > -    """
> > +    """An intermediary class implementing the POSIX standard."""
> >
> >       @staticmethod
> >       def combine_short_options(**opts: bool) -> str:
> > +        """Combine shell options into one argument.
> > +
> > +        These are options such as ``-x``, ``-v``, ``-f`` which are combined into ``-xvf``.
> > +
> > +        Args:
> > +            opts: The keys are option names (usually one letter) and the bool values indicate
> > +                whether to include the option in the resulting argument.
> > +
> > +        Returns:
> > +            The options combined into one argument.
> > +        """
> >           ret_opts = ""
> >           for opt, include in opts.items():
> >               if include:
> > @@ -33,17 +50,19 @@ def combine_short_options(**opts: bool) -> str:
> >           return ret_opts
> >
> >       def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePosixPath:
> > +        """Overrides :meth:`~.os_session.OSSession.guess_dpdk_remote_dir`."""
> >           remote_guess = self.join_remote_path(remote_dir, "dpdk-*")
> >           result = self.send_command(f"ls -d {remote_guess} | tail -1")
> >           return PurePosixPath(result.stdout)
> >
> >       def get_remote_tmp_dir(self) -> PurePosixPath:
> > +        """Overrides :meth:`~.os_session.OSSession.get_remote_tmp_dir`."""
> >           return PurePosixPath("/tmp")
> >
> >       def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
> > -        """
> > -        Create extra environment variables needed for i686 arch build. Get information
> > -        from the node if needed.
> > +        """Overrides :meth:`~.os_session.OSSession.get_dpdk_build_env_vars`.
> > +
> > +        Supported architecture: ``i686``.
> >           """
> >           env_vars = {}
> >           if arch == Architecture.i686:
> > @@ -63,6 +82,7 @@ def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
> >           return env_vars
> >
> >       def join_remote_path(self, *args: str | PurePath) -> PurePosixPath:
> > +        """Overrides :meth:`~.os_session.OSSession.join_remote_path`."""
> >           return PurePosixPath(*args)
> >
> >       def copy_from(
> > @@ -70,6 +90,7 @@ def copy_from(
> >           source_file: str | PurePath,
> >           destination_file: str | PurePath,
> >       ) -> None:
> > +        """Overrides :meth:`~.os_session.OSSession.copy_from`."""
> >           self.remote_session.copy_from(source_file, destination_file)
> >
> >       def copy_to(
> > @@ -77,6 +98,7 @@ def copy_to(
> >           source_file: str | PurePath,
> >           destination_file: str | PurePath,
> >       ) -> None:
> > +        """Overrides :meth:`~.os_session.OSSession.copy_to`."""
> >           self.remote_session.copy_to(source_file, destination_file)
> >
> >       def remove_remote_dir(
> > @@ -85,6 +107,7 @@ def remove_remote_dir(
> >           recursive: bool = True,
> >           force: bool = True,
> >       ) -> None:
> > +        """Overrides :meth:`~.os_session.OSSession.remove_remote_dir`."""
> >           opts = PosixSession.combine_short_options(r=recursive, f=force)
> >           self.send_command(f"rm{opts} {remote_dir_path}")
> >
> > @@ -93,6 +116,7 @@ def extract_remote_tarball(
> >           remote_tarball_path: str | PurePath,
> >           expected_dir: str | PurePath | None = None,
> >       ) -> None:
> > +        """Overrides :meth:`~.os_session.OSSession.extract_remote_tarball`."""
> >           self.send_command(
> >               f"tar xfm {remote_tarball_path} "
> >               f"-C {PurePosixPath(remote_tarball_path).parent}",
> > @@ -110,6 +134,7 @@ def build_dpdk(
> >           rebuild: bool = False,
> >           timeout: float = SETTINGS.compile_timeout,
> >       ) -> None:
> > +        """Overrides :meth:`~.os_session.OSSession.build_dpdk`."""
> >           try:
> >               if rebuild:
> >                   # reconfigure, then build
> > @@ -140,12 +165,14 @@ def build_dpdk(
> >               raise DPDKBuildError(f"DPDK build failed when doing '{e.command}'.")
> >
> >       def get_dpdk_version(self, build_dir: str | PurePath) -> str:
> > +        """Overrides :meth:`~.os_session.OSSession.get_dpdk_version`."""
> >           out = self.send_command(
> >               f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True
> >           )
> >           return out.stdout
> >
> >       def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
> > +        """Overrides :meth:`~.os_session.OSSession.kill_cleanup_dpdk_apps`."""
> >           self._logger.info("Cleaning up DPDK apps.")
> >           dpdk_runtime_dirs = self._get_dpdk_runtime_dirs(dpdk_prefix_list)
> >           if dpdk_runtime_dirs:
> > @@ -159,6 +186,14 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
> >       def _get_dpdk_runtime_dirs(
> >           self, dpdk_prefix_list: Iterable[str]
> >       ) -> list[PurePosixPath]:
> > +        """Find runtime directories DPDK apps are currently using.
> > +
> > +        Args:
> > +              dpdk_prefix_list: The prefixes DPDK apps were started with.
> > +
> > +        Returns:
> > +            The paths of DPDK apps' runtime dirs.
> > +        """
> >           prefix = PurePosixPath("/var", "run", "dpdk")
> >           if not dpdk_prefix_list:
> >               remote_prefixes = self._list_remote_dirs(prefix)
> > @@ -170,9 +205,13 @@ def _get_dpdk_runtime_dirs(
> >           return [PurePosixPath(prefix, dpdk_prefix) for dpdk_prefix in dpdk_prefix_list]
> >
> >       def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None:
> > -        """
> > -        Return a list of directories of the remote_dir.
> > -        If remote_path doesn't exist, return None.
> > +        """Contents of remote_path.
> > +
> > +        Args:
> > +            remote_path: List the contents of this path.
> > +
> > +        Returns:
> > +            The contents of remote_path. If remote_path doesn't exist, return None.
> >           """
> >           out = self.send_command(
> >               f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'"
> > @@ -183,6 +222,17 @@ def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None:
> >               return out.splitlines()
> >
> >       def _get_dpdk_pids(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> list[int]:
> > +        """Find PIDs of running DPDK apps.
> > +
> > +        Look at each "config" file found in dpdk_runtime_dirs and find the PIDs of processes
> > +        that opened those files.
> > +
> > +        Args:
> > +            dpdk_runtime_dirs: The paths of DPDK apps' runtime dirs.
> > +
> > +        Returns:
> > +            The PIDs of running DPDK apps.
> > +        """
> >           pids = []
> >           pid_regex = r"p(\d+)"
> >           for dpdk_runtime_dir in dpdk_runtime_dirs:
> > @@ -203,6 +253,14 @@ def _remote_files_exists(self, remote_path: PurePath) -> bool:
> >       def _check_dpdk_hugepages(
> >           self, dpdk_runtime_dirs: Iterable[str | PurePath]
> >       ) -> None:
> > +        """Check there aren't any leftover hugepages.
> > +
> > +        If any hugegapes are found, emit a warning. The hugepages are investigated in the
>
> hugegapes -> hugepages
>

Ack.

> > +        "hugepage_info" file of dpdk_runtime_dirs.
> > +
> > +        Args:
> > +            dpdk_runtime_dirs: The paths of DPDK apps' runtime dirs.
> > +        """
> >           for dpdk_runtime_dir in dpdk_runtime_dirs:
> >               hugepage_info = PurePosixPath(dpdk_runtime_dir, "hugepage_info")
> >               if self._remote_files_exists(hugepage_info):
> > @@ -220,9 +278,11 @@ def _remove_dpdk_runtime_dirs(
> >               self.remove_remote_dir(dpdk_runtime_dir)
> >
> >       def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
> > +        """Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`."""
> >           return ""
> >
> >       def get_compiler_version(self, compiler_name: str) -> str:
> > +        """Overrides :meth:`~.os_session.OSSession.get_compiler_version`."""
> >           match compiler_name:
> >               case "gcc":
> >                   return self.send_command(
> > @@ -240,6 +300,7 @@ def get_compiler_version(self, compiler_name: str) -> str:
> >                   raise ValueError(f"Unknown compiler {compiler_name}")
> >
> >       def get_node_info(self) -> NodeInfo:
> > +        """Overrides :meth:`~.os_session.OSSession.get_node_info`."""
> >           os_release_info = self.send_command(
> >               "awk -F= '$1 ~ /^NAME$|^VERSION$/ {print $2}' /etc/os-release",
> >               SETTINGS.timeout,
>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v7 21/21] dts: test suites docstring update
  2023-11-20 12:50                     ` Yoan Picchi
@ 2023-11-22 13:40                       ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-22 13:40 UTC (permalink / raw)
  To: Yoan Picchi
  Cc: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek, dev

On Mon, Nov 20, 2023 at 1:50 PM Yoan Picchi <yoan.picchi@foss.arm.com> wrote:
>
> On 11/20/23 10:17, Juraj Linkeš wrote:
> > On Thu, Nov 16, 2023 at 6:36 PM Yoan Picchi <yoan.picchi@foss.arm.com> wrote:
> >>
> >> On 11/15/23 13:09, Juraj Linkeš wrote:
> >>> Format according to the Google format and PEP257, with slight
> >>> deviations.
> >>>
> >>> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> >>> ---
> >>>    dts/tests/TestSuite_hello_world.py | 16 +++++----
> >>>    dts/tests/TestSuite_os_udp.py      | 19 +++++++----
> >>>    dts/tests/TestSuite_smoke_tests.py | 53 +++++++++++++++++++++++++++---
> >>>    3 files changed, 70 insertions(+), 18 deletions(-)
> >>>
> >>> diff --git a/dts/tests/TestSuite_hello_world.py b/dts/tests/TestSuite_hello_world.py
> >>> index 7e3d95c0cf..662a8f8726 100644
> >>> --- a/dts/tests/TestSuite_hello_world.py
> >>> +++ b/dts/tests/TestSuite_hello_world.py
> >>> @@ -1,7 +1,8 @@
> >>>    # SPDX-License-Identifier: BSD-3-Clause
> >>>    # Copyright(c) 2010-2014 Intel Corporation
> >>>
> >>> -"""
> >>> +"""The DPDK hello world app test suite.
> >>> +
> >>>    Run the helloworld example app and verify it prints a message for each used core.
> >>>    No other EAL parameters apart from cores are used.
> >>>    """
> >>> @@ -15,22 +16,25 @@
> >>>
> >>>
> >>>    class TestHelloWorld(TestSuite):
> >>> +    """DPDK hello world app test suite."""
> >>> +
> >>>        def set_up_suite(self) -> None:
> >>> -        """
> >>> +        """Set up the test suite.
> >>> +
> >>>            Setup:
> >>>                Build the app we're about to test - helloworld.
> >>>            """
> >>>            self.app_helloworld_path = self.sut_node.build_dpdk_app("helloworld")
> >>>
> >>>        def test_hello_world_single_core(self) -> None:
> >>> -        """
> >>> +        """Single core test case.
> >>> +
> >>>            Steps:
> >>>                Run the helloworld app on the first usable logical core.
> >>>            Verify:
> >>>                The app prints a message from the used core:
> >>>                "hello from core <core_id>"
> >>>            """
> >>> -
> >>>            # get the first usable core
> >>>            lcore_amount = LogicalCoreCount(1, 1, 1)
> >>>            lcores = LogicalCoreCountFilter(self.sut_node.lcores, lcore_amount).filter()
> >>> @@ -44,14 +48,14 @@ def test_hello_world_single_core(self) -> None:
> >>>            )
> >>>
> >>>        def test_hello_world_all_cores(self) -> None:
> >>> -        """
> >>> +        """All cores test case.
> >>> +
> >>>            Steps:
> >>>                Run the helloworld app on all usable logical cores.
> >>>            Verify:
> >>>                The app prints a message from all used cores:
> >>>                "hello from core <core_id>"
> >>>            """
> >>> -
> >>>            # get the maximum logical core number
> >>>            eal_para = self.sut_node.create_eal_parameters(
> >>>                lcore_filter_specifier=LogicalCoreList(self.sut_node.lcores)
> >>> diff --git a/dts/tests/TestSuite_os_udp.py b/dts/tests/TestSuite_os_udp.py
> >>> index bf6b93deb5..e0c5239612 100644
> >>> --- a/dts/tests/TestSuite_os_udp.py
> >>> +++ b/dts/tests/TestSuite_os_udp.py
> >>> @@ -1,7 +1,8 @@
> >>>    # SPDX-License-Identifier: BSD-3-Clause
> >>>    # Copyright(c) 2023 PANTHEON.tech s.r.o.
> >>>
> >>> -"""
> >>> +"""Basic IPv4 OS routing test suite.
> >>> +
> >>>    Configure SUT node to route traffic from if1 to if2.
> >>>    Send a packet to the SUT node, verify it comes back on the second port on the TG node.
> >>>    """
> >>> @@ -13,24 +14,27 @@
> >>>
> >>>
> >>>    class TestOSUdp(TestSuite):
> >>> +    """IPv4 UDP OS routing test suite."""
> >>> +
> >>>        def set_up_suite(self) -> None:
> >>> -        """
> >>> +        """Set up the test suite.
> >>> +
> >>>            Setup:
> >>> -            Configure SUT ports and SUT to route traffic from if1 to if2.
> >>> +            Bind the SUT ports to the OS driver, configure the ports and configure the SUT
> >>> +            to route traffic from if1 to if2.
> >>>            """
> >>>
> >>> -        # This test uses kernel drivers
> >>>            self.sut_node.bind_ports_to_driver(for_dpdk=False)
> >>>            self.configure_testbed_ipv4()
> >>>
> >>>        def test_os_udp(self) -> None:
> >>> -        """
> >>> +        """Basic UDP IPv4 traffic test case.
> >>> +
> >>>            Steps:
> >>>                Send a UDP packet.
> >>>            Verify:
> >>>                The packet with proper addresses arrives at the other TG port.
> >>>            """
> >>> -
> >>>            packet = Ether() / IP() / UDP()
> >>>
> >>>            received_packets = self.send_packet_and_capture(packet)
> >>> @@ -40,7 +44,8 @@ def test_os_udp(self) -> None:
> >>>            self.verify_packets(expected_packet, received_packets)
> >>>
> >>>        def tear_down_suite(self) -> None:
> >>> -        """
> >>> +        """Tear down the test suite.
> >>> +
> >>>            Teardown:
> >>>                Remove the SUT port configuration configured in setup.
> >>>            """
> >>> diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py
> >>> index e8016d1b54..6fae099a0e 100644
> >>> --- a/dts/tests/TestSuite_smoke_tests.py
> >>> +++ b/dts/tests/TestSuite_smoke_tests.py
> >>> @@ -1,6 +1,17 @@
> >>>    # SPDX-License-Identifier: BSD-3-Clause
> >>>    # Copyright(c) 2023 University of New Hampshire
> >>>
> >>> +"""Smoke test suite.
> >>> +
> >>> +Smoke tests are a class of tests which are used for validating a minimal set of important features.
> >>> +These are the most important features without which (or when they're faulty) the software wouldn't
> >>> +work properly. Thus, if any failure occurs while testing these features,
> >>> +there isn't that much of a reason to continue testing, as the software is fundamentally broken.
> >>> +
> >>> +These tests don't have to include only DPDK tests, as the reason for failures could be
> >>> +in the infrastructure (a faulty link between NICs or a misconfiguration).
> >>> +"""
> >>> +
> >>>    import re
> >>>
> >>>    from framework.config import PortConfig
> >>> @@ -11,13 +22,25 @@
> >>>
> >>>
> >>>    class SmokeTests(TestSuite):
> >>> +    """DPDK and infrastructure smoke test suite.
> >>> +
> >>> +    The test cases validate the most basic DPDK functionality needed for all other test suites.
> >>> +    The infrastructure also needs to be tested, as that is also used by all other test suites.
> >>> +
> >>> +    Attributes:
> >>> +        is_blocking: This test suite will block the execution of all other test suites
> >>> +            in the build target after it.
> >>> +        nics_in_node: The NICs present on the SUT node.
> >>> +    """
> >>> +
> >>>        is_blocking = True
> >>>        # dicts in this list are expected to have two keys:
> >>>        # "pci_address" and "current_driver"
> >>>        nics_in_node: list[PortConfig] = []
> >>>
> >>>        def set_up_suite(self) -> None:
> >>> -        """
> >>> +        """Set up the test suite.
> >>> +
> >>>            Setup:
> >>>                Set the build directory path and generate a list of NICs in the SUT node.
> >>>            """
> >>> @@ -25,7 +48,13 @@ def set_up_suite(self) -> None:
> >>>            self.nics_in_node = self.sut_node.config.ports
> >>>
> >>>        def test_unit_tests(self) -> None:
> >>> -        """
> >>> +        """DPDK meson fast-tests unit tests.
> >>> +
> >>> +        The DPDK unit tests are basic tests that indicate regressions and other critical failures.
> >>> +        These need to be addressed before other testing.
> >>> +
> >>> +        The fast-tests unit tests are a subset with only the most basic tests.
> >>> +
> >>>            Test:
> >>>                Run the fast-test unit-test suite through meson.
> >>>            """
> >>> @@ -37,7 +66,14 @@ def test_unit_tests(self) -> None:
> >>>            )
> >>>
> >>>        def test_driver_tests(self) -> None:
> >>> -        """
> >>> +        """DPDK meson driver-tests unit tests.
> >>> +
> >>
> >> Copy paste from the previous unit test in the driver tests. If it is on
> >> purpose, as both are considered unit tests, then note the previous function
> >> is test_unit_tests and deals with fast-tests.
> >>
> >
> > I'm not sure what you mean. The two are separate tests (one with the
> > fast-test, the other one with the driver-test unit test suite)
> > and the docstrings do capture the differences.
>
> I am a little bit confused as to how I deleted it in my reply, but I was
> referring to this sentence in the patch:
> "The DPDK unit tests are basic tests that indicate regressions and other
> critical failures.
> These need to be addressed before other testing."
> But in any case, reading it again, I agree with you.
>
> >
> >>> +
> >>> +        The driver-tests unit tests are a subset that test only drivers. These may be run
> >>> +        with virtual devices as well.
> >>> +
> >>>            Test:
> >>>                Run the driver-test unit-test suite through meson.
> >>>            """
> >>> @@ -63,7 +99,10 @@ def test_driver_tests(self) -> None:
> >>>            )
> >>>
> >>>        def test_devices_listed_in_testpmd(self) -> None:
> >>> -        """
> >>> +        """Testpmd device discovery.
> >>> +
> >>> +        If the configured devices can't be found in testpmd, they can't be tested.
> >>
> >> Maybe a bit nitpicky. This is more of a statement as to why the test
> >> exists than a description of the test. Suggestion: "Tests that the
> >> configured devices can be found in testpmd. If they aren't, the
> >> configuration might be wrong and tests might be skipped"
> >>
> >
> > This is more of a reason for why this particular test is a smoke test.
> > Since a smoke test failure results in all test suites being blocked,
> > this seemed like key information.
> >
> > We also don't have an exact format of what should be included in a
> > test case/suite documentation. We should use this opportunity to
> > document what we deem important in these test cases at this point in
> > time and improve the docs as we continue adding test cases. We can add
> > more custom sections (such as the "Setup:" and" "Test:" sections,
> > which can be added to Sphinx); I like adding a section with
> > explanation for why a test is a particular type of test (in this case,
> > a smoke test). The regular body could contain a description as you
> > suggested. What do you think?
> >
>
> I'm not really sure what way to go here. The thing I noticed here was
> mainly the lack of consistency between this test's description and the
> previous one. I agree that making it clear it's a smoke test is good,
> but compare it to test_device_bound_to_driver's description for
> instance. Both state clearly that they are a smoke test, but the
> formulation is quite different.
>
> I'm not entirely sure about adding more custom sections. I fear it might
> be more hassle than it's worth. A short guideline on how to write the
> doc and what section to use could be handy though.
>
> Reading the previous test, I think I see what you mean by having a
> section to describe the test type and another for the description.
> In short, the type is: DPDK unit tests, test critical failures, needs to
> run first,
> and then it's followed by the test's description.
> But I think this type is redundant with the test suite's description? If
> so, only a description would be needed.
>

Ok, let's not complicate this any further. I thought a bit more about
this and I've also come to the conclusion that the test suite
description is enough (because of the redundancy). I'll just try to
make the test case short descriptions more consistent.
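
For example (just a sketch of the direction, not the final wording), the
short descriptions of the two test cases above could follow the same
pattern:

    def test_devices_listed_in_testpmd(self) -> None:
        """Smoke test for testpmd device discovery."""

    def test_device_bound_to_driver(self) -> None:
        """Smoke test for OS driver binding."""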

> >>> +
> >>>            Test:
> >>>                Uses testpmd driver to verify that devices have been found by testpmd.
> >>>            """
> >>> @@ -79,7 +118,11 @@ def test_devices_listed_in_testpmd(self) -> None:
> >>>                )
> >>>
> >>>        def test_device_bound_to_driver(self) -> None:
> >>> -        """
> >>> +        """Device driver in OS.
> >>> +
> >>> +        The devices must be bound to the proper driver, otherwise they can't be used by DPDK
> >>> +        or the traffic generators.
> >>
> >> Same as the previous comment. It is more of a statement as to why the
> >> test exists than a description of the test.
> >>
> >
> > Ack.
> >
> >>> +
> >>>            Test:
> >>>                Ensure that all drivers listed in the config are bound to the correct
> >>>                driver.
>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v8 00/21] dts: docstrings update
  2023-11-15 13:09             ` [PATCH v7 00/21] dts: docstrings update Juraj Linkeš
                                 ` (20 preceding siblings ...)
  2023-11-15 13:09               ` [PATCH v7 21/21] dts: test suites " Juraj Linkeš
@ 2023-11-23 15:13               ` Juraj Linkeš
  2023-11-23 15:13                 ` [PATCH v8 01/21] dts: code adjustments for doc generation Juraj Linkeš
                                   ` (22 more replies)
  21 siblings, 23 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-23 15:13 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

The first commit makes changes to the code. These code changes mainly
change the structure of the code so that the actual API docs generation
works. There are also some code changes which get reflected in the
documentation, such as making functions/methods/attributes private or
public.

The rest of the commits deal with the actual docstring documentation
(from which the API docs are generated). The format of the docstrings
is the Google format [0] with PEP257 [1] and some guidelines captured
in the last commit of this group covering what the Google format
doesn't.
The docstring updates are split into many commits to make review
possible. When accepted, they may be squashed.
The docstrings have been composed in anticipation of [2], adhering to
maximum line length of 100. We don't have a tool for automatic docstring
formatting, hence the usage of 100 right away to save time.

NOTE: The logger.py module is not fully documented, as it's being
refactored and the refactor will be submitted in the near future.
Documenting it now seems unnecessary.

[0] https://google.github.io/styleguide/pyguide.html#s3.8.4-comments-in-classes
[1] https://peps.python.org/pep-0257/
[2] https://patches.dpdk.org/project/dpdk/list/?series=29844
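
As a minimal sketch of the format (a made-up method, not one from the
series):

    def send_command(self, command: str, timeout: float = 15) -> str:
        """Send `command` to the node and return its output.

        Args:
            command: The command to execute.
            timeout: Wait at most this long in seconds for the execution to end.

        Returns:
            The output of the command.
        """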

v7:
Split the series into docstrings and api docs generation and addressed
comments.

v8:
Addressed review comments, all of which were pretty minor - small
grammatical changes, a little bit of rewording to remove confusion here
and there, additional explanations and so on.

Juraj Linkeš (21):
  dts: code adjustments for doc generation
  dts: add docstring checker
  dts: add basic developer docs
  dts: exceptions docstring update
  dts: settings docstring update
  dts: logger and utils docstring update
  dts: dts runner and main docstring update
  dts: test suite docstring update
  dts: test result docstring update
  dts: config docstring update
  dts: remote session docstring update
  dts: interactive remote session docstring update
  dts: port and virtual device docstring update
  dts: cpu docstring update
  dts: os session docstring update
  dts: posix and linux sessions docstring update
  dts: node docstring update
  dts: sut and tg nodes docstring update
  dts: base traffic generators docstring update
  dts: scapy tg docstring update
  dts: test suites docstring update

 doc/guides/tools/dts.rst                      |  73 +++
 dts/framework/__init__.py                     |  12 +-
 dts/framework/config/__init__.py              | 375 +++++++++++++---
 dts/framework/config/types.py                 | 132 ++++++
 dts/framework/dts.py                          | 162 +++++--
 dts/framework/exception.py                    | 156 ++++---
 dts/framework/logger.py                       |  72 ++-
 dts/framework/remote_session/__init__.py      |  80 ++--
 .../interactive_remote_session.py             |  36 +-
 .../remote_session/interactive_shell.py       | 150 +++++++
 dts/framework/remote_session/os_session.py    | 284 ------------
 dts/framework/remote_session/python_shell.py  |  32 ++
 .../remote_session/remote/__init__.py         |  27 --
 .../remote/interactive_shell.py               | 131 ------
 .../remote_session/remote/python_shell.py     |  12 -
 .../remote_session/remote/remote_session.py   | 168 -------
 .../remote_session/remote/testpmd_shell.py    |  45 --
 .../remote_session/remote_session.py          | 230 ++++++++++
 .../{remote => }/ssh_session.py               |  28 +-
 dts/framework/remote_session/testpmd_shell.py |  83 ++++
 dts/framework/settings.py                     | 188 ++++++--
 dts/framework/test_result.py                  | 301 ++++++++++---
 dts/framework/test_suite.py                   | 236 +++++++---
 dts/framework/testbed_model/__init__.py       |  29 +-
 dts/framework/testbed_model/{hw => }/cpu.py   | 209 ++++++---
 dts/framework/testbed_model/hw/__init__.py    |  27 --
 dts/framework/testbed_model/hw/port.py        |  60 ---
 .../testbed_model/hw/virtual_device.py        |  16 -
 .../linux_session.py                          |  70 ++-
 dts/framework/testbed_model/node.py           | 214 ++++++---
 dts/framework/testbed_model/os_session.py     | 422 ++++++++++++++++++
 dts/framework/testbed_model/port.py           |  93 ++++
 .../posix_session.py                          |  85 +++-
 dts/framework/testbed_model/sut_node.py       | 238 ++++++----
 dts/framework/testbed_model/tg_node.py        |  69 ++-
 .../testbed_model/traffic_generator.py        |  72 ---
 .../traffic_generator/__init__.py             |  43 ++
 .../capturing_traffic_generator.py            |  49 +-
 .../{ => traffic_generator}/scapy.py          | 110 +++--
 .../traffic_generator/traffic_generator.py    |  85 ++++
 dts/framework/testbed_model/virtual_device.py |  29 ++
 dts/framework/utils.py                        | 122 ++---
 dts/main.py                                   |  19 +-
 dts/poetry.lock                               |  12 +-
 dts/pyproject.toml                            |   6 +-
 dts/tests/TestSuite_hello_world.py            |  16 +-
 dts/tests/TestSuite_os_udp.py                 |  20 +-
 dts/tests/TestSuite_smoke_tests.py            |  61 ++-
 48 files changed, 3506 insertions(+), 1683 deletions(-)
 create mode 100644 dts/framework/config/types.py
 rename dts/framework/remote_session/{remote => }/interactive_remote_session.py (76%)
 create mode 100644 dts/framework/remote_session/interactive_shell.py
 delete mode 100644 dts/framework/remote_session/os_session.py
 create mode 100644 dts/framework/remote_session/python_shell.py
 delete mode 100644 dts/framework/remote_session/remote/__init__.py
 delete mode 100644 dts/framework/remote_session/remote/interactive_shell.py
 delete mode 100644 dts/framework/remote_session/remote/python_shell.py
 delete mode 100644 dts/framework/remote_session/remote/remote_session.py
 delete mode 100644 dts/framework/remote_session/remote/testpmd_shell.py
 create mode 100644 dts/framework/remote_session/remote_session.py
 rename dts/framework/remote_session/{remote => }/ssh_session.py (82%)
 create mode 100644 dts/framework/remote_session/testpmd_shell.py
 rename dts/framework/testbed_model/{hw => }/cpu.py (50%)
 delete mode 100644 dts/framework/testbed_model/hw/__init__.py
 delete mode 100644 dts/framework/testbed_model/hw/port.py
 delete mode 100644 dts/framework/testbed_model/hw/virtual_device.py
 rename dts/framework/{remote_session => testbed_model}/linux_session.py (77%)
 create mode 100644 dts/framework/testbed_model/os_session.py
 create mode 100644 dts/framework/testbed_model/port.py
 rename dts/framework/{remote_session => testbed_model}/posix_session.py (73%)
 delete mode 100644 dts/framework/testbed_model/traffic_generator.py
 create mode 100644 dts/framework/testbed_model/traffic_generator/__init__.py
 rename dts/framework/testbed_model/{ => traffic_generator}/capturing_traffic_generator.py (68%)
 rename dts/framework/testbed_model/{ => traffic_generator}/scapy.py (71%)
 create mode 100644 dts/framework/testbed_model/traffic_generator/traffic_generator.py
 create mode 100644 dts/framework/testbed_model/virtual_device.py

-- 
2.34.1



* [PATCH v8 01/21] dts: code adjustments for doc generation
  2023-11-23 15:13               ` [PATCH v8 00/21] dts: docstrings update Juraj Linkeš
@ 2023-11-23 15:13                 ` Juraj Linkeš
  2023-11-23 15:13                 ` [PATCH v8 02/21] dts: add docstring checker Juraj Linkeš
                                   ` (21 subsequent siblings)
  22 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-23 15:13 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

The standard Python tool for generating API documentation, Sphinx,
imports modules one-by-one when generating the documentation. This
requires code changes:
* properly guarding argument parsing in the if __name__ == '__main__'
  block (see the sketch after this list),
* the logger used by the DTS runner underwent the same treatment so that
  it doesn't create log files outside of a DTS run,
* however, DTS uses the arguments to construct an object holding global
  variables, so the defaults for those variables needed to be moved out
  of argument parsing,
* importing the remote_session module from framework resulted in
  circular imports caused by modules importing each other. This is fixed
  by reorganizing the code,
* some code reorganization was done because the resulting structure
  makes more sense, improving documentation clarity.
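
The first point boils down to the following pattern (a condensed sketch
based on the dts/main.py change at the end of this patch):

    from framework import settings

    def main() -> None:
        # Arguments and environment variables are read only when DTS
        # runs as a script, so Sphinx can import the modules without
        # side effects.
        settings.SETTINGS = settings.get_settings()
        from framework import dts

        dts.run_all()

    if __name__ == "__main__":
        main()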

There are some other changes which are documentation related:
* added missing type annotations so they appear in the generated docs,
* reordered arguments in some methods,
* removed superfluous arguments and attributes,
* changed some functions/methods/attributes from public to private and
  vice versa.

All of the above appear in the generated documentation and improve it.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/config/__init__.py              |  8 +-
 dts/framework/dts.py                          | 31 +++++--
 dts/framework/exception.py                    | 54 +++++-------
 dts/framework/remote_session/__init__.py      | 41 +++++----
 .../interactive_remote_session.py             |  0
 .../{remote => }/interactive_shell.py         |  0
 .../{remote => }/python_shell.py              |  0
 .../remote_session/remote/__init__.py         | 27 ------
 .../{remote => }/remote_session.py            |  0
 .../{remote => }/ssh_session.py               | 12 +--
 .../{remote => }/testpmd_shell.py             |  0
 dts/framework/settings.py                     | 85 +++++++++++--------
 dts/framework/test_result.py                  |  4 +-
 dts/framework/test_suite.py                   |  7 +-
 dts/framework/testbed_model/__init__.py       | 12 +--
 dts/framework/testbed_model/{hw => }/cpu.py   | 13 +++
 dts/framework/testbed_model/hw/__init__.py    | 27 ------
 .../linux_session.py                          |  6 +-
 dts/framework/testbed_model/node.py           | 23 +++--
 .../os_session.py                             | 22 ++---
 dts/framework/testbed_model/{hw => }/port.py  |  0
 .../posix_session.py                          |  4 +-
 dts/framework/testbed_model/sut_node.py       |  8 +-
 dts/framework/testbed_model/tg_node.py        | 29 +------
 .../traffic_generator/__init__.py             | 23 +++++
 .../capturing_traffic_generator.py            |  4 +-
 .../{ => traffic_generator}/scapy.py          | 19 ++---
 .../traffic_generator.py                      | 14 ++-
 .../testbed_model/{hw => }/virtual_device.py  |  0
 dts/framework/utils.py                        | 40 +++------
 dts/main.py                                   |  9 +-
 31 files changed, 244 insertions(+), 278 deletions(-)
 rename dts/framework/remote_session/{remote => }/interactive_remote_session.py (100%)
 rename dts/framework/remote_session/{remote => }/interactive_shell.py (100%)
 rename dts/framework/remote_session/{remote => }/python_shell.py (100%)
 delete mode 100644 dts/framework/remote_session/remote/__init__.py
 rename dts/framework/remote_session/{remote => }/remote_session.py (100%)
 rename dts/framework/remote_session/{remote => }/ssh_session.py (91%)
 rename dts/framework/remote_session/{remote => }/testpmd_shell.py (100%)
 rename dts/framework/testbed_model/{hw => }/cpu.py (95%)
 delete mode 100644 dts/framework/testbed_model/hw/__init__.py
 rename dts/framework/{remote_session => testbed_model}/linux_session.py (97%)
 rename dts/framework/{remote_session => testbed_model}/os_session.py (95%)
 rename dts/framework/testbed_model/{hw => }/port.py (100%)
 rename dts/framework/{remote_session => testbed_model}/posix_session.py (98%)
 create mode 100644 dts/framework/testbed_model/traffic_generator/__init__.py
 rename dts/framework/testbed_model/{ => traffic_generator}/capturing_traffic_generator.py (98%)
 rename dts/framework/testbed_model/{ => traffic_generator}/scapy.py (95%)
 rename dts/framework/testbed_model/{ => traffic_generator}/traffic_generator.py (81%)
 rename dts/framework/testbed_model/{hw => }/virtual_device.py (100%)

diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 9b32cf0532..ef25a463c0 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -17,6 +17,7 @@
 import warlock  # type: ignore[import]
 import yaml
 
+from framework.exception import ConfigurationError
 from framework.settings import SETTINGS
 from framework.utils import StrEnum
 
@@ -89,7 +90,7 @@ class TrafficGeneratorConfig:
     traffic_generator_type: TrafficGeneratorType
 
     @staticmethod
-    def from_dict(d: dict):
+    def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig":
         # This looks useless now, but is designed to allow expansion to traffic
         # generators that require more configuration later.
         match TrafficGeneratorType(d["type"]):
@@ -97,6 +98,8 @@ def from_dict(d: dict):
                 return ScapyTrafficGeneratorConfig(
                     traffic_generator_type=TrafficGeneratorType.SCAPY
                 )
+            case _:
+                raise ConfigurationError(f'Unknown traffic generator type "{d["type"]}".')
 
 
 @dataclass(slots=True, frozen=True)
@@ -314,6 +317,3 @@ def load_config() -> Configuration:
     config: dict[str, Any] = warlock.model_factory(schema, name="_Config")(config_data)
     config_obj: Configuration = Configuration.from_dict(dict(config))
     return config_obj
-
-
-CONFIGURATION = load_config()
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index 25d6942d81..356368ef10 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -6,19 +6,19 @@
 import sys
 
 from .config import (
-    CONFIGURATION,
     BuildTargetConfiguration,
     ExecutionConfiguration,
     TestSuiteConfig,
+    load_config,
 )
 from .exception import BlockingTestSuiteError
 from .logger import DTSLOG, getLogger
 from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result
 from .test_suite import get_test_suites
 from .testbed_model import SutNode, TGNode
-from .utils import check_dts_python_version
 
-dts_logger: DTSLOG = getLogger("DTSRunner")
+# dummy defaults to satisfy linters
+dts_logger: DTSLOG = None  # type: ignore[assignment]
 result: DTSResult = DTSResult(dts_logger)
 
 
@@ -30,14 +30,18 @@ def run_all() -> None:
     global dts_logger
     global result
 
+    # create a regular DTS logger and create a new result with it
+    dts_logger = getLogger("DTSRunner")
+    result = DTSResult(dts_logger)
+
     # check the python version of the server that run dts
-    check_dts_python_version()
+    _check_dts_python_version()
 
     sut_nodes: dict[str, SutNode] = {}
     tg_nodes: dict[str, TGNode] = {}
     try:
         # for all Execution sections
-        for execution in CONFIGURATION.executions:
+        for execution in load_config().executions:
             sut_node = sut_nodes.get(execution.system_under_test_node.name)
             tg_node = tg_nodes.get(execution.traffic_generator_node.name)
 
@@ -82,6 +86,23 @@ def run_all() -> None:
     _exit_dts()
 
 
+def _check_dts_python_version() -> None:
+    def RED(text: str) -> str:
+        return f"\u001B[31;1m{str(text)}\u001B[0m"
+
+    if sys.version_info.major < 3 or (sys.version_info.major == 3 and sys.version_info.minor < 10):
+        print(
+            RED(
+                (
+                    "WARNING: DTS execution node's python version is lower than "
+                    "python 3.10; this is deprecated and will not work in future releases."
+                )
+            ),
+            file=sys.stderr,
+        )
+        print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
+
+
 def _run_execution(
     sut_node: SutNode,
     tg_node: TGNode,
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index b362e42924..151e4d3aa9 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -42,19 +42,14 @@ class SSHTimeoutError(DTSError):
     Command execution timeout.
     """
 
-    command: str
-    output: str
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
+    _command: str
 
-    def __init__(self, command: str, output: str):
-        self.command = command
-        self.output = output
+    def __init__(self, command: str):
+        self._command = command
 
     def __str__(self) -> str:
-        return f"TIMEOUT on {self.command}"
-
-    def get_output(self) -> str:
-        return self.output
+        return f"TIMEOUT on {self._command}"
 
 
 class SSHConnectionError(DTSError):
@@ -62,18 +57,18 @@ class SSHConnectionError(DTSError):
     SSH connection error.
     """
 
-    host: str
-    errors: list[str]
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
+    _host: str
+    _errors: list[str]
 
     def __init__(self, host: str, errors: list[str] | None = None):
-        self.host = host
-        self.errors = [] if errors is None else errors
+        self._host = host
+        self._errors = [] if errors is None else errors
 
     def __str__(self) -> str:
-        message = f"Error trying to connect with {self.host}."
-        if self.errors:
-            message += f" Errors encountered while retrying: {', '.join(self.errors)}"
+        message = f"Error trying to connect with {self._host}."
+        if self._errors:
+            message += f" Errors encountered while retrying: {', '.join(self._errors)}"
 
         return message
 
@@ -84,14 +79,14 @@ class SSHSessionDeadError(DTSError):
     It can no longer be used.
     """
 
-    host: str
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
+    _host: str
 
     def __init__(self, host: str):
-        self.host = host
+        self._host = host
 
     def __str__(self) -> str:
-        return f"SSH session with {self.host} has died"
+        return f"SSH session with {self._host} has died"
 
 
 class ConfigurationError(DTSError):
@@ -107,16 +102,16 @@ class RemoteCommandExecutionError(DTSError):
     Raised when a command executed on a Node returns a non-zero exit status.
     """
 
-    command: str
-    command_return_code: int
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
+    command: str
+    _command_return_code: int
 
     def __init__(self, command: str, command_return_code: int):
         self.command = command
-        self.command_return_code = command_return_code
+        self._command_return_code = command_return_code
 
     def __str__(self) -> str:
-        return f"Command {self.command} returned a non-zero exit code: {self.command_return_code}"
+        return f"Command {self.command} returned a non-zero exit code: {self._command_return_code}"
 
 
 class RemoteDirectoryExistsError(DTSError):
@@ -140,22 +135,15 @@ class TestCaseVerifyError(DTSError):
     Used in test cases to verify the expected behavior.
     """
 
-    value: str
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR
 
-    def __init__(self, value: str):
-        self.value = value
-
-    def __str__(self) -> str:
-        return repr(self.value)
-
 
 class BlockingTestSuiteError(DTSError):
-    suite_name: str
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR
+    _suite_name: str
 
     def __init__(self, suite_name: str) -> None:
-        self.suite_name = suite_name
+        self._suite_name = suite_name
 
     def __str__(self) -> str:
-        return f"Blocking suite {self.suite_name} failed."
+        return f"Blocking suite {self._suite_name} failed."
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index 6124417bd7..5e7ddb2b05 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -12,27 +12,24 @@
 
 # pylama:ignore=W0611
 
-from framework.config import OS, NodeConfiguration
-from framework.exception import ConfigurationError
+from framework.config import NodeConfiguration
 from framework.logger import DTSLOG
 
-from .linux_session import LinuxSession
-from .os_session import InteractiveShellType, OSSession
-from .remote import (
-    CommandResult,
-    InteractiveRemoteSession,
-    InteractiveShell,
-    PythonShell,
-    RemoteSession,
-    SSHSession,
-    TestPmdDevice,
-    TestPmdShell,
-)
-
-
-def create_session(node_config: NodeConfiguration, name: str, logger: DTSLOG) -> OSSession:
-    match node_config.os:
-        case OS.linux:
-            return LinuxSession(node_config, name, logger)
-        case _:
-            raise ConfigurationError(f"Unsupported OS {node_config.os}")
+from .interactive_remote_session import InteractiveRemoteSession
+from .interactive_shell import InteractiveShell
+from .python_shell import PythonShell
+from .remote_session import CommandResult, RemoteSession
+from .ssh_session import SSHSession
+from .testpmd_shell import TestPmdShell
+
+
+def create_remote_session(
+    node_config: NodeConfiguration, name: str, logger: DTSLOG
+) -> RemoteSession:
+    return SSHSession(node_config, name, logger)
+
+
+def create_interactive_session(
+    node_config: NodeConfiguration, logger: DTSLOG
+) -> InteractiveRemoteSession:
+    return InteractiveRemoteSession(node_config, logger)
diff --git a/dts/framework/remote_session/remote/interactive_remote_session.py b/dts/framework/remote_session/interactive_remote_session.py
similarity index 100%
rename from dts/framework/remote_session/remote/interactive_remote_session.py
rename to dts/framework/remote_session/interactive_remote_session.py
diff --git a/dts/framework/remote_session/remote/interactive_shell.py b/dts/framework/remote_session/interactive_shell.py
similarity index 100%
rename from dts/framework/remote_session/remote/interactive_shell.py
rename to dts/framework/remote_session/interactive_shell.py
diff --git a/dts/framework/remote_session/remote/python_shell.py b/dts/framework/remote_session/python_shell.py
similarity index 100%
rename from dts/framework/remote_session/remote/python_shell.py
rename to dts/framework/remote_session/python_shell.py
diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
deleted file mode 100644
index 06403691a5..0000000000
--- a/dts/framework/remote_session/remote/__init__.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2023 PANTHEON.tech s.r.o.
-# Copyright(c) 2023 University of New Hampshire
-
-# pylama:ignore=W0611
-
-from framework.config import NodeConfiguration
-from framework.logger import DTSLOG
-
-from .interactive_remote_session import InteractiveRemoteSession
-from .interactive_shell import InteractiveShell
-from .python_shell import PythonShell
-from .remote_session import CommandResult, RemoteSession
-from .ssh_session import SSHSession
-from .testpmd_shell import TestPmdDevice, TestPmdShell
-
-
-def create_remote_session(
-    node_config: NodeConfiguration, name: str, logger: DTSLOG
-) -> RemoteSession:
-    return SSHSession(node_config, name, logger)
-
-
-def create_interactive_session(
-    node_config: NodeConfiguration, logger: DTSLOG
-) -> InteractiveRemoteSession:
-    return InteractiveRemoteSession(node_config, logger)
diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote_session.py
similarity index 100%
rename from dts/framework/remote_session/remote/remote_session.py
rename to dts/framework/remote_session/remote_session.py
diff --git a/dts/framework/remote_session/remote/ssh_session.py b/dts/framework/remote_session/ssh_session.py
similarity index 91%
rename from dts/framework/remote_session/remote/ssh_session.py
rename to dts/framework/remote_session/ssh_session.py
index 1a7ee649ab..a467033a13 100644
--- a/dts/framework/remote_session/remote/ssh_session.py
+++ b/dts/framework/remote_session/ssh_session.py
@@ -18,9 +18,7 @@
     SSHException,
 )
 
-from framework.config import NodeConfiguration
 from framework.exception import SSHConnectionError, SSHSessionDeadError, SSHTimeoutError
-from framework.logger import DTSLOG
 
 from .remote_session import CommandResult, RemoteSession
 
@@ -45,14 +43,6 @@ class SSHSession(RemoteSession):
 
     session: Connection
 
-    def __init__(
-        self,
-        node_config: NodeConfiguration,
-        session_name: str,
-        logger: DTSLOG,
-    ):
-        super(SSHSession, self).__init__(node_config, session_name, logger)
-
     def _connect(self) -> None:
         errors = []
         retry_attempts = 10
@@ -111,7 +101,7 @@ def _send_command(self, command: str, timeout: float, env: dict | None) -> Comma
 
         except CommandTimedOut as e:
             self._logger.exception(e)
-            raise SSHTimeoutError(command, e.result.stderr) from e
+            raise SSHTimeoutError(command) from e
 
         return CommandResult(self.name, command, output.stdout, output.stderr, output.return_code)
 
diff --git a/dts/framework/remote_session/remote/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py
similarity index 100%
rename from dts/framework/remote_session/remote/testpmd_shell.py
rename to dts/framework/remote_session/testpmd_shell.py
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 974793a11a..25b5dcff22 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -6,7 +6,7 @@
 import argparse
 import os
 from collections.abc import Callable, Iterable, Sequence
-from dataclasses import dataclass
+from dataclasses import dataclass, field
 from pathlib import Path
 from typing import Any, TypeVar
 
@@ -22,8 +22,8 @@ def __init__(
             option_strings: Sequence[str],
             dest: str,
             nargs: str | int | None = None,
-            const: str | None = None,
-            default: str = None,
+            const: bool | None = None,
+            default: Any = None,
             type: Callable[[str], _T | argparse.FileType | None] = None,
             choices: Iterable[_T] | None = None,
             required: bool = False,
@@ -32,6 +32,12 @@ def __init__(
         ) -> None:
             env_var_value = os.environ.get(env_var)
             default = env_var_value or default
+            if const is not None:
+                nargs = 0
+                default = const if env_var_value else default
+                type = None
+                choices = None
+                metavar = None
             super(_EnvironmentArgument, self).__init__(
                 option_strings,
                 dest,
@@ -52,22 +58,28 @@ def __call__(
             values: Any,
             option_string: str = None,
         ) -> None:
-            setattr(namespace, self.dest, values)
+            if self.const is not None:
+                setattr(namespace, self.dest, self.const)
+            else:
+                setattr(namespace, self.dest, values)
 
     return _EnvironmentArgument
 
 
-@dataclass(slots=True, frozen=True)
-class _Settings:
-    config_file_path: str
-    output_dir: str
-    timeout: float
-    verbose: bool
-    skip_setup: bool
-    dpdk_tarball_path: Path
-    compile_timeout: float
-    test_cases: list
-    re_run: int
+@dataclass(slots=True)
+class Settings:
+    config_file_path: Path = Path(__file__).parent.parent.joinpath("conf.yaml")
+    output_dir: str = "output"
+    timeout: float = 15
+    verbose: bool = False
+    skip_setup: bool = False
+    dpdk_tarball_path: Path | str = "dpdk.tar.xz"
+    compile_timeout: float = 1200
+    test_cases: list[str] = field(default_factory=list)
+    re_run: int = 0
+
+
+SETTINGS: Settings = Settings()
 
 
 def _get_parser() -> argparse.ArgumentParser:
@@ -80,7 +92,8 @@ def _get_parser() -> argparse.ArgumentParser:
     parser.add_argument(
         "--config-file",
         action=_env_arg("DTS_CFG_FILE"),
-        default="conf.yaml",
+        default=SETTINGS.config_file_path,
+        type=Path,
         help="[DTS_CFG_FILE] configuration file that describes the test cases, SUTs and targets.",
     )
 
@@ -88,7 +101,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--output-dir",
         "--output",
         action=_env_arg("DTS_OUTPUT_DIR"),
-        default="output",
+        default=SETTINGS.output_dir,
         help="[DTS_OUTPUT_DIR] Output directory where dts logs and results are saved.",
     )
 
@@ -96,7 +109,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "-t",
         "--timeout",
         action=_env_arg("DTS_TIMEOUT"),
-        default=15,
+        default=SETTINGS.timeout,
         type=float,
         help="[DTS_TIMEOUT] The default timeout for all DTS operations except for compiling DPDK.",
     )
@@ -105,8 +118,9 @@ def _get_parser() -> argparse.ArgumentParser:
         "-v",
         "--verbose",
         action=_env_arg("DTS_VERBOSE"),
-        default="N",
-        help="[DTS_VERBOSE] Set to 'Y' to enable verbose output, logging all messages "
+        default=SETTINGS.verbose,
+        const=True,
+        help="[DTS_VERBOSE] Specify to enable verbose output, logging all messages "
         "to the console.",
     )
 
@@ -114,8 +128,8 @@ def _get_parser() -> argparse.ArgumentParser:
         "-s",
         "--skip-setup",
         action=_env_arg("DTS_SKIP_SETUP"),
-        default="N",
-        help="[DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT and TG nodes.",
+        const=True,
+        help="[DTS_SKIP_SETUP] Specify to skip all setup steps on SUT and TG nodes.",
     )
 
     parser.add_argument(
@@ -123,7 +137,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--snapshot",
         "--git-ref",
         action=_env_arg("DTS_DPDK_TARBALL"),
-        default="dpdk.tar.xz",
+        default=SETTINGS.dpdk_tarball_path,
         type=Path,
         help="[DTS_DPDK_TARBALL] Path to DPDK source code tarball or a git commit ID, "
         "tag ID or tree ID to test. To test local changes, first commit them, "
@@ -133,7 +147,7 @@ def _get_parser() -> argparse.ArgumentParser:
     parser.add_argument(
         "--compile-timeout",
         action=_env_arg("DTS_COMPILE_TIMEOUT"),
-        default=1200,
+        default=SETTINGS.compile_timeout,
         type=float,
         help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
     )
@@ -150,7 +164,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--re-run",
         "--re_run",
         action=_env_arg("DTS_RERUN"),
-        default=0,
+        default=SETTINGS.re_run,
         type=int,
         help="[DTS_RERUN] Re-run each test case the specified amount of times "
         "if a test failure occurs",
@@ -159,21 +173,20 @@ def _get_parser() -> argparse.ArgumentParser:
     return parser
 
 
-def _get_settings() -> _Settings:
+def get_settings() -> Settings:
     parsed_args = _get_parser().parse_args()
-    return _Settings(
+    return Settings(
         config_file_path=parsed_args.config_file,
         output_dir=parsed_args.output_dir,
         timeout=parsed_args.timeout,
-        verbose=(parsed_args.verbose == "Y"),
-        skip_setup=(parsed_args.skip_setup == "Y"),
-        dpdk_tarball_path=Path(DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir))
-        if not os.path.exists(parsed_args.tarball)
-        else Path(parsed_args.tarball),
+        verbose=parsed_args.verbose,
+        skip_setup=parsed_args.skip_setup,
+        dpdk_tarball_path=Path(
+            Path(DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir))
+            if not os.path.exists(parsed_args.tarball)
+            else Path(parsed_args.tarball)
+        ),
         compile_timeout=parsed_args.compile_timeout,
-        test_cases=parsed_args.test_cases.split(",") if parsed_args.test_cases else [],
+        test_cases=(parsed_args.test_cases.split(",") if parsed_args.test_cases else []),
         re_run=parsed_args.re_run,
     )
-
-
-SETTINGS: _Settings = _get_settings()
diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
index 4c2e7e2418..57090feb04 100644
--- a/dts/framework/test_result.py
+++ b/dts/framework/test_result.py
@@ -246,7 +246,7 @@ def add_build_target(self, build_target: BuildTargetConfiguration) -> BuildTarge
         self._inner_results.append(build_target_result)
         return build_target_result
 
-    def add_sut_info(self, sut_info: NodeInfo):
+    def add_sut_info(self, sut_info: NodeInfo) -> None:
         self.sut_os_name = sut_info.os_name
         self.sut_os_version = sut_info.os_version
         self.sut_kernel_version = sut_info.kernel_version
@@ -289,7 +289,7 @@ def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult:
         self._inner_results.append(execution_result)
         return execution_result
 
-    def add_error(self, error) -> None:
+    def add_error(self, error: Exception) -> None:
         self._errors.append(error)
 
     def process(self) -> None:
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index 4a7907ec33..f9e66e814a 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -11,7 +11,7 @@
 import re
 from ipaddress import IPv4Interface, IPv6Interface, ip_interface
 from types import MethodType
-from typing import Union
+from typing import Any, Union
 
 from scapy.layers.inet import IP  # type: ignore[import]
 from scapy.layers.l2 import Ether  # type: ignore[import]
@@ -26,8 +26,7 @@
 from .logger import DTSLOG, getLogger
 from .settings import SETTINGS
 from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult
-from .testbed_model import SutNode, TGNode
-from .testbed_model.hw.port import Port, PortLink
+from .testbed_model import Port, PortLink, SutNode, TGNode
 from .utils import get_packet_summaries
 
 
@@ -426,7 +425,7 @@ def _execute_test_case(
 
 
 def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
-    def is_test_suite(object) -> bool:
+    def is_test_suite(object: Any) -> bool:
         try:
             if issubclass(object, TestSuite) and object is not TestSuite:
                 return True
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index 5cbb859e47..8ced05653b 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -9,15 +9,9 @@
 
 # pylama:ignore=W0611
 
-from .hw import (
-    LogicalCore,
-    LogicalCoreCount,
-    LogicalCoreCountFilter,
-    LogicalCoreList,
-    LogicalCoreListFilter,
-    VirtualDevice,
-    lcore_filter,
-)
+from .cpu import LogicalCoreCount, LogicalCoreCountFilter, LogicalCoreList
 from .node import Node
+from .port import Port, PortLink
 from .sut_node import SutNode
 from .tg_node import TGNode
+from .virtual_device import VirtualDevice
diff --git a/dts/framework/testbed_model/hw/cpu.py b/dts/framework/testbed_model/cpu.py
similarity index 95%
rename from dts/framework/testbed_model/hw/cpu.py
rename to dts/framework/testbed_model/cpu.py
index cbc5fe7fff..1b392689f5 100644
--- a/dts/framework/testbed_model/hw/cpu.py
+++ b/dts/framework/testbed_model/cpu.py
@@ -262,3 +262,16 @@ def filter(self) -> list[LogicalCore]:
             )
 
         return filtered_lcores
+
+
+def lcore_filter(
+    core_list: list[LogicalCore],
+    filter_specifier: LogicalCoreCount | LogicalCoreList,
+    ascending: bool,
+) -> LogicalCoreFilter:
+    if isinstance(filter_specifier, LogicalCoreList):
+        return LogicalCoreListFilter(core_list, filter_specifier, ascending)
+    elif isinstance(filter_specifier, LogicalCoreCount):
+        return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
+    else:
+        raise ValueError(f"Unsupported filter {filter_specifier}")
diff --git a/dts/framework/testbed_model/hw/__init__.py b/dts/framework/testbed_model/hw/__init__.py
deleted file mode 100644
index 88ccac0b0e..0000000000
--- a/dts/framework/testbed_model/hw/__init__.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2023 PANTHEON.tech s.r.o.
-
-# pylama:ignore=W0611
-
-from .cpu import (
-    LogicalCore,
-    LogicalCoreCount,
-    LogicalCoreCountFilter,
-    LogicalCoreFilter,
-    LogicalCoreList,
-    LogicalCoreListFilter,
-)
-from .virtual_device import VirtualDevice
-
-
-def lcore_filter(
-    core_list: list[LogicalCore],
-    filter_specifier: LogicalCoreCount | LogicalCoreList,
-    ascending: bool,
-) -> LogicalCoreFilter:
-    if isinstance(filter_specifier, LogicalCoreList):
-        return LogicalCoreListFilter(core_list, filter_specifier, ascending)
-    elif isinstance(filter_specifier, LogicalCoreCount):
-        return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
-    else:
-        raise ValueError(f"Unsupported filter r{filter_specifier}")
diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/testbed_model/linux_session.py
similarity index 97%
rename from dts/framework/remote_session/linux_session.py
rename to dts/framework/testbed_model/linux_session.py
index fd877fbfae..055765ba2d 100644
--- a/dts/framework/remote_session/linux_session.py
+++ b/dts/framework/testbed_model/linux_session.py
@@ -9,10 +9,10 @@
 from typing_extensions import NotRequired
 
 from framework.exception import RemoteCommandExecutionError
-from framework.testbed_model import LogicalCore
-from framework.testbed_model.hw.port import Port
 from framework.utils import expand_range
 
+from .cpu import LogicalCore
+from .port import Port
 from .posix_session import PosixSession
 
 
@@ -64,7 +64,7 @@ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
             lcores.append(LogicalCore(lcore, core, socket, node))
         return lcores
 
-    def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
+    def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         return dpdk_prefix
 
     def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index ef700d8114..b313b5ad54 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -12,23 +12,26 @@
 from typing import Any, Callable, Type, Union
 
 from framework.config import (
+    OS,
     BuildTargetConfiguration,
     ExecutionConfiguration,
     NodeConfiguration,
 )
+from framework.exception import ConfigurationError
 from framework.logger import DTSLOG, getLogger
-from framework.remote_session import InteractiveShellType, OSSession, create_session
 from framework.settings import SETTINGS
 
-from .hw import (
+from .cpu import (
     LogicalCore,
     LogicalCoreCount,
     LogicalCoreList,
     LogicalCoreListFilter,
-    VirtualDevice,
     lcore_filter,
 )
-from .hw.port import Port
+from .linux_session import LinuxSession
+from .os_session import InteractiveShellType, OSSession
+from .port import Port
+from .virtual_device import VirtualDevice
 
 
 class Node(ABC):
@@ -168,9 +171,9 @@ def create_interactive_shell(
 
         return self.main_session.create_interactive_shell(
             shell_cls,
-            app_args,
             timeout,
             privileged,
+            app_args,
         )
 
     def filter_lcores(
@@ -201,7 +204,7 @@ def _get_remote_cpus(self) -> None:
         self._logger.info("Getting CPU information.")
         self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
 
-    def _setup_hugepages(self):
+    def _setup_hugepages(self) -> None:
         """
         Setup hugepages on the Node. Different architectures can supply different
         amounts of memory for hugepages and numa-based hugepage allocation may need
@@ -245,3 +248,11 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
             return lambda *args: None
         else:
             return func
+
+
+def create_session(node_config: NodeConfiguration, name: str, logger: DTSLOG) -> OSSession:
+    match node_config.os:
+        case OS.linux:
+            return LinuxSession(node_config, name, logger)
+        case _:
+            raise ConfigurationError(f"Unsupported OS {node_config.os}")
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/testbed_model/os_session.py
similarity index 95%
rename from dts/framework/remote_session/os_session.py
rename to dts/framework/testbed_model/os_session.py
index 8a709eac1c..76e595a518 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/testbed_model/os_session.py
@@ -10,19 +10,19 @@
 
 from framework.config import Architecture, NodeConfiguration, NodeInfo
 from framework.logger import DTSLOG
-from framework.remote_session.remote import InteractiveShell
-from framework.settings import SETTINGS
-from framework.testbed_model import LogicalCore
-from framework.testbed_model.hw.port import Port
-from framework.utils import MesonArgs
-
-from .remote import (
+from framework.remote_session import (
     CommandResult,
     InteractiveRemoteSession,
+    InteractiveShell,
     RemoteSession,
     create_interactive_session,
     create_remote_session,
 )
+from framework.settings import SETTINGS
+from framework.utils import MesonArgs
+
+from .cpu import LogicalCore
+from .port import Port
 
 InteractiveShellType = TypeVar("InteractiveShellType", bound=InteractiveShell)
 
@@ -85,9 +85,9 @@ def send_command(
     def create_interactive_shell(
         self,
         shell_cls: Type[InteractiveShellType],
-        eal_parameters: str,
         timeout: float,
         privileged: bool,
+        app_args: str,
     ) -> InteractiveShellType:
         """
         See "create_interactive_shell" in SutNode
@@ -96,7 +96,7 @@ def create_interactive_shell(
             self.interactive_session.session,
             self._logger,
             self._get_privileged_command if privileged else None,
-            eal_parameters,
+            app_args,
             timeout,
         )
 
@@ -113,7 +113,7 @@ def _get_privileged_command(command: str) -> str:
         """
 
     @abstractmethod
-    def guess_dpdk_remote_dir(self, remote_dir) -> PurePath:
+    def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePath:
         """
         Try to find DPDK remote dir in remote_dir.
         """
@@ -227,7 +227,7 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
         """
 
     @abstractmethod
-    def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
+    def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         """
         Get the DPDK file prefix that will be used when running DPDK apps.
         """
diff --git a/dts/framework/testbed_model/hw/port.py b/dts/framework/testbed_model/port.py
similarity index 100%
rename from dts/framework/testbed_model/hw/port.py
rename to dts/framework/testbed_model/port.py
diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/testbed_model/posix_session.py
similarity index 98%
rename from dts/framework/remote_session/posix_session.py
rename to dts/framework/testbed_model/posix_session.py
index a29e2e8280..5657cc0bc9 100644
--- a/dts/framework/remote_session/posix_session.py
+++ b/dts/framework/testbed_model/posix_session.py
@@ -32,7 +32,7 @@ def combine_short_options(**opts: bool) -> str:
 
         return ret_opts
 
-    def guess_dpdk_remote_dir(self, remote_dir) -> PurePosixPath:
+    def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePosixPath:
         remote_guess = self.join_remote_path(remote_dir, "dpdk-*")
         result = self.send_command(f"ls -d {remote_guess} | tail -1")
         return PurePosixPath(result.stdout)
@@ -207,7 +207,7 @@ def _remove_dpdk_runtime_dirs(self, dpdk_runtime_dirs: Iterable[str | PurePath])
         for dpdk_runtime_dir in dpdk_runtime_dirs:
             self.remove_remote_dir(dpdk_runtime_dir)
 
-    def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
+    def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         return ""
 
     def get_compiler_version(self, compiler_name: str) -> str:
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 7f75043bd3..5ce9446dba 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -15,12 +15,14 @@
     NodeInfo,
     SutNodeConfiguration,
 )
-from framework.remote_session import CommandResult, InteractiveShellType, OSSession
+from framework.remote_session import CommandResult
 from framework.settings import SETTINGS
 from framework.utils import MesonArgs
 
-from .hw import LogicalCoreCount, LogicalCoreList, VirtualDevice
+from .cpu import LogicalCoreCount, LogicalCoreList
 from .node import Node
+from .os_session import InteractiveShellType, OSSession
+from .virtual_device import VirtualDevice
 
 
 class EalParameters(object):
@@ -293,7 +295,7 @@ def create_eal_parameters(
         prefix: str = "dpdk",
         append_prefix_timestamp: bool = True,
         no_pci: bool = False,
-        vdevs: list[VirtualDevice] = None,
+        vdevs: list[VirtualDevice] | None = None,
         other_eal_param: str = "",
     ) -> "EalParameters":
         """
diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
index 79a55663b5..8a8f0019f3 100644
--- a/dts/framework/testbed_model/tg_node.py
+++ b/dts/framework/testbed_model/tg_node.py
@@ -16,16 +16,11 @@
 
 from scapy.packet import Packet  # type: ignore[import]
 
-from framework.config import (
-    ScapyTrafficGeneratorConfig,
-    TGNodeConfiguration,
-    TrafficGeneratorType,
-)
-from framework.exception import ConfigurationError
-
-from .capturing_traffic_generator import CapturingTrafficGenerator
-from .hw.port import Port
+from framework.config import TGNodeConfiguration
+
 from .node import Node
+from .port import Port
+from .traffic_generator import CapturingTrafficGenerator, create_traffic_generator
 
 
 class TGNode(Node):
@@ -78,19 +73,3 @@ def close(self) -> None:
         """Free all resources used by the node"""
         self.traffic_generator.close()
         super(TGNode, self).close()
-
-
-def create_traffic_generator(
-    tg_node: TGNode, traffic_generator_config: ScapyTrafficGeneratorConfig
-) -> CapturingTrafficGenerator:
-    """A factory function for creating traffic generator object from user config."""
-
-    from .scapy import ScapyTrafficGenerator
-
-    match traffic_generator_config.traffic_generator_type:
-        case TrafficGeneratorType.SCAPY:
-            return ScapyTrafficGenerator(tg_node, traffic_generator_config)
-        case _:
-            raise ConfigurationError(
-                f"Unknown traffic generator: {traffic_generator_config.traffic_generator_type}"
-            )
diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
new file mode 100644
index 0000000000..52888d03fa
--- /dev/null
+++ b/dts/framework/testbed_model/traffic_generator/__init__.py
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType
+from framework.exception import ConfigurationError
+from framework.testbed_model.node import Node
+
+from .capturing_traffic_generator import CapturingTrafficGenerator
+from .scapy import ScapyTrafficGenerator
+
+
+def create_traffic_generator(
+    tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig
+) -> CapturingTrafficGenerator:
+    """A factory function for creating traffic generator object from user config."""
+
+    match traffic_generator_config.traffic_generator_type:
+        case TrafficGeneratorType.SCAPY:
+            return ScapyTrafficGenerator(tg_node, traffic_generator_config)
+        case _:
+            raise ConfigurationError(
+                f"Unknown traffic generator: {traffic_generator_config.traffic_generator_type}"
+            )
diff --git a/dts/framework/testbed_model/capturing_traffic_generator.py b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
similarity index 98%
rename from dts/framework/testbed_model/capturing_traffic_generator.py
rename to dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
index e6512061d7..1fc7f98c05 100644
--- a/dts/framework/testbed_model/capturing_traffic_generator.py
+++ b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
@@ -16,9 +16,9 @@
 from scapy.packet import Packet  # type: ignore[import]
 
 from framework.settings import SETTINGS
+from framework.testbed_model.port import Port
 from framework.utils import get_packet_summaries
 
-from .hw.port import Port
 from .traffic_generator import TrafficGenerator
 
 
@@ -127,7 +127,7 @@ def _send_packets_and_capture(
         for the specified duration. It must be able to handle no received packets.
         """
 
-    def _write_capture_from_packets(self, capture_name: str, packets: list[Packet]):
+    def _write_capture_from_packets(self, capture_name: str, packets: list[Packet]) -> None:
         file_name = f"{SETTINGS.output_dir}/{capture_name}.pcap"
         self._logger.debug(f"Writing packets to {file_name}.")
         scapy.utils.wrpcap(file_name, packets)
diff --git a/dts/framework/testbed_model/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py
similarity index 95%
rename from dts/framework/testbed_model/scapy.py
rename to dts/framework/testbed_model/traffic_generator/scapy.py
index 9083e92b3d..c88cf28369 100644
--- a/dts/framework/testbed_model/scapy.py
+++ b/dts/framework/testbed_model/traffic_generator/scapy.py
@@ -24,16 +24,15 @@
 from scapy.packet import Packet  # type: ignore[import]
 
 from framework.config import OS, ScapyTrafficGeneratorConfig
-from framework.logger import DTSLOG, getLogger
 from framework.remote_session import PythonShell
 from framework.settings import SETTINGS
+from framework.testbed_model.node import Node
+from framework.testbed_model.port import Port
 
 from .capturing_traffic_generator import (
     CapturingTrafficGenerator,
     _get_default_capture_name,
 )
-from .hw.port import Port
-from .tg_node import TGNode
 
 """
 ========= BEGIN RPC FUNCTIONS =========
@@ -144,7 +143,7 @@ def quit(self) -> None:
         self._BaseServer__shutdown_request = True
         return None
 
-    def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary):
+    def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> None:
         """Add a function to the server.
 
         This is meant to be executed remotely.
@@ -189,13 +188,9 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator):
     session: PythonShell
     rpc_server_proxy: xmlrpc.client.ServerProxy
     _config: ScapyTrafficGeneratorConfig
-    _tg_node: TGNode
-    _logger: DTSLOG
 
-    def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig):
-        self._config = config
-        self._tg_node = tg_node
-        self._logger = getLogger(f"{self._tg_node.name} {self._config.traffic_generator_type}")
+    def __init__(self, tg_node: Node, config: ScapyTrafficGeneratorConfig):
+        super().__init__(tg_node, config)
 
         assert (
             self._tg_node.config.os == OS.linux
@@ -229,7 +224,7 @@ def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig):
             function_bytes = marshal.dumps(function.__code__)
             self.rpc_server_proxy.add_rpc_function(function.__name__, function_bytes)
 
-    def _start_xmlrpc_server_in_remote_python(self, listen_port: int):
+    def _start_xmlrpc_server_in_remote_python(self, listen_port: int) -> None:
         # load the source of the function
         src = inspect.getsource(QuittableXMLRPCServer)
         # Lines with only whitespace break the repl if in the middle of a function
@@ -271,7 +266,7 @@ def _send_packets_and_capture(
         scapy_packets = [Ether(packet.data) for packet in xmlrpc_packets]
         return scapy_packets
 
-    def close(self):
+    def close(self) -> None:
         try:
             self.rpc_server_proxy.quit()
         except ConnectionRefusedError:
diff --git a/dts/framework/testbed_model/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
similarity index 81%
rename from dts/framework/testbed_model/traffic_generator.py
rename to dts/framework/testbed_model/traffic_generator/traffic_generator.py
index 28c35d3ce4..0d9902ddb7 100644
--- a/dts/framework/testbed_model/traffic_generator.py
+++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
@@ -12,11 +12,12 @@
 
 from scapy.packet import Packet  # type: ignore[import]
 
-from framework.logger import DTSLOG
+from framework.config import TrafficGeneratorConfig
+from framework.logger import DTSLOG, getLogger
+from framework.testbed_model.node import Node
+from framework.testbed_model.port import Port
 from framework.utils import get_packet_summaries
 
-from .hw.port import Port
-
 
 class TrafficGenerator(ABC):
     """The base traffic generator.
@@ -24,8 +25,15 @@ class TrafficGenerator(ABC):
     Defines the few basic methods that each traffic generator must implement.
     """
 
+    _config: TrafficGeneratorConfig
+    _tg_node: Node
     _logger: DTSLOG
 
+    def __init__(self, tg_node: Node, config: TrafficGeneratorConfig):
+        self._config = config
+        self._tg_node = tg_node
+        self._logger = getLogger(f"{self._tg_node.name} {self._config.traffic_generator_type}")
+
     def send_packet(self, packet: Packet, port: Port) -> None:
         """Send a packet and block until it is fully sent.
 
diff --git a/dts/framework/testbed_model/hw/virtual_device.py b/dts/framework/testbed_model/virtual_device.py
similarity index 100%
rename from dts/framework/testbed_model/hw/virtual_device.py
rename to dts/framework/testbed_model/virtual_device.py
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index d098d364ff..a0f2173949 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -7,7 +7,6 @@
 import json
 import os
 import subprocess
-import sys
 from enum import Enum
 from pathlib import Path
 from subprocess import SubprocessError
@@ -16,31 +15,7 @@
 
 from .exception import ConfigurationError
 
-
-class StrEnum(Enum):
-    @staticmethod
-    def _generate_next_value_(name: str, start: int, count: int, last_values: object) -> str:
-        return name
-
-    def __str__(self) -> str:
-        return self.name
-
-
-REGEX_FOR_PCI_ADDRESS = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/"
-
-
-def check_dts_python_version() -> None:
-    if sys.version_info.major < 3 or (sys.version_info.major == 3 and sys.version_info.minor < 10):
-        print(
-            RED(
-                (
-                    "WARNING: DTS execution node's python version is lower than"
-                    "python 3.10, is deprecated and will not work in future releases."
-                )
-            ),
-            file=sys.stderr,
-        )
-        print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
+REGEX_FOR_PCI_ADDRESS: str = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/"
 
 
 def expand_range(range_str: str) -> list[int]:
@@ -61,7 +36,7 @@ def expand_range(range_str: str) -> list[int]:
     return expanded_range
 
 
-def get_packet_summaries(packets: list[Packet]):
+def get_packet_summaries(packets: list[Packet]) -> str:
     if len(packets) == 1:
         packet_summaries = packets[0].summary()
     else:
@@ -69,8 +44,13 @@ def get_packet_summaries(packets: list[Packet]):
     return f"Packet contents: \n{packet_summaries}"
 
 
-def RED(text: str) -> str:
-    return f"\u001B[31;1m{str(text)}\u001B[0m"
+class StrEnum(Enum):
+    @staticmethod
+    def _generate_next_value_(name: str, start: int, count: int, last_values: object) -> str:
+        return name
+
+    def __str__(self) -> str:
+        return self.name
 
 
 class MesonArgs(object):
@@ -215,5 +195,5 @@ def _delete_tarball(self) -> None:
         if self._tarball_path and os.path.exists(self._tarball_path):
             os.remove(self._tarball_path)
 
-    def __fspath__(self):
+    def __fspath__(self) -> str:
         return str(self._tarball_path)
diff --git a/dts/main.py b/dts/main.py
index 43311fa847..5d4714b0c3 100755
--- a/dts/main.py
+++ b/dts/main.py
@@ -10,10 +10,17 @@
 
 import logging
 
-from framework import dts
+from framework import settings
 
 
 def main() -> None:
+    """Set DTS settings, then run DTS.
+
+    The DTS settings are taken from the command line arguments and the environment variables.
+    """
+    settings.SETTINGS = settings.get_settings()
+    from framework import dts
+
     dts.run_all()
 
 
-- 
2.34.1



* [PATCH v8 02/21] dts: add docstring checker
  2023-11-23 15:13               ` [PATCH v8 00/21] dts: docstrings update Juraj Linkeš
  2023-11-23 15:13                 ` [PATCH v8 01/21] dts: code adjustments for doc generation Juraj Linkeš
@ 2023-11-23 15:13                 ` Juraj Linkeš
  2023-11-23 15:13                 ` [PATCH v8 03/21] dts: add basic developer docs Juraj Linkeš
                                   ` (20 subsequent siblings)
  22 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-23 15:13 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Python docstrings are the in-code way to document the code. The
docstring checker of choice is pydocstyle which we're executing from
Pylama, but the current latest versions are not compatible due to [0],
so pin the pydocstyle version to the latest working version.

[0] https://github.com/klen/pylama/issues/232
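
With the pin in place, the docstring checks run alongside the other
linters; one way to invoke them (assuming the Poetry environment DTS
already uses) is from the dts directory:

poetry run pylama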

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/poetry.lock    | 12 ++++++------
 dts/pyproject.toml |  6 +++++-
 2 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/dts/poetry.lock b/dts/poetry.lock
index f7b3b6d602..a734fa71f0 100644
--- a/dts/poetry.lock
+++ b/dts/poetry.lock
@@ -489,20 +489,20 @@ files = [
 
 [[package]]
 name = "pydocstyle"
-version = "6.3.0"
+version = "6.1.1"
 description = "Python docstring style checker"
 optional = false
 python-versions = ">=3.6"
 files = [
-    {file = "pydocstyle-6.3.0-py3-none-any.whl", hash = "sha256:118762d452a49d6b05e194ef344a55822987a462831ade91ec5c06fd2169d019"},
-    {file = "pydocstyle-6.3.0.tar.gz", hash = "sha256:7ce43f0c0ac87b07494eb9c0b462c0b73e6ff276807f204d6b53edc72b7e44e1"},
+    {file = "pydocstyle-6.1.1-py3-none-any.whl", hash = "sha256:6987826d6775056839940041beef5c08cc7e3d71d63149b48e36727f70144dc4"},
+    {file = "pydocstyle-6.1.1.tar.gz", hash = "sha256:1d41b7c459ba0ee6c345f2eb9ae827cab14a7533a88c5c6f7e94923f72df92dc"},
 ]
 
 [package.dependencies]
-snowballstemmer = ">=2.2.0"
+snowballstemmer = "*"
 
 [package.extras]
-toml = ["tomli (>=1.2.3)"]
+toml = ["toml"]
 
 [[package]]
 name = "pyflakes"
@@ -837,4 +837,4 @@ jsonschema = ">=4,<5"
 [metadata]
 lock-version = "2.0"
 python-versions = "^3.10"
-content-hash = "0b1e4a1cb8323e17e5ee5951c97e74bde6e60d0413d7b25b1803d5b2bab39639"
+content-hash = "3501e97b3dadc19fe8ae179fe21b1edd2488001da9a8e86ff2bca0b86b99b89b"
diff --git a/dts/pyproject.toml b/dts/pyproject.toml
index 980ac3c7db..37a692d655 100644
--- a/dts/pyproject.toml
+++ b/dts/pyproject.toml
@@ -25,6 +25,7 @@ PyYAML = "^6.0"
 types-PyYAML = "^6.0.8"
 fabric = "^2.7.1"
 scapy = "^2.5.0"
+pydocstyle = "6.1.1"
 
 [tool.poetry.group.dev.dependencies]
 mypy = "^0.961"
@@ -39,10 +40,13 @@ requires = ["poetry-core>=1.0.0"]
 build-backend = "poetry.core.masonry.api"
 
 [tool.pylama]
-linters = "mccabe,pycodestyle,pyflakes"
+linters = "mccabe,pycodestyle,pydocstyle,pyflakes"
 format = "pylint"
 max_line_length = 100
 
+[tool.pylama.linter.pydocstyle]
+convention = "google"
+
 [tool.mypy]
 python_version = "3.10"
 enable_error_code = ["ignore-without-code"]
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v8 03/21] dts: add basic developer docs
  2023-11-23 15:13               ` [PATCH v8 00/21] dts: docstrings update Juraj Linkeš
  2023-11-23 15:13                 ` [PATCH v8 01/21] dts: code adjustments for doc generation Juraj Linkeš
  2023-11-23 15:13                 ` [PATCH v8 02/21] dts: add docstring checker Juraj Linkeš
@ 2023-11-23 15:13                 ` Juraj Linkeš
  2023-11-23 15:13                 ` [PATCH v8 04/21] dts: exceptions docstring update Juraj Linkeš
                                   ` (19 subsequent siblings)
  22 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-23 15:13 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Expand the framework contribution guidelines and add how to document the
code with Python docstrings.
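
As a sketch of the conventions described in the added guidelines (the class
is hypothetical; the actual rules are in the diff below)::

    from typing import ClassVar

    class Node:
        """A hypothetical testbed node.

        Attributes:
            name: The name of the node.
        """

        #: The prefix used for all loggers created by this class.
        logger_prefix: ClassVar[str] = "node"

        name: str

        def __init__(self, name: str):
            """Initialize the node.

            Args:
                name: The name of the node.
            """
            self.name = name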

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 doc/guides/tools/dts.rst | 73 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 73 insertions(+)

diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
index 32c18ee472..cd771a428c 100644
--- a/doc/guides/tools/dts.rst
+++ b/doc/guides/tools/dts.rst
@@ -264,6 +264,65 @@ which can be changed with the ``--output-dir`` command line argument.
 The results contain basic statistics of passed/failed test cases and DPDK version.
 
 
+Contributing to DTS
+-------------------
+
+There are two areas of contribution: The DTS framework and DTS test suites.
+
+The framework contains the logic needed to run test cases, such as connecting to nodes,
+running DPDK apps and collecting results.
+
+The test cases call APIs from the framework to test their scenarios. Adding test cases may
+require adding code to the framework as well.
+
+
+Framework Coding Guidelines
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+When adding code to the DTS framework, pay attention to the rest of the code
+and try not to divert much from it. The :ref:`DTS developer tools <dts_dev_tools>` will issue
+warnings when some of the basics are not met.
+
+The code must be properly documented with docstrings. The style must conform to
+the `Google style <https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings>`_.
+See an example of the style
+`here <https://www.sphinx-doc.org/en/master/usage/extensions/example_google.html>`_.
+For cases which are not covered by the Google style, refer
+to `PEP 257 <https://peps.python.org/pep-0257/>`_. There are some cases which are not covered by
+the two style guides, where we deviate or where some additional clarification is helpful:
+
+   * The __init__() methods of classes are documented separately from the docstring of the class
+     itself.
+   * The docstrings of implemented abstract methods should refer to the superclass's definition
+     if there's no deviation.
+   * Instance variables/attributes should be documented in the docstring of the class
+     in the ``Attributes:`` section.
+   * The dataclass.dataclass decorator changes how the attributes are processed. The dataclass
+     attributes which result in instance variables/attributes should also be recorded
+     in the ``Attributes:`` section.
+   * Class variables/attributes, on the other hand, should be documented with ``#:`` above
+     the type annotated line. The description may be omitted if the meaning is obvious.
+   * The Enum and TypedDict also process the attributes in particular ways and should be documented
+     with ``#:`` as well. This is mainly so that the autogenerated docs contain the assigned value.
+   * When referencing a parameter of a function or a method in their docstring, don't use
+     any articles and put the parameter into single backticks. This mimics the style of
+     `Python's documentation <https://docs.python.org/3/index.html>`_.
+   * When specifying a value, use double backticks::
+
+        def foo(greet: bool) -> None:
+            """Demonstration of single and double backticks.
+
+            `greet` controls whether ``Hello World`` is printed.
+
+            Args:
+               greet: Whether to print the ``Hello World`` message.
+            """
+            if greet:
+               print("Hello World")
+
+   * The docstring maximum line length is the same as the code maximum line length.
+
+
 How To Write a Test Suite
 -------------------------
 
@@ -293,6 +352,18 @@ There are four types of methods that comprise a test suite:
    | These methods don't need to be implemented if there's no need for them in a test suite.
     In that case, nothing will happen when they're executed.
 
+#. **Configuration, traffic and other logic**
+
+   The ``TestSuite`` class contains a variety of methods for anything that
+   a test suite's setup, teardown, or test cases may need to do.
+
+   The test suites also frequently use a DPDK app, such as testpmd, in interactive mode
+   and use the interactive shell instances directly.
+
+   These are the two main ways to call the framework logic in test suites. If there's any
+   functionality or logic missing from the framework, it should be implemented so that
+   the test suites can use one of these two ways.
+
 #. **Test case verification**
 
    Test case verification should be done with the ``verify`` method, which records the result.
@@ -308,6 +379,8 @@ There are four types of methods that comprise a test suite:
    and used by the test suite via the ``sut_node`` field.
 
 
+.. _dts_dev_tools:
+
 DTS Developer Tools
 -------------------
 
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v8 04/21] dts: exceptions docstring update
  2023-11-23 15:13               ` [PATCH v8 00/21] dts: docstrings update Juraj Linkeš
                                   ` (2 preceding siblings ...)
  2023-11-23 15:13                 ` [PATCH v8 03/21] dts: add basic developer docs Juraj Linkeš
@ 2023-11-23 15:13                 ` Juraj Linkeš
  2023-11-23 15:13                 ` [PATCH v8 05/21] dts: settings " Juraj Linkeš
                                   ` (18 subsequent siblings)
  22 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-23 15:13 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.
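
A sketch of how the severities below translate into an exit code; the
aggregation shown here is simplified compared to the actual result
processing::

    from framework.exception import (
        ErrorSeverity,
        SSHTimeoutError,
        TestCaseVerifyError,
    )

    errors = [SSHTimeoutError("dmesg"), TestCaseVerifyError("packet mismatch")]
    # the most severe of all raised exceptions becomes the exit code
    exit_code = max((e.severity for e in errors), default=ErrorSeverity.NO_ERR)
    assert exit_code == ErrorSeverity.TESTCASE_VERIFY_ERR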

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/__init__.py  |  12 ++++-
 dts/framework/exception.py | 106 +++++++++++++++++++++++++------------
 2 files changed, 83 insertions(+), 35 deletions(-)

diff --git a/dts/framework/__init__.py b/dts/framework/__init__.py
index d551ad4bf0..662e6ccad2 100644
--- a/dts/framework/__init__.py
+++ b/dts/framework/__init__.py
@@ -1,3 +1,13 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022 University of New Hampshire
+
+"""Libraries and utilities for running DPDK Test Suite (DTS).
+
+The various modules in the DTS framework offer:
+
+* Connections to nodes, both interactive and non-interactive,
+* A straightforward way to add support for different operating systems of remote nodes,
+* Test suite setup, execution and teardown, along with test case setup, execution and teardown,
+* Pre-test suite setup and post-test suite teardown.
+"""
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index 151e4d3aa9..658eee2c38 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -3,8 +3,10 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
-"""
-User-defined exceptions used across the framework.
+"""DTS exceptions.
+
+The exceptions all have different severities expressed as an integer.
+The highest severity of all raised exceptions is used as the exit code of DTS.
 """
 
 from enum import IntEnum, unique
@@ -13,59 +15,79 @@
 
 @unique
 class ErrorSeverity(IntEnum):
-    """
-    The severity of errors that occur during DTS execution.
+    """The severity of errors that occur during DTS execution.
+
     All exceptions are caught and the most severe error is used as return code.
     """
 
+    #:
     NO_ERR = 0
+    #:
     GENERIC_ERR = 1
+    #:
     CONFIG_ERR = 2
+    #:
     REMOTE_CMD_EXEC_ERR = 3
+    #:
     SSH_ERR = 4
+    #:
     DPDK_BUILD_ERR = 10
+    #:
     TESTCASE_VERIFY_ERR = 20
+    #:
     BLOCKING_TESTSUITE_ERR = 25
 
 
 class DTSError(Exception):
-    """
-    The base exception from which all DTS exceptions are derived.
-    Stores error severity.
+    """The base exception from which all DTS exceptions are subclassed.
+
+    Do not use this exception; only use subclassed exceptions.
     """
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.GENERIC_ERR
 
 
 class SSHTimeoutError(DTSError):
-    """
-    Command execution timeout.
-    """
+    """The SSH execution of a command timed out."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
     _command: str
 
     def __init__(self, command: str):
+        """Define the meaning of the first argument.
+
+        Args:
+            command: The executed command.
+        """
         self._command = command
 
     def __str__(self) -> str:
-        return f"TIMEOUT on {self._command}"
+        """Add some context to the string representation."""
+        return f"{self._command} execution timed out."
 
 
 class SSHConnectionError(DTSError):
-    """
-    SSH connection error.
-    """
+    """An unsuccessful SSH connection."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
     _host: str
     _errors: list[str]
 
     def __init__(self, host: str, errors: list[str] | None = None):
+        """Define the meaning of the first two arguments.
+
+        Args:
+            host: The hostname to which we're trying to connect.
+            errors: Any errors that occurred during the connection attempt.
+        """
         self._host = host
         self._errors = [] if errors is None else errors
 
     def __str__(self) -> str:
+        """Include the errors in the string representation."""
         message = f"Error trying to connect with {self._host}."
         if self._errors:
             message += f" Errors encountered while retrying: {', '.join(self._errors)}"
@@ -74,76 +96,92 @@ def __str__(self) -> str:
 
 
 class SSHSessionDeadError(DTSError):
-    """
-    SSH session is not alive.
-    It can no longer be used.
-    """
+    """The SSH session is no longer alive."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
     _host: str
 
     def __init__(self, host: str):
+        """Define the meaning of the first argument.
+
+        Args:
+            host: The hostname of the disconnected node.
+        """
         self._host = host
 
     def __str__(self) -> str:
-        return f"SSH session with {self._host} has died"
+        """Add some context to the string representation."""
+        return f"SSH session with {self._host} has died."
 
 
 class ConfigurationError(DTSError):
-    """
-    Raised when an invalid configuration is encountered.
-    """
+    """An invalid configuration."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.CONFIG_ERR
 
 
 class RemoteCommandExecutionError(DTSError):
-    """
-    Raised when a command executed on a Node returns a non-zero exit status.
-    """
+    """An unsuccessful execution of a remote command."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
+    #: The executed command.
     command: str
     _command_return_code: int
 
     def __init__(self, command: str, command_return_code: int):
+        """Define the meaning of the first two arguments.
+
+        Args:
+            command: The executed command.
+            command_return_code: The return code of the executed command.
+        """
         self.command = command
         self._command_return_code = command_return_code
 
     def __str__(self) -> str:
+        """Include both the command and return code in the string representation."""
         return f"Command {self.command} returned a non-zero exit code: {self._command_return_code}"
 
 
 class RemoteDirectoryExistsError(DTSError):
-    """
-    Raised when a remote directory to be created already exists.
-    """
+    """A directory that exists on a remote node."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
 
 
 class DPDKBuildError(DTSError):
-    """
-    Raised when DPDK build fails for any reason.
-    """
+    """A DPDK build failure."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.DPDK_BUILD_ERR
 
 
 class TestCaseVerifyError(DTSError):
-    """
-    Used in test cases to verify the expected behavior.
-    """
+    """A test case failure."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR
 
 
 class BlockingTestSuiteError(DTSError):
+    """A failure in a blocking test suite."""
+
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR
     _suite_name: str
 
     def __init__(self, suite_name: str) -> None:
+        """Define the meaning of the first argument.
+
+        Args:
+            suite_name: The blocking test suite.
+        """
         self._suite_name = suite_name
 
     def __str__(self) -> str:
+        """Add some context to the string representation."""
         return f"Blocking suite {self._suite_name} failed."
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v8 05/21] dts: settings docstring update
  2023-11-23 15:13               ` [PATCH v8 00/21] dts: docstrings update Juraj Linkeš
                                   ` (3 preceding siblings ...)
  2023-11-23 15:13                 ` [PATCH v8 04/21] dts: exceptions docstring update Juraj Linkeš
@ 2023-11-23 15:13                 ` Juraj Linkeš
  2023-11-23 15:13                 ` [PATCH v8 06/21] dts: logger and utils " Juraj Linkeš
                                   ` (17 subsequent siblings)
  22 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-23 15:13 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.
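
The priority order (command line > environment variable > default) can be
pictured with plain argparse; a simplified sketch, the real implementation
wraps this logic in a custom argparse.Action::

    import argparse
    import os

    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--timeout",
        type=float,
        # the environment variable overrides the built-in default;
        # an explicit command line value overrides both
        default=os.environ.get("DTS_TIMEOUT", 15),
    )
    args = parser.parse_args([])  # no CLI value: DTS_TIMEOUT or 15 wins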

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/settings.py | 103 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 102 insertions(+), 1 deletion(-)

diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 25b5dcff22..41f98e8519 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -3,6 +3,72 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022 University of New Hampshire
 
+"""Environment variables and command line arguments parsing.
+
+This is a simple module utilizing the built-in argparse module to parse command line arguments,
+augment them with values from environment variables and make them available across the framework.
+
+The command line value takes precedence, followed by the environment variable value,
+followed by the default value defined in this module.
+
+The command line arguments along with the supported environment variables are:
+
+.. option:: --config-file
+.. envvar:: DTS_CFG_FILE
+
+    The path to the YAML test run configuration file.
+
+.. option:: --output-dir, --output
+.. envvar:: DTS_OUTPUT_DIR
+
+    The directory where DTS logs and results are saved.
+
+.. option:: --compile-timeout
+.. envvar:: DTS_COMPILE_TIMEOUT
+
+    The timeout for compiling DPDK.
+
+.. option:: -t, --timeout
+.. envvar:: DTS_TIMEOUT
+
+    The timeout for all DTS operations except for compiling DPDK.
+
+.. option:: -v, --verbose
+.. envvar:: DTS_VERBOSE
+
+    Set to any value to enable logging everything to the console.
+
+.. option:: -s, --skip-setup
+.. envvar:: DTS_SKIP_SETUP
+
+    Set to any value to skip building DPDK.
+
+.. option:: --tarball, --snapshot, --git-ref
+.. envvar:: DTS_DPDK_TARBALL
+
+    The path to a DPDK tarball, git commit ID, tag ID or tree ID to test.
+
+.. option:: --test-cases
+.. envvar:: DTS_TESTCASES
+
+    A comma-separated list of test cases to execute. Unknown test cases will be silently ignored.
+
+.. option:: --re-run, --re_run
+.. envvar:: DTS_RERUN
+
+    Re-run each test case this many times in case of a failure.
+
+The module provides one key module-level variable:
+
+Attributes:
+    SETTINGS: The module level variable storing framework-wide DTS settings.
+
+Typical usage example::
+
+  from framework.settings import SETTINGS
+  foo = SETTINGS.foo
+"""
+
 import argparse
 import os
 from collections.abc import Callable, Iterable, Sequence
@@ -16,6 +82,23 @@
 
 
 def _env_arg(env_var: str) -> Any:
+    """A helper method augmenting the argparse Action with environment variables.
+
+    If the supplied environment variable is defined, then the default value
+    of the argument is modified. This satisfies the priority order of
+    command line argument > environment variable > default value.
+
+    Arguments with no values (flags) should be defined using the const keyword argument
+    (True or False). When the argument is specified, it will be set to const; if not specified,
+    the default will be stored (possibly modified by the corresponding environment variable).
+
+    Other arguments work the same as default argparse arguments, that is using
+    the default 'store' action.
+
+    Returns:
+          The modified argparse.Action.
+    """
+
     class _EnvironmentArgument(argparse.Action):
         def __init__(
             self,
@@ -68,14 +151,28 @@ def __call__(
 
 @dataclass(slots=True)
 class Settings:
+    """Default framework-wide user settings.
+
+    The defaults may be modified at the start of the run.
+    """
+
+    #:
     config_file_path: Path = Path(__file__).parent.parent.joinpath("conf.yaml")
+    #:
     output_dir: str = "output"
+    #:
     timeout: float = 15
+    #:
     verbose: bool = False
+    #:
     skip_setup: bool = False
+    #:
     dpdk_tarball_path: Path | str = "dpdk.tar.xz"
+    #:
     compile_timeout: float = 1200
+    #:
     test_cases: list[str] = field(default_factory=list)
+    #:
     re_run: int = 0
 
 
@@ -166,7 +263,7 @@ def _get_parser() -> argparse.ArgumentParser:
         action=_env_arg("DTS_RERUN"),
         default=SETTINGS.re_run,
         type=int,
-        help="[DTS_RERUN] Re-run each test case the specified amount of times "
+        help="[DTS_RERUN] Re-run each test case the specified number of times "
         "if a test failure occurs",
     )
 
@@ -174,6 +271,10 @@ def _get_parser() -> argparse.ArgumentParser:
 
 
 def get_settings() -> Settings:
+    """Create new settings with inputs from the user.
+
+    The inputs are taken from the command line and from environment variables.
+    """
     parsed_args = _get_parser().parse_args()
     return Settings(
         config_file_path=parsed_args.config_file,
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v8 06/21] dts: logger and utils docstring update
  2023-11-23 15:13               ` [PATCH v8 00/21] dts: docstrings update Juraj Linkeš
                                   ` (4 preceding siblings ...)
  2023-11-23 15:13                 ` [PATCH v8 05/21] dts: settings " Juraj Linkeš
@ 2023-11-23 15:13                 ` Juraj Linkeš
  2023-11-23 15:13                 ` [PATCH v8 07/21] dts: dts runner and main " Juraj Linkeš
                                   ` (16 subsequent siblings)
  22 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-23 15:13 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.
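
For illustration, typical usage of the adapter returned by getLogger as
documented below (the logger and node names are made up)::

    from framework.logger import getLogger

    logger = getLogger("TestSuiteHelloWorld", node="sut1")
    logger.info("Setting up test suite.")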

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/logger.py | 72 ++++++++++++++++++++++-----------
 dts/framework/utils.py  | 88 +++++++++++++++++++++++++++++------------
 2 files changed, 113 insertions(+), 47 deletions(-)

diff --git a/dts/framework/logger.py b/dts/framework/logger.py
index bb2991e994..cfa6e8cd72 100644
--- a/dts/framework/logger.py
+++ b/dts/framework/logger.py
@@ -3,9 +3,9 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
-"""
-DTS logger module with several log level. DTS framework and TestSuite logs
-are saved in different log files.
+"""DTS logger module.
+
+DTS framework and TestSuite logs are saved in different log files.
 """
 
 import logging
@@ -18,19 +18,21 @@
 stream_fmt = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
 
 
-class LoggerDictType(TypedDict):
-    logger: "DTSLOG"
-    name: str
-    node: str
-
+class DTSLOG(logging.LoggerAdapter):
+    """DTS logger adapter class for framework and testsuites.
 
-# List for saving all using loggers
-Loggers: list[LoggerDictType] = []
+    The :option:`--verbose` command line argument and the :envvar:`DTS_VERBOSE` environment
+    variable control the verbosity of output. If enabled, all messages will be emitted to the
+    console.
 
+    The :option:`--output` command line argument and the :envvar:`DTS_OUTPUT_DIR` environment
+    variable modify the directory where the logs will be stored.
 
-class DTSLOG(logging.LoggerAdapter):
-    """
-    DTS log class for framework and testsuite.
+    Attributes:
+        node: The additional identifier. Currently unused.
+        sh: The handler which emits logs to console.
+        fh: The handler which emits logs to a file.
+        verbose_fh: Just like fh, but logs with a different, more verbose format.
     """
 
     _logger: logging.Logger
@@ -40,6 +42,15 @@ class DTSLOG(logging.LoggerAdapter):
     verbose_fh: logging.FileHandler
 
     def __init__(self, logger: logging.Logger, node: str = "suite"):
+        """Extend the constructor with additional handlers.
+
+        One handler logs to the console, the other one to a file, with either a regular or verbose
+        format.
+
+        Args:
+            logger: The logger from which to create the logger adapter.
+            node: An additional identifier. Currently unused.
+        """
         self._logger = logger
         # 1 means log everything, this will be used by file handlers if their level
         # is not set
@@ -92,26 +103,43 @@ def __init__(self, logger: logging.Logger, node: str = "suite"):
         super(DTSLOG, self).__init__(self._logger, dict(node=self.node))
 
     def logger_exit(self) -> None:
-        """
-        Remove stream handler and logfile handler.
-        """
+        """Remove the stream handler and the logfile handler."""
         for handler in (self.sh, self.fh, self.verbose_fh):
             handler.flush()
             self._logger.removeHandler(handler)
 
 
+class _LoggerDictType(TypedDict):
+    logger: DTSLOG
+    name: str
+    node: str
+
+
+# List for saving all loggers in use
+_Loggers: list[_LoggerDictType] = []
+
+
 def getLogger(name: str, node: str = "suite") -> DTSLOG:
+    """Get DTS logger adapter identified by name and node.
+
+    An existing logger will be returned if one with the exact name and node already exists.
+    A new one will be created and stored otherwise.
+
+    Args:
+        name: The name of the logger.
+        node: An additional identifier for the logger.
+
+    Returns:
+        A logger uniquely identified by both name and node.
     """
-    Get logger handler and if there's no handler for specified Node will create one.
-    """
-    global Loggers
+    global _Loggers
     # return saved logger
-    logger: LoggerDictType
-    for logger in Loggers:
+    logger: _LoggerDictType
+    for logger in _Loggers:
         if logger["name"] == name and logger["node"] == node:
             return logger["logger"]
 
     # return new logger
     dts_logger: DTSLOG = DTSLOG(logging.getLogger(name), node)
-    Loggers.append({"logger": dts_logger, "name": name, "node": node})
+    _Loggers.append({"logger": dts_logger, "name": name, "node": node})
     return dts_logger
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index a0f2173949..cc5e458cc8 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -3,6 +3,16 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
+"""Various utility classes and functions.
+
+These are used in multiple modules across the framework. They're here because
+they provide some non-specific functionality, greatly simplify imports or just don't
+fit elsewhere.
+
+Attributes:
+    REGEX_FOR_PCI_ADDRESS: The regex representing a PCI address, e.g. ``0000:00:08.0``.
+"""
+
 import atexit
 import json
 import os
@@ -19,12 +29,20 @@
 
 
 def expand_range(range_str: str) -> list[int]:
-    """
-    Process range string into a list of integers. There are two possible formats:
-    n - a single integer
-    n-m - a range of integers
+    """Process `range_str` into a list of integers.
+
+    There are two possible formats of `range_str`:
+
+        * ``n`` - a single integer,
+        * ``n-m`` - a range of integers.
 
-    The returned range includes both n and m. Empty string returns an empty list.
+    The returned range includes both ``n`` and ``m``. An empty string returns an empty list.
+
+    Args:
+        range_str: The range to expand.
+
+    Returns:
+        All the numbers from the range.
     """
     expanded_range: list[int] = []
     if range_str:
@@ -37,6 +55,14 @@ def expand_range(range_str: str) -> list[int]:
 
 
 def get_packet_summaries(packets: list[Packet]) -> str:
+    """Format a string summary from `packets`.
+
+    Args:
+        packets: The packets to format.
+
+    Returns:
+        The summary of `packets`.
+    """
     if len(packets) == 1:
         packet_summaries = packets[0].summary()
     else:
@@ -45,27 +71,36 @@ def get_packet_summaries(packets: list[Packet]) -> str:
 
 
 class StrEnum(Enum):
+    """Enum with members stored as strings."""
+
     @staticmethod
     def _generate_next_value_(name: str, start: int, count: int, last_values: object) -> str:
         return name
 
     def __str__(self) -> str:
+        """The string representation is the name of the member."""
         return self.name
 
 
 class MesonArgs(object):
-    """
-    Aggregate the arguments needed to build DPDK:
-    default_library: Default library type, Meson allows "shared", "static" and "both".
-               Defaults to None, in which case the argument won't be used.
-    Keyword arguments: The arguments found in meson_options.txt in root DPDK directory.
-               Do not use -D with them, for example:
-               meson_args = MesonArgs(enable_kmods=True).
-    """
+    """Aggregate the arguments needed to build DPDK."""
 
     _default_library: str
 
     def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
+        """Initialize the meson arguments.
+
+        Args:
+            default_library: The default library type, Meson supports ``shared``, ``static`` and
+                ``both``. Defaults to :data:`None`, in which case the argument won't be used.
+            dpdk_args: The arguments found in ``meson_options.txt`` in root DPDK directory.
+                Do not use ``-D`` with them.
+
+        Example:
+            ::
+
+                meson_args = MesonArgs(enable_kmods=True)
+        """
         self._default_library = f"--default-library={default_library}" if default_library else ""
         self._dpdk_args = " ".join(
             (
@@ -75,6 +110,7 @@ def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
         )
 
     def __str__(self) -> str:
+        """The actual args."""
         return " ".join(f"{self._default_library} {self._dpdk_args}".split())
 
 
@@ -96,24 +132,14 @@ class _TarCompressionFormat(StrEnum):
 
 
 class DPDKGitTarball(object):
-    """Create a compressed tarball of DPDK from the repository.
-
-    The DPDK version is specified with git object git_ref.
-    The tarball will be compressed with _TarCompressionFormat,
-    which must be supported by the DTS execution environment.
-    The resulting tarball will be put into output_dir.
+    """Compressed tarball of DPDK from the repository.
 
-    The class supports the os.PathLike protocol,
+    The class supports the :class:`os.PathLike` protocol,
     which is used to get the Path of the tarball::
 
         from pathlib import Path
         tarball = DPDKGitTarball("HEAD", "output")
         tarball_path = Path(tarball)
-
-    Arguments:
-        git_ref: A git commit ID, tag ID or tree ID.
-        output_dir: The directory where to put the resulting tarball.
-        tar_compression_format: The compression format to use.
     """
 
     _git_ref: str
@@ -128,6 +154,17 @@ def __init__(
         output_dir: str,
         tar_compression_format: _TarCompressionFormat = _TarCompressionFormat.xz,
     ):
+        """Create the tarball during initialization.
+
+        The DPDK version is specified with `git_ref`. The tarball will be compressed with
+        `tar_compression_format`, which must be supported by the DTS execution environment.
+        The resulting tarball will be put into `output_dir`.
+
+        Args:
+            git_ref: A git commit ID, tag ID or tree ID.
+            output_dir: The directory where to put the resulting tarball.
+            tar_compression_format: The compression format to use.
+        """
         self._git_ref = git_ref
         self._tar_compression_format = tar_compression_format
 
@@ -196,4 +233,5 @@ def _delete_tarball(self) -> None:
             os.remove(self._tarball_path)
 
     def __fspath__(self) -> str:
+        """The os.PathLike protocol implementation."""
         return str(self._tarball_path)
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v8 07/21] dts: dts runner and main docstring update
  2023-11-23 15:13               ` [PATCH v8 00/21] dts: docstrings update Juraj Linkeš
                                   ` (5 preceding siblings ...)
  2023-11-23 15:13                 ` [PATCH v8 06/21] dts: logger and utils " Juraj Linkeš
@ 2023-11-23 15:13                 ` Juraj Linkeš
  2023-11-23 15:13                 ` [PATCH v8 08/21] dts: test suite " Juraj Linkeš
                                   ` (15 subsequent siblings)
  22 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-23 15:13 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.
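
The stages described in the module docstring below roughly correspond to
this control flow (an illustrative sketch; all names are made up)::

    for execution in configuration.executions:            # execution stage
        set_up_execution()
        for build_target in execution.build_targets:      # build target stage
            set_up_build_target()                         # builds DPDK on the SUT
            for test_suite in execution.test_suites:      # test suite stage
                test_suite.set_up_suite()
                for test_case in scheduled_test_cases(test_suite):
                    run_test_case(test_case)              # setup, logic, teardown
                test_suite.tear_down_suite()
            tear_down_build_target()
        tear_down_execution()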

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/dts.py | 131 ++++++++++++++++++++++++++++++++++++-------
 dts/main.py          |  10 ++--
 2 files changed, 116 insertions(+), 25 deletions(-)

diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index 356368ef10..e16d4578a0 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -3,6 +3,33 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
+r"""Test suite runner module.
+
+A DTS run is split into stages:
+
+    #. Execution stage,
+    #. Build target stage,
+    #. Test suite stage,
+    #. Test case stage.
+
+The module is responsible for running tests on testbeds defined in the test run configuration.
+Each setup or teardown of each stage is recorded in a :class:`~.test_result.DTSResult` or
+one of its subclasses. The test case results are also recorded.
+
+If an error occurs, the current stage is aborted, the error is recorded and the run continues in
+the next iteration of the same stage. The return code is the highest `severity` of all
+:class:`~.exception.DTSError`\s.
+
+Example:
+    An error occurs in a build target setup. The current build target is aborted and the run
+    continues with the next build target. If the errored build target was the last one in the given
+    execution, the next execution begins.
+
+Attributes:
+    dts_logger: The logger instance used in this module.
+    result: The top level result used in the module.
+"""
+
 import sys
 
 from .config import (
@@ -23,9 +50,38 @@
 
 
 def run_all() -> None:
-    """
-    The main process of DTS. Runs all build targets in all executions from the main
-    config file.
+    """Run all build targets in all executions from the test run configuration.
+
+    Before running test suites, executions and build targets are first set up.
+    The executions and build targets defined in the test run configuration are iterated over.
+    The executions define which tests to run and where to run them and build targets define
+    the DPDK build setup.
+
+    The test suites are set up for each execution/build target tuple and each scheduled
+    test case within the test suite is set up, executed and torn down. After all test cases
+    have been executed, the test suite is torn down and the next build target will be tested.
+
+    All the nested steps look like this:
+
+        #. Execution setup
+
+            #. Build target setup
+
+                #. Test suite setup
+
+                    #. Test case setup
+                    #. Test case logic
+                    #. Test case teardown
+
+                #. Test suite teardown
+
+            #. Build target teardown
+
+        #. Execution teardown
+
+    The test cases are filtered according to the specification in the test run configuration and
+    the :option:`--test-cases` command line argument or
+    the :envvar:`DTS_TESTCASES` environment variable.
     """
     global dts_logger
     global result
@@ -87,6 +143,8 @@ def run_all() -> None:
 
 
 def _check_dts_python_version() -> None:
+    """Check the required Python version - v3.10."""
+
     def RED(text: str) -> str:
         return f"\u001B[31;1m{str(text)}\u001B[0m"
 
@@ -109,9 +167,16 @@ def _run_execution(
     execution: ExecutionConfiguration,
     result: DTSResult,
 ) -> None:
-    """
-    Run the given execution. This involves running the execution setup as well as
-    running all build targets in the given execution.
+    """Run the given execution.
+
+    This involves running the execution setup as well as running all build targets
+    in the given execution. After that, execution teardown is run.
+
+    Args:
+        sut_node: The execution's SUT node.
+        tg_node: The execution's TG node.
+        execution: An execution's test run configuration.
+        result: The top level result object.
     """
     dts_logger.info(f"Running execution with SUT '{execution.system_under_test_node.name}'.")
     execution_result = result.add_execution(sut_node.config)
@@ -144,8 +209,18 @@ def _run_build_target(
     execution: ExecutionConfiguration,
     execution_result: ExecutionResult,
 ) -> None:
-    """
-    Run the given build target.
+    """Run the given build target.
+
+    This involves running the build target setup as well as running all test suites
+    in the execution that the build target is defined in.
+    After that, build target teardown is run.
+
+    Args:
+        sut_node: The execution's SUT node.
+        tg_node: The execution's TG node.
+        build_target: A build target's test run configuration.
+        execution: The build target's execution's test run configuration.
+        execution_result: The execution level result object associated with the execution.
     """
     dts_logger.info(f"Running build target '{build_target.name}'.")
     build_target_result = execution_result.add_build_target(build_target)
@@ -177,10 +252,20 @@ def _run_all_suites(
     execution: ExecutionConfiguration,
     build_target_result: BuildTargetResult,
 ) -> None:
-    """
-    Use the given build_target to run execution's test suites
-    with possibly only a subset of test cases.
-    If no subset is specified, run all test cases.
+    """Run the execution's (possibly a subset) test suites using the current build target.
+
+    The function assumes the build target we're testing has already been built on the SUT node.
+    The current build target thus corresponds to the current DPDK build present on the SUT node.
+
+    If a blocking test suite (such as the smoke test suite) fails, the rest of the test suites
+    in the current build target won't be executed.
+
+    Args:
+        sut_node: The execution's SUT node.
+        tg_node: The execution's TG node.
+        execution: The execution's test run configuration associated with the current build target.
+        build_target_result: The build target level result object associated
+            with the current build target.
     """
     end_build_target = False
     if not execution.skip_smoke_tests:
@@ -206,16 +291,22 @@ def _run_single_suite(
     build_target_result: BuildTargetResult,
     test_suite_config: TestSuiteConfig,
 ) -> None:
-    """Runs a single test suite.
+    """Run all test suite in a single test suite module.
+
+    The function assumes the build target we're testing has already been built on the SUT node.
+    The current build target thus corresponds to the current DPDK build present on the SUT node.
 
     Args:
-        sut_node: Node to run tests on.
-        execution: Execution the test case belongs to.
-        build_target_result: Build target configuration test case is run on
-        test_suite_config: Test suite configuration
+        sut_node: The execution's SUT node.
+        tg_node: The execution's TG node.
+        execution: The execution's test run configuration associated with the current build target.
+        build_target_result: The build target level result object associated
+            with the current build target.
+        test_suite_config: Test suite test run configuration specifying the test suite module
+            and possibly a subset of test cases of test suites in that module.
 
     Raises:
-        BlockingTestSuiteError: If a test suite that was marked as blocking fails.
+        BlockingTestSuiteError: If a blocking test suite fails.
     """
     try:
         full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}"
@@ -239,9 +330,7 @@ def _run_single_suite(
 
 
 def _exit_dts() -> None:
-    """
-    Process all errors and exit with the proper exit code.
-    """
+    """Process all errors and exit with the proper exit code."""
     result.process()
 
     if dts_logger:
diff --git a/dts/main.py b/dts/main.py
index 5d4714b0c3..b856ba86be 100755
--- a/dts/main.py
+++ b/dts/main.py
@@ -1,12 +1,10 @@
 #!/usr/bin/env python3
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022 University of New Hampshire
 
-"""
-A test framework for testing DPDK.
-"""
+"""The DTS executable."""
 
 import logging
 
@@ -17,6 +15,10 @@ def main() -> None:
     """Set DTS settings, then run DTS.
 
     The DTS settings are taken from the command line arguments and the environment variables.
+    The settings object is stored in the module-level variable settings.SETTINGS which the entire
+    framework uses. After importing the module (or the variable), any changes to the variable are
+    not going to be reflected without a re-import. This means that the SETTINGS variable must
+    be modified before the settings module is imported anywhere else in the framework.
     """
     settings.SETTINGS = settings.get_settings()
     from framework import dts
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v8 08/21] dts: test suite docstring update
  2023-11-23 15:13               ` [PATCH v8 00/21] dts: docstrings update Juraj Linkeš
                                   ` (6 preceding siblings ...)
  2023-11-23 15:13                 ` [PATCH v8 07/21] dts: dts runner and main " Juraj Linkeš
@ 2023-11-23 15:13                 ` Juraj Linkeš
  2023-11-23 15:13                 ` [PATCH v8 09/21] dts: test result " Juraj Linkeš
                                   ` (14 subsequent siblings)
  22 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-23 15:13 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.
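
A sketch of what a minimal suite built on the documented hooks could look
like (the suite, its name and its verification logic are hypothetical)::

    from scapy.layers.inet import IP  # type: ignore[import]
    from scapy.layers.l2 import Ether  # type: ignore[import]

    from framework.test_suite import TestSuite

    class TestSuiteExample(TestSuite):
        """A hypothetical functional test suite."""

        def set_up_suite(self) -> None:
            self.configure_testbed_ipv4()

        def tear_down_suite(self) -> None:
            self.configure_testbed_ipv4(restore=True)

        def test_echo(self) -> None:
            packet = Ether() / IP()
            received = self.send_packet_and_capture(packet)
            self.verify_packets(self.get_expected_packet(packet), received)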

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/test_suite.py | 231 +++++++++++++++++++++++++++---------
 1 file changed, 175 insertions(+), 56 deletions(-)

diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index f9e66e814a..dfb391ffbd 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -2,8 +2,19 @@
 # Copyright(c) 2010-2014 Intel Corporation
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
-Base class for creating DTS test cases.
+"""Features common to all test suites.
+
+The module defines the :class:`TestSuite` class which doesn't contain any test cases, and as such
+must be extended by subclasses which add test cases. The :class:`TestSuite` contains the basics
+needed by subclasses:
+
+    * Test suite and test case execution flow,
+    * Testbed (SUT, TG) configuration,
+    * Packet sending and verification,
+    * Test case verification.
+
+The module also defines a function, :func:`get_test_suites`,
+for gathering test suites from a Python module.
 """
 
 import importlib
@@ -11,7 +22,7 @@
 import re
 from ipaddress import IPv4Interface, IPv6Interface, ip_interface
 from types import MethodType
-from typing import Any, Union
+from typing import Any, ClassVar, Union
 
 from scapy.layers.inet import IP  # type: ignore[import]
 from scapy.layers.l2 import Ether  # type: ignore[import]
@@ -31,25 +42,44 @@
 
 
 class TestSuite(object):
-    """
-    The base TestSuite class provides methods for handling basic flow of a test suite:
-    * test case filtering and collection
-    * test suite setup/cleanup
-    * test setup/cleanup
-    * test case execution
-    * error handling and results storage
-    Test cases are implemented by derived classes. Test cases are all methods
-    starting with test_, further divided into performance test cases
-    (starting with test_perf_) and functional test cases (all other test cases).
-    By default, all test cases will be executed. A list of testcase str names
-    may be specified in conf.yaml or on the command line
-    to filter which test cases to run.
-    The methods named [set_up|tear_down]_[suite|test_case] should be overridden
-    in derived classes if the appropriate suite/test case fixtures are needed.
+    """The base class with methods for handling the basic flow of a test suite.
+
+        * Test case filtering and collection,
+        * Test suite setup/cleanup,
+        * Test setup/cleanup,
+        * Test case execution,
+        * Error handling and results storage.
+
+    Test cases are implemented by subclasses. Test cases are all methods starting with ``test_``,
+    further divided into performance test cases (starting with ``test_perf_``)
+    and functional test cases (all other test cases).
+
+    By default, all test cases will be executed. A list of testcase names may be specified
+    in the YAML test run configuration file and in the :option:`--test-cases` command line argument
+    or in the :envvar:`DTS_TESTCASES` environment variable to filter which test cases to run.
+    The union of both lists will be used. Any unknown test cases from the latter lists
+    will be silently ignored.
+
+    If the :option:`--re-run` command line argument or the :envvar:`DTS_RERUN` environment variable
+    is set, in case of a test case failure, the test case will be executed again until it passes
+    or it fails that many times in addition to the first failure.
+
+    The methods named ``[set_up|tear_down]_[suite|test_case]`` should be overridden in subclasses
+    if the appropriate test suite/test case fixtures are needed.
+
+    The test suite is aware of the testbed (the SUT and TG) it's running on. From this, it can
+    properly choose the IP addresses and other configuration that must be tailored to the testbed.
+
+    Attributes:
+        sut_node: The SUT node where the test suite is running.
+        tg_node: The TG node where the test suite is running.
     """
 
     sut_node: SutNode
-    is_blocking = False
+    tg_node: TGNode
+    #: Whether the test suite is blocking. A failure of a blocking test suite
+    #: will block the execution of all subsequent test suites in the current build target.
+    is_blocking: ClassVar[bool] = False
     _logger: DTSLOG
     _test_cases_to_run: list[str]
     _func: bool
@@ -72,6 +102,20 @@ def __init__(
         func: bool,
         build_target_result: BuildTargetResult,
     ):
+        """Initialize the test suite testbed information and basic configuration.
+
+        Process what test cases to run, create the associated
+        :class:`~.test_result.TestSuiteResult`, find links between ports
+        and set up default IP addresses to be used when configuring them.
+
+        Args:
+            sut_node: The SUT node where the test suite will run.
+            tg_node: The TG node where the test suite will run.
+            test_cases: The list of test cases to execute.
+                If empty, all test cases will be executed.
+            func: Whether to run functional tests.
+            build_target_result: The build target result this test suite is run in.
+        """
         self.sut_node = sut_node
         self.tg_node = tg_node
         self._logger = getLogger(self.__class__.__name__)
@@ -95,6 +139,7 @@ def __init__(
         self._tg_ip_address_ingress = ip_interface("192.168.101.3/24")
 
     def _process_links(self) -> None:
+        """Construct links between SUT and TG ports."""
         for sut_port in self.sut_node.ports:
             for tg_port in self.tg_node.ports:
                 if (sut_port.identifier, sut_port.peer) == (
@@ -104,27 +149,42 @@ def _process_links(self) -> None:
                     self._port_links.append(PortLink(sut_port=sut_port, tg_port=tg_port))
 
     def set_up_suite(self) -> None:
-        """
-        Set up test fixtures common to all test cases; this is done before
-        any test case is run.
+        """Set up test fixtures common to all test cases.
+
+        This is done before any test case has been run.
         """
 
     def tear_down_suite(self) -> None:
-        """
-        Tear down the previously created test fixtures common to all test cases.
+        """Tear down the previously created test fixtures common to all test cases.
+
+        This is done after all tests have been run.
         """
 
     def set_up_test_case(self) -> None:
-        """
-        Set up test fixtures before each test case.
+        """Set up test fixtures before each test case.
+
+        This is done before *each* test case.
         """
 
     def tear_down_test_case(self) -> None:
-        """
-        Tear down the previously created test fixtures after each test case.
+        """Tear down the previously created test fixtures after each test case.
+
+        This is done after *each* test case.
         """
 
     def configure_testbed_ipv4(self, restore: bool = False) -> None:
+        """Configure IPv4 addresses on all testbed ports.
+
+        The configured ports are:
+
+        * SUT ingress port,
+        * SUT egress port,
+        * TG ingress port,
+        * TG egress port.
+
+        Args:
+            restore: If :data:`True`, will remove the configuration instead.
+        """
         delete = True if restore else False
         enable = False if restore else True
         self._configure_ipv4_forwarding(enable)
@@ -149,11 +209,17 @@ def _configure_ipv4_forwarding(self, enable: bool) -> None:
         self.sut_node.configure_ipv4_forwarding(enable)
 
     def send_packet_and_capture(self, packet: Packet, duration: float = 1) -> list[Packet]:
-        """
-        Send a packet through the appropriate interface and
-        receive on the appropriate interface.
-        Modify the packet with l3/l2 addresses corresponding
-        to the testbed and desired traffic.
+        """Send and receive `packet` using the associated TG.
+
+        Send `packet` through the appropriate interface and receive on the appropriate interface.
+        Modify the packet with l3/l2 addresses corresponding to the testbed and desired traffic.
+
+        Args:
+            packet: The packet to send.
+            duration: Capture traffic for this amount of time after sending `packet`.
+
+        Returns:
+            A list of received packets.
         """
         packet = self._adjust_addresses(packet)
         return self.tg_node.send_packet_and_capture(
@@ -161,13 +227,26 @@ def send_packet_and_capture(self, packet: Packet, duration: float = 1) -> list[P
         )
 
     def get_expected_packet(self, packet: Packet) -> Packet:
+        """Inject the proper L2/L3 addresses into `packet`.
+
+        Args:
+            packet: The packet to modify.
+
+        Returns:
+            `packet` with injected L2/L3 addresses.
+        """
         return self._adjust_addresses(packet, expected=True)
 
     def _adjust_addresses(self, packet: Packet, expected: bool = False) -> Packet:
-        """
+        """L2 and L3 address additions in both directions.
+
         Assumptions:
-            Two links between SUT and TG, one link is TG -> SUT,
-            the other SUT -> TG.
+            Two links between SUT and TG, one link is TG -> SUT, the other SUT -> TG.
+
+        Args:
+            packet: The packet to modify.
+            expected: If :data:`True`, the direction is SUT -> TG,
+                otherwise the direction is TG -> SUT.
         """
         if expected:
             # The packet enters the TG from SUT
@@ -193,6 +272,19 @@ def _adjust_addresses(self, packet: Packet, expected: bool = False) -> Packet:
         return Ether(packet.build())
 
     def verify(self, condition: bool, failure_description: str) -> None:
+        """Verify `condition` and handle failures.
+
+        When `condition` is :data:`False`, raise an exception and log the last 10 commands
+        executed on both the SUT and TG.
+
+        Args:
+            condition: The condition to check.
+            failure_description: A short description of the failure
+                that will be stored in the raised exception.
+
+        Raises:
+            TestCaseVerifyError: `condition` is :data:`False`.
+        """
         if not condition:
             self._fail_test_case_verify(failure_description)
 
@@ -206,6 +298,19 @@ def _fail_test_case_verify(self, failure_description: str) -> None:
         raise TestCaseVerifyError(failure_description)
 
     def verify_packets(self, expected_packet: Packet, received_packets: list[Packet]) -> None:
+        """Verify that `expected_packet` has been received.
+
+        Go through `received_packets` and check that `expected_packet` is among them.
+        If not, raise an exception and log the last 10 commands
+        executed on both the SUT and TG.
+
+        Args:
+            expected_packet: The packet we're expecting to receive.
+            received_packets: The packets where we're looking for `expected_packet`.
+
+        Raises:
+            TestCaseVerifyError: `expected_packet` is not among `received_packets`.
+        """
         for received_packet in received_packets:
             if self._compare_packets(expected_packet, received_packet):
                 break
@@ -280,10 +385,14 @@ def _verify_l3_packet(self, received_packet: IP, expected_packet: IP) -> bool:
         return True
 
     def run(self) -> None:
-        """
-        Setup, execute and teardown the whole suite.
-        Suite execution consists of running all test cases scheduled to be executed.
-        A test cast run consists of setup, execution and teardown of said test case.
+        """Set up, execute and tear down the whole suite.
+
+        Test suite execution consists of running all test cases scheduled to be executed.
+        A test case run consists of setup, execution and teardown of said test case.
+
+        Record the setup and the teardown and handle failures.
+
+        The list of scheduled test cases is constructed when creating the :class:`TestSuite` object.
         """
         test_suite_name = self.__class__.__name__
 
@@ -315,9 +424,7 @@ def run(self) -> None:
                 raise BlockingTestSuiteError(test_suite_name)
 
     def _execute_test_suite(self) -> None:
-        """
-        Execute all test cases scheduled to be executed in this suite.
-        """
+        """Execute all test cases scheduled to be executed in this suite."""
         if self._func:
             for test_case_method in self._get_functional_test_cases():
                 test_case_name = test_case_method.__name__
@@ -334,14 +441,18 @@ def _execute_test_suite(self) -> None:
                     self._run_test_case(test_case_method, test_case_result)
 
     def _get_functional_test_cases(self) -> list[MethodType]:
-        """
-        Get all functional test cases.
+        """Get all functional test cases defined in this TestSuite.
+
+        Returns:
+            The list of functional test cases of this TestSuite.
         """
         return self._get_test_cases(r"test_(?!perf_)")
 
     def _get_test_cases(self, test_case_regex: str) -> list[MethodType]:
-        """
-        Return a list of test cases matching test_case_regex.
+        """Return a list of test cases matching test_case_regex.
+
+        Returns:
+            The list of test cases matching test_case_regex of this TestSuite.
         """
         self._logger.debug(f"Searching for test cases in {self.__class__.__name__}.")
         filtered_test_cases = []
@@ -353,9 +464,7 @@ def _get_test_cases(self, test_case_regex: str) -> list[MethodType]:
         return filtered_test_cases
 
     def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool:
-        """
-        Check whether the test case should be executed.
-        """
+        """Check whether the test case should be scheduled to be executed."""
         match = bool(re.match(test_case_regex, test_case_name))
         if self._test_cases_to_run:
             return match and test_case_name in self._test_cases_to_run
@@ -365,9 +474,9 @@ def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool
     def _run_test_case(
         self, test_case_method: MethodType, test_case_result: TestCaseResult
     ) -> None:
-        """
-        Setup, execute and teardown a test case in this suite.
-        Exceptions are caught and recorded in logs and results.
+        """Setup, execute and teardown a test case in this suite.
+
+        Record the result of the setup and the teardown and handle failures.
         """
         test_case_name = test_case_method.__name__
 
@@ -402,9 +511,7 @@ def _run_test_case(
     def _execute_test_case(
         self, test_case_method: MethodType, test_case_result: TestCaseResult
     ) -> None:
-        """
-        Execute one test case and handle failures.
-        """
+        """Execute one test case, record the result and handle failures."""
         test_case_name = test_case_method.__name__
         try:
             self._logger.info(f"Starting test case execution: {test_case_name}")
@@ -425,6 +532,18 @@ def _execute_test_case(
 
 
 def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
+    r"""Find all :class:`TestSuite`\s in a Python module.
+
+    Args:
+        testsuite_module_path: The path to the Python module.
+
+    Returns:
+        The list of :class:`TestSuite`\s found within the Python module.
+
+    Raises:
+        ConfigurationError: The test suite module was not found.
+    """
+
     def is_test_suite(object: Any) -> bool:
         try:
             if issubclass(object, TestSuite) and object is not TestSuite:
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
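
To make the selection rule above concrete, here is a minimal,
self-contained sketch of the same negative-lookahead filtering
(the helper function and sample names are illustrative, not part
of the framework):

import re

# "test_(?!perf_)" matches names that start with "test_" but are
# not followed by "perf_", i.e. functional test cases only.
FUNCTIONAL_RE = r"test_(?!perf_)"

def select_functional(names: list[str]) -> list[str]:
    return [name for name in names if re.match(FUNCTIONAL_RE, name)]

print(select_functional(["test_hello", "test_perf_throughput", "helper"]))
# -> ['test_hello']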

* [PATCH v8 09/21] dts: test result docstring update
  2023-11-23 15:13               ` [PATCH v8 00/21] dts: docstrings update Juraj Linkeš
                                   ` (7 preceding siblings ...)
  2023-11-23 15:13                 ` [PATCH v8 08/21] dts: test suite " Juraj Linkeš
@ 2023-11-23 15:13                 ` Juraj Linkeš
  2023-11-23 15:13                 ` [PATCH v8 10/21] dts: config " Juraj Linkeš
                                   ` (13 subsequent siblings)
  22 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-23 15:13 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/test_result.py | 297 ++++++++++++++++++++++++++++-------
 1 file changed, 239 insertions(+), 58 deletions(-)

diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
index 57090feb04..4467749a9d 100644
--- a/dts/framework/test_result.py
+++ b/dts/framework/test_result.py
@@ -2,8 +2,25 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
-"""
-Generic result container and reporters
+r"""Record and process DTS results.
+
+The results are recorded in a hierarchical manner:
+
+    * :class:`DTSResult` contains
+    * :class:`ExecutionResult` contains
+    * :class:`BuildTargetResult` contains
+    * :class:`TestSuiteResult` contains
+    * :class:`TestCaseResult`
+
+Each result may contain multiple lower level results, e.g. there are multiple
+:class:`TestSuiteResult`\s in a :class:`BuildTargetResult`.
+The results have common parts, such as setup and teardown results, captured in :class:`BaseResult`,
+which also defines some common behaviors in its methods.
+
+Each result class has its own idiosyncrasies, which it implements in overridden methods.
+
+The :option:`--output` command line argument and the :envvar:`DTS_OUTPUT_DIR` environment
+variable modify the directory where the files with results will be stored.
 """
 
 import os.path
@@ -26,26 +43,34 @@
 
 
 class Result(Enum):
-    """
-    An Enum defining the possible states that
-    a setup, a teardown or a test case may end up in.
-    """
+    """The possible states that a setup, a teardown or a test case may end up in."""
 
+    #:
     PASS = auto()
+    #:
     FAIL = auto()
+    #:
     ERROR = auto()
+    #:
     SKIP = auto()
 
     def __bool__(self) -> bool:
+        """Only PASS is True."""
         return self is self.PASS
 
 
 class FixtureResult(object):
-    """
-    A record that stored the result of a setup or a teardown.
-    The default is FAIL because immediately after creating the object
-    the setup of the corresponding stage will be executed, which also guarantees
-    the execution of teardown.
+    """A record that stores the result of a setup or a teardown.
+
+    :attr:`~Result.FAIL` is a sensible default since it prevents false positives (which could happen
+    if the default was :attr:`~Result.PASS`).
+
+    Preventing false positives or other false results is preferable since a failure
+    is most likely to be investigated (the other false results may not be investigated at all).
+
+    Attributes:
+        result: The associated result.
+        error: The error in case of a failure.
     """
 
     result: Result
@@ -56,21 +81,37 @@ def __init__(
         result: Result = Result.FAIL,
         error: Exception | None = None,
     ):
+        """Initialize the constructor with the fixture result and store a possible error.
+
+        Args:
+            result: The result to store.
+            error: The error which happened when a failure occurred.
+        """
         self.result = result
         self.error = error
 
     def __bool__(self) -> bool:
+        """A wrapper around the stored :class:`Result`."""
         return bool(self.result)
 
 
 class Statistics(dict):
-    """
-    A helper class used to store the number of test cases by its result
-    along a few other basic information.
-    Using a dict provides a convenient way to format the data.
+    """How many test cases ended in which result state along some other basic information.
+
+    Subclassing :class:`dict` provides a convenient way to format the data.
+
+    The data are stored in the following keys:
+
+    * **PASS RATE** (:class:`int`) -- The percentage of passed test cases.
+    * **DPDK VERSION** (:class:`str`) -- The tested DPDK version.
     """
 
     def __init__(self, dpdk_version: str | None):
+        """Extend the constructor with keys in which the data are stored.
+
+        Args:
+            dpdk_version: The version of the tested DPDK.
+        """
         super(Statistics, self).__init__()
         for result in Result:
             self[result.name] = 0
@@ -78,8 +119,17 @@ def __init__(self, dpdk_version: str | None):
         self["DPDK VERSION"] = dpdk_version
 
     def __iadd__(self, other: Result) -> "Statistics":
-        """
-        Add a Result to the final count.
+        """Add a Result to the final count.
+
+        Example:
+            stats: Statistics = Statistics(None)  # empty Statistics
+            stats += Result.PASS  # add a Result to `stats`
+
+        Args:
+            other: The Result to add to this statistics object.
+
+        Returns:
+            The modified statistics object.
         """
         self[other.name] += 1
         self["PASS RATE"] = (
@@ -88,9 +138,7 @@ def __iadd__(self, other: Result) -> "Statistics":
         return self
 
     def __str__(self) -> str:
-        """
-        Provide a string representation of the data.
-        """
+        """Each line contains the formatted key = value pair."""
         stats_str = ""
         for key, value in self.items():
             stats_str += f"{key:<12} = {value}\n"
@@ -100,10 +148,16 @@ def __str__(self) -> str:
 
 
 class BaseResult(object):
-    """
-    The Base class for all results. Stores the results of
-    the setup and teardown portions of the corresponding stage
-    and a list of results from each inner stage in _inner_results.
+    """Common data and behavior of DTS results.
+
+    Stores the results of the setup and teardown portions of the corresponding stage.
+    The hierarchical nature of DTS results is captured recursively in an internal list.
+    A stage is each level in this particular hierarchy (pre-execution or the top-most level,
+    execution, build target, test suite and test case).
+
+    Attributes:
+        setup_result: The result of the setup of the particular stage.
+        teardown_result: The result of the teardown of the particular stage.
     """
 
     setup_result: FixtureResult
@@ -111,15 +165,28 @@ class BaseResult(object):
     _inner_results: MutableSequence["BaseResult"]
 
     def __init__(self):
+        """Initialize the constructor."""
         self.setup_result = FixtureResult()
         self.teardown_result = FixtureResult()
         self._inner_results = []
 
     def update_setup(self, result: Result, error: Exception | None = None) -> None:
+        """Store the setup result.
+
+        Args:
+            result: The result of the setup.
+            error: The error that occurred in case of a failure.
+        """
         self.setup_result.result = result
         self.setup_result.error = error
 
     def update_teardown(self, result: Result, error: Exception | None = None) -> None:
+        """Store the teardown result.
+
+        Args:
+            result: The result of the teardown.
+            error: The error that occurred in case of a failure.
+        """
         self.teardown_result.result = result
         self.teardown_result.error = error
 
@@ -137,27 +204,55 @@ def _get_inner_errors(self) -> list[Exception]:
         ]
 
     def get_errors(self) -> list[Exception]:
+        """Compile errors from the whole result hierarchy.
+
+        Returns:
+            The errors from setup, teardown and all errors found in the whole result hierarchy.
+        """
         return self._get_setup_teardown_errors() + self._get_inner_errors()
 
     def add_stats(self, statistics: Statistics) -> None:
+        """Collate stats from the whole result hierarchy.
+
+        Args:
+            statistics: The :class:`Statistics` object where the stats will be collated.
+        """
         for inner_result in self._inner_results:
             inner_result.add_stats(statistics)
 
 
 class TestCaseResult(BaseResult, FixtureResult):
-    """
-    The test case specific result.
-    Stores the result of the actual test case.
-    Also stores the test case name.
+    r"""The test case specific result.
+
+    Stores the result of the actual test case. This is done by adding :class:`FixtureResult`
+    as an extra superclass. The setup and teardown results are :class:`FixtureResult`\s and
+    the class is itself a record of the test case.
+
+    Attributes:
+        test_case_name: The test case name.
     """
 
     test_case_name: str
 
     def __init__(self, test_case_name: str):
+        """Extend the constructor with `test_case_name`.
+
+        Args:
+            test_case_name: The test case's name.
+        """
         super(TestCaseResult, self).__init__()
         self.test_case_name = test_case_name
 
     def update(self, result: Result, error: Exception | None = None) -> None:
+        """Update the test case result.
+
+        This updates the result of the test case itself and doesn't affect
+        the results of the setup and teardown steps in any way.
+
+        Args:
+            result: The result of the test case.
+            error: The error that occurred in case of a failure.
+        """
         self.result = result
         self.error = error
 
@@ -167,36 +262,64 @@ def _get_inner_errors(self) -> list[Exception]:
         return []
 
     def add_stats(self, statistics: Statistics) -> None:
+        r"""Add the test case result to statistics.
+
+        The base method goes through the hierarchy recursively and this method is here to stop
+        the recursion, as the :class:`TestCaseResult`\s are the leaves of the hierarchy tree.
+
+        Args:
+            statistics: The :class:`Statistics` object where the stats will be added.
+        """
         statistics += self.result
 
     def __bool__(self) -> bool:
+        """The test case passed only if setup, teardown and the test case itself passed."""
         return bool(self.setup_result) and bool(self.teardown_result) and bool(self.result)
 
 
 class TestSuiteResult(BaseResult):
-    """
-    The test suite specific result.
-    The _inner_results list stores results of test cases in a given test suite.
-    Also stores the test suite name.
+    """The test suite specific result.
+
+    The internal list stores the results of all test cases in a given test suite.
+
+    Attributes:
+        suite_name: The test suite name.
     """
 
     suite_name: str
 
     def __init__(self, suite_name: str):
+        """Extend the constructor with `suite_name`.
+
+        Args:
+            suite_name: The test suite's name.
+        """
         super(TestSuiteResult, self).__init__()
         self.suite_name = suite_name
 
     def add_test_case(self, test_case_name: str) -> TestCaseResult:
+        """Add and return the inner result (test case).
+
+        Returns:
+            The test case's result.
+        """
         test_case_result = TestCaseResult(test_case_name)
         self._inner_results.append(test_case_result)
         return test_case_result
 
 
 class BuildTargetResult(BaseResult):
-    """
-    The build target specific result.
-    The _inner_results list stores results of test suites in a given build target.
-    Also stores build target specifics, such as compiler used to build DPDK.
+    """The build target specific result.
+
+    The internal list stores the results of all test suites in a given build target.
+
+    Attributes:
+        arch: The DPDK build target architecture.
+        os: The DPDK build target operating system.
+        cpu: The DPDK build target CPU.
+        compiler: The DPDK build target compiler.
+        compiler_version: The DPDK build target compiler version.
+        dpdk_version: The built DPDK version.
     """
 
     arch: Architecture
@@ -207,6 +330,11 @@ class BuildTargetResult(BaseResult):
     dpdk_version: str | None
 
     def __init__(self, build_target: BuildTargetConfiguration):
+        """Extend the constructor with the `build_target`'s build target config.
+
+        Args:
+            build_target: The build target's test run configuration.
+        """
         super(BuildTargetResult, self).__init__()
         self.arch = build_target.arch
         self.os = build_target.os
@@ -216,20 +344,35 @@ def __init__(self, build_target: BuildTargetConfiguration):
         self.dpdk_version = None
 
     def add_build_target_info(self, versions: BuildTargetInfo) -> None:
+        """Add information about the build target gathered at runtime.
+
+        Args:
+            versions: The additional information.
+        """
         self.compiler_version = versions.compiler_version
         self.dpdk_version = versions.dpdk_version
 
     def add_test_suite(self, test_suite_name: str) -> TestSuiteResult:
+        """Add and return the inner result (test suite).
+
+        Returns:
+            The test suite's result.
+        """
         test_suite_result = TestSuiteResult(test_suite_name)
         self._inner_results.append(test_suite_result)
         return test_suite_result
 
 
 class ExecutionResult(BaseResult):
-    """
-    The execution specific result.
-    The _inner_results list stores results of build targets in a given execution.
-    Also stores the SUT node configuration.
+    """The execution specific result.
+
+    The internal list stores the results of all build targets in a given execution.
+
+    Attributes:
+        sut_node: The SUT node used in the execution.
+        sut_os_name: The operating system of the SUT node.
+        sut_os_version: The operating system version of the SUT node.
+        sut_kernel_version: The operating system kernel version of the SUT node.
     """
 
     sut_node: NodeConfiguration
@@ -238,34 +381,53 @@ class ExecutionResult(BaseResult):
     sut_kernel_version: str
 
     def __init__(self, sut_node: NodeConfiguration):
+        """Extend the constructor with the `sut_node`'s config.
+
+        Args:
+            sut_node: The SUT node's test run configuration used in the execution.
+        """
         super(ExecutionResult, self).__init__()
         self.sut_node = sut_node
 
     def add_build_target(self, build_target: BuildTargetConfiguration) -> BuildTargetResult:
+        """Add and return the inner result (build target).
+
+        Args:
+            build_target: The build target's test run configuration.
+
+        Returns:
+            The build target's result.
+        """
         build_target_result = BuildTargetResult(build_target)
         self._inner_results.append(build_target_result)
         return build_target_result
 
     def add_sut_info(self, sut_info: NodeInfo) -> None:
+        """Add SUT information gathered at runtime.
+
+        Args:
+            sut_info: The additional SUT node information.
+        """
         self.sut_os_name = sut_info.os_name
         self.sut_os_version = sut_info.os_version
         self.sut_kernel_version = sut_info.kernel_version
 
 
 class DTSResult(BaseResult):
-    """
-    Stores environment information and test results from a DTS run, which are:
-    * Execution level information, such as SUT and TG hardware.
-    * Build target level information, such as compiler, target OS and cpu.
-    * Test suite results.
-    * All errors that are caught and recorded during DTS execution.
+    """Stores environment information and test results from a DTS run.
 
-    The information is stored in nested objects.
+        * Execution level information, such as the testbed and the test suite list,
+        * Build target level information, such as compiler, target OS and CPU,
+        * Test suite and test case results,
+        * All errors that are caught and recorded during DTS execution.
 
-    The class is capable of computing the return code used to exit DTS with
-    from the stored error.
+    The information is stored hierarchically. This is the first level of the hierarchy
+    and as such is where the data from the whole hierarchy is collated or processed.
 
-    It also provides a brief statistical summary of passed/failed test cases.
+    The internal list stores the results of all executions.
+
+    Attributes:
+        dpdk_version: The DPDK version to record.
     """
 
     dpdk_version: str | None
@@ -276,6 +438,11 @@ class DTSResult(BaseResult):
     _stats_filename: str
 
     def __init__(self, logger: DTSLOG):
+        """Extend the constructor with top-level specifics.
+
+        Args:
+            logger: The logger instance the whole result will use.
+        """
         super(DTSResult, self).__init__()
         self.dpdk_version = None
         self._logger = logger
@@ -285,21 +452,33 @@ def __init__(self, logger: DTSLOG):
         self._stats_filename = os.path.join(SETTINGS.output_dir, "statistics.txt")
 
     def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult:
+        """Add and return the inner result (execution).
+
+        Args:
+            sut_node: The SUT node's test run configuration.
+
+        Returns:
+            The execution's result.
+        """
         execution_result = ExecutionResult(sut_node)
         self._inner_results.append(execution_result)
         return execution_result
 
     def add_error(self, error: Exception) -> None:
+        """Record an error that occurred outside any execution.
+
+        Args:
+            error: The exception to record.
+        """
         self._errors.append(error)
 
     def process(self) -> None:
-        """
-        Process the data after a DTS run.
-        The data is added to nested objects during runtime and this parent object
-        is not updated at that time. This requires us to process the nested data
-        after it's all been gathered.
+        """Process the data after a whole DTS run.
+
+        The data is added to inner objects during runtime and this object is not updated
+        at that time. This requires us to process the inner data after it's all been gathered.
 
-        The processing gathers all errors and the result statistics of test cases.
+        The processing gathers all errors and the statistics of test case results.
         """
         self._errors += self.get_errors()
         if self._errors and self._logger:
@@ -313,8 +492,10 @@ def process(self) -> None:
             stats_file.write(str(self._stats_result))
 
     def get_return_code(self) -> int:
-        """
-        Go through all stored Exceptions and return the highest error code found.
+        """Go through all stored Exceptions and return the final DTS error code.
+
+        Returns:
+            The highest error code found.
         """
         for error in self._errors:
             error_return_code = ErrorSeverity.GENERIC_ERR
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
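
As a usage sketch of the result classes in this patch (assuming the
dts directory is on the Python path so the module imports as
framework.test_result; the suite and case names are illustrative):

from framework.test_result import Result, Statistics, TestSuiteResult

suite = TestSuiteResult("hello_world")
case = suite.add_test_case("test_hello")
case.update(Result.PASS)  # setup/teardown results still default to FAIL

stats = Statistics(dpdk_version="23.11")
suite.add_stats(stats)  # recurses down to the test case leaves
print(stats)            # one formatted "key = value" pair per line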

* [PATCH v8 10/21] dts: config docstring update
  2023-11-23 15:13               ` [PATCH v8 00/21] dts: docstrings update Juraj Linkeš
                                   ` (8 preceding siblings ...)
  2023-11-23 15:13                 ` [PATCH v8 09/21] dts: test result " Juraj Linkeš
@ 2023-11-23 15:13                 ` Juraj Linkeš
  2023-11-23 15:13                 ` [PATCH v8 11/21] dts: remote session " Juraj Linkeš
                                   ` (12 subsequent siblings)
  22 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-23 15:13 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/config/__init__.py | 369 ++++++++++++++++++++++++++-----
 dts/framework/config/types.py    | 132 +++++++++++
 2 files changed, 444 insertions(+), 57 deletions(-)
 create mode 100644 dts/framework/config/types.py

diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index ef25a463c0..62eded7f04 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -3,8 +3,34 @@
 # Copyright(c) 2022-2023 University of New Hampshire
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
-Yaml config parsing methods
+"""Testbed configuration and test suite specification.
+
+This package offers classes that hold real-time information about the testbed and the test run
+configuration describing the tested testbed, along with a loader function, :func:`load_config`,
+which loads the YAML test run configuration file
+and validates it according to :download:`the schema <conf_yaml_schema.json>`.
+
+The YAML test run configuration file is parsed into a dictionary, parts of which are used throughout
+this package. The allowed keys and types inside this dictionary are defined in
+the :doc:`types <framework.config.types>` module.
+
+The test run configuration has two main sections:
+
+    * The :class:`ExecutionConfiguration` which defines what tests are going to be run
+      and how DPDK will be built. It also references the testbed where these tests and DPDK
+      are going to be run,
+    * The nodes of the testbed are defined in the other section,
+      a :class:`list` of :class:`NodeConfiguration` objects.
+
+The real-time information about the testbed is gathered at runtime.
+
+The classes defined in this package make heavy use of :mod:`dataclasses`.
+All of them use slots and are frozen:
+
+    * Slots enable some optimizations by pre-allocating space for the defined
+      attributes in the underlying data structure,
+    * Frozen makes the object immutable. This enables further optimizations
+      and makes it thread-safe should we ever want to move in that direction.
 """
 
 import json
@@ -12,11 +38,20 @@
 import pathlib
 from dataclasses import dataclass
 from enum import auto, unique
-from typing import Any, TypedDict, Union
+from typing import Union
 
 import warlock  # type: ignore[import]
 import yaml
 
+from framework.config.types import (
+    BuildTargetConfigDict,
+    ConfigurationDict,
+    ExecutionConfigDict,
+    NodeConfigDict,
+    PortConfigDict,
+    TestSuiteConfigDict,
+    TrafficGeneratorConfigDict,
+)
 from framework.exception import ConfigurationError
 from framework.settings import SETTINGS
 from framework.utils import StrEnum
@@ -24,55 +59,97 @@
 
 @unique
 class Architecture(StrEnum):
+    r"""The supported architectures of :class:`~framework.testbed_model.node.Node`\s."""
+
+    #:
     i686 = auto()
+    #:
     x86_64 = auto()
+    #:
     x86_32 = auto()
+    #:
     arm64 = auto()
+    #:
     ppc64le = auto()
 
 
 @unique
 class OS(StrEnum):
+    r"""The supported operating systems of :class:`~framework.testbed_model.node.Node`\s."""
+
+    #:
     linux = auto()
+    #:
     freebsd = auto()
+    #:
     windows = auto()
 
 
 @unique
 class CPUType(StrEnum):
+    r"""The supported CPUs of :class:`~framework.testbed_model.node.Node`\s."""
+
+    #:
     native = auto()
+    #:
     armv8a = auto()
+    #:
     dpaa2 = auto()
+    #:
     thunderx = auto()
+    #:
     xgene1 = auto()
 
 
 @unique
 class Compiler(StrEnum):
+    r"""The supported compilers of :class:`~framework.testbed_model.node.Node`\s."""
+
+    #:
     gcc = auto()
+    #:
     clang = auto()
+    #:
     icc = auto()
+    #:
     msvc = auto()
 
 
 @unique
 class TrafficGeneratorType(StrEnum):
+    """The supported traffic generators."""
+
+    #:
     SCAPY = auto()
 
 
-# Slots enables some optimizations, by pre-allocating space for the defined
-# attributes in the underlying data structure.
-#
-# Frozen makes the object immutable. This enables further optimizations,
-# and makes it thread safe should we every want to move in that direction.
 @dataclass(slots=True, frozen=True)
 class HugepageConfiguration:
+    r"""The hugepage configuration of :class:`~framework.testbed_model.node.Node`\s.
+
+    Attributes:
+        amount: The number of hugepages.
+        force_first_numa: If :data:`True`, the hugepages will be configured on the first NUMA node.
+    """
+
     amount: int
     force_first_numa: bool
 
 
 @dataclass(slots=True, frozen=True)
 class PortConfig:
+    r"""The port configuration of :class:`~framework.testbed_model.node.Node`\s.
+
+    Attributes:
+        node: The :class:`~framework.testbed_model.node.Node` where this port exists.
+        pci: The PCI address of the port.
+        os_driver_for_dpdk: The operating system driver name for use with DPDK.
+        os_driver: The operating system driver name when the operating system controls the port.
+        peer_node: The :class:`~framework.testbed_model.node.Node` of the port
+            connected to this port.
+        peer_pci: The PCI address of the port connected to this port.
+    """
+
     node: str
     pci: str
     os_driver_for_dpdk: str
@@ -81,18 +158,44 @@ class PortConfig:
     peer_pci: str
 
     @staticmethod
-    def from_dict(node: str, d: dict) -> "PortConfig":
+    def from_dict(node: str, d: PortConfigDict) -> "PortConfig":
+        """A convenience method that creates the object from fewer inputs.
+
+        Args:
+            node: The node where this port exists.
+            d: The configuration dictionary.
+
+        Returns:
+            The port configuration instance.
+        """
         return PortConfig(node=node, **d)
 
 
 @dataclass(slots=True, frozen=True)
 class TrafficGeneratorConfig:
+    """The configuration of traffic generators.
+
+    The class will be expanded when more configuration is needed.
+
+    Attributes:
+        traffic_generator_type: The type of the traffic generator.
+    """
+
     traffic_generator_type: TrafficGeneratorType
 
     @staticmethod
-    def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig":
-        # This looks useless now, but is designed to allow expansion to traffic
-        # generators that require more configuration later.
+    def from_dict(d: TrafficGeneratorConfigDict) -> "ScapyTrafficGeneratorConfig":
+        """A convenience method that produces traffic generator config of the proper type.
+
+        Args:
+            d: The configuration dictionary.
+
+        Returns:
+            The traffic generator configuration instance.
+
+        Raises:
+            ConfigurationError: An unknown traffic generator type was encountered.
+        """
         match TrafficGeneratorType(d["type"]):
             case TrafficGeneratorType.SCAPY:
                 return ScapyTrafficGeneratorConfig(
@@ -104,11 +207,31 @@ def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig":
 
 @dataclass(slots=True, frozen=True)
 class ScapyTrafficGeneratorConfig(TrafficGeneratorConfig):
+    """Scapy traffic generator specific configuration."""
+
     pass
 
 
 @dataclass(slots=True, frozen=True)
 class NodeConfiguration:
+    r"""The configuration of :class:`~framework.testbed_model.node.Node`\s.
+
+    Attributes:
+        name: The name of the :class:`~framework.testbed_model.node.Node`.
+        hostname: The hostname of the :class:`~framework.testbed_model.node.Node`.
+            Can be an IP or a domain name.
+        user: The name of the user used to connect to
+            the :class:`~framework.testbed_model.node.Node`.
+        password: The password of the user. The use of passwords is heavily discouraged.
+            Please use keys instead.
+        arch: The architecture of the :class:`~framework.testbed_model.node.Node`.
+        os: The operating system of the :class:`~framework.testbed_model.node.Node`.
+        lcores: A comma-delimited list of logical cores to use when running DPDK.
+        use_first_core: If :data:`False`, the first logical core won't be used.
+        hugepages: An optional hugepage configuration.
+        ports: The ports that can be used in testing.
+    """
+
     name: str
     hostname: str
     user: str
@@ -121,55 +244,89 @@ class NodeConfiguration:
     ports: list[PortConfig]
 
     @staticmethod
-    def from_dict(d: dict) -> Union["SutNodeConfiguration", "TGNodeConfiguration"]:
-        hugepage_config = d.get("hugepages")
-        if hugepage_config:
-            if "force_first_numa" not in hugepage_config:
-                hugepage_config["force_first_numa"] = False
-            hugepage_config = HugepageConfiguration(**hugepage_config)
-
-        common_config = {
-            "name": d["name"],
-            "hostname": d["hostname"],
-            "user": d["user"],
-            "password": d.get("password"),
-            "arch": Architecture(d["arch"]),
-            "os": OS(d["os"]),
-            "lcores": d.get("lcores", "1"),
-            "use_first_core": d.get("use_first_core", False),
-            "hugepages": hugepage_config,
-            "ports": [PortConfig.from_dict(d["name"], port) for port in d["ports"]],
-        }
-
+    def from_dict(
+        d: NodeConfigDict,
+    ) -> Union["SutNodeConfiguration", "TGNodeConfiguration"]:
+        """A convenience method that processes the inputs before creating a specialized instance.
+
+        Args:
+            d: The configuration dictionary.
+
+        Returns:
+            Either an SUT or TG configuration instance.
+        """
+        hugepage_config = None
+        if "hugepages" in d:
+            hugepage_config_dict = d["hugepages"]
+            if "force_first_numa" not in hugepage_config_dict:
+                hugepage_config_dict["force_first_numa"] = False
+            hugepage_config = HugepageConfiguration(**hugepage_config_dict)
+
+        # The calls below contain duplicated code because Mypy doesn't properly
+        # support dictionary unpacking with TypedDicts
         if "traffic_generator" in d:
             return TGNodeConfiguration(
+                name=d["name"],
+                hostname=d["hostname"],
+                user=d["user"],
+                password=d.get("password"),
+                arch=Architecture(d["arch"]),
+                os=OS(d["os"]),
+                lcores=d.get("lcores", "1"),
+                use_first_core=d.get("use_first_core", False),
+                hugepages=hugepage_config,
+                ports=[PortConfig.from_dict(d["name"], port) for port in d["ports"]],
                 traffic_generator=TrafficGeneratorConfig.from_dict(d["traffic_generator"]),
-                **common_config,
             )
         else:
             return SutNodeConfiguration(
-                memory_channels=d.get("memory_channels", 1), **common_config
+                name=d["name"],
+                hostname=d["hostname"],
+                user=d["user"],
+                password=d.get("password"),
+                arch=Architecture(d["arch"]),
+                os=OS(d["os"]),
+                lcores=d.get("lcores", "1"),
+                use_first_core=d.get("use_first_core", False),
+                hugepages=hugepage_config,
+                ports=[PortConfig.from_dict(d["name"], port) for port in d["ports"]],
+                memory_channels=d.get("memory_channels", 1),
             )
 
 
 @dataclass(slots=True, frozen=True)
 class SutNodeConfiguration(NodeConfiguration):
+    """:class:`~framework.testbed_model.sut_node.SutNode` specific configuration.
+
+    Attributes:
+        memory_channels: The number of memory channels to use when running DPDK.
+    """
+
     memory_channels: int
 
 
 @dataclass(slots=True, frozen=True)
 class TGNodeConfiguration(NodeConfiguration):
+    """:class:`~framework.testbed_model.tg_node.TGNode` specific configuration.
+
+    Attributes:
+        traffic_generator: The configuration of the traffic generator present on the TG node.
+    """
+
     traffic_generator: ScapyTrafficGeneratorConfig
 
 
 @dataclass(slots=True, frozen=True)
 class NodeInfo:
-    """Class to hold important versions within the node.
-
-    This class, unlike the NodeConfiguration class, cannot be generated at the start.
-    This is because we need to initialize a connection with the node before we can
-    collect the information needed in this class. Therefore, it cannot be a part of
-    the configuration class above.
+    """Supplemental node information.
+
+    Attributes:
+        os_name: The name of the running operating system of
+            the :class:`~framework.testbed_model.node.Node`.
+        os_version: The version of the running operating system of
+            the :class:`~framework.testbed_model.node.Node`.
+        kernel_version: The kernel version of the running operating system of
+            the :class:`~framework.testbed_model.node.Node`.
     """
 
     os_name: str
@@ -179,6 +336,20 @@ class NodeInfo:
 
 @dataclass(slots=True, frozen=True)
 class BuildTargetConfiguration:
+    """DPDK build configuration.
+
+    The configuration used for building DPDK.
+
+    Attributes:
+        arch: The target architecture to build for.
+        os: The target OS to build for.
+        cpu: The target CPU to build for.
+        compiler: The compiler executable to use.
+        compiler_wrapper: This string will be put in front of the compiler when
+            executing the build. Useful for adding wrapper commands, such as ``ccache``.
+        name: The name of the build target, constructed from `arch`, `os`, `cpu` and `compiler`.
+    """
+
     arch: Architecture
     os: OS
     cpu: CPUType
@@ -187,7 +358,18 @@ class BuildTargetConfiguration:
     name: str
 
     @staticmethod
-    def from_dict(d: dict) -> "BuildTargetConfiguration":
+    def from_dict(d: BuildTargetConfigDict) -> "BuildTargetConfiguration":
+        r"""A convenience method that processes the inputs before creating an instance.
+
+        `arch`, `os`, `cpu` and `compiler` are converted to :class:`Enum`\s and
+        `name` is constructed from `arch`, `os`, `cpu` and `compiler`.
+
+        Args:
+            d: The configuration dictionary.
+
+        Returns:
+            The build target configuration instance.
+        """
         return BuildTargetConfiguration(
             arch=Architecture(d["arch"]),
             os=OS(d["os"]),
@@ -200,23 +382,29 @@ def from_dict(d: dict) -> "BuildTargetConfiguration":
 
 @dataclass(slots=True, frozen=True)
 class BuildTargetInfo:
-    """Class to hold important versions within the build target.
+    """Various versions and other information about a build target.
 
-    This is very similar to the NodeInfo class, it just instead holds information
-    for the build target.
+    Attributes:
+        dpdk_version: The DPDK version that was built.
+        compiler_version: The version of the compiler used to build DPDK.
     """
 
     dpdk_version: str
     compiler_version: str
 
 
-class TestSuiteConfigDict(TypedDict):
-    suite: str
-    cases: list[str]
-
-
 @dataclass(slots=True, frozen=True)
 class TestSuiteConfig:
+    """Test suite configuration.
+
+    Information about a single test suite to be executed.
+
+    Attributes:
+        test_suite: The name of the test suite module without the starting ``TestSuite_``.
+        test_cases: The names of test cases from this test suite to execute.
+            If empty, all test cases will be executed.
+    """
+
     test_suite: str
     test_cases: list[str]
 
@@ -224,6 +412,14 @@ class TestSuiteConfig:
     def from_dict(
         entry: str | TestSuiteConfigDict,
     ) -> "TestSuiteConfig":
+        """Create an instance from two different types.
+
+        Args:
+            entry: Either a suite name or a dictionary containing the config.
+
+        Returns:
+            The test suite configuration instance.
+        """
         if isinstance(entry, str):
             return TestSuiteConfig(test_suite=entry, test_cases=[])
         elif isinstance(entry, dict):
@@ -234,19 +430,49 @@ def from_dict(
 
 @dataclass(slots=True, frozen=True)
 class ExecutionConfiguration:
+    """The configuration of an execution.
+
+    The configuration contains testbed information, what tests to execute
+    and with what DPDK build.
+
+    Attributes:
+        build_targets: A list of DPDK builds to test.
+        perf: Whether to run performance tests.
+        func: Whether to run functional tests.
+        skip_smoke_tests: Whether to skip smoke tests.
+        test_suites: The names of test suites and/or test cases to execute.
+        system_under_test_node: The SUT node to use in this execution.
+        traffic_generator_node: The TG node to use in this execution.
+        vdevs: The names of virtual devices to test.
+    """
+
     build_targets: list[BuildTargetConfiguration]
     perf: bool
     func: bool
+    skip_smoke_tests: bool
     test_suites: list[TestSuiteConfig]
     system_under_test_node: SutNodeConfiguration
     traffic_generator_node: TGNodeConfiguration
     vdevs: list[str]
-    skip_smoke_tests: bool
 
     @staticmethod
     def from_dict(
-        d: dict, node_map: dict[str, Union[SutNodeConfiguration | TGNodeConfiguration]]
+        d: ExecutionConfigDict,
+        node_map: dict[str, Union[SutNodeConfiguration | TGNodeConfiguration]],
     ) -> "ExecutionConfiguration":
+        """A convenience method that processes the inputs before creating an instance.
+
+        The build target and the test suite config are transformed into their respective objects.
+        SUT and TG configurations are taken from `node_map`. The other (:class:`bool`) attributes
+        are just stored.
+
+        Args:
+            d: The configuration dictionary.
+            node_map: A dictionary mapping node names to their config objects.
+
+        Returns:
+            The execution configuration instance.
+        """
         build_targets: list[BuildTargetConfiguration] = list(
             map(BuildTargetConfiguration.from_dict, d["build_targets"])
         )
@@ -283,10 +509,31 @@ def from_dict(
 
 @dataclass(slots=True, frozen=True)
 class Configuration:
+    """DTS testbed and test configuration.
+
+    The node configuration is not stored in this object. Rather, all used node configurations
+    are stored inside the execution configuration where the nodes are actually used.
+
+    Attributes:
+        executions: Execution configurations.
+    """
+
     executions: list[ExecutionConfiguration]
 
     @staticmethod
-    def from_dict(d: dict) -> "Configuration":
+    def from_dict(d: ConfigurationDict) -> "Configuration":
+        """A convenience method that processes the inputs before creating an instance.
+
+        Build target and test suite configs are transformed into their respective objects.
+        SUT and TG configurations are taken from a node map constructed
+        from the node configurations in `d`.
+
+        Args:
+            d: The configuration dictionary.
+
+        Returns:
+            The whole configuration instance.
+        """
         nodes: list[Union[SutNodeConfiguration | TGNodeConfiguration]] = list(
             map(NodeConfiguration.from_dict, d["nodes"])
         )
@@ -303,9 +550,17 @@ def from_dict(d: dict) -> "Configuration":
 
 
 def load_config() -> Configuration:
-    """
-    Loads the configuration file and the configuration file schema,
-    validates the configuration file, and creates a configuration object.
+    """Load DTS test run configuration from a file.
+
+    Load the YAML test run configuration file
+    and :download:`the configuration file schema <conf_yaml_schema.json>`,
+    validate the test run configuration file, and create a test run configuration object.
+
+    The YAML test run configuration file is specified in the :option:`--config-file` command line
+    argument or the :envvar:`DTS_CFG_FILE` environment variable.
+
+    Returns:
+        The parsed test run configuration.
     """
     with open(SETTINGS.config_file_path, "r") as f:
         config_data = yaml.safe_load(f)
@@ -314,6 +569,6 @@ def load_config() -> Configuration:
 
     with open(schema_path, "r") as f:
         schema = json.load(f)
-    config: dict[str, Any] = warlock.model_factory(schema, name="_Config")(config_data)
-    config_obj: Configuration = Configuration.from_dict(dict(config))
+    config = warlock.model_factory(schema, name="_Config")(config_data)
+    config_obj: Configuration = Configuration.from_dict(dict(config))  # type: ignore[arg-type]
     return config_obj
diff --git a/dts/framework/config/types.py b/dts/framework/config/types.py
new file mode 100644
index 0000000000..1927910d88
--- /dev/null
+++ b/dts/framework/config/types.py
@@ -0,0 +1,132 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+"""Configuration dictionary contents specification.
+
+These type definitions serve as documentation of the configuration dictionary contents.
+
+The definitions use the built-in :class:`~typing.TypedDict` construct.
+"""
+
+from typing import TypedDict
+
+
+class PortConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    pci: str
+    #:
+    os_driver_for_dpdk: str
+    #:
+    os_driver: str
+    #:
+    peer_node: str
+    #:
+    peer_pci: str
+
+
+class TrafficGeneratorConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    type: str
+
+
+class HugepageConfigurationDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    amount: int
+    #:
+    force_first_numa: bool
+
+
+class NodeConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    hugepages: HugepageConfigurationDict
+    #:
+    name: str
+    #:
+    hostname: str
+    #:
+    user: str
+    #:
+    password: str
+    #:
+    arch: str
+    #:
+    os: str
+    #:
+    lcores: str
+    #:
+    use_first_core: bool
+    #:
+    ports: list[PortConfigDict]
+    #:
+    memory_channels: int
+    #:
+    traffic_generator: TrafficGeneratorConfigDict
+
+
+class BuildTargetConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    arch: str
+    #:
+    os: str
+    #:
+    cpu: str
+    #:
+    compiler: str
+    #:
+    compiler_wrapper: str
+
+
+class TestSuiteConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    suite: str
+    #:
+    cases: list[str]
+
+
+class ExecutionSUTConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    node_name: str
+    #:
+    vdevs: list[str]
+
+
+class ExecutionConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    build_targets: list[BuildTargetConfigDict]
+    #:
+    perf: bool
+    #:
+    func: bool
+    #:
+    skip_smoke_tests: bool
+    #:
+    test_suites: TestSuiteConfigDict
+    #:
+    system_under_test_node: ExecutionSUTConfigDict
+    #:
+    traffic_generator_node: str
+
+
+class ConfigurationDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    nodes: list[NodeConfigDict]
+    #:
+    executions: list[ExecutionConfigDict]
-- 
2.34.1
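
The TypedDicts in the new types module only constrain static type
checking; at runtime they are plain dictionaries. A small sketch
(the values are illustrative):

from framework.config.types import PortConfigDict

port: PortConfigDict = {
    "pci": "0000:00:08.0",
    "os_driver_for_dpdk": "vfio-pci",
    "os_driver": "i40e",
    "peer_node": "tg1",
    "peer_pci": "0000:00:08.0",
}
# Mypy checks the keys and value types; at runtime this is just a dict.
assert isinstance(port, dict)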


^ permalink raw reply	[flat|nested] 255+ messages in thread
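
The slots/frozen behavior called out in the module docstring can be
demonstrated with a stripped-down stand-in for the config dataclasses
(a toy class, not the framework's own):

from dataclasses import FrozenInstanceError, dataclass

@dataclass(slots=True, frozen=True)
class PortConfigSketch:
    node: str
    pci: str

port = PortConfigSketch(node="sut1", pci="0000:00:08.0")
try:
    port.pci = "0000:00:09.0"  # frozen: any assignment is rejected
except FrozenInstanceError:
    pass
# Slots pre-allocate storage for the declared attributes only,
# so the instance has no __dict__ to attach new attributes to.

Relatedly, TestSuiteConfig.from_dict accepts either form of the YAML
entry: a bare string runs all test cases of the suite, while a dict
selects specific ones, e.g. TestSuiteConfig.from_dict("hello_world")
versus TestSuiteConfig.from_dict({"suite": "hello_world", "cases":
["test_hello"]}).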

* [PATCH v8 11/21] dts: remote session docstring update
  2023-11-23 15:13               ` [PATCH v8 00/21] dts: docstrings update Juraj Linkeš
                                   ` (9 preceding siblings ...)
  2023-11-23 15:13                 ` [PATCH v8 10/21] dts: config " Juraj Linkeš
@ 2023-11-23 15:13                 ` Juraj Linkeš
  2023-11-23 15:13                 ` [PATCH v8 12/21] dts: interactive " Juraj Linkeš
                                   ` (11 subsequent siblings)
  22 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-23 15:13 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/remote_session/__init__.py      |  39 +++++-
 .../remote_session/remote_session.py          | 130 +++++++++++++-----
 dts/framework/remote_session/ssh_session.py   |  16 +--
 3 files changed, 137 insertions(+), 48 deletions(-)

diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index 5e7ddb2b05..51a01d6b5e 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -2,12 +2,14 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
-"""
-The package provides modules for managing remote connections to a remote host (node),
-differentiated by OS.
-The package provides a factory function, create_session, that returns the appropriate
-remote connection based on the passed configuration. The differences are in the
-underlying transport protocol (e.g. SSH) and remote OS (e.g. Linux).
+"""Remote interactive and non-interactive sessions.
+
+This package provides modules for managing remote connections to a remote host (node).
+
+The non-interactive sessions send commands and return their output and exit code.
+
+The interactive sessions open an interactive shell which stays open,
+allowing data to be sent and received within that particular shell.
 """
 
 # pylama:ignore=W0611
@@ -26,10 +28,35 @@
 def create_remote_session(
     node_config: NodeConfiguration, name: str, logger: DTSLOG
 ) -> RemoteSession:
+    """Factory for non-interactive remote sessions.
+
+    The function returns an SSH session, but will be extended if support
+    for other protocols is added.
+
+    Args:
+        node_config: The test run configuration of the node to connect to.
+        name: The name of the session.
+        logger: The logger instance this session will use.
+
+    Returns:
+        The SSH remote session.
+    """
     return SSHSession(node_config, name, logger)
 
 
 def create_interactive_session(
     node_config: NodeConfiguration, logger: DTSLOG
 ) -> InteractiveRemoteSession:
+    """Factory for interactive remote sessions.
+
+    The function returns an interactive SSH session, but will be extended if support
+    for other protocols is added.
+
+    Args:
+        node_config: The test run configuration of the node to connect to.
+        logger: The logger instance this session will use.
+
+    Returns:
+        The interactive SSH remote session.
+    """
     return InteractiveRemoteSession(node_config, logger)
diff --git a/dts/framework/remote_session/remote_session.py b/dts/framework/remote_session/remote_session.py
index 719f7d1ef7..2059f9a981 100644
--- a/dts/framework/remote_session/remote_session.py
+++ b/dts/framework/remote_session/remote_session.py
@@ -3,6 +3,13 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
+"""Base remote session.
+
+This module contains the abstract base class for remote sessions and defines
+the structure of the result of a command execution.
+"""
+
+
 import dataclasses
 from abc import ABC, abstractmethod
 from pathlib import PurePath
@@ -15,8 +22,14 @@
 
 @dataclasses.dataclass(slots=True, frozen=True)
 class CommandResult:
-    """
-    The result of remote execution of a command.
+    """The result of remote execution of a command.
+
+    Attributes:
+        name: The name of the session that executed the command.
+        command: The executed command.
+        stdout: The standard output the command produced.
+        stderr: The standard error output the command produced.
+        return_code: The return code the command exited with.
     """
 
     name: str
@@ -26,6 +39,7 @@ class CommandResult:
     return_code: int
 
     def __str__(self) -> str:
+        """Format the command outputs."""
         return (
             f"stdout: '{self.stdout}'\n"
             f"stderr: '{self.stderr}'\n"
@@ -34,13 +48,24 @@ def __str__(self) -> str:
 
 
 class RemoteSession(ABC):
-    """
-    The base class for defining which methods must be implemented in order to connect
-    to a remote host (node) and maintain a remote session. The derived classes are
-    supposed to implement/use some underlying transport protocol (e.g. SSH) to
-    implement the methods. On top of that, it provides some basic services common to
-    all derived classes, such as keeping history and logging what's being executed
-    on the remote node.
+    """Non-interactive remote session.
+
+    The abstract methods must be implemented in order to connect to a remote host (node)
+    and maintain a remote session.
+    The subclasses must use (or implement) some underlying transport protocol (e.g. SSH)
+    to implement the methods. On top of that, it provides some basic services common to all
+    subclasses, such as keeping history and logging what's being executed on the remote node.
+
+    Attributes:
+        name: The name of the session.
+        hostname: The node's hostname. Could be an IP (possibly with port, separated by a colon)
+            or a domain name.
+        ip: The IP address of the node or a domain name, whichever was used in `hostname`.
+        port: The port of the node, if given in `hostname`.
+        username: The username used in the connection.
+        password: The password used in the connection. Most frequently empty,
+            as the use of passwords is discouraged.
+        history: The commands executed during this session.
     """
 
     name: str
@@ -59,6 +84,16 @@ def __init__(
         session_name: str,
         logger: DTSLOG,
     ):
+        """Connect to the node during initialization.
+
+        Args:
+            node_config: The test run configuration of the node to connect to.
+            session_name: The name of the session.
+            logger: The logger instance this session will use.
+
+        Raises:
+            SSHConnectionError: If the connection to the node was not successful.
+        """
         self._node_config = node_config
 
         self.name = session_name
@@ -79,8 +114,13 @@ def __init__(
 
     @abstractmethod
     def _connect(self) -> None:
-        """
-        Create connection to assigned node.
+        """Create a connection to the node.
+
+        The implementation must assign the established session to self.session.
+
+        The implementation must catch all exceptions and convert them to an SSHConnectionError.
+
+        The implementation may optionally implement retry attempts.
         """
 
     def send_command(
@@ -90,11 +130,24 @@ def send_command(
         verify: bool = False,
         env: dict | None = None,
     ) -> CommandResult:
-        """
-        Send a command to the connected node using optional env vars
-        and return CommandResult.
-        If verify is True, check the return code of the executed command
-        and raise a RemoteCommandExecutionError if the command failed.
+        """Send `command` to the connected node.
+
+        The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT`
+        environment variable configure the timeout of command execution.
+
+        Args:
+            command: The command to execute.
+            timeout: Wait at most this long in seconds for `command` execution to complete.
+            verify: If :data:`True`, will check the exit code of `command`.
+            env: A dictionary with environment variables to be used with `command` execution.
+
+        Raises:
+            SSHSessionDeadError: If the session isn't alive when sending `command`.
+            SSHTimeoutError: If `command` execution timed out.
+            RemoteCommandExecutionError: If `verify` is :data:`True` and `command` execution failed.
+
+        Returns:
+            The output of the command along with the return code.
         """
         self._logger.info(f"Sending: '{command}'" + (f" with env vars: '{env}'" if env else ""))
         result = self._send_command(command, timeout, env)
@@ -111,29 +164,38 @@ def send_command(
 
     @abstractmethod
     def _send_command(self, command: str, timeout: float, env: dict | None) -> CommandResult:
-        """
-        Use the underlying protocol to execute the command using optional env vars
-        and return CommandResult.
+        """Send a command to the connected node.
+
+        The implementation must execute the command remotely with `env` environment variables
+        and return the result.
+
+        The implementation must catch all exceptions and raise:
+
+            * SSHSessionDeadError if the session is not alive,
+            * SSHTimeoutError if the command execution times out.
         """
 
     def close(self, force: bool = False) -> None:
-        """
-        Close the remote session and free all used resources.
+        """Close the remote session and free all used resources.
+
+        Args:
+            force: Force the closure of the connection. This may not clean up all resources.
         """
         self._logger.logger_exit()
         self._close(force)
 
     @abstractmethod
     def _close(self, force: bool = False) -> None:
-        """
-        Execute protocol specific steps needed to close the session properly.
+        """Protocol specific steps needed to close the session properly.
+
+        Args:
+            force: Force the closure of the connection. This may not clean up all resources.
+                This doesn't have to be implemented in the overloaded method.
         """
 
     @abstractmethod
     def is_alive(self) -> bool:
-        """
-        Check whether the remote session is still responding.
-        """
+        """Check whether the remote session is still responding."""
 
     @abstractmethod
     def copy_from(
@@ -143,12 +205,12 @@ def copy_from(
     ) -> None:
         """Copy a file from the remote Node to the local filesystem.
 
-        Copy source_file from the remote Node associated with this remote
-        session to destination_file on the local filesystem.
+        Copy `source_file` from the remote Node associated with this remote session
+        to `destination_file` on the local filesystem.
 
         Args:
-            source_file: the file on the remote Node.
-            destination_file: a file or directory path on the local filesystem.
+            source_file: The file on the remote Node.
+            destination_file: A file or directory path on the local filesystem.
         """
 
     @abstractmethod
@@ -159,10 +221,10 @@ def copy_to(
     ) -> None:
         """Copy a file from local filesystem to the remote Node.
 
-        Copy source_file from local filesystem to destination_file
-        on the remote Node associated with this remote session.
+        Copy `source_file` from local filesystem to `destination_file` on the remote Node
+        associated with this remote session.
 
         Args:
-            source_file: the file on the local filesystem.
-            destination_file: a file or directory path on the remote Node.
+            source_file: The file on the local filesystem.
+            destination_file: A file or directory path on the remote Node.
         """
diff --git a/dts/framework/remote_session/ssh_session.py b/dts/framework/remote_session/ssh_session.py
index a467033a13..782220092c 100644
--- a/dts/framework/remote_session/ssh_session.py
+++ b/dts/framework/remote_session/ssh_session.py
@@ -1,6 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""SSH remote session."""
+
 import socket
 import traceback
 from pathlib import PurePath
@@ -26,13 +28,8 @@
 class SSHSession(RemoteSession):
     """A persistent SSH connection to a remote Node.
 
-    The connection is implemented with the Fabric Python library.
-
-    Args:
-        node_config: The configuration of the Node to connect to.
-        session_name: The name of the session.
-        logger: The logger used for logging.
-            This should be passed from the parent OSSession.
+    The connection is implemented with
+    `the Fabric Python library <https://docs.fabfile.org/en/latest/>`_.
 
     Attributes:
         session: The underlying Fabric SSH connection.
@@ -78,6 +75,7 @@ def _connect(self) -> None:
             raise SSHConnectionError(self.hostname, errors)
 
     def is_alive(self) -> bool:
+        """Overrides :meth:`~.remote_session.RemoteSession.is_alive`."""
         return self.session.is_connected
 
     def _send_command(self, command: str, timeout: float, env: dict | None) -> CommandResult:
@@ -85,7 +83,7 @@ def _send_command(self, command: str, timeout: float, env: dict | None) -> Comma
 
         Args:
             command: The command to execute.
-            timeout: Wait at most this many seconds for the execution to complete.
+            timeout: Wait at most this long in seconds for the command execution to complete.
             env: Extra environment variables that will be used in command execution.
 
         Raises:
@@ -110,6 +108,7 @@ def copy_from(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
+        """Overrides :meth:`~.remote_session.RemoteSession.copy_from`."""
         self.session.get(str(destination_file), str(source_file))
 
     def copy_to(
@@ -117,6 +116,7 @@ def copy_to(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
+        """Overrides :meth:`~.remote_session.RemoteSession.copy_to`."""
         self.session.put(str(source_file), str(destination_file))
 
     def _close(self, force: bool = False) -> None:
-- 
2.34.1
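
To make the `_send_command` contract above concrete, here is a minimal sketch, assuming
Fabric-style run() semantics, of how a subclass might satisfy it. The CommandResult,
SSHSessionDeadError and SSHTimeoutError names come from the framework; everything else
(the `session` attribute, the library's CommandTimedOut error) is an assumption for
illustration, not the actual implementation:

    def _send_command(self, command: str, timeout: float, env: dict | None) -> CommandResult:
        """Hypothetical override honoring the documented contract."""
        try:
            # Execute remotely with the given env vars; warn=True defers
            # exit-code verification to the caller (send_command).
            output = self.session.run(command, env=env, warn=True, hide=True, timeout=timeout)
        except CommandTimedOut as e:  # the underlying library's timeout error
            raise SSHTimeoutError(command) from e
        except Exception as e:  # the contract: catch everything else
            if not self.is_alive():
                raise SSHSessionDeadError(self.hostname) from e
            raise
        return CommandResult(self.name, command, output.stdout, output.stderr, output.return_code)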


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v8 12/21] dts: interactive remote session docstring update
  2023-11-23 15:13               ` [PATCH v8 00/21] dts: docstrings update Juraj Linkeš
                                   ` (10 preceding siblings ...)
  2023-11-23 15:13                 ` [PATCH v8 11/21] dts: remote session " Juraj Linkeš
@ 2023-11-23 15:13                 ` Juraj Linkeš
  2023-11-30 21:49                   ` Jeremy Spewock
  2023-11-23 15:13                 ` [PATCH v8 13/21] dts: port and virtual device " Juraj Linkeš
                                   ` (10 subsequent siblings)
  22 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-23 15:13 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 .../interactive_remote_session.py             | 36 +++----
 .../remote_session/interactive_shell.py       | 99 +++++++++++--------
 dts/framework/remote_session/python_shell.py  | 26 ++++-
 dts/framework/remote_session/testpmd_shell.py | 58 +++++++++--
 4 files changed, 149 insertions(+), 70 deletions(-)

diff --git a/dts/framework/remote_session/interactive_remote_session.py b/dts/framework/remote_session/interactive_remote_session.py
index 098ded1bb0..1cc82e3377 100644
--- a/dts/framework/remote_session/interactive_remote_session.py
+++ b/dts/framework/remote_session/interactive_remote_session.py
@@ -22,27 +22,23 @@
 class InteractiveRemoteSession:
     """SSH connection dedicated to interactive applications.
 
-    This connection is created using paramiko and is a persistent connection to the
-    host. This class defines methods for connecting to the node and configures this
-    connection to send "keep alive" packets every 30 seconds. Because paramiko attempts
-    to use SSH keys to establish a connection first, providing a password is optional.
-    This session is utilized by InteractiveShells and cannot be interacted with
-    directly.
-
-    Arguments:
-        node_config: Configuration class for the node you are connecting to.
-        _logger: Desired logger for this session to use.
+    The connection is created using `paramiko <https://docs.paramiko.org/en/latest/>`_
+    and is a persistent connection to the host. This class defines the methods for connecting
+    to the node and configures the connection to send "keep alive" packets every 30 seconds.
+    Because paramiko attempts to use SSH keys to establish a connection first, providing
+    a password is optional. This session is utilized by InteractiveShells
+    and cannot be interacted with directly.
 
     Attributes:
-        hostname: Hostname that will be used to initialize a connection to the node.
-        ip: A subsection of hostname that removes the port for the connection if there
+        hostname: The hostname that will be used to initialize a connection to the node.
+        ip: A subsection of `hostname` that removes the port for the connection if there
             is one. If there is no port, this will be the same as hostname.
-        port: Port to use for the ssh connection. This will be extracted from the
-            hostname if there is a port included, otherwise it will default to 22.
+        port: Port to use for the SSH connection. This will be extracted from `hostname`
+            if there is a port included, otherwise it will default to ``22``.
         username: User to connect to the node with.
         password: Password of the user connecting to the host. This will default to an
             empty string if a password is not provided.
-        session: Underlying paramiko connection.
+        session: The underlying paramiko connection.
 
     Raises:
         SSHConnectionError: There is an error creating the SSH connection.
@@ -58,9 +54,15 @@ class InteractiveRemoteSession:
     _node_config: NodeConfiguration
     _transport: Transport | None
 
-    def __init__(self, node_config: NodeConfiguration, _logger: DTSLOG) -> None:
+    def __init__(self, node_config: NodeConfiguration, logger: DTSLOG) -> None:
+        """Connect to the node during initialization.
+
+        Args:
+            node_config: The test run configuration of the node to connect to.
+            logger: The logger instance this session will use.
+        """
         self._node_config = node_config
-        self._logger = _logger
+        self._logger = logger
         self.hostname = node_config.hostname
         self.username = node_config.user
         self.password = node_config.password if node_config.password else ""
diff --git a/dts/framework/remote_session/interactive_shell.py b/dts/framework/remote_session/interactive_shell.py
index 4db19fb9b3..b158f963b6 100644
--- a/dts/framework/remote_session/interactive_shell.py
+++ b/dts/framework/remote_session/interactive_shell.py
@@ -3,18 +3,20 @@
 
 """Common functionality for interactive shell handling.
 
-This base class, InteractiveShell, is meant to be extended by other classes that
-contain functionality specific to that shell type. These derived classes will often
-modify things like the prompt to expect or the arguments to pass into the application,
-but still utilize the same method for sending a command and collecting output. How
-this output is handled however is often application specific. If an application needs
-elevated privileges to start it is expected that the method for gaining those
-privileges is provided when initializing the class.
+The base class, :class:`InteractiveShell`, is meant to be extended by subclasses that contain
+functionality specific to that shell type. These subclasses will often modify things like
+the prompt to expect or the arguments to pass into the application, but still utilize
+the same method for sending a command and collecting output. How this output is handled however
+is often application specific. If an application needs elevated privileges to start, it is expected
+that the method for gaining those privileges is provided when initializing the class.
+
+The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT`
+environment variable configure the timeout of getting the output from command execution.
 """
 
 from abc import ABC
 from pathlib import PurePath
-from typing import Callable
+from typing import Callable, ClassVar
 
 from paramiko import Channel, SSHClient, channel  # type: ignore[import]
 
@@ -30,28 +32,6 @@ class InteractiveShell(ABC):
     and collecting input until reaching a certain prompt. All interactive applications
     will use the same SSH connection, but each will create their own channel on that
     session.
-
-    Arguments:
-        interactive_session: The SSH session dedicated to interactive shells.
-        logger: Logger used for displaying information in the console.
-        get_privileged_command: Method for modifying a command to allow it to use
-            elevated privileges. If this is None, the application will not be started
-            with elevated privileges.
-        app_args: Command line arguments to be passed to the application on startup.
-        timeout: Timeout used for the SSH channel that is dedicated to this interactive
-            shell. This timeout is for collecting output, so if reading from the buffer
-            and no output is gathered within the timeout, an exception is thrown.
-
-    Attributes
-        _default_prompt: Prompt to expect at the end of output when sending a command.
-            This is often overridden by derived classes.
-        _command_extra_chars: Extra characters to add to the end of every command
-            before sending them. This is often overridden by derived classes and is
-            most commonly an additional newline character.
-        path: Path to the executable to start the interactive application.
-        dpdk_app: Whether this application is a DPDK app. If it is, the build
-            directory for DPDK on the node will be prepended to the path to the
-            executable.
     """
 
     _interactive_session: SSHClient
@@ -61,10 +41,22 @@ class InteractiveShell(ABC):
     _logger: DTSLOG
     _timeout: float
     _app_args: str
-    _default_prompt: str = ""
-    _command_extra_chars: str = ""
-    path: PurePath
-    dpdk_app: bool = False
+
+    #: Prompt to expect at the end of output when sending a command.
+    #: This is often overridden by subclasses.
+    _default_prompt: ClassVar[str] = ""
+
+    #: Extra characters to add to the end of every command
+    #: before sending them. This is often overridden by subclasses and is
+    #: most commonly an additional newline character.
+    _command_extra_chars: ClassVar[str] = ""
+
+    #: Path to the executable to start the interactive application.
+    path: ClassVar[PurePath]
+
+    #: Whether this application is a DPDK app. If it is, the build directory
+    #: for DPDK on the node will be prepended to the path to the executable.
+    dpdk_app: ClassVar[bool] = False
 
     def __init__(
         self,
@@ -74,6 +66,19 @@ def __init__(
         app_args: str = "",
         timeout: float = SETTINGS.timeout,
     ) -> None:
+        """Create an SSH channel during initialization.
+
+        Args:
+            interactive_session: The SSH session dedicated to interactive shells.
+            logger: The logger instance this session will use.
+            get_privileged_command: A method for modifying a command to allow it to use
+                elevated privileges. If :data:`None`, the application will not be started
+                with elevated privileges.
+            app_args: The command line arguments to be passed to the application on startup.
+            timeout: The timeout used for the SSH channel that is dedicated to this interactive
+                shell. This timeout is for collecting output, so if no output is gathered
+                from the buffer within the timeout, an exception is raised.
+        """
         self._interactive_session = interactive_session
         self._ssh_channel = self._interactive_session.invoke_shell()
         self._stdin = self._ssh_channel.makefile_stdin("w")
@@ -90,6 +95,10 @@ def _start_application(self, get_privileged_command: Callable[[str], str] | None
 
         This method is often overridden by subclasses as their process for
         starting may look different.
+
+        Args:
+            get_privileged_command: A callable that produces the version of the command
+                with elevated privileges.
         """
         start_command = f"{self.path} {self._app_args}"
         if get_privileged_command is not None:
@@ -97,16 +106,24 @@ def _start_application(self, get_privileged_command: Callable[[str], str] | None
         self.send_command(start_command)
 
     def send_command(self, command: str, prompt: str | None = None) -> str:
-        """Send a command and get all output before the expected ending string.
+        """Send `command` and get all output before the expected ending string.
 
         Lines that expect input are not included in the stdout buffer, so they cannot
-        be used for expect. For example, if you were prompted to log into something
-        with a username and password, you cannot expect "username:" because it won't
-        yet be in the stdout buffer. A workaround for this could be consuming an
-        extra newline character to force the current prompt into the stdout buffer.
+        be used for expect.
+
+        Example:
+            If you were prompted to log into something with a username and password,
+            you cannot expect ``username:`` because it won't yet be in the stdout buffer.
+            A workaround for this could be consuming an extra newline character to force
+            the current `prompt` into the stdout buffer.
+
+        Args:
+            command: The command to send.
+            prompt: After sending the command, `send_command` will expect this string.
+                If :data:`None`, will use the class's default prompt.
 
         Returns:
-            All output in the buffer before expected string
+            All output in the buffer before the expected string.
         """
         self._logger.info(f"Sending: '{command}'")
         if prompt is None:
@@ -124,8 +141,10 @@ def send_command(self, command: str, prompt: str | None = None) -> str:
         return out
 
     def close(self) -> None:
+        """Properly free all resources."""
         self._stdin.close()
         self._ssh_channel.close()
 
     def __del__(self) -> None:
+        """Make sure the session is properly closed before deleting the object."""
         self.close()
diff --git a/dts/framework/remote_session/python_shell.py b/dts/framework/remote_session/python_shell.py
index cc3ad48a68..ccfd3783e8 100644
--- a/dts/framework/remote_session/python_shell.py
+++ b/dts/framework/remote_session/python_shell.py
@@ -1,12 +1,32 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""Python interactive shell.
+
+Typical usage example in a TestSuite::
+
+    from framework.remote_session import PythonShell
+    python_shell = self.tg_node.create_interactive_shell(
+        PythonShell, timeout=5, privileged=True
+    )
+    python_shell.send_command("print('Hello World')")
+    python_shell.close()
+"""
+
 from pathlib import PurePath
+from typing import ClassVar
 
 from .interactive_shell import InteractiveShell
 
 
 class PythonShell(InteractiveShell):
-    _default_prompt: str = ">>>"
-    _command_extra_chars: str = "\n"
-    path: PurePath = PurePath("python3")
+    """Python interactive shell."""
+
+    #: Python's prompt.
+    _default_prompt: ClassVar[str] = ">>>"
+
+    #: This forces the prompt to appear after sending a command.
+    _command_extra_chars: ClassVar[str] = "\n"
+
+    #: The Python executable.
+    path: ClassVar[PurePath] = PurePath("python3")
diff --git a/dts/framework/remote_session/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py
index 08ac311016..79481e845c 100644
--- a/dts/framework/remote_session/testpmd_shell.py
+++ b/dts/framework/remote_session/testpmd_shell.py
@@ -1,41 +1,79 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 University of New Hampshire
 
+"""Testpmd interactive shell.
+
+Typical usage example in a TestSuite::
+
+    testpmd_shell = self.sut_node.create_interactive_shell(
+        TestPmdShell, privileged=True
+    )
+    devices = testpmd_shell.get_devices()
+    for device in devices:
+        print(device)
+    testpmd_shell.close()
+"""
+
 from pathlib import PurePath
-from typing import Callable
+from typing import Callable, ClassVar
 
 from .interactive_shell import InteractiveShell
 
 
 class TestPmdDevice(object):
+    """The data of a device that testpmd can recognize.
+
+    Attributes:
+        pci_address: The PCI address of the device.
+    """
+
     pci_address: str
 
     def __init__(self, pci_address_line: str):
+        """Initialize the device from the testpmd output line string.
+
+        Args:
+            pci_address_line: A line of testpmd output that contains a device.
+        """
         self.pci_address = pci_address_line.strip().split(": ")[1].strip()
 
     def __str__(self) -> str:
+        """The PCI address captures what the device is."""
         return self.pci_address
 
 
 class TestPmdShell(InteractiveShell):
-    path: PurePath = PurePath("app", "dpdk-testpmd")
-    dpdk_app: bool = True
-    _default_prompt: str = "testpmd>"
-    _command_extra_chars: str = "\n"  # We want to append an extra newline to every command
+    """Testpmd interactive shell.
+
+    Users of the testpmd shell should never use
+    the :meth:`~.interactive_shell.InteractiveShell.send_command` method directly, but rather
+    call specialized methods. If there isn't one that satisfies a need, it should be added.
+    """
+
+    #: The path to the testpmd executable.
+    path: ClassVar[PurePath] = PurePath("app", "dpdk-testpmd")
+
+    #: Flag this as a DPDK app so that it's clear this is not a system app and
+    #: needs to be looked up in a specific path.
+    dpdk_app: ClassVar[bool] = True
+
+    #: The testpmd's prompt.
+    _default_prompt: ClassVar[str] = "testpmd>"
+
+    #: This forces the prompt to appear after sending a command.
+    _command_extra_chars: ClassVar[str] = "\n"
 
     def _start_application(self, get_privileged_command: Callable[[str], str] | None) -> None:
-        """See "_start_application" in InteractiveShell."""
         self._app_args += " -- -i"
         super()._start_application(get_privileged_command)
 
     def get_devices(self) -> list[TestPmdDevice]:
-        """Get a list of device names that are known to testpmd
+        """Get a list of device names that are known to testpmd.
 
-        Uses the device info listed in testpmd and then parses the output to
-        return only the names of the devices.
+        Uses the device info listed in testpmd and then parses the output.
 
         Returns:
-            A list of strings representing device names (e.g. 0000:14:00.1)
+            A list of devices.
         """
         dev_info: str = self.send_command("show device info all")
         dev_list: list[TestPmdDevice] = []
-- 
2.34.1
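
Since the patch turns the shell specifics into documented class variables, adding a new
shell becomes mostly declarative. A minimal sketch, assuming the same layout as
PythonShell above (GdbShell and its prompt are hypothetical examples, not part of the
patch):

    from pathlib import PurePath
    from typing import ClassVar

    from framework.remote_session.interactive_shell import InteractiveShell


    class GdbShell(InteractiveShell):
        """GDB interactive shell."""

        #: GDB's prompt.
        _default_prompt: ClassVar[str] = "(gdb)"

        #: This forces the prompt to appear after sending a command.
        _command_extra_chars: ClassVar[str] = "\n"

        #: The GDB executable.
        path: ClassVar[PurePath] = PurePath("gdb")

It would be used the same way as the documented shells, e.g.
self.sut_node.create_interactive_shell(GdbShell, timeout=5, privileged=True).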


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v8 13/21] dts: port and virtual device docstring update
  2023-11-23 15:13               ` [PATCH v8 00/21] dts: docstrings update Juraj Linkeš
                                   ` (11 preceding siblings ...)
  2023-11-23 15:13                 ` [PATCH v8 12/21] dts: interactive " Juraj Linkeš
@ 2023-11-23 15:13                 ` Juraj Linkeš
  2023-11-23 15:13                 ` [PATCH v8 14/21] dts: cpu " Juraj Linkeš
                                   ` (9 subsequent siblings)
  22 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-23 15:13 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/__init__.py       | 17 ++++--
 dts/framework/testbed_model/port.py           | 53 +++++++++++++++----
 dts/framework/testbed_model/virtual_device.py | 17 +++++-
 3 files changed, 72 insertions(+), 15 deletions(-)

diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index 8ced05653b..6086512ca2 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -2,9 +2,20 @@
 # Copyright(c) 2022-2023 University of New Hampshire
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
-This package contains the classes used to model the physical traffic generator,
-system under test and any other components that need to be interacted with.
+"""Testbed modelling.
+
+This package defines the testbed elements DTS works with:
+
+    * A system under test node: :class:`~.sut_node.SutNode`,
+    * A traffic generator node: :class:`~.tg_node.TGNode`,
+    * The ports of network interface cards (NICs) present on nodes: :class:`~.port.Port`,
+    * The logical cores of CPUs present on nodes: :class:`~.cpu.LogicalCore`,
+    * The virtual devices that can be created on nodes: :class:`~.virtual_device.VirtualDevice`,
+    * The operating systems running on nodes: :class:`~.linux_session.LinuxSession`
+      and :class:`~.posix_session.PosixSession`.
+
+DTS needs to be able to connect to nodes and understand some of the hardware present on these nodes
+to properly build and test DPDK.
 """
 
 # pylama:ignore=W0611
diff --git a/dts/framework/testbed_model/port.py b/dts/framework/testbed_model/port.py
index 680c29bfe3..817405bea4 100644
--- a/dts/framework/testbed_model/port.py
+++ b/dts/framework/testbed_model/port.py
@@ -2,6 +2,13 @@
 # Copyright(c) 2022 University of New Hampshire
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""NIC port model.
+
+Basic port information, such as location (ports are identified by their PCI address on a node),
+drivers and address.
+"""
+
+
 from dataclasses import dataclass
 
 from framework.config import PortConfig
@@ -9,24 +16,35 @@
 
 @dataclass(slots=True, frozen=True)
 class PortIdentifier:
+    """The port identifier.
+
+    Attributes:
+        node: The node where the port resides.
+        pci: The PCI address of the port on `node`.
+    """
+
     node: str
     pci: str
 
 
 @dataclass(slots=True)
 class Port:
-    """
-    identifier: The PCI address of the port on a node.
-
-    os_driver: The driver used by this port when the OS is controlling it.
-        Example: i40e
-    os_driver_for_dpdk: The driver the device must be bound to for DPDK to use it,
-        Example: vfio-pci.
+    """Physical port on a node.
 
-    Note: os_driver and os_driver_for_dpdk may be the same thing.
-        Example: mlx5_core
+    The ports are identified by the node they're on and their PCI addresses. The port on the other
+    side of the connection is also captured here.
+    Each port is serviced by a driver, which may be different for the operating system (`os_driver`)
+    and for DPDK (`os_driver_for_dpdk`). For some devices, they are the same, e.g.: ``mlx5_core``.
 
-    peer: The identifier of a port this port is connected with.
+    Attributes:
+        identifier: The node and PCI address identifying the port.
+        os_driver: The operating system driver name when the operating system controls the port,
+            e.g.: ``i40e``.
+        os_driver_for_dpdk: The operating system driver name for use with DPDK, e.g.: ``vfio-pci``.
+        peer: The identifier of a port this port is connected with.
+            The `peer` is on a different node.
+        mac_address: The MAC address of the port.
+        logical_name: The logical name of the port. Must be discovered.
     """
 
     identifier: PortIdentifier
@@ -37,6 +55,12 @@ class Port:
     logical_name: str = ""
 
     def __init__(self, node_name: str, config: PortConfig):
+        """Initialize the port from `node_name` and `config`.
+
+        Args:
+            node_name: The name of the port's node.
+            config: The test run configuration of the port.
+        """
         self.identifier = PortIdentifier(
             node=node_name,
             pci=config.pci,
@@ -47,14 +71,23 @@ def __init__(self, node_name: str, config: PortConfig):
 
     @property
     def node(self) -> str:
+        """The node where the port resides."""
         return self.identifier.node
 
     @property
     def pci(self) -> str:
+        """The PCI address of the port."""
         return self.identifier.pci
 
 
 @dataclass(slots=True, frozen=True)
 class PortLink:
+    """The physical, cabled connection between the ports.
+
+    Attributes:
+        sut_port: The port on the SUT node connected to `tg_port`.
+        tg_port: The port on the TG node connected to `sut_port`.
+    """
+
     sut_port: Port
     tg_port: Port
diff --git a/dts/framework/testbed_model/virtual_device.py b/dts/framework/testbed_model/virtual_device.py
index eb664d9f17..e9b5e9c3be 100644
--- a/dts/framework/testbed_model/virtual_device.py
+++ b/dts/framework/testbed_model/virtual_device.py
@@ -1,16 +1,29 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""Virtual devices model.
+
+Alongside support for physical hardware, DPDK can create various virtual devices.
+"""
+
 
 class VirtualDevice(object):
-    """
-    Base class for virtual devices used by DPDK.
+    """Base class for virtual devices used by DPDK.
+
+    Attributes:
+        name: The name of the virtual device.
     """
 
     name: str
 
     def __init__(self, name: str):
+        """Initialize the virtual device.
+
+        Args:
+            name: The name of the virtual device.
+        """
         self.name = name
 
     def __str__(self) -> str:
+        """This corresponds to the name used for DPDK devices."""
         return self.name
-- 
2.34.1
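
As a small illustration of the identifiers the patch documents: the frozen dataclass
makes ports comparable and hashable by node and PCI address. A sketch (the values below
are made up; this is not part of the patch):

    from framework.testbed_model.port import PortIdentifier

    # The same PCI address on two different nodes names two different ports.
    sut_id = PortIdentifier(node="sut1", pci="0000:00:08.0")
    tg_id = PortIdentifier(node="tg1", pci="0000:00:08.0")
    assert sut_id != tg_id

    # A Port's `peer` refers to the identifier on the other side of the cable,
    # which PortLink captures as the sut_port/tg_port pair.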


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v8 14/21] dts: cpu docstring update
  2023-11-23 15:13               ` [PATCH v8 00/21] dts: docstrings update Juraj Linkeš
                                   ` (12 preceding siblings ...)
  2023-11-23 15:13                 ` [PATCH v8 13/21] dts: port and virtual device " Juraj Linkeš
@ 2023-11-23 15:13                 ` Juraj Linkeš
  2023-11-23 15:13                 ` [PATCH v8 15/21] dts: os session " Juraj Linkeš
                                   ` (8 subsequent siblings)
  22 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-23 15:13 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/cpu.py | 196 +++++++++++++++++++++--------
 1 file changed, 144 insertions(+), 52 deletions(-)

diff --git a/dts/framework/testbed_model/cpu.py b/dts/framework/testbed_model/cpu.py
index 1b392689f5..9e33b2825d 100644
--- a/dts/framework/testbed_model/cpu.py
+++ b/dts/framework/testbed_model/cpu.py
@@ -1,6 +1,22 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""CPU core representation and filtering.
+
+This module provides a unified representation of logical CPU cores along
+with filtering capabilities.
+
+When simultaneous multithreading (SMT, also called hyper-threading) is enabled on a server,
+the physical CPU cores are split into logical CPU cores with different IDs.
+
+:class:`LogicalCoreCountFilter` filters by the number of logical cores. It's possible to specify
+the socket from which to filter the number of logical cores. It's also possible to not use all
+logical CPU cores from each physical core (e.g. only the first logical core of each physical core).
+
+:class:`LogicalCoreListFilter` filters by logical core IDs. This mostly checks that
+the logical cores are actually present on the server.
+"""
+
 import dataclasses
 from abc import ABC, abstractmethod
 from collections.abc import Iterable, ValuesView
@@ -11,9 +27,17 @@
 
 @dataclass(slots=True, frozen=True)
 class LogicalCore(object):
-    """
-    Representation of a CPU core. A physical core is represented in OS
-    by multiple logical cores (lcores) if CPU multithreading is enabled.
+    """Representation of a logical CPU core.
+
+    A physical core is represented in the OS by multiple logical cores (lcores)
+    if CPU multithreading is enabled. When multithreading is disabled, their IDs are the same.
+
+    Attributes:
+        lcore: The logical core ID of a CPU core. It's the same as `core` with
+            disabled multithreading.
+        core: The physical core ID of a CPU core.
+        socket: The physical socket ID where the CPU resides.
+        node: The NUMA node ID where the CPU resides.
     """
 
     lcore: int
@@ -22,27 +46,36 @@ class LogicalCore(object):
     node: int
 
     def __int__(self) -> int:
+        """The CPU is best represented by the logical core, as that's what we configure in EAL."""
         return self.lcore
 
 
 class LogicalCoreList(object):
-    """
-    Convert these options into a list of logical core ids.
-    lcore_list=[LogicalCore1, LogicalCore2] - a list of LogicalCores
-    lcore_list=[0,1,2,3] - a list of int indices
-    lcore_list=['0','1','2-3'] - a list of str indices; ranges are supported
-    lcore_list='0,1,2-3' - a comma delimited str of indices; ranges are supported
-
-    The class creates a unified format used across the framework and allows
-    the user to use either a str representation (using str(instance) or directly
-    in f-strings) or a list representation (by accessing instance.lcore_list).
-    Empty lcore_list is allowed.
+    r"""A unified way to store :class:`LogicalCore`\s.
+
+    Create a unified format used across the framework and allow the user to use
+    either a :class:`str` representation (using ``str(instance)`` or directly in f-strings)
+    or a :class:`list` representation (by accessing the `lcore_list` property,
+    which stores logical core IDs).
     """
 
     _lcore_list: list[int]
     _lcore_str: str
 
     def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str):
+        """Process `lcore_list`, then sort.
+
+        There are four supported logical core list formats::
+
+            lcore_list=[LogicalCore1, LogicalCore2]  # a list of LogicalCores
+            lcore_list=[0,1,2,3]        # a list of int indices
+            lcore_list=['0','1','2-3']  # a list of str indices; ranges are supported
+            lcore_list='0,1,2-3'        # a comma delimited str of indices; ranges are supported
+
+        Args:
+            lcore_list: Various ways to represent multiple logical cores.
+                Empty `lcore_list` is allowed.
+        """
         self._lcore_list = []
         if isinstance(lcore_list, str):
             lcore_list = lcore_list.split(",")
@@ -58,6 +91,7 @@ def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str):
 
     @property
     def lcore_list(self) -> list[int]:
+        """The logical core IDs."""
         return self._lcore_list
 
     def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]:
@@ -83,28 +117,30 @@ def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]:
         return formatted_core_list
 
     def __str__(self) -> str:
+        """The consecutive ranges of logical core IDs."""
         return self._lcore_str
 
 
 @dataclasses.dataclass(slots=True, frozen=True)
 class LogicalCoreCount(object):
-    """
-    Define the number of logical cores to use.
-    If sockets is not None, socket_count is ignored.
-    """
+    """Define the number of logical cores per physical cores per sockets."""
 
+    #: Use this many logical cores per each physical core.
     lcores_per_core: int = 1
+    #: Use this many physical cores per each socket.
     cores_per_socket: int = 2
+    #: Use this many sockets.
     socket_count: int = 1
+    #: Use exactly these sockets. This takes precedence over `socket_count`,
+    #: so when `sockets` is not :data:`None`, `socket_count` is ignored.
     sockets: list[int] | None = None
 
 
 class LogicalCoreFilter(ABC):
-    """
-    Filter according to the input filter specifier. Each filter needs to be
-    implemented in a derived class.
-    This class only implements operations common to all filters, such as sorting
-    the list to be filtered beforehand.
+    """Common filtering class.
+
+    Each filter needs to be implemented in a subclass. This base class sorts the list of cores
+    and defines the filtering method, which must be implemented by subclasses.
     """
 
     _filter_specifier: LogicalCoreCount | LogicalCoreList
@@ -116,6 +152,17 @@ def __init__(
         filter_specifier: LogicalCoreCount | LogicalCoreList,
         ascending: bool = True,
     ):
+        """Filter according to the input filter specifier.
+
+        The input `lcore_list` is copied and sorted by physical core before filtering.
+        The list is copied so that the original is left intact.
+
+        Args:
+            lcore_list: The logical CPU cores to filter.
+            filter_specifier: Filter cores from `lcore_list` according to this filter.
+            ascending: Sort cores in ascending order (lowest to highest IDs). If :data:`False`,
+                sort in descending order.
+        """
         self._filter_specifier = filter_specifier
 
         # sorting by core is needed in case hyperthreading is enabled
@@ -124,31 +171,45 @@ def __init__(
 
     @abstractmethod
     def filter(self) -> list[LogicalCore]:
-        """
-        Use self._filter_specifier to filter self._lcores_to_filter
-        and return the list of filtered LogicalCores.
-        self._lcores_to_filter is a sorted copy of the original list,
-        so it may be modified.
+        r"""Filter the cores.
+
+        Use `self._filter_specifier` to filter `self._lcores_to_filter` and return
+        the filtered :class:`LogicalCore`\s.
+        `self._lcores_to_filter` is a sorted copy of the original list, so it may be modified.
+
+        Returns:
+            The filtered cores.
         """
 
 
 class LogicalCoreCountFilter(LogicalCoreFilter):
-    """
+    """Filter cores by specified counts.
+
     Filter the input list of LogicalCores according to specified rules:
-    Use cores from the specified number of sockets or from the specified socket ids.
-    If sockets is specified, it takes precedence over socket_count.
-    From each of those sockets, use only cores_per_socket of cores.
-    And for each core, use lcores_per_core of logical cores. Hypertheading
-    must be enabled for this to take effect.
-    If ascending is True, use cores with the lowest numerical id first
-    and continue in ascending order. If False, start with the highest
-    id and continue in descending order. This ordering affects which
-    sockets to consider first as well.
+
+        * The input `filter_specifier` is :class:`LogicalCoreCount`,
+        * Use cores from the specified number of sockets or from the specified socket ids,
+        * If `sockets` is specified, it takes precedence over `socket_count`,
+        * From each of those sockets, use only `cores_per_socket` of cores,
+        * And for each core, use `lcores_per_core` of logical cores. Hyperthreading
+          must be enabled for this to take effect.
     """
 
     _filter_specifier: LogicalCoreCount
 
     def filter(self) -> list[LogicalCore]:
+        """Filter the cores according to :class:`LogicalCoreCount`.
+
+        Start by filtering the allowed sockets. The cores matching the allowed sockets are kept,
+        stored in separate lists, one per socket.
+
+        Then filter the allowed physical cores from those lists of cores per socket. When filtering
+        physical cores, store the desired number of logical cores per physical core which then
+        together constitute the final filtered list.
+
+        Returns:
+            The filtered cores.
+        """
         sockets_to_filter = self._filter_sockets(self._lcores_to_filter)
         filtered_lcores = []
         for socket_to_filter in sockets_to_filter:
@@ -158,24 +219,37 @@ def filter(self) -> list[LogicalCore]:
     def _filter_sockets(
         self, lcores_to_filter: Iterable[LogicalCore]
     ) -> ValuesView[list[LogicalCore]]:
-        """
-        Remove all lcores that don't match the specified socket(s).
-        If self._filter_specifier.sockets is not None, keep lcores from those sockets,
-        otherwise keep lcores from the first
-        self._filter_specifier.socket_count sockets.
+        """Filter a list of cores per each allowed socket.
+
+        The sockets may be specified in two ways, either a number or a specific list of sockets.
+        In case of a specific list, we just need to return the cores from those sockets.
+        If filtering a number of cores, we need to go through all cores and note which sockets
+        appear and only filter from the first n that appear.
+
+        Args:
+            lcores_to_filter: The cores to filter. These must be sorted by the physical core.
+
+        Returns:
+            A list of lists of logical CPU cores. Each list contains cores from one socket.
         """
         allowed_sockets: set[int] = set()
         socket_count = self._filter_specifier.socket_count
         if self._filter_specifier.sockets:
+            # when sockets in filter is specified, the sockets are already set
             socket_count = len(self._filter_specifier.sockets)
             allowed_sockets = set(self._filter_specifier.sockets)
 
+        # filter socket_count sockets from all sockets by checking the socket of each CPU
         filtered_lcores: dict[int, list[LogicalCore]] = {}
         for lcore in lcores_to_filter:
             if not self._filter_specifier.sockets:
+                # this is when sockets is not set, so we do the actual filtering
+                # when it is set, allowed_sockets is already defined and can't be changed
                 if len(allowed_sockets) < socket_count:
+                    # allowed_sockets is a set, so adding an existing socket won't re-add it
                     allowed_sockets.add(lcore.socket)
             if lcore.socket in allowed_sockets:
+                # separate lcores into sockets; this makes it easier in further processing
                 if lcore.socket in filtered_lcores:
                     filtered_lcores[lcore.socket].append(lcore)
                 else:
@@ -192,12 +266,13 @@ def _filter_sockets(
     def _filter_cores_from_socket(
         self, lcores_to_filter: Iterable[LogicalCore]
     ) -> list[LogicalCore]:
-        """
-        Keep only the first self._filter_specifier.cores_per_socket cores.
-        In multithreaded environments, keep only
-        the first self._filter_specifier.lcores_per_core lcores of those cores.
-        """
+        """Filter a list of cores from the given socket.
+
+        Go through the cores and note how many logical cores per physical core have been filtered.
 
+        Returns:
+            The filtered logical CPU cores.
+        """
         # no need to use ordered dict, from Python3.7 the dict
         # insertion order is preserved (LIFO).
         lcore_count_per_core_map: dict[int, int] = {}
@@ -238,15 +313,21 @@ def _filter_cores_from_socket(
 
 
 class LogicalCoreListFilter(LogicalCoreFilter):
-    """
-    Filter the input list of Logical Cores according to the input list of
-    lcore indices.
-    An empty LogicalCoreList won't filter anything.
+    """Filter the logical CPU cores by logical CPU core IDs.
+
+    This is a simple filter that looks at logical CPU core IDs and only keeps those that match.
+
+    The input filter is :class:`LogicalCoreList`. An empty one won't filter anything.
     """
 
     _filter_specifier: LogicalCoreList
 
     def filter(self) -> list[LogicalCore]:
+        """Filter based on logical CPU core ID.
+
+        Returns:
+            The filtered logical CPU cores.
+        """
         if not len(self._filter_specifier.lcore_list):
             return self._lcores_to_filter
 
@@ -269,6 +350,17 @@ def lcore_filter(
     filter_specifier: LogicalCoreCount | LogicalCoreList,
     ascending: bool,
 ) -> LogicalCoreFilter:
+    """Factory for providing the filter that corresponds to `filter_specifier`.
+
+    Args:
+        core_list: The logical CPU cores to filter.
+        filter_specifier: The filter to use.
+        ascending: Sort cores in ascending order (lowest to highest IDs). If :data:`False`,
+            sort in descending order.
+
+    Returns:
+        The filter that corresponds to `filter_specifier`.
+    """
     if isinstance(filter_specifier, LogicalCoreList):
         return LogicalCoreListFilter(core_list, filter_specifier, ascending)
     elif isinstance(filter_specifier, LogicalCoreCount):
-- 
2.34.1
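
Putting the documented filters together, a minimal end-to-end sketch (the core layout
below is made up for illustration; the class and function names are from the patch):

    from framework.testbed_model.cpu import (
        LogicalCore,
        LogicalCoreCount,
        LogicalCoreList,
        lcore_filter,
    )

    # Two physical cores on socket 0, each with two logical cores (SMT enabled).
    lcores = [
        LogicalCore(lcore=0, core=0, socket=0, node=0),
        LogicalCore(lcore=1, core=0, socket=0, node=0),
        LogicalCore(lcore=2, core=1, socket=0, node=0),
        LogicalCore(lcore=3, core=1, socket=0, node=0),
    ]

    # Count-based: one logical core from each of two physical cores on one socket.
    count_spec = LogicalCoreCount(lcores_per_core=1, cores_per_socket=2, socket_count=1)
    print(lcore_filter(lcores, count_spec, ascending=True).filter())  # lcores 0 and 2

    # ID-based: explicit logical core IDs; ranges are supported in the string form.
    list_spec = LogicalCoreList("0,2-3")
    print(lcore_filter(lcores, list_spec, ascending=True).filter())   # lcores 0, 2, 3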


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v8 15/21] dts: os session docstring update
  2023-11-23 15:13               ` [PATCH v8 00/21] dts: docstrings update Juraj Linkeš
                                   ` (13 preceding siblings ...)
  2023-11-23 15:13                 ` [PATCH v8 14/21] dts: cpu " Juraj Linkeš
@ 2023-11-23 15:13                 ` Juraj Linkeš
  2023-12-01 17:33                   ` Jeremy Spewock
  2023-11-23 15:13                 ` [PATCH v8 16/21] dts: posix and linux sessions " Juraj Linkeš
                                   ` (7 subsequent siblings)
  22 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-23 15:13 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/os_session.py | 272 ++++++++++++++++------
 1 file changed, 205 insertions(+), 67 deletions(-)

diff --git a/dts/framework/testbed_model/os_session.py b/dts/framework/testbed_model/os_session.py
index 76e595a518..cfdbd1c4bd 100644
--- a/dts/framework/testbed_model/os_session.py
+++ b/dts/framework/testbed_model/os_session.py
@@ -2,6 +2,26 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+"""OS-aware remote session.
+
+DPDK supports multiple operating systems and can run on any of them. This module defines
+the common API that OS-unaware layers use and translates that API into
+OS-aware calls/utility usage.
+
+Note:
+    Running commands with administrative privileges requires OS awareness. This is the only layer
+    that's aware of OS differences, so this is where non-privileged commands get converted
+    to privileged commands.
+
+Example:
+    A user wishes to remove a directory on a remote :class:`~.sut_node.SutNode`.
+    The :class:`~.sut_node.SutNode` object isn't aware what OS the node is running - it delegates
+    the OS translation logic to :attr:`~.node.Node.main_session`. The SUT node calls
+    :meth:`~OSSession.remove_remote_dir` with a generic, OS-unaware path and
+    the :attr:`~.node.Node.main_session` translates that to ``rm -rf`` if the node's OS is Linux
+    and other commands for other OSs. It also translates the path to match the underlying OS.
+"""
+
 from abc import ABC, abstractmethod
 from collections.abc import Iterable
 from ipaddress import IPv4Interface, IPv6Interface
@@ -28,10 +48,16 @@
 
 
 class OSSession(ABC):
-    """
-    The OS classes create a DTS node remote session and implement OS specific
+    """OS-unaware to OS-aware translation API definition.
+
+    The OSSession classes create a remote session to a DTS node and implement OS-specific
     behavior. There are a few control methods implemented by the base class, the rest need
-    to be implemented by derived classes.
+    to be implemented by subclasses.
+
+    Attributes:
+        name: The name of the session.
+        remote_session: The remote session maintaining the connection to the node.
+        interactive_session: The interactive remote session maintaining the connection to the node.
     """
 
     _config: NodeConfiguration
@@ -46,6 +72,15 @@ def __init__(
         name: str,
         logger: DTSLOG,
     ):
+        """Initialize the OS-aware session.
+
+        Connect to the node right away and also create an interactive remote session.
+
+        Args:
+            node_config: The test run configuration of the node to connect to.
+            name: The name of the session.
+            logger: The logger instance this session will use.
+        """
         self._config = node_config
         self.name = name
         self._logger = logger
@@ -53,15 +88,15 @@ def __init__(
         self.interactive_session = create_interactive_session(node_config, logger)
 
     def close(self, force: bool = False) -> None:
-        """
-        Close the remote session.
+        """Close the underlying remote session.
+
+        Args:
+            force: Force the closure of the connection.
         """
         self.remote_session.close(force)
 
     def is_alive(self) -> bool:
-        """
-        Check whether the remote session is still responding.
-        """
+        """Check whether the underlying remote session is still responding."""
         return self.remote_session.is_alive()
 
     def send_command(
@@ -72,10 +107,23 @@ def send_command(
         verify: bool = False,
         env: dict | None = None,
     ) -> CommandResult:
-        """
-        An all-purpose API in case the command to be executed is already
-        OS-agnostic, such as when the path to the executed command has been
-        constructed beforehand.
+        """An all-purpose API for OS-agnostic commands.
+
+        This can be used to execute a portable command that runs the same way
+        on all operating systems, such as Python.
+
+        The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT`
+        environment variable configure the timeout of command execution.
+
+        Args:
+            command: The command to execute.
+            timeout: Wait at most this long in seconds for `command` execution to complete.
+            privileged: Whether to run the command with administrative privileges.
+            verify: If :data:`True`, will check the exit code of the command.
+            env: A dictionary with environment variables to be used with the command execution.
+
+        Raises:
+            RemoteCommandExecutionError: If verify is :data:`True` and the command failed.
         """
         if privileged:
             command = self._get_privileged_command(command)
@@ -89,8 +137,20 @@ def create_interactive_shell(
         privileged: bool,
         app_args: str,
     ) -> InteractiveShellType:
-        """
-        See "create_interactive_shell" in SutNode
+        """Factory for interactive session handlers.
+
+        Instantiate `shell_cls` according to the remote OS specifics.
+
+        Args:
+            shell_cls: The class of the shell.
+            timeout: Timeout for reading output from the SSH channel. If reading
+                from the buffer yields no data within the timeout,
+                an exception is raised.
+            privileged: Whether to run the shell with administrative privileges.
+            app_args: The arguments to be passed to the application.
+
+        Returns:
+            An instance of the desired interactive application shell.
         """
         return shell_cls(
             self.interactive_session.session,
@@ -114,27 +174,42 @@ def _get_privileged_command(command: str) -> str:
 
     @abstractmethod
     def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePath:
-        """
-        Try to find DPDK remote dir in remote_dir.
+        """Try to find DPDK directory in `remote_dir`.
+
+        The directory is the one created by extracting the tarball. The files
+        are usually extracted into a directory starting with ``dpdk-``.
+
+        Returns:
+            The absolute path of the DPDK remote directory, or an empty path if not found.
         """
 
     @abstractmethod
     def get_remote_tmp_dir(self) -> PurePath:
-        """
-        Get the path of the temporary directory of the remote OS.
+        """Get the path of the temporary directory of the remote OS.
+
+        Returns:
+            The absolute path of the temporary directory.
         """
 
     @abstractmethod
     def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
-        """
-        Create extra environment variables needed for the target architecture. Get
-        information from the node if needed.
+        """Create extra environment variables needed for the target architecture.
+
+        Different architectures may require different configuration, such as setting 32-bit CFLAGS.
+
+        Returns:
+            A dictionary mapping environment variable names to their values.
         """
 
     @abstractmethod
     def join_remote_path(self, *args: str | PurePath) -> PurePath:
-        """
-        Join path parts using the path separator that fits the remote OS.
+        """Join path parts using the path separator that fits the remote OS.
+
+        Args:
+            args: Any number of paths to join.
+
+        Returns:
+            The resulting joined path.
         """
 
     @abstractmethod
@@ -143,13 +218,13 @@ def copy_from(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
-        """Copy a file from the remote Node to the local filesystem.
+        """Copy a file from the remote node to the local filesystem.
 
-        Copy source_file from the remote Node associated with this remote
-        session to destination_file on the local filesystem.
+        Copy `source_file` from the remote node associated with this remote
+        session to `destination_file` on the local filesystem.
 
         Args:
-            source_file: the file on the remote Node.
+            source_file: the file on the remote node.
             destination_file: a file or directory path on the local filesystem.
         """
 
@@ -159,14 +234,14 @@ def copy_to(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
-        """Copy a file from local filesystem to the remote Node.
+        """Copy a file from local filesystem to the remote node.
 
-        Copy source_file from local filesystem to destination_file
-        on the remote Node associated with this remote session.
+        Copy `source_file` from local filesystem to `destination_file`
+        on the remote node associated with this remote session.
 
         Args:
             source_file: the file on the local filesystem.
-            destination_file: a file or directory path on the remote Node.
+            destination_file: a file or directory path on the remote node.
         """
 
     @abstractmethod
@@ -176,8 +251,12 @@ def remove_remote_dir(
         recursive: bool = True,
         force: bool = True,
     ) -> None:
-        """
-        Remove remote directory, by default remove recursively and forcefully.
+        """Remove remote directory, by default remove recursively and forcefully.
+
+        Args:
+            remote_dir_path: The path of the directory to remove.
+            recursive: If :data:`True`, also remove all contents inside the directory.
+            force: If :data:`True`, ignore all warnings and try to remove at all costs.
         """
 
     @abstractmethod
@@ -186,9 +265,12 @@ def extract_remote_tarball(
         remote_tarball_path: str | PurePath,
         expected_dir: str | PurePath | None = None,
     ) -> None:
-        """
-        Extract remote tarball in place. If expected_dir is a non-empty string, check
-        whether the dir exists after extracting the archive.
+        """Extract remote tarball in its remote directory.
+
+        Args:
+            remote_tarball_path: The path of the tarball on the remote node.
+            expected_dir: If non-empty, check whether `expected_dir` exists after extracting
+                the archive.
         """
 
     @abstractmethod
@@ -201,69 +283,119 @@ def build_dpdk(
         rebuild: bool = False,
         timeout: float = SETTINGS.compile_timeout,
     ) -> None:
-        """
-        Build DPDK in the input dir with specified environment variables and meson
-        arguments.
+        """Build DPDK on the remote node.
+
+        An extracted DPDK tarball must be present on the node. The build consists of two steps::
+
+            meson setup <meson args> remote_dpdk_dir remote_dpdk_build_dir
+            ninja -C remote_dpdk_build_dir
+
+        The :option:`--compile-timeout` command line argument and the :envvar:`DTS_COMPILE_TIMEOUT`
+        environment variable configure the timeout of DPDK build.
+
+        Args:
+            env_vars: Use these environment variables when building DPDK.
+            meson_args: Use these meson arguments when building DPDK.
+            remote_dpdk_dir: The directory on the remote node where DPDK will be built.
+            remote_dpdk_build_dir: The target build directory on the remote node.
+            rebuild: If :data:`True`, do a subsequent build with ``meson configure`` instead
+                of ``meson setup``.
+            timeout: Wait at most this long in seconds for the build execution to complete.
         """
 
     @abstractmethod
     def get_dpdk_version(self, version_path: str | PurePath) -> str:
-        """
-        Inspect DPDK version on the remote node from version_path.
+        """Inspect the DPDK version on the remote node.
+
+        Args:
+            version_path: The path to the VERSION file containing the DPDK version.
+
+        Returns:
+            The DPDK version.
         """
 
     @abstractmethod
     def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
-        """
-        Compose a list of LogicalCores present on the remote node.
-        If use_first_core is False, the first physical core won't be used.
+        r"""Get the list of :class:`~.cpu.LogicalCore`\s on the remote node.
+
+        Args:
+            use_first_core: If :data:`False`, the first physical core won't be used.
+
+        Returns:
+            The logical cores present on the node.
         """
 
     @abstractmethod
     def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
-        """
-        Kill and cleanup all DPDK apps identified by dpdk_prefix_list. If
-        dpdk_prefix_list is empty, attempt to find running DPDK apps to kill and clean.
+        """Kill and cleanup all DPDK apps.
+
+        Args:
+            dpdk_prefix_list: Kill all apps identified by `dpdk_prefix_list`.
+                If `dpdk_prefix_list` is empty, attempt to find running DPDK apps to kill and clean.
         """
 
     @abstractmethod
     def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
-        """
-        Get the DPDK file prefix that will be used when running DPDK apps.
+        """Make OS-specific modification to the DPDK file prefix.
+
+        Args:
+            dpdk_prefix: The OS-unaware file prefix.
+
+        Returns:
+            The OS-specific file prefix.
         """
 
     @abstractmethod
-    def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
-        """
-        Get the node's Hugepage Size, configure the specified amount of hugepages
+    def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
+        """Configure hugepages on the node.
+
+        Get the node's hugepage size, configure the specified count of hugepages
         if needed and mount the hugepages if needed.
-        If force_first_numa is True, configure hugepages just on the first socket.
+
+        Args:
+            hugepage_count: Configure this many hugepages.
+            force_first_numa: If :data:`True`, configure hugepages just on the first NUMA node.
         """
 
     @abstractmethod
     def get_compiler_version(self, compiler_name: str) -> str:
-        """
-        Get installed version of compiler used for DPDK
+        """Get installed version of compiler used for DPDK.
+
+        Args:
+            compiler_name: The name of the compiler executable.
+
+        Returns:
+            The compiler's version.
         """
 
     @abstractmethod
     def get_node_info(self) -> NodeInfo:
-        """
-        Collect information about the node
+        """Collect additional information about the node.
+
+        Returns:
+            Node information.
         """
 
     @abstractmethod
     def update_ports(self, ports: list[Port]) -> None:
-        """
-        Get additional information about ports:
-            Logical name (e.g. enp7s0) if applicable
-            Mac address
+        """Get additional information about ports from the operating system and update them.
+
+        The additional information is:
+
+            * Logical name (e.g. ``enp7s0``) if applicable,
+            * MAC address.
+
+        Args:
+            ports: The ports to update.
         """
 
     @abstractmethod
     def configure_port_state(self, port: Port, enable: bool) -> None:
-        """
-        Enable/disable port.
+        """Enable/disable `port` in the operating system.
+
+        Args:
+            port: The port to configure.
+            enable: If :data:`True`, enable the port, otherwise shut it down.
         """
 
     @abstractmethod
@@ -273,12 +405,18 @@ def configure_port_ip_address(
         port: Port,
         delete: bool,
     ) -> None:
-        """
-        Configure (add or delete) an IP address of the input port.
+        """Configure an IP address on `port` in the operating system.
+
+        Args:
+            address: The address to configure.
+            port: The port to configure.
+            delete: If :data:`True`, remove the IP address, otherwise configure it.
         """
 
     @abstractmethod
     def configure_ipv4_forwarding(self, enable: bool) -> None:
-        """
-        Enable IPv4 forwarding in the underlying OS.
+        """Enable IPv4 forwarding in the operating system.
+
+        Args:
+            enable: If :data:`True`, enable the forwarding, otherwise disable it.
         """
-- 
2.34.1
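
For readers following along, a minimal usage sketch of the API documented
above (not part of the patch; `session` stands in for any concrete OSSession
implementation and the argument values are made up):

    # Hypothetical driver code exercising the documented OSSession methods.
    session.setup_hugepages(hugepage_count=256, force_first_numa=False)
    lcores = session.get_remote_cpus(use_first_core=False)
    session.configure_ipv4_forwarding(enable=True)
    print(session.get_dpdk_version("/tmp/dpdk/VERSION"))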


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v8 16/21] dts: posix and linux sessions docstring update
  2023-11-23 15:13               ` [PATCH v8 00/21] dts: docstrings update Juraj Linkeš
                                   ` (14 preceding siblings ...)
  2023-11-23 15:13                 ` [PATCH v8 15/21] dts: os session " Juraj Linkeš
@ 2023-11-23 15:13                 ` Juraj Linkeš
  2023-11-23 15:13                 ` [PATCH v8 17/21] dts: node " Juraj Linkeš
                                   ` (6 subsequent siblings)
  22 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-23 15:13 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/linux_session.py | 64 +++++++++++-----
 dts/framework/testbed_model/posix_session.py | 81 +++++++++++++++++---
 2 files changed, 114 insertions(+), 31 deletions(-)

diff --git a/dts/framework/testbed_model/linux_session.py b/dts/framework/testbed_model/linux_session.py
index 055765ba2d..0ab59cef85 100644
--- a/dts/framework/testbed_model/linux_session.py
+++ b/dts/framework/testbed_model/linux_session.py
@@ -2,6 +2,13 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+"""Linux OS translator.
+
+Translate OS-unaware calls into Linux calls/utilities. Most Linux distributions are largely
+compliant with POSIX standards, so this module only implements the parts that aren't.
+"""
+
 import json
 from ipaddress import IPv4Interface, IPv6Interface
 from typing import TypedDict, Union
@@ -17,43 +24,52 @@
 
 
 class LshwConfigurationOutput(TypedDict):
+    """The relevant parts of ``lshw``'s ``configuration`` section."""
+
+    #:
     link: str
 
 
 class LshwOutput(TypedDict):
-    """
-    A model of the relevant information from json lshw output, e.g.:
-    {
-    ...
-    "businfo" : "pci@0000:08:00.0",
-    "logicalname" : "enp8s0",
-    "version" : "00",
-    "serial" : "52:54:00:59:e1:ac",
-    ...
-    "configuration" : {
-      ...
-      "link" : "yes",
-      ...
-    },
-    ...
+    """A model of the relevant information from ``lshw``'s json output.
+
+    Example:
+        ::
+
+            {
+            ...
+            "businfo" : "pci@0000:08:00.0",
+            "logicalname" : "enp8s0",
+            "version" : "00",
+            "serial" : "52:54:00:59:e1:ac",
+            ...
+            "configuration" : {
+              ...
+              "link" : "yes",
+              ...
+            },
+            ...
     """
 
+    #:
     businfo: str
+    #:
     logicalname: NotRequired[str]
+    #:
     serial: NotRequired[str]
+    #:
     configuration: LshwConfigurationOutput
 
 
 class LinuxSession(PosixSession):
-    """
-    The implementation of non-Posix compliant parts of Linux remote sessions.
-    """
+    """The implementation of non-Posix compliant parts of Linux."""
 
     @staticmethod
     def _get_privileged_command(command: str) -> str:
         return f"sudo -- sh -c '{command}'"
 
     def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
+        """Overrides :meth:`~.os_session.OSSession.get_remote_cpus`."""
         cpu_info = self.send_command("lscpu -p=CPU,CORE,SOCKET,NODE|grep -v \\#").stdout
         lcores = []
         for cpu_line in cpu_info.splitlines():
@@ -65,18 +81,20 @@ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
         return lcores
 
     def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
+        """Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`."""
         return dpdk_prefix
 
-    def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
+    def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
+        """Overrides :meth:`~.os_session.OSSession.setup_hugepages`."""
         self._logger.info("Getting Hugepage information.")
         hugepage_size = self._get_hugepage_size()
         hugepages_total = self._get_hugepages_total()
         self._numa_nodes = self._get_numa_nodes()
 
-        if force_first_numa or hugepages_total != hugepage_amount:
+        if force_first_numa or hugepages_total != hugepage_count:
             # when forcing numa, we need to clear existing hugepages regardless
             # of size, so they can be moved to the first numa node
-            self._configure_huge_pages(hugepage_amount, hugepage_size, force_first_numa)
+            self._configure_huge_pages(hugepage_count, hugepage_size, force_first_numa)
         else:
             self._logger.info("Hugepages already configured.")
         self._mount_huge_pages()
@@ -132,6 +150,7 @@ def _configure_huge_pages(self, amount: int, size: int, force_first_numa: bool)
         self.send_command(f"echo {amount} | tee {hugepage_config_path}", privileged=True)
 
     def update_ports(self, ports: list[Port]) -> None:
+        """Overrides :meth:`~.os_session.OSSession.update_ports`."""
         self._logger.debug("Gathering port info.")
         for port in ports:
             assert port.node == self.name, "Attempted to gather port info on the wrong node"
@@ -161,6 +180,7 @@ def _update_port_attr(self, port: Port, attr_value: str | None, attr_name: str)
             )
 
     def configure_port_state(self, port: Port, enable: bool) -> None:
+        """Overrides :meth:`~.os_session.OSSession.configure_port_state`."""
         state = "up" if enable else "down"
         self.send_command(f"ip link set dev {port.logical_name} {state}", privileged=True)
 
@@ -170,6 +190,7 @@ def configure_port_ip_address(
         port: Port,
         delete: bool,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.configure_port_ip_address`."""
         command = "del" if delete else "add"
         self.send_command(
             f"ip address {command} {address} dev {port.logical_name}",
@@ -178,5 +199,6 @@ def configure_port_ip_address(
         )
 
     def configure_ipv4_forwarding(self, enable: bool) -> None:
+        """Overrides :meth:`~.os_session.OSSession.configure_ipv4_forwarding`."""
         state = 1 if enable else 0
         self.send_command(f"sysctl -w net.ipv4.ip_forward={state}", privileged=True)
diff --git a/dts/framework/testbed_model/posix_session.py b/dts/framework/testbed_model/posix_session.py
index 5657cc0bc9..d279bb8b53 100644
--- a/dts/framework/testbed_model/posix_session.py
+++ b/dts/framework/testbed_model/posix_session.py
@@ -2,6 +2,15 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+"""POSIX compliant OS translator.
+
+Translates OS-unaware calls into POSIX compliant calls/utilities. POSIX is a set of standards
+for portability between Unix operating systems which not all Linux distributions
+(or the tools most frequently bundled with said distributions) adhere to. Most Linux
+distributions are largely compliant, though.
+This intermediate module implements the common parts of mostly POSIX compliant distributions.
+"""
+
 import re
 from collections.abc import Iterable
 from pathlib import PurePath, PurePosixPath
@@ -15,13 +24,21 @@
 
 
 class PosixSession(OSSession):
-    """
-    An intermediary class implementing the Posix compliant parts of
-    Linux and other OS remote sessions.
-    """
+    """An intermediary class implementing the POSIX standard."""
 
     @staticmethod
     def combine_short_options(**opts: bool) -> str:
+        """Combine shell options into one argument.
+
+        These are options such as ``-x``, ``-v``, ``-f`` which are combined into ``-xvf``.
+
+        Args:
+            opts: The keys are option names (usually one letter) and the bool values indicate
+                whether to include the option in the resulting argument.
+
+        Returns:
+            The options combined into one argument.
+        """
         ret_opts = ""
         for opt, include in opts.items():
             if include:
@@ -33,17 +50,19 @@ def combine_short_options(**opts: bool) -> str:
         return ret_opts
 
     def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePosixPath:
+        """Overrides :meth:`~.os_session.OSSession.guess_dpdk_remote_dir`."""
         remote_guess = self.join_remote_path(remote_dir, "dpdk-*")
         result = self.send_command(f"ls -d {remote_guess} | tail -1")
         return PurePosixPath(result.stdout)
 
     def get_remote_tmp_dir(self) -> PurePosixPath:
+        """Overrides :meth:`~.os_session.OSSession.get_remote_tmp_dir`."""
         return PurePosixPath("/tmp")
 
     def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
-        """
-        Create extra environment variables needed for i686 arch build. Get information
-        from the node if needed.
+        """Overrides :meth:`~.os_session.OSSession.get_dpdk_build_env_vars`.
+
+        Supported architecture: ``i686``.
         """
         env_vars = {}
         if arch == Architecture.i686:
@@ -63,6 +82,7 @@ def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
         return env_vars
 
     def join_remote_path(self, *args: str | PurePath) -> PurePosixPath:
+        """Overrides :meth:`~.os_session.OSSession.join_remote_path`."""
         return PurePosixPath(*args)
 
     def copy_from(
@@ -70,6 +90,7 @@ def copy_from(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.copy_from`."""
         self.remote_session.copy_from(source_file, destination_file)
 
     def copy_to(
@@ -77,6 +98,7 @@ def copy_to(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.copy_to`."""
         self.remote_session.copy_to(source_file, destination_file)
 
     def remove_remote_dir(
@@ -85,6 +107,7 @@ def remove_remote_dir(
         recursive: bool = True,
         force: bool = True,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.remove_remote_dir`."""
         opts = PosixSession.combine_short_options(r=recursive, f=force)
         self.send_command(f"rm{opts} {remote_dir_path}")
 
@@ -93,6 +116,7 @@ def extract_remote_tarball(
         remote_tarball_path: str | PurePath,
         expected_dir: str | PurePath | None = None,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.extract_remote_tarball`."""
         self.send_command(
             f"tar xfm {remote_tarball_path} -C {PurePosixPath(remote_tarball_path).parent}",
             60,
@@ -109,6 +133,7 @@ def build_dpdk(
         rebuild: bool = False,
         timeout: float = SETTINGS.compile_timeout,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.build_dpdk`."""
         try:
             if rebuild:
                 # reconfigure, then build
@@ -138,10 +163,12 @@ def build_dpdk(
             raise DPDKBuildError(f"DPDK build failed when doing '{e.command}'.")
 
     def get_dpdk_version(self, build_dir: str | PurePath) -> str:
+        """Overrides :meth:`~.os_session.OSSession.get_dpdk_version`."""
         out = self.send_command(f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True)
         return out.stdout
 
     def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
+        """Overrides :meth:`~.os_session.OSSession.kill_cleanup_dpdk_apps`."""
         self._logger.info("Cleaning up DPDK apps.")
         dpdk_runtime_dirs = self._get_dpdk_runtime_dirs(dpdk_prefix_list)
         if dpdk_runtime_dirs:
@@ -153,6 +180,14 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
             self._remove_dpdk_runtime_dirs(dpdk_runtime_dirs)
 
     def _get_dpdk_runtime_dirs(self, dpdk_prefix_list: Iterable[str]) -> list[PurePosixPath]:
+        """Find runtime directories DPDK apps are currently using.
+
+        Args:
+            dpdk_prefix_list: The prefixes DPDK apps were started with.
+
+        Returns:
+            The paths of DPDK apps' runtime dirs.
+        """
         prefix = PurePosixPath("/var", "run", "dpdk")
         if not dpdk_prefix_list:
             remote_prefixes = self._list_remote_dirs(prefix)
@@ -164,9 +199,13 @@ def _get_dpdk_runtime_dirs(self, dpdk_prefix_list: Iterable[str]) -> list[PurePo
         return [PurePosixPath(prefix, dpdk_prefix) for dpdk_prefix in dpdk_prefix_list]
 
     def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None:
-        """
-        Return a list of directories of the remote_dir.
-        If remote_path doesn't exist, return None.
+        """Contents of remote_path.
+
+        Args:
+            remote_path: List the contents of this path.
+
+        Returns:
+            The contents of `remote_path`. If `remote_path` doesn't exist, return :data:`None`.
         """
         out = self.send_command(f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'").stdout
         if "No such file or directory" in out:
@@ -175,6 +214,17 @@ def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None:
             return out.splitlines()
 
     def _get_dpdk_pids(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> list[int]:
+        """Find PIDs of running DPDK apps.
+
+        Look at each "config" file found in dpdk_runtime_dirs and find the PIDs of processes
+        that opened those file.
+
+        Args:
+            dpdk_runtime_dirs: The paths of DPDK apps' runtime dirs.
+
+        Returns:
+            The PIDs of running DPDK apps.
+        """
         pids = []
         pid_regex = r"p(\d+)"
         for dpdk_runtime_dir in dpdk_runtime_dirs:
@@ -193,6 +243,14 @@ def _remote_files_exists(self, remote_path: PurePath) -> bool:
         return not result.return_code
 
     def _check_dpdk_hugepages(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> None:
+        """Check there aren't any leftover hugepages.
+
+        If any hugepages are found, emit a warning. Hugepage usage is read from the
+        ``hugepage_info`` file in each of `dpdk_runtime_dirs`.
+
+        Args:
+            dpdk_runtime_dirs: The paths of DPDK apps' runtime dirs.
+        """
         for dpdk_runtime_dir in dpdk_runtime_dirs:
             hugepage_info = PurePosixPath(dpdk_runtime_dir, "hugepage_info")
             if self._remote_files_exists(hugepage_info):
@@ -208,9 +266,11 @@ def _remove_dpdk_runtime_dirs(self, dpdk_runtime_dirs: Iterable[str | PurePath])
             self.remove_remote_dir(dpdk_runtime_dir)
 
     def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
+        """Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`."""
         return ""
 
     def get_compiler_version(self, compiler_name: str) -> str:
+        """Overrides :meth:`~.os_session.OSSession.get_compiler_version`."""
         match compiler_name:
             case "gcc":
                 return self.send_command(
@@ -228,6 +288,7 @@ def get_compiler_version(self, compiler_name: str) -> str:
                 raise ValueError(f"Unknown compiler {compiler_name}")
 
     def get_node_info(self) -> NodeInfo:
+        """Overrides :meth:`~.os_session.OSSession.get_node_info`."""
         os_release_info = self.send_command(
             "awk -F= '$1 ~ /^NAME$|^VERSION$/ {print $2}' /etc/os-release",
             SETTINGS.timeout,
-- 
2.34.1
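
To illustrate ``combine_short_options`` as documented above, keyword
arguments with :data:`True` values are folded into one short-option
argument (a sketch; the leading " -" separator is inferred from the
``rm`` usage in ``remove_remote_dir`` above):

    # Only options whose value is True end up in the result.
    opts = PosixSession.combine_short_options(r=True, f=True, v=False)
    # opts == " -rf"; used e.g. as f"rm{opts} {remote_dir_path}".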


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v8 17/21] dts: node docstring update
  2023-11-23 15:13               ` [PATCH v8 00/21] dts: docstrings update Juraj Linkeš
                                   ` (15 preceding siblings ...)
  2023-11-23 15:13                 ` [PATCH v8 16/21] dts: posix and linux sessions " Juraj Linkeš
@ 2023-11-23 15:13                 ` Juraj Linkeš
  2023-11-23 15:13                 ` [PATCH v8 18/21] dts: sut and tg nodes " Juraj Linkeš
                                   ` (5 subsequent siblings)
  22 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-23 15:13 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/node.py | 191 +++++++++++++++++++---------
 1 file changed, 131 insertions(+), 60 deletions(-)

diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index b313b5ad54..6eecbdfd6a 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -3,8 +3,13 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
-"""
-A node is a generic host that DTS connects to and manages.
+"""Common functionality for node management.
+
+A node is any host/server DTS connects to.
+
+The base class, :class:`Node`, provides functionality common to all nodes and is meant
+to be extended by subclasses with functionality specific to each node type.
+The :func:`~Node.skip_setup` decorator can be used without subclassing.
 """
 
 from abc import ABC
@@ -35,10 +40,22 @@
 
 
 class Node(ABC):
-    """
-    Basic class for node management. This class implements methods that
-    manage a node, such as information gathering (of CPU/PCI/NIC) and
-    environment setup.
+    """The base class for node management.
+
+    It shouldn't be instantiated, but rather subclassed.
+    It implements common methods to manage any node:
+
+        * Connection to the node,
+        * Hugepages setup.
+
+    Attributes:
+        main_session: The primary OS-aware remote session used to communicate with the node.
+        config: The node configuration.
+        name: The name of the node.
+        lcores: The list of logical cores that DTS can use on the node.
+            It's derived from logical cores present on the node and the test run configuration.
+        ports: The ports of this node specified in the test run configuration.
+        virtual_devices: The virtual devices used on the node.
     """
 
     main_session: OSSession
@@ -52,6 +69,17 @@ class Node(ABC):
     virtual_devices: list[VirtualDevice]
 
     def __init__(self, node_config: NodeConfiguration):
+        """Connect to the node and gather info during initialization.
+
+        Extra gathered information:
+
+        * The list of available logical CPUs. This is then filtered by
+          the ``lcores`` configuration in the YAML test run configuration file,
+        * Information about ports from the YAML test run configuration file.
+
+        Args:
+            node_config: The node's test run configuration.
+        """
         self.config = node_config
         self.name = node_config.name
         self._logger = getLogger(self.name)
@@ -60,7 +88,7 @@ def __init__(self, node_config: NodeConfiguration):
         self._logger.info(f"Connected to node: {self.name}")
 
         self._get_remote_cpus()
-        # filter the node lcores according to user config
+        # filter the node lcores according to the test run configuration
         self.lcores = LogicalCoreListFilter(
             self.lcores, LogicalCoreList(self.config.lcores)
         ).filter()
@@ -76,9 +104,14 @@ def _init_ports(self) -> None:
             self.configure_port_state(port)
 
     def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
-        """
-        Perform the execution setup that will be done for each execution
-        this node is part of.
+        """Execution setup steps.
+
+        Configure hugepages and call :meth:`_set_up_execution` where
+        the rest of the configuration steps (if any) are implemented.
+
+        Args:
+            execution_config: The execution test run configuration according to which
+                the setup steps will be taken.
         """
         self._setup_hugepages()
         self._set_up_execution(execution_config)
@@ -87,54 +120,70 @@ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
             self.virtual_devices.append(VirtualDevice(vdev))
 
     def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional execution setup steps for subclasses.
+
+        Subclasses should override this if they need to add additional execution setup steps.
         """
 
     def tear_down_execution(self) -> None:
-        """
-        Perform the execution teardown that will be done after each execution
-        this node is part of concludes.
+        """Execution teardown steps.
+
+        There are currently no execution teardown steps common to all DTS node types.
         """
         self.virtual_devices = []
         self._tear_down_execution()
 
     def _tear_down_execution(self) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional execution teardown steps for subclasses.
+
+        Subclasses should override this if they need to add additional execution teardown steps.
         """
 
     def set_up_build_target(self, build_target_config: BuildTargetConfiguration) -> None:
-        """
-        Perform the build target setup that will be done for each build target
-        tested on this node.
+        """Build target setup steps.
+
+        There are currently no build target setup steps common to all DTS node types.
+
+        Args:
+            build_target_config: The build target test run configuration according to which
+                the setup steps will be taken.
         """
         self._set_up_build_target(build_target_config)
 
     def _set_up_build_target(self, build_target_config: BuildTargetConfiguration) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional build target setup steps for subclasses.
+
+        Subclasses should override this if they need to add additional build target setup steps.
         """
 
     def tear_down_build_target(self) -> None:
-        """
-        Perform the build target teardown that will be done after each build target
-        tested on this node.
+        """Build target teardown steps.
+
+        There are currently no build target teardown steps common to all DTS node types.
         """
         self._tear_down_build_target()
 
     def _tear_down_build_target(self) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional build target teardown steps for subclasses.
+
+        Subclasses should override this if they need to add additional build target teardown steps.
         """
 
     def create_session(self, name: str) -> OSSession:
-        """
-        Create and return a new OSSession tailored to the remote OS.
+        """Create and return a new OS-aware remote session.
+
+        The returned session won't be used by the node creating it. The session must be used by
+        the caller. The session will be maintained for the entire lifecycle of the node object,
+        at the end of which the session will be cleaned up automatically.
+
+        Note:
+            Any number of these supplementary sessions may be created.
+
+        Args:
+            name: The name of the session.
+
+        Returns:
+            A new OS-aware remote session.
         """
         session_name = f"{self.name} {name}"
         connection = create_session(
@@ -152,19 +201,19 @@ def create_interactive_shell(
         privileged: bool = False,
         app_args: str = "",
     ) -> InteractiveShellType:
-        """Create a handler for an interactive session.
+        """Factory for interactive session handlers.
 
-        Instantiate shell_cls according to the remote OS specifics.
+        Instantiate `shell_cls` according to the remote OS specifics.
 
         Args:
             shell_cls: The class of the shell.
-            timeout: Timeout for reading output from the SSH channel. If you are
-                reading from the buffer and don't receive any data within the timeout
-                it will throw an error.
+            timeout: Timeout for reading output from the SSH channel. If no data is received
+                from the buffer within the timeout, an error is raised.
             privileged: Whether to run the shell with administrative privileges.
             app_args: The arguments to be passed to the application.
+
         Returns:
-            Instance of the desired interactive application.
+            An instance of the desired interactive application shell.
         """
         if not shell_cls.dpdk_app:
             shell_cls.path = self.main_session.join_remote_path(shell_cls.path)
@@ -181,14 +230,22 @@ def filter_lcores(
         filter_specifier: LogicalCoreCount | LogicalCoreList,
         ascending: bool = True,
     ) -> list[LogicalCore]:
-        """
-        Filter the LogicalCores found on the Node according to
-        a LogicalCoreCount or a LogicalCoreList.
+        """Filter the node's logical cores that DTS can use.
+
+        Logical cores that DTS can use are the ones that are present on the node, but filtered
+        according to the test run configuration. The `filter_specifier` will filter cores from
+        those logical cores.
+
+        Args:
+            filter_specifier: Two different filters can be used, one that specifies the number
+                of logical cores per core, cores per socket and the number of sockets,
+                and another one that specifies a logical core list.
+            ascending: If :data:`True`, use cores with the lowest numerical id first and continue
+                in ascending order. If :data:`False`, start with the highest id and continue
+                in descending order. This ordering affects which sockets to consider first as well.
 
-        If ascending is True, use cores with the lowest numerical id first
-        and continue in ascending order. If False, start with the highest
-        id and continue in descending order. This ordering affects which
-        sockets to consider first as well.
+        Returns:
+            The filtered logical cores.
         """
         self._logger.debug(f"Filtering {filter_specifier} from {self.lcores}.")
         return lcore_filter(
@@ -198,17 +255,14 @@ def filter_lcores(
         ).filter()
 
     def _get_remote_cpus(self) -> None:
-        """
-        Scan CPUs in the remote OS and store a list of LogicalCores.
-        """
+        """Scan CPUs in the remote OS and store a list of LogicalCores."""
         self._logger.info("Getting CPU information.")
         self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
 
     def _setup_hugepages(self) -> None:
-        """
-        Setup hugepages on the Node. Different architectures can supply different
-        amounts of memory for hugepages and numa-based hugepage allocation may need
-        to be considered.
+        """Setup hugepages on the node.
+
+        Configure the hugepages only if they're specified in the node's test run configuration.
         """
         if self.config.hugepages:
             self.main_session.setup_hugepages(
@@ -216,8 +270,11 @@ def _setup_hugepages(self) -> None:
             )
 
     def configure_port_state(self, port: Port, enable: bool = True) -> None:
-        """
-        Enable/disable port.
+        """Enable/disable `port`.
+
+        Args:
+            port: The port to enable/disable.
+            enable: :data:`True` to enable, :data:`False` to disable.
         """
         self.main_session.configure_port_state(port, enable)
 
@@ -227,15 +284,17 @@ def configure_port_ip_address(
         port: Port,
         delete: bool = False,
     ) -> None:
-        """
-        Configure the IP address of a port on this node.
+        """Add an IP address to `port` on this node.
+
+        Args:
+            address: The IP address with mask in CIDR format. Can be either IPv4 or IPv6.
+            port: The port to which to add the address.
+            delete: If :data:`True`, will delete the address from the port instead of adding it.
         """
         self.main_session.configure_port_ip_address(address, port, delete)
 
     def close(self) -> None:
-        """
-        Close all connections and free other resources.
-        """
+        """Close all connections and free other resources."""
         if self.main_session:
             self.main_session.close()
         for session in self._other_sessions:
@@ -244,6 +303,11 @@ def close(self) -> None:
 
     @staticmethod
     def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
+        """Skip the decorated function.
+
+        The :option:`--skip-setup` command line argument and the :envvar:`DTS_SKIP_SETUP`
+        environment variable enable the decorator.
+        """
         if SETTINGS.skip_setup:
             return lambda *args: None
         else:
@@ -251,6 +315,13 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
 
 
 def create_session(node_config: NodeConfiguration, name: str, logger: DTSLOG) -> OSSession:
+    """Factory for OS-aware sessions.
+
+    Args:
+        node_config: The test run configuration of the node to connect to.
+        name: The name of the session.
+        logger: The logger instance this session will use.
+    """
     match node_config.os:
         case OS.linux:
             return LinuxSession(node_config, name, logger)
-- 
2.34.1
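
A small sketch of the :meth:`Node.skip_setup` decorator documented above
(hypothetical subclass and method name; the pattern mirrors how setup
steps are decorated elsewhere in the framework):

    class ExampleNode(Node):
        @Node.skip_setup
        def _copy_tarball(self) -> None:
            # Replaced by a no-op when --skip-setup or DTS_SKIP_SETUP is set.
            ...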


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v8 18/21] dts: sut and tg nodes docstring update
  2023-11-23 15:13               ` [PATCH v8 00/21] dts: docstrings update Juraj Linkeš
                                   ` (16 preceding siblings ...)
  2023-11-23 15:13                 ` [PATCH v8 17/21] dts: node " Juraj Linkeš
@ 2023-11-23 15:13                 ` Juraj Linkeš
  2023-12-01 18:06                   ` Jeremy Spewock
  2023-11-23 15:13                 ` [PATCH v8 19/21] dts: base traffic generators " Juraj Linkeš
                                   ` (4 subsequent siblings)
  22 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-23 15:13 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/sut_node.py | 230 ++++++++++++++++--------
 dts/framework/testbed_model/tg_node.py  |  42 +++--
 2 files changed, 176 insertions(+), 96 deletions(-)

diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 5ce9446dba..c4acea38d1 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -3,6 +3,14 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+"""System under test (DPDK + hardware) node.
+
+A system under test (SUT) is the combination of DPDK
+and the hardware we're testing with DPDK (NICs, crypto and other devices).
+An SUT node is where this SUT runs.
+"""
+
+
 import os
 import tarfile
 import time
@@ -26,6 +34,11 @@
 
 
 class EalParameters(object):
+    """The environment abstraction layer parameters.
+
+    The string representation can be created by converting the instance to a string.
+    """
+
     def __init__(
         self,
         lcore_list: LogicalCoreList,
@@ -35,21 +48,23 @@ def __init__(
         vdevs: list[VirtualDevice],
         other_eal_param: str,
     ):
-        """
-        Generate eal parameters character string;
-        :param lcore_list: the list of logical cores to use.
-        :param memory_channels: the number of memory channels to use.
-        :param prefix: set file prefix string, eg:
-                        prefix='vf'
-        :param no_pci: switch of disable PCI bus eg:
-                        no_pci=True
-        :param vdevs: virtual device list, eg:
-                        vdevs=[
-                            VirtualDevice('net_ring0'),
-                            VirtualDevice('net_ring1')
-                        ]
-        :param other_eal_param: user defined DPDK eal parameters, eg:
-                        other_eal_param='--single-file-segments'
+        """Initialize the parameters according to inputs.
+
+        Process the parameters into the format used on the command line.
+
+        Args:
+            lcore_list: The list of logical cores to use.
+            memory_channels: The number of memory channels to use.
+            prefix: Set the file prefix string with which to start DPDK, e.g.: ``prefix='vf'``.
+            no_pci: Switch to disable the PCI bus, e.g.: ``no_pci=True``.
+            vdevs: Virtual devices, e.g.::
+
+                vdevs=[
+                    VirtualDevice('net_ring0'),
+                    VirtualDevice('net_ring1')
+                ]
+            other_eal_param: User-defined DPDK EAL parameters, e.g.:
+                ``other_eal_param='--single-file-segments'``
         """
         self._lcore_list = f"-l {lcore_list}"
         self._memory_channels = f"-n {memory_channels}"
@@ -61,6 +76,7 @@ def __init__(
         self._other_eal_param = other_eal_param
 
     def __str__(self) -> str:
+        """Create the EAL string."""
         return (
             f"{self._lcore_list} "
             f"{self._memory_channels} "
@@ -72,11 +88,21 @@ def __str__(self) -> str:
 
 
 class SutNode(Node):
-    """
-    A class for managing connections to the System under Test, providing
-    methods that retrieve the necessary information about the node (such as
-    CPU, memory and NIC details) and configuration capabilities.
-    Another key capability is building DPDK according to given build target.
+    """The system under test node.
+
+    The SUT node extends :class:`Node` with DPDK specific features:
+
+        * DPDK build,
+        * Gathering of DPDK build info,
+        * Running DPDK apps, interactively or as one-time executions,
+        * DPDK apps cleanup.
+
+    The :option:`--tarball` command line argument and the :envvar:`DTS_DPDK_TARBALL`
+    environment variable configure the path to the DPDK tarball
+    or the git commit ID, tag ID or tree ID to test.
+
+    Attributes:
+        config: The SUT node configuration.
     """
 
     config: SutNodeConfiguration
@@ -94,6 +120,11 @@ class SutNode(Node):
     _path_to_devbind_script: PurePath | None
 
     def __init__(self, node_config: SutNodeConfiguration):
+        """Extend the constructor with SUT node specifics.
+
+        Args:
+            node_config: The SUT node's test run configuration.
+        """
         super(SutNode, self).__init__(node_config)
         self._dpdk_prefix_list = []
         self._build_target_config = None
@@ -113,6 +144,12 @@ def __init__(self, node_config: SutNodeConfiguration):
 
     @property
     def _remote_dpdk_dir(self) -> PurePath:
+        """The remote DPDK dir.
+
+        This internal property should be set after extracting the DPDK tarball. If it's not set,
+        that implies the DPDK setup step has been skipped, in which case we can guess where
+        a previous build was located.
+        """
         if self.__remote_dpdk_dir is None:
             self.__remote_dpdk_dir = self._guess_dpdk_remote_dir()
         return self.__remote_dpdk_dir
@@ -123,6 +160,11 @@ def _remote_dpdk_dir(self, value: PurePath) -> None:
 
     @property
     def remote_dpdk_build_dir(self) -> PurePath:
+        """The remote DPDK build directory.
+
+        This is the directory where DPDK was built.
+        We assume it was built in a subdirectory of the extracted tarball.
+        """
         if self._build_target_config:
             return self.main_session.join_remote_path(
                 self._remote_dpdk_dir, self._build_target_config.name
@@ -132,18 +174,21 @@ def remote_dpdk_build_dir(self) -> PurePath:
 
     @property
     def dpdk_version(self) -> str:
+        """Last built DPDK version."""
         if self._dpdk_version is None:
             self._dpdk_version = self.main_session.get_dpdk_version(self._remote_dpdk_dir)
         return self._dpdk_version
 
     @property
     def node_info(self) -> NodeInfo:
+        """Additional node information."""
         if self._node_info is None:
             self._node_info = self.main_session.get_node_info()
         return self._node_info
 
     @property
     def compiler_version(self) -> str:
+        """The node's compiler version."""
         if self._compiler_version is None:
             if self._build_target_config is not None:
                 self._compiler_version = self.main_session.get_compiler_version(
@@ -158,6 +203,7 @@ def compiler_version(self) -> str:
 
     @property
     def path_to_devbind_script(self) -> PurePath:
+        """The path to the dpdk-devbind.py script on the node."""
         if self._path_to_devbind_script is None:
             self._path_to_devbind_script = self.main_session.join_remote_path(
                 self._remote_dpdk_dir, "usertools", "dpdk-devbind.py"
@@ -165,6 +211,11 @@ def path_to_devbind_script(self) -> PurePath:
         return self._path_to_devbind_script
 
     def get_build_target_info(self) -> BuildTargetInfo:
+        """Get additional build target information.
+
+        Returns:
+            The build target information.
+        """
         return BuildTargetInfo(
             dpdk_version=self.dpdk_version, compiler_version=self.compiler_version
         )
@@ -173,8 +224,9 @@ def _guess_dpdk_remote_dir(self) -> PurePath:
         return self.main_session.guess_dpdk_remote_dir(self._remote_tmp_dir)
 
     def _set_up_build_target(self, build_target_config: BuildTargetConfiguration) -> None:
-        """
-        Setup DPDK on the SUT node.
+        """Setup DPDK on the SUT node.
+
+        Additional build target setup steps on top of those in :class:`Node`.
         """
         # we want to ensure that dpdk_version and compiler_version is reset for new
         # build targets
@@ -186,16 +238,14 @@ def _set_up_build_target(self, build_target_config: BuildTargetConfiguration) ->
         self.bind_ports_to_driver()
 
     def _tear_down_build_target(self) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Bind ports to the operating system drivers.
+
+        Additional build target teardown steps on top of those in :class:`Node`.
         """
         self.bind_ports_to_driver(for_dpdk=False)
 
     def _configure_build_target(self, build_target_config: BuildTargetConfiguration) -> None:
-        """
-        Populate common environment variables and set build target config.
-        """
+        """Populate common environment variables and set build target config."""
         self._env_vars = {}
         self._build_target_config = build_target_config
         self._env_vars.update(self.main_session.get_dpdk_build_env_vars(build_target_config.arch))
@@ -207,9 +257,7 @@ def _configure_build_target(self, build_target_config: BuildTargetConfiguration)
 
     @Node.skip_setup
     def _copy_dpdk_tarball(self) -> None:
-        """
-        Copy to and extract DPDK tarball on the SUT node.
-        """
+        """Copy to and extract DPDK tarball on the SUT node."""
         self._logger.info("Copying DPDK tarball to SUT.")
         self.main_session.copy_to(SETTINGS.dpdk_tarball_path, self._remote_tmp_dir)
 
@@ -238,8 +286,9 @@ def _copy_dpdk_tarball(self) -> None:
 
     @Node.skip_setup
     def _build_dpdk(self) -> None:
-        """
-        Build DPDK. Uses the already configured target. Assumes that the tarball has
+        """Build DPDK.
+
+        Uses the already configured target. Assumes that the tarball has
         already been copied to and extracted on the SUT node.
         """
         self.main_session.build_dpdk(
@@ -250,15 +299,19 @@ def _build_dpdk(self) -> None:
         )
 
     def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePath:
-        """
-        Build one or all DPDK apps. Requires DPDK to be already built on the SUT node.
-        When app_name is 'all', build all example apps.
-        When app_name is any other string, tries to build that example app.
-        Return the directory path of the built app. If building all apps, return
-        the path to the examples directory (where all apps reside).
-        The meson_dpdk_args are keyword arguments
-        found in meson_option.txt in root DPDK directory. Do not use -D with them,
-        for example: enable_kmods=True.
+        """Build one or all DPDK apps.
+
+        Requires DPDK to be already built on the SUT node.
+
+        Args:
+            app_name: The name of the DPDK app to build.
+                When `app_name` is ``all``, build all example apps.
+            meson_dpdk_args: The arguments found in ``meson_options.txt`` in the root DPDK directory.
+                Do not use ``-D`` with them.
+
+        Returns:
+            The directory path of the built app. If building all apps, return
+            the path to the examples directory (where all apps reside).
         """
         self.main_session.build_dpdk(
             self._env_vars,
@@ -277,9 +330,7 @@ def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePa
         )
 
     def kill_cleanup_dpdk_apps(self) -> None:
-        """
-        Kill all dpdk applications on the SUT. Cleanup hugepages.
-        """
+        """Kill all dpdk applications on the SUT, then clean up hugepages."""
         if self._dpdk_kill_session and self._dpdk_kill_session.is_alive():
             # we can use the session if it exists and responds
             self._dpdk_kill_session.kill_cleanup_dpdk_apps(self._dpdk_prefix_list)
@@ -298,33 +349,34 @@ def create_eal_parameters(
         vdevs: list[VirtualDevice] | None = None,
         other_eal_param: str = "",
     ) -> "EalParameters":
-        """
-        Generate eal parameters character string;
-        :param lcore_filter_specifier: a number of lcores/cores/sockets to use
-                        or a list of lcore ids to use.
-                        The default will select one lcore for each of two cores
-                        on one socket, in ascending order of core ids.
-        :param ascending_cores: True, use cores with the lowest numerical id first
-                        and continue in ascending order. If False, start with the
-                        highest id and continue in descending order. This ordering
-                        affects which sockets to consider first as well.
-        :param prefix: set file prefix string, eg:
-                        prefix='vf'
-        :param append_prefix_timestamp: if True, will append a timestamp to
-                        DPDK file prefix.
-        :param no_pci: switch of disable PCI bus eg:
-                        no_pci=True
-        :param vdevs: virtual device list, eg:
-                        vdevs=[
-                            VirtualDevice('net_ring0'),
-                            VirtualDevice('net_ring1')
-                        ]
-        :param other_eal_param: user defined DPDK eal parameters, eg:
-                        other_eal_param='--single-file-segments'
-        :return: eal param string, eg:
-                '-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420';
-        """
+        """Compose the EAL parameters.
+
+        Process the list of cores and the DPDK prefix and pass that along with
+        the rest of the arguments.
 
+        Args:
+            lcore_filter_specifier: A number of lcores/cores/sockets to use
+                or a list of lcore ids to use.
+                The default will select one lcore for each of two cores
+                on one socket, in ascending order of core ids.
+            ascending_cores: Sort cores in ascending order (lowest to highest IDs).
+                If :data:`False`, sort in descending order.
+            prefix: Set the file prefix string with which to start DPDK, e.g.: ``prefix='vf'``.
+            append_prefix_timestamp: If :data:`True`, will append a timestamp to DPDK file prefix.
+            no_pci: Switch to disable the PCI bus, e.g.: ``no_pci=True``.
+            vdevs: Virtual devices, e.g.::
+
+                vdevs=[
+                    VirtualDevice('net_ring0'),
+                    VirtualDevice('net_ring1')
+                ]
+            other_eal_param: User-defined DPDK EAL parameters, e.g.:
+                ``other_eal_param='--single-file-segments'``.
+
+        Returns:
+            An EAL param string, such as
+            ``-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420``.
+        """
         lcore_list = LogicalCoreList(self.filter_lcores(lcore_filter_specifier, ascending_cores))
 
         if append_prefix_timestamp:
@@ -348,14 +400,29 @@ def create_eal_parameters(
     def run_dpdk_app(
         self, app_path: PurePath, eal_args: "EalParameters", timeout: float = 30
     ) -> CommandResult:
-        """
-        Run DPDK application on the remote node.
+        """Run DPDK application on the remote node.
+
+        The application is not run interactively - the command that starts the application
+        is executed and then the call waits for it to finish execution.
+
+        Args:
+            app_path: The remote path to the DPDK application.
+            eal_args: EAL parameters to run the DPDK application with.
+            timeout: Wait at most this long in seconds for the app execution to complete.
+
+        Returns:
+            The result of the DPDK app execution.
         """
         return self.main_session.send_command(
             f"{app_path} {eal_args}", timeout, privileged=True, verify=True
         )
 
     def configure_ipv4_forwarding(self, enable: bool) -> None:
+        """Enable/disable IPv4 forwarding on the node.
+
+        Args:
+            enable: If :data:`True`, enable the forwarding, otherwise disable it.
+        """
         self.main_session.configure_ipv4_forwarding(enable)
 
     def create_interactive_shell(
@@ -365,9 +432,13 @@ def create_interactive_shell(
         privileged: bool = False,
         eal_parameters: EalParameters | str | None = None,
     ) -> InteractiveShellType:
-        """Factory method for creating a handler for an interactive session.
+        """Extend the factory for interactive session handlers.
+
+        The extensions are SUT node specific:
 
-        Instantiate shell_cls according to the remote OS specifics.
+            * The default for `eal_parameters`,
+            * The interactive shell path `shell_cls.path` is prepended with the path to the remote
+              DPDK build directory for DPDK apps.
 
         Args:
             shell_cls: The class of the shell.
@@ -377,9 +448,10 @@ def create_interactive_shell(
             privileged: Whether to run the shell with administrative privileges.
             eal_parameters: List of EAL parameters to use to launch the app. If this
                 isn't provided or an empty string is passed, it will default to calling
-                create_eal_parameters().
+                :meth:`create_eal_parameters`.
+
         Returns:
-            Instance of the desired interactive application.
+            An instance of the desired interactive application shell.
         """
         if not eal_parameters:
             eal_parameters = self.create_eal_parameters()
@@ -396,8 +468,8 @@ def bind_ports_to_driver(self, for_dpdk: bool = True) -> None:
         """Bind all ports on the SUT to a driver.
 
         Args:
-            for_dpdk: Boolean that, when True, binds ports to os_driver_for_dpdk
-            or, when False, binds to os_driver. Defaults to True.
+            for_dpdk: If :data:`True`, binds ports to ``os_driver_for_dpdk``.
+                If :data:`False`, binds to ``os_driver``.
         """
         for port in self.ports:
             driver = port.os_driver_for_dpdk if for_dpdk else port.os_driver
diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
index 8a8f0019f3..f269d4c585 100644
--- a/dts/framework/testbed_model/tg_node.py
+++ b/dts/framework/testbed_model/tg_node.py
@@ -5,13 +5,8 @@
 
 """Traffic generator node.
 
-This is the node where the traffic generator resides.
-The distinction between a node and a traffic generator is as follows:
-A node is a host that DTS connects to. It could be a baremetal server,
-a VM or a container.
-A traffic generator is software running on the node.
-A traffic generator node is a node running a traffic generator.
-A node can be a traffic generator node as well as system under test node.
+A traffic generator (TG) generates traffic that's sent towards the SUT node.
+A TG node is where the TG runs.
 """
 
 from scapy.packet import Packet  # type: ignore[import]
@@ -24,13 +19,16 @@
 
 
 class TGNode(Node):
-    """Manage connections to a node with a traffic generator.
+    """The traffic generator node.
 
-    Apart from basic node management capabilities, the Traffic Generator node has
-    specialized methods for handling the traffic generator running on it.
+    The TG node extends :class:`Node` with TG specific features:
 
-    Arguments:
-        node_config: The user configuration of the traffic generator node.
+        * Traffic generator initialization,
+        * Sending traffic and capturing the received packets,
+        * Sending traffic without capturing the received packets.
+
+    Not all traffic generators are capable of capturing traffic, which is why there
+    must be a way to send traffic without that.
 
     Attributes:
         traffic_generator: The traffic generator running on the node.
@@ -39,6 +37,13 @@ class TGNode(Node):
     traffic_generator: CapturingTrafficGenerator
 
     def __init__(self, node_config: TGNodeConfiguration):
+        """Extend the constructor with TG node specifics.
+
+        Initialize the traffic generator on the TG node.
+
+        Args:
+            node_config: The TG node's test run configuration.
+        """
         super(TGNode, self).__init__(node_config)
         self.traffic_generator = create_traffic_generator(self, node_config.traffic_generator)
         self._logger.info(f"Created node: {self.name}")
@@ -50,17 +55,17 @@ def send_packet_and_capture(
         receive_port: Port,
         duration: float = 1,
     ) -> list[Packet]:
-        """Send a packet, return received traffic.
+        """Send `packet`, return received traffic.
 
-        Send a packet on the send_port and then return all traffic captured
-        on the receive_port for the given duration. Also record the captured traffic
+        Send `packet` on `send_port` and then return all traffic captured
+        on `receive_port` for the given duration. Also record the captured traffic
         in a pcap file.
 
         Args:
             packet: The packet to send.
             send_port: The egress port on the TG node.
             receive_port: The ingress port in the TG node.
-            duration: Capture traffic for this amount of time after sending the packet.
+            duration: Capture traffic for this amount of time after sending `packet`.
 
         Returns:
              A list of received packets. May be empty if no packets are captured.
@@ -70,6 +75,9 @@ def send_packet_and_capture(
         )
 
     def close(self) -> None:
-        """Free all resources used by the node"""
+        """Free all resources used by the node.
+
+        This extends the superclass method with TG cleanup.
+        """
         self.traffic_generator.close()
         super(TGNode, self).close()
-- 
2.34.1
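
Tying the documented SutNode pieces together, a short sketch (hypothetical
values; `sut_node` and `app_path` are assumed to already exist):

    eal = sut_node.create_eal_parameters(no_pci=True, prefix="example")
    # str(eal) produces the EAL string that run_dpdk_app appends to the app path.
    result = sut_node.run_dpdk_app(app_path, eal, timeout=60)
    print(result.stdout)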


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v8 19/21] dts: base traffic generators docstring update
  2023-11-23 15:13               ` [PATCH v8 00/21] dts: docstrings update Juraj Linkeš
                                   ` (17 preceding siblings ...)
  2023-11-23 15:13                 ` [PATCH v8 18/21] dts: sut and tg nodes " Juraj Linkeš
@ 2023-11-23 15:13                 ` Juraj Linkeš
  2023-12-01 18:05                   ` Jeremy Spewock
  2023-11-23 15:13                 ` [PATCH v8 20/21] dts: scapy tg " Juraj Linkeš
                                   ` (3 subsequent siblings)
  22 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-23 15:13 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 .../traffic_generator/__init__.py             | 22 ++++++++-
 .../capturing_traffic_generator.py            | 45 +++++++++++--------
 .../traffic_generator/traffic_generator.py    | 33 ++++++++------
 3 files changed, 67 insertions(+), 33 deletions(-)

diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
index 52888d03fa..11e2bd7d97 100644
--- a/dts/framework/testbed_model/traffic_generator/__init__.py
+++ b/dts/framework/testbed_model/traffic_generator/__init__.py
@@ -1,6 +1,19 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""DTS traffic generators.
+
+A traffic generator is capable of generating traffic and then monitoring returning traffic.
+All traffic generators must count the number of received packets. Some may additionally capture
+individual packets.
+
+A traffic generator may be software running on generic hardware or it could be specialized hardware.
+
+The traffic generators that only count the number of received packets are suitable only for
+performance testing. In functional testing, we need to be able to dissect each arrived packet
+and a capturing traffic generator is required.
+"""
+
 from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType
 from framework.exception import ConfigurationError
 from framework.testbed_model.node import Node
@@ -12,8 +25,15 @@
 def create_traffic_generator(
     tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig
 ) -> CapturingTrafficGenerator:
-    """A factory function for creating traffic generator object from user config."""
+    """The factory function for creating traffic generator objects from the test run configuration.
+
+    Args:
+        tg_node: The traffic generator node where the created traffic generator will be running.
+        traffic_generator_config: The traffic generator config.
 
+    Returns:
+        A traffic generator capable of capturing received packets.
+    """
     match traffic_generator_config.traffic_generator_type:
         case TrafficGeneratorType.SCAPY:
             return ScapyTrafficGenerator(tg_node, traffic_generator_config)
diff --git a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
index 1fc7f98c05..0246590333 100644
--- a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
+++ b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
@@ -23,19 +23,21 @@
 
 
 def _get_default_capture_name() -> str:
-    """
-    This is the function used for the default implementation of capture names.
-    """
     return str(uuid.uuid4())
 
 
 class CapturingTrafficGenerator(TrafficGenerator):
     """Capture packets after sending traffic.
 
-    A mixin interface which enables a packet generator to declare that it can capture
+    The intermediary interface which enables a packet generator to declare that it can capture
     packets and return them to the user.
 
+    Similarly to :class:`~.traffic_generator.TrafficGenerator`, this class exposes
+    the public methods specific to capturing traffic generators and defines a private method
+    that subclasses must implement with the traffic generation and capturing logic.
+
     The methods of capturing traffic generators obey the following workflow:
+
         1. send packets
         2. capture packets
         3. write the capture to a .pcap file
@@ -44,6 +46,7 @@ class CapturingTrafficGenerator(TrafficGenerator):
 
     @property
     def is_capturing(self) -> bool:
+        """This traffic generator can capture traffic."""
         return True
 
     def send_packet_and_capture(
@@ -54,11 +57,12 @@ def send_packet_and_capture(
         duration: float,
         capture_name: str = _get_default_capture_name(),
     ) -> list[Packet]:
-        """Send a packet, return received traffic.
+        """Send `packet` and capture received traffic.
+
+        Send `packet` on `send_port` and then return all traffic captured
+        on `receive_port` for the given `duration`.
 
-        Send a packet on the send_port and then return all traffic captured
-        on the receive_port for the given duration. Also record the captured traffic
-        in a pcap file.
+        The captured traffic is recorded in the `capture_name`.pcap file.
 
         Args:
             packet: The packet to send.
@@ -68,7 +72,7 @@ def send_packet_and_capture(
             capture_name: The name of the .pcap file where to store the capture.
 
         Returns:
-             A list of received packets. May be empty if no packets are captured.
+             The received packets. May be empty if no packets are captured.
         """
         return self.send_packets_and_capture(
             [packet], send_port, receive_port, duration, capture_name
@@ -82,11 +86,14 @@ def send_packets_and_capture(
         duration: float,
         capture_name: str = _get_default_capture_name(),
     ) -> list[Packet]:
-        """Send packets, return received traffic.
+        """Send `packets` and capture received traffic.
 
-        Send packets on the send_port and then return all traffic captured
-        on the receive_port for the given duration. Also record the captured traffic
-        in a pcap file.
+        Send `packets` on `send_port` and then return all traffic captured
+        on `receive_port` for the given `duration`.
+
+        The captured traffic is recorded in the `capture_name`.pcap file. The target directory
+        can be configured with the :option:`--output-dir` command line argument or
+        the :envvar:`DTS_OUTPUT_DIR` environment variable.
 
         Args:
             packets: The packets to send.
@@ -96,7 +103,7 @@ def send_packets_and_capture(
             capture_name: The name of the .pcap file where to store the capture.
 
         Returns:
-             A list of received packets. May be empty if no packets are captured.
+             The received packets. May be empty if no packets are captured.
         """
         self._logger.debug(get_packet_summaries(packets))
         self._logger.debug(
@@ -121,10 +128,12 @@ def _send_packets_and_capture(
         receive_port: Port,
         duration: float,
     ) -> list[Packet]:
-        """
-        The extended classes must implement this method which
-        sends packets on send_port and receives packets on the receive_port
-        for the specified duration. It must be able to handle no received packets.
+        """The implementation of :method:`send_packets_and_capture`.
+
+        The subclasses must implement this method which sends `packets` on `send_port`
+        and receives packets on `receive_port` for the specified `duration`.
+
+        It must be able to handle receiving no packets.
         """
 
     def _write_capture_from_packets(self, capture_name: str, packets: list[Packet]) -> None:
diff --git a/dts/framework/testbed_model/traffic_generator/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
index 0d9902ddb7..5fb9824568 100644
--- a/dts/framework/testbed_model/traffic_generator/traffic_generator.py
+++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
@@ -22,7 +22,8 @@
 class TrafficGenerator(ABC):
     """The base traffic generator.
 
-    Defines the few basic methods that each traffic generator must implement.
+    Exposes the common public methods of all traffic generators and defines private methods
+    that subclasses must implement with the traffic generation logic.
     """
 
     _config: TrafficGeneratorConfig
@@ -30,14 +31,20 @@ class TrafficGenerator(ABC):
     _logger: DTSLOG
 
     def __init__(self, tg_node: Node, config: TrafficGeneratorConfig):
+        """Initialize the traffic generator.
+
+        Args:
+            tg_node: The traffic generator node where the created traffic generator will be running.
+            config: The traffic generator's test run configuration.
+        """
         self._config = config
         self._tg_node = tg_node
         self._logger = getLogger(f"{self._tg_node.name} {self._config.traffic_generator_type}")
 
     def send_packet(self, packet: Packet, port: Port) -> None:
-        """Send a packet and block until it is fully sent.
+        """Send `packet` and block until it is fully sent.
 
-        What fully sent means is defined by the traffic generator.
+        Send `packet` on `port`, then wait until `packet` is fully sent.
 
         Args:
             packet: The packet to send.
@@ -46,9 +53,9 @@ def send_packet(self, packet: Packet, port: Port) -> None:
         self.send_packets([packet], port)
 
     def send_packets(self, packets: list[Packet], port: Port) -> None:
-        """Send packets and block until they are fully sent.
+        """Send `packets` and block until they are fully sent.
 
-        What fully sent means is defined by the traffic generator.
+        Send `packets` on `port`, then wait until `packets` are fully sent.
 
         Args:
             packets: The packets to send.
@@ -60,19 +67,17 @@ def send_packets(self, packets: list[Packet], port: Port) -> None:
 
     @abstractmethod
     def _send_packets(self, packets: list[Packet], port: Port) -> None:
-        """
-        The extended classes must implement this method which
-        sends packets on send_port. The method should block until all packets
-        are fully sent.
+        """The implementation of :method:`send_packets`.
+
+        The subclasses must implement this method which sends `packets` on `port`.
+        The method should block until all `packets` are fully sent.
+
+        What fully sent means is defined by the traffic generator.
         """
 
     @property
     def is_capturing(self) -> bool:
-        """Whether this traffic generator can capture traffic.
-
-        Returns:
-            True if the traffic generator can capture traffic, False otherwise.
-        """
+        """This traffic generator can't capture traffic."""
         return False
 
     @abstractmethod
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
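
To make the split between public methods and private hooks concrete, below is
a hedged sketch of the smallest possible capturing traffic generator, based
only on the abstract methods shown in the hunks above. The loopback behaviour
and the class name are invented for the example.

    from scapy.packet import Packet

    # Import paths assumed from the module layout in the diffs above.
    from framework.testbed_model.port import Port
    from framework.testbed_model.traffic_generator.capturing_traffic_generator import (
        CapturingTrafficGenerator,
    )

    class LoopbackTrafficGenerator(CapturingTrafficGenerator):
        """A toy generator that pretends every sent packet was also received."""

        def _send_packets(self, packets: list[Packet], port: Port) -> None:
            # A real implementation must block until the packets are fully sent.
            self._logger.debug(f"Would send {len(packets)} packets.")

        def _send_packets_and_capture(
            self,
            packets: list[Packet],
            send_port: Port,
            receive_port: Port,
            duration: float,
        ) -> list[Packet]:
            # Must be able to handle receiving no packets; this toy
            # implementation simply echoes the sent packets back.
            self._send_packets(packets, send_port)
            return list(packets)

The inherited public methods then do the rest: `send_packets_and_capture`
logs the packets, calls `_send_packets_and_capture` and records the capture
in the `capture_name`.pcap file, so a subclass only supplies the two hooks.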

* [PATCH v8 20/21] dts: scapy tg docstring update
  2023-11-23 15:13               ` [PATCH v8 00/21] dts: docstrings update Juraj Linkeš
                                   ` (18 preceding siblings ...)
  2023-11-23 15:13                 ` [PATCH v8 19/21] dts: base traffic generators " Juraj Linkeš
@ 2023-11-23 15:13                 ` Juraj Linkeš
  2023-12-01 18:17                   ` Jeremy Spewock
  2023-11-23 15:13                 ` [PATCH v8 21/21] dts: test suites " Juraj Linkeš
                                   ` (2 subsequent siblings)
  22 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-23 15:13 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 .../testbed_model/traffic_generator/scapy.py  | 91 +++++++++++--------
 1 file changed, 54 insertions(+), 37 deletions(-)

diff --git a/dts/framework/testbed_model/traffic_generator/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py
index c88cf28369..30ea3914ee 100644
--- a/dts/framework/testbed_model/traffic_generator/scapy.py
+++ b/dts/framework/testbed_model/traffic_generator/scapy.py
@@ -2,14 +2,15 @@
 # Copyright(c) 2022 University of New Hampshire
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""Scapy traffic generator.
+"""The Scapy traffic generator.
 
-Traffic generator used for functional testing, implemented using the Scapy library.
+A traffic generator used for functional testing, implemented with
+`the Scapy library <https://scapy.readthedocs.io/en/latest/>`_.
 The traffic generator uses an XML-RPC server to run Scapy on the remote TG node.
 
-The XML-RPC server runs in an interactive remote SSH session running Python console,
-where we start the server. The communication with the server is facilitated with
-a local server proxy.
+The traffic generator uses the :mod:`xmlrpc.server` module to run an XML-RPC server
+in an interactive remote Python SSH session. The communication with the server is facilitated
+with a local server proxy from the :mod:`xmlrpc.client` module.
 """
 
 import inspect
@@ -69,20 +70,20 @@ def scapy_send_packets_and_capture(
     recv_iface: str,
     duration: float,
 ) -> list[bytes]:
-    """RPC function to send and capture packets.
+    """The RPC function to send and capture packets.
 
-    The function is meant to be executed on the remote TG node.
+    The function is meant to be executed on the remote TG node via the server proxy.
 
     Args:
         xmlrpc_packets: The packets to send. These need to be converted to
-            xmlrpc.client.Binary before sending to the remote server.
+            :class:`~xmlrpc.client.Binary` objects before sending to the remote server.
         send_iface: The logical name of the egress interface.
         recv_iface: The logical name of the ingress interface.
         duration: Capture for this amount of time, in seconds.
 
     Returns:
         A list of bytes. Each item in the list represents one packet, which needs
-            to be converted back upon transfer from the remote node.
+        to be converted back upon transfer from the remote node.
     """
     scapy_packets = [scapy.all.Packet(packet.data) for packet in xmlrpc_packets]
     sniffer = scapy.all.AsyncSniffer(
@@ -96,19 +97,15 @@ def scapy_send_packets_and_capture(
 
 
 def scapy_send_packets(xmlrpc_packets: list[xmlrpc.client.Binary], send_iface: str) -> None:
-    """RPC function to send packets.
+    """The RPC function to send packets.
 
-    The function is meant to be executed on the remote TG node.
-    It doesn't return anything, only sends packets.
+    The function is meant to be executed on the remote TG node via the server proxy.
+    It only sends `xmlrpc_packets`, without capturing them.
 
     Args:
         xmlrpc_packets: The packets to send. These need to be converted to
-            xmlrpc.client.Binary before sending to the remote server.
+            :class:`~xmlrpc.client.Binary` objects before sending to the remote server.
         send_iface: The logical name of the egress interface.
-
-    Returns:
-        A list of bytes. Each item in the list represents one packet, which needs
-            to be converted back upon transfer from the remote node.
     """
     scapy_packets = [scapy.all.Packet(packet.data) for packet in xmlrpc_packets]
     scapy.all.sendp(scapy_packets, iface=send_iface, realtime=True, verbose=True)
@@ -128,11 +125,19 @@ def scapy_send_packets(xmlrpc_packets: list[xmlrpc.client.Binary], send_iface: s
 
 
 class QuittableXMLRPCServer(SimpleXMLRPCServer):
-    """Basic XML-RPC server that may be extended
-    by functions serializable by the marshal module.
+    """Basic XML-RPC server.
+
+    The server may be augmented by functions serializable by the :mod:`marshal` module.
     """
 
     def __init__(self, *args, **kwargs):
+        """Extend the XML-RPC server initialization.
+
+        Args:
+            args: The positional arguments that will be passed to the superclass's constructor.
+            kwargs: The keyword arguments that will be passed to the superclass's constructor.
+                The `allow_none` argument will be set to :data:`True`.
+        """
         kwargs["allow_none"] = True
         super().__init__(*args, **kwargs)
         self.register_introspection_functions()
@@ -140,13 +145,12 @@ def __init__(self, *args, **kwargs):
         self.register_function(self.add_rpc_function)
 
     def quit(self) -> None:
+        """Quit the server."""
         self._BaseServer__shutdown_request = True
         return None
 
     def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> None:
-        """Add a function to the server.
-
-        This is meant to be executed remotely.
+        """Add a function to the server from the local server proxy.
 
         Args:
               name: The name of the function.
@@ -157,6 +161,11 @@ def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> N
         self.register_function(function)
 
     def serve_forever(self, poll_interval: float = 0.5) -> None:
+        """Extend the superclass method with an additional print.
+
+        Once executed in the local server proxy, the print gives us a clear string to expect
+        when starting the server. The print means the function was executed on the XML-RPC server.
+        """
         print("XMLRPC OK")
         super().serve_forever(poll_interval)
 
@@ -164,19 +173,12 @@ def serve_forever(self, poll_interval: float = 0.5) -> None:
 class ScapyTrafficGenerator(CapturingTrafficGenerator):
     """Provides access to scapy functions via an RPC interface.
 
-    The traffic generator first starts an XML-RPC on the remote TG node.
-    Then it populates the server with functions which use the Scapy library
-    to send/receive traffic.
-
-    Any packets sent to the remote server are first converted to bytes.
-    They are received as xmlrpc.client.Binary objects on the server side.
-    When the server sends the packets back, they are also received as
-    xmlrpc.client.Binary object on the client side, are converted back to Scapy
-    packets and only then returned from the methods.
+    The class extends the base class with remote execution of scapy functions.
 
-    Arguments:
-        tg_node: The node where the traffic generator resides.
-        config: The user configuration of the traffic generator.
+    Any packets sent to the remote server are first converted to bytes. They are received as
+    :class:`~xmlrpc.client.Binary` objects on the server side. When the server sends the packets
+    back, they are also received as :class:`~xmlrpc.client.Binary` objects on the client side, are
+    converted back to :class:`~scapy.packet.Packet` objects and only then returned from the methods.
 
     Attributes:
         session: The exclusive interactive remote session created by the Scapy
@@ -190,6 +192,22 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator):
     _config: ScapyTrafficGeneratorConfig
 
     def __init__(self, tg_node: Node, config: ScapyTrafficGeneratorConfig):
+        """Extend the constructor with Scapy TG specifics.
+
+        The traffic generator first starts an XML-RPC server on the remote `tg_node`.
+        Then it populates the server with functions which use the Scapy library
+        to send/receive traffic:
+
+            * :func:`scapy_send_packets_and_capture`
+            * :func:`scapy_send_packets`
+
+        To enable verbose logging from the xmlrpc client, use the :option:`--verbose`
+        command line argument or the :envvar:`DTS_VERBOSE` environment variable.
+
+        Args:
+            tg_node: The node where the traffic generator resides.
+            config: The traffic generator's test run configuration.
+        """
         super().__init__(tg_node, config)
 
         assert (
@@ -231,10 +249,8 @@ def _start_xmlrpc_server_in_remote_python(self, listen_port: int) -> None:
         # or class, so strip all lines containing only whitespace
         src = "\n".join([line for line in src.splitlines() if not line.isspace() and line != ""])
 
-        spacing = "\n" * 4
-
         # execute it in the python terminal
-        self.session.send_command(spacing + src + spacing)
+        self.session.send_command(src + "\n")
         self.session.send_command(
             f"server = QuittableXMLRPCServer(('0.0.0.0', {listen_port}));server.serve_forever()",
             "XMLRPC OK",
@@ -267,6 +283,7 @@ def _send_packets_and_capture(
         return scapy_packets
 
     def close(self) -> None:
+        """Close the traffic generator."""
         try:
             self.rpc_server_proxy.quit()
         except ConnectionRefusedError:
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
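
The server/proxy split described in this module can be reproduced with the
standard library alone. The sketch below is a standalone demonstration of the
same pattern; the port, the function and the addresses are made up for the
demo and are not taken from DTS.

    import threading
    import xmlrpc.client
    from xmlrpc.server import SimpleXMLRPCServer

    def echo(message: str) -> str:
        """A trivial function exposed over the RPC interface."""
        return message

    # allow_none=True mirrors what QuittableXMLRPCServer forces on.
    server = SimpleXMLRPCServer(("127.0.0.1", 8000), allow_none=True)
    server.register_function(echo)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # The client talks to the server through a local proxy, just as the
    # Scapy traffic generator does with its remote Python session.
    proxy = xmlrpc.client.ServerProxy("http://127.0.0.1:8000")
    assert proxy.echo("XMLRPC OK") == "XMLRPC OK"
    server.shutdown()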

* [PATCH v8 21/21] dts: test suites docstring update
  2023-11-23 15:13               ` [PATCH v8 00/21] dts: docstrings update Juraj Linkeš
                                   ` (19 preceding siblings ...)
  2023-11-23 15:13                 ` [PATCH v8 20/21] dts: scapy tg " Juraj Linkeš
@ 2023-11-23 15:13                 ` Juraj Linkeš
  2023-12-01 16:00                 ` [PATCH v8 00/21] dts: docstrings update Yoan Picchi
  2023-12-04 10:24                 ` [PATCH v9 " Juraj Linkeš
  22 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-11-23 15:13 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/tests/TestSuite_hello_world.py | 16 +++++---
 dts/tests/TestSuite_os_udp.py      | 20 ++++++----
 dts/tests/TestSuite_smoke_tests.py | 61 ++++++++++++++++++++++++------
 3 files changed, 72 insertions(+), 25 deletions(-)

diff --git a/dts/tests/TestSuite_hello_world.py b/dts/tests/TestSuite_hello_world.py
index 768ba1cfa8..fd7ff1534d 100644
--- a/dts/tests/TestSuite_hello_world.py
+++ b/dts/tests/TestSuite_hello_world.py
@@ -1,7 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2010-2014 Intel Corporation
 
-"""
+"""The DPDK hello world app test suite.
+
 Run the helloworld example app and verify it prints a message for each used core.
 No other EAL parameters apart from cores are used.
 """
@@ -15,22 +16,25 @@
 
 
 class TestHelloWorld(TestSuite):
+    """DPDK hello world app test suite."""
+
     def set_up_suite(self) -> None:
-        """
+        """Set up the test suite.
+
         Setup:
             Build the app we're about to test - helloworld.
         """
         self.app_helloworld_path = self.sut_node.build_dpdk_app("helloworld")
 
     def test_hello_world_single_core(self) -> None:
-        """
+        """Single core test case.
+
         Steps:
             Run the helloworld app on the first usable logical core.
         Verify:
             The app prints a message from the used core:
             "hello from core <core_id>"
         """
-
         # get the first usable core
         lcore_amount = LogicalCoreCount(1, 1, 1)
         lcores = LogicalCoreCountFilter(self.sut_node.lcores, lcore_amount).filter()
@@ -42,14 +46,14 @@ def test_hello_world_single_core(self) -> None:
         )
 
     def test_hello_world_all_cores(self) -> None:
-        """
+        """All cores test case.
+
         Steps:
             Run the helloworld app on all usable logical cores.
         Verify:
             The app prints a message from all used cores:
             "hello from core <core_id>"
         """
-
         # get the maximum logical core number
         eal_para = self.sut_node.create_eal_parameters(
             lcore_filter_specifier=LogicalCoreList(self.sut_node.lcores)
diff --git a/dts/tests/TestSuite_os_udp.py b/dts/tests/TestSuite_os_udp.py
index bf6b93deb5..2cf29d37bb 100644
--- a/dts/tests/TestSuite_os_udp.py
+++ b/dts/tests/TestSuite_os_udp.py
@@ -1,7 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
+"""Basic IPv4 OS routing test suite.
+
 Configure SUT node to route traffic from if1 to if2.
 Send a packet to the SUT node, verify it comes back on the second port on the TG node.
 """
@@ -13,24 +14,26 @@
 
 
 class TestOSUdp(TestSuite):
+    """IPv4 UDP OS routing test suite."""
+
     def set_up_suite(self) -> None:
-        """
+        """Set up the test suite.
+
         Setup:
-            Configure SUT ports and SUT to route traffic from if1 to if2.
+            Bind the SUT ports to the OS driver, configure the ports and configure the SUT
+            to route traffic from if1 to if2.
         """
-
-        # This test uses kernel drivers
         self.sut_node.bind_ports_to_driver(for_dpdk=False)
         self.configure_testbed_ipv4()
 
     def test_os_udp(self) -> None:
-        """
+        """Basic UDP IPv4 traffic test case.
+
         Steps:
             Send a UDP packet.
         Verify:
             The packet with proper addresses arrives at the other TG port.
         """
-
         packet = Ether() / IP() / UDP()
 
         received_packets = self.send_packet_and_capture(packet)
@@ -40,7 +43,8 @@ def test_os_udp(self) -> None:
         self.verify_packets(expected_packet, received_packets)
 
     def tear_down_suite(self) -> None:
-        """
+        """Tear down the test suite.
+
         Teardown:
             Remove the SUT port configuration configured in setup.
         """
diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py
index 8958f58dac..5e2bac14bd 100644
--- a/dts/tests/TestSuite_smoke_tests.py
+++ b/dts/tests/TestSuite_smoke_tests.py
@@ -1,6 +1,17 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 University of New Hampshire
 
+"""Smoke test suite.
+
+Smoke tests are a class of tests which are used for validating a minimal set of important features.
+These are the most important features without which (or when they're faulty) the software wouldn't
+work properly. Thus, if any failure occurs while testing these features,
+there isn't much reason to continue testing, as the software is fundamentally broken.
+
+These tests don't have to include only DPDK tests, as the reason for failures could be
+in the infrastructure (a faulty link between NICs or a misconfiguration).
+"""
+
 import re
 
 from framework.config import PortConfig
@@ -11,23 +22,39 @@
 
 
 class SmokeTests(TestSuite):
+    """DPDK and infrastructure smoke test suite.
+
+    The test cases validate the most basic DPDK functionality needed for all other test suites.
+    The infrastructure also needs to be tested, as that is also used by all other test suites.
+
+    Attributes:
+        is_blocking: This test suite will block the execution of all other test suites
+            in the build target after it.
+        nics_in_node: The NICs present on the SUT node.
+    """
+
     is_blocking = True
     # dicts in this list are expected to have two keys:
     # "pci_address" and "current_driver"
     nics_in_node: list[PortConfig] = []
 
     def set_up_suite(self) -> None:
-        """
+        """Set up the test suite.
+
         Setup:
-            Set the build directory path and generate a list of NICs in the SUT node.
+            Set the build directory path and the list of NICs in the SUT node.
         """
         self.dpdk_build_dir_path = self.sut_node.remote_dpdk_build_dir
         self.nics_in_node = self.sut_node.config.ports
 
     def test_unit_tests(self) -> None:
-        """
+        """DPDK meson ``fast-tests`` unit tests.
+
+        Test that all unit tests from the ``fast-tests`` suite pass.
+        The suite is a subset with only the most basic tests.
+
         Test:
-            Run the fast-test unit-test suite through meson.
+            Run the ``fast-tests`` unit test suite through meson.
         """
         self.sut_node.main_session.send_command(
             f"meson test -C {self.dpdk_build_dir_path} --suite fast-tests -t 60",
@@ -37,9 +64,14 @@ def test_unit_tests(self) -> None:
         )
 
     def test_driver_tests(self) -> None:
-        """
+        """DPDK meson ``driver-tests`` unit tests.
+
+        Test that all unit tests from the ``driver-tests`` suite pass.
+        The suite is a subset with driver tests. This suite may be run with virtual devices
+        configured in the test run configuration.
+
         Test:
-            Run the driver-test unit-test suite through meson.
+            Run the ``driver-tests`` unit test suite through meson.
         """
         vdev_args = ""
         for dev in self.sut_node.virtual_devices:
@@ -60,9 +92,12 @@ def test_driver_tests(self) -> None:
         )
 
     def test_devices_listed_in_testpmd(self) -> None:
-        """
+        """Testpmd device discovery.
+
+        Test that the devices configured in the test run configuration are found in testpmd.
+
         Test:
-            Uses testpmd driver to verify that devices have been found by testpmd.
+            List all devices found in testpmd and verify the configured devices are among them.
         """
         testpmd_driver = self.sut_node.create_interactive_shell(TestPmdShell, privileged=True)
         dev_list = [str(x) for x in testpmd_driver.get_devices()]
@@ -74,10 +109,14 @@ def test_devices_listed_in_testpmd(self) -> None:
             )
 
     def test_device_bound_to_driver(self) -> None:
-        """
+        """Device driver in OS.
+
+        Test that the devices configured in the test run configuration are bound to
+        the proper driver.
+
         Test:
-            Ensure that all drivers listed in the config are bound to the correct
-            driver.
+            List all devices with the ``dpdk-devbind.py`` script and verify that
+            the configured devices are bound to the proper driver.
         """
         path_to_devbind = self.sut_node.path_to_devbind_script
 
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
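
As a compact reference for the conventions applied in this patch, here is a
skeleton test suite in the Google format with PEP257 one-line summaries. The
suite, its steps and the attribute are made up for the example.

    class TestExample(TestSuite):
        """Example test suite illustrating the docstring conventions.

        The first line is the PEP257 one-line summary, followed by a blank
        line and an elaborating body.

        Attributes:
            is_blocking: Whether a failure blocks the remaining test suites.
        """

        is_blocking = False

        def set_up_suite(self) -> None:
            """Set up the test suite.

            Setup:
                Describe what the suite setup configures.
            """

        def test_example(self) -> None:
            """Example test case.

            Steps:
                Describe the steps of the test case.
            Verify:
                Describe what is checked and how.
            """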

* Re: [PATCH v7 09/21] dts: test result docstring update
  2023-11-20 16:33                   ` Juraj Linkeš
@ 2023-11-30 21:20                     ` Jeremy Spewock
  0 siblings, 0 replies; 255+ messages in thread
From: Jeremy Spewock @ 2023-11-30 21:20 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: thomas, Honnappa.Nagarahalli, probb, paul.szczepanek, yoan.picchi, dev

On Mon, Nov 20, 2023 at 11:33 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
wrote:

> On Thu, Nov 16, 2023 at 11:47 PM Jeremy Spewock <jspewock@iol.unh.edu>
> wrote:
> >
> > The only comments I had on this were a few places where I think
> attribute sections should be class variables instead. I tried to mark all
> of the places I saw it and it could be a difference where because of the
> way they are subclassed they might do it differently but I'm unsure.
> >
> > On Wed, Nov 15, 2023 at 8:12 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
> wrote:
> >>
> >> Format according to the Google format and PEP257, with slight
> >> deviations.
> >>
> >> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> >> ---
> >>  dts/framework/test_result.py | 292 ++++++++++++++++++++++++++++-------
> >>  1 file changed, 234 insertions(+), 58 deletions(-)
> >>
> >> diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
> >> index 603e18872c..05e210f6e7 100644
> >> --- a/dts/framework/test_result.py
> >> +++ b/dts/framework/test_result.py
> >> @@ -2,8 +2,25 @@
> >>  # Copyright(c) 2023 PANTHEON.tech s.r.o.
> >>  # Copyright(c) 2023 University of New Hampshire
> >>
> >> -"""
> >> -Generic result container and reporters
> >> +r"""Record and process DTS results.
> >> +
> >> +The results are recorded in a hierarchical manner:
> >> +
> >> +    * :class:`DTSResult` contains
> >> +    * :class:`ExecutionResult` contains
> >> +    * :class:`BuildTargetResult` contains
> >> +    * :class:`TestSuiteResult` contains
> >> +    * :class:`TestCaseResult`
> >> +
> >> +Each result may contain multiple lower level results, e.g. there are
> multiple
> >> +:class:`TestSuiteResult`\s in a :class:`BuildTargetResult`.
> >> +The results have common parts, such as setup and teardown results,
> captured in :class:`BaseResult`,
> >> +which also defines some common behaviors in its methods.
> >> +
> >> +Each result class has its own idiosyncrasies which they implement in
> overridden methods.
> >> +
> >> +The :option:`--output` command line argument and the
> :envvar:`DTS_OUTPUT_DIR` environment
> >> +variable modify the directory where the files with results will be
> stored.
> >>  """
> >>
> >>  import os.path
> >> @@ -26,26 +43,34 @@
> >>
> >>
> >>  class Result(Enum):
> >> -    """
> >> -    An Enum defining the possible states that
> >> -    a setup, a teardown or a test case may end up in.
> >> -    """
> >> +    """The possible states that a setup, a teardown or a test case may
> end up in."""
> >>
> >> +    #:
> >>      PASS = auto()
> >> +    #:
> >>      FAIL = auto()
> >> +    #:
> >>      ERROR = auto()
> >> +    #:
> >>      SKIP = auto()
> >>
> >>      def __bool__(self) -> bool:
> >> +        """Only PASS is True."""
> >>          return self is self.PASS
> >>
> >>
> >>  class FixtureResult(object):
> >> -    """
> >> -    A record that stored the result of a setup or a teardown.
> >> -    The default is FAIL because immediately after creating the object
> >> -    the setup of the corresponding stage will be executed, which also
> guarantees
> >> -    the execution of teardown.
> >> +    """A record that stores the result of a setup or a teardown.
> >> +
> >> +    FAIL is a sensible default since it prevents false positives
> >> +        (which could happen if the default was PASS).
> >> +
> >> +    Preventing false positives or other false results is preferable
> since a failure
> >> +    is most likely to be investigated (the other false results may
> not be investigated at all).
> >> +
> >> +    Attributes:
> >> +        result: The associated result.
> >> +        error: The error in case of a failure.
> >>      """
> >
> >
> > I think the items in the attributes section should instead be "#:"
> because they are class variables.
> >
>
> Making these class variables would make the value the same for all
> instances, of which there are plenty. Why do you think these should be
> class variables?
>

That explanation makes more sense. I guess I was thinking of class
variables as anything we statically define as part of the class (i.e., like
we say the class will always have a `result` and an `error` attribute), but
I could have just been mistaken. Going by the definition of instance
variables as ones that can differ between instances, I agree this makes
this comment and the other ones you touched on obsolete.


>
> >>
> >>
> >>      result: Result
> >> @@ -56,21 +81,32 @@ def __init__(
> >>          result: Result = Result.FAIL,
> >>          error: Exception | None = None,
> >>      ):
> >> +        """Initialize the constructor with the fixture result and
> store a possible error.
> >> +
> >> +        Args:
> >> +            result: The result to store.
> >> +            error: The error which happened when a failure occurred.
> >> +        """
> >>          self.result = result
> >>          self.error = error
> >>
> >>      def __bool__(self) -> bool:
> >> +        """A wrapper around the stored :class:`Result`."""
> >>          return bool(self.result)
> >>
> >>
> >>  class Statistics(dict):
> >> -    """
> >> -    A helper class used to store the number of test cases by its result
> >> -    along a few other basic information.
> >> -    Using a dict provides a convenient way to format the data.
> >> +    """How many test cases ended in which result state along some
> other basic information.
> >> +
> >> +    Subclassing :class:`dict` provides a convenient way to format the
> data.
> >>      """
> >>
> >>      def __init__(self, dpdk_version: str | None):
> >> +        """Extend the constructor with relevant keys.
> >> +
> >> +        Args:
> >> +            dpdk_version: The version of tested DPDK.
> >> +        """
> >
> >
> > Should we maybe mark the "PASS RATE" and the "DPDK VERSION" as instance
> variables of the class?
> >
>
> This is a dict, so these won't work as instance variables, but it
> makes sense to document these keys, so I'll add that.
>
> >>
> >>          super(Statistics, self).__init__()
> >>          for result in Result:
> >>              self[result.name] = 0
> >> @@ -78,8 +114,17 @@ def __init__(self, dpdk_version: str | None):
> >>          self["DPDK VERSION"] = dpdk_version
> >>
> >>      def __iadd__(self, other: Result) -> "Statistics":
> >> -        """
> >> -        Add a Result to the final count.
> >> +        """Add a Result to the final count.
> >> +
> >> +        Example:
> >> +            stats: Statistics = Statistics()  # empty Statistics
> >> +            stats += Result.PASS  # add a Result to `stats`
> >> +
> >> +        Args:
> >> +            other: The Result to add to this statistics object.
> >> +
> >> +        Returns:
> >> +            The modified statistics object.
> >>          """
> >>          self[other.name] += 1
> >>          self["PASS RATE"] = (
> >> @@ -90,9 +135,7 @@ def __iadd__(self, other: Result) -> "Statistics":
> >>          return self
> >>
> >>      def __str__(self) -> str:
> >> -        """
> >> -        Provide a string representation of the data.
> >> -        """
> >> +        """Each line contains the formatted key = value pair."""
> >>          stats_str = ""
> >>          for key, value in self.items():
> >>              stats_str += f"{key:<12} = {value}\n"
> >> @@ -102,10 +145,16 @@ def __str__(self) -> str:
> >>
> >>
> >>  class BaseResult(object):
> >> -    """
> >> -    The Base class for all results. Stores the results of
> >> -    the setup and teardown portions of the corresponding stage
> >> -    and a list of results from each inner stage in _inner_results.
> >> +    """Common data and behavior of DTS results.
> >> +
> >> +    Stores the results of the setup and teardown portions of the
> corresponding stage.
> >> +    The hierarchical nature of DTS results is captured recursively in
> an internal list.
> >> +    A stage is each level in this particular hierarchy (pre-execution
> or the top-most level,
> >> +    execution, build target, test suite and test case.)
> >> +
> >> +    Attributes:
> >> +        setup_result: The result of the setup of the particular stage.
> >> +        teardown_result: The results of the teardown of the particular
> stage.
> >>      """
> >
> >
> > I think this might be another case of the attributes should be marked as
> class variables instead of instance variables.
> >
>
> This is the same as in FixtureResult. For example, there could be
> multiple build targets with different results.
>
> >>
> >>
> >>      setup_result: FixtureResult
> >> @@ -113,15 +162,28 @@ class BaseResult(object):
> >>      _inner_results: MutableSequence["BaseResult"]
> >>
> >>      def __init__(self):
> >> +        """Initialize the constructor."""
> >>          self.setup_result = FixtureResult()
> >>          self.teardown_result = FixtureResult()
> >>          self._inner_results = []
> >>
> >>      def update_setup(self, result: Result, error: Exception | None =
> None) -> None:
> >> +        """Store the setup result.
> >> +
> >> +        Args:
> >> +            result: The result of the setup.
> >> +            error: The error that occurred in case of a failure.
> >> +        """
> >>          self.setup_result.result = result
> >>          self.setup_result.error = error
> >>
> >>      def update_teardown(self, result: Result, error: Exception | None
> = None) -> None:
> >> +        """Store the teardown result.
> >> +
> >> +        Args:
> >> +            result: The result of the teardown.
> >> +            error: The error that occurred in case of a failure.
> >> +        """
> >>          self.teardown_result.result = result
> >>          self.teardown_result.error = error
> >>
> >> @@ -141,27 +203,55 @@ def _get_inner_errors(self) -> list[Exception]:
> >>          ]
> >>
> >>      def get_errors(self) -> list[Exception]:
> >> +        """Compile errors from the whole result hierarchy.
> >> +
> >> +        Returns:
> >> +            The errors from setup, teardown and all errors found in
> the whole result hierarchy.
> >> +        """
> >>          return self._get_setup_teardown_errors() +
> self._get_inner_errors()
> >>
> >>      def add_stats(self, statistics: Statistics) -> None:
> >> +        """Collate stats from the whole result hierarchy.
> >> +
> >> +        Args:
> >> +            statistics: The :class:`Statistics` object where the stats
> will be collated.
> >> +        """
> >>          for inner_result in self._inner_results:
> >>              inner_result.add_stats(statistics)
> >>
> >>
> >>  class TestCaseResult(BaseResult, FixtureResult):
> >> -    """
> >> -    The test case specific result.
> >> -    Stores the result of the actual test case.
> >> -    Also stores the test case name.
> >> +    r"""The test case specific result.
> >> +
> >> +    Stores the result of the actual test case. This is done by adding
> an extra superclass
> >> +    in :class:`FixtureResult`. The setup and teardown results are
> :class:`FixtureResult`\s and
> >> +    the class is itself a record of the test case.
> >> +
> >> +    Attributes:
> >> +        test_case_name: The test case name.
> >>      """
> >>
> >
> > Another spot where I think this should have a class variable comment.
> >
> >>
> >>      test_case_name: str
> >>
> >>      def __init__(self, test_case_name: str):
> >> +        """Extend the constructor with `test_case_name`.
> >> +
> >> +        Args:
> >> +            test_case_name: The test case's name.
> >> +        """
> >>          super(TestCaseResult, self).__init__()
> >>          self.test_case_name = test_case_name
> >>
> >>      def update(self, result: Result, error: Exception | None = None)
> -> None:
> >> +        """Update the test case result.
> >> +
> >> +        This updates the result of the test case itself and doesn't
> affect
> >> +        the results of the setup and teardown steps in any way.
> >> +
> >> +        Args:
> >> +            result: The result of the test case.
> >> +            error: The error that occurred in case of a failure.
> >> +        """
> >>          self.result = result
> >>          self.error = error
> >>
> >> @@ -171,38 +261,66 @@ def _get_inner_errors(self) -> list[Exception]:
> >>          return []
> >>
> >>      def add_stats(self, statistics: Statistics) -> None:
> >> +        r"""Add the test case result to statistics.
> >> +
> >> +        The base method goes through the hierarchy recursively and
> this method is here to stop
> >> +        the recursion, as the :class:`TestCaseResult`\s are the leaves
> of the hierarchy tree.
> >> +
> >> +        Args:
> >> +            statistics: The :class:`Statistics` object where the stats
> will be added.
> >> +        """
> >>          statistics += self.result
> >>
> >>      def __bool__(self) -> bool:
> >> +        """The test case passed only if setup, teardown and the test
> case itself passed."""
> >>          return (
> >>              bool(self.setup_result) and bool(self.teardown_result) and
> bool(self.result)
> >>          )
> >>
> >>
> >>  class TestSuiteResult(BaseResult):
> >> -    """
> >> -    The test suite specific result.
> >> -    The _inner_results list stores results of test cases in a given
> test suite.
> >> -    Also stores the test suite name.
> >> +    """The test suite specific result.
> >> +
> >> +    The internal list stores the results of all test cases in a given
> test suite.
> >> +
> >> +    Attributes:
> >> +        suite_name: The test suite name.
> >>      """
> >>
> >
> > I think this should also be a class variable.
> >
> >
> >>
> >>      suite_name: str
> >>
> >>      def __init__(self, suite_name: str):
> >> +        """Extend the constructor with `suite_name`.
> >> +
> >> +        Args:
> >> +            suite_name: The test suite's name.
> >> +        """
> >>          super(TestSuiteResult, self).__init__()
> >>          self.suite_name = suite_name
> >>
> >>      def add_test_case(self, test_case_name: str) -> TestCaseResult:
> >> +        """Add and return the inner result (test case).
> >> +
> >> +        Returns:
> >> +            The test case's result.
> >> +        """
> >>          test_case_result = TestCaseResult(test_case_name)
> >>          self._inner_results.append(test_case_result)
> >>          return test_case_result
> >>
> >>
> >>  class BuildTargetResult(BaseResult):
> >> -    """
> >> -    The build target specific result.
> >> -    The _inner_results list stores results of test suites in a given
> build target.
> >> -    Also stores build target specifics, such as compiler used to build
> DPDK.
> >> +    """The build target specific result.
> >> +
> >> +    The internal list stores the results of all test suites in a given
> build target.
> >> +
> >> +    Attributes:
> >> +        arch: The DPDK build target architecture.
> >> +        os: The DPDK build target operating system.
> >> +        cpu: The DPDK build target CPU.
> >> +        compiler: The DPDK build target compiler.
> >> +        compiler_version: The DPDK build target compiler version.
> >> +        dpdk_version: The built DPDK version.
> >>      """
> >
> >
> > I think this should be broken into class variables as well.
> >
> >>
> >>
> >>      arch: Architecture
> >> @@ -213,6 +331,11 @@ class BuildTargetResult(BaseResult):
> >>      dpdk_version: str | None
> >>
> >>      def __init__(self, build_target: BuildTargetConfiguration):
> >> +        """Extend the constructor with the `build_target`'s build
> target config.
> >> +
> >> +        Args:
> >> +            build_target: The build target's test run configuration.
> >> +        """
> >>          super(BuildTargetResult, self).__init__()
> >>          self.arch = build_target.arch
> >>          self.os = build_target.os
> >> @@ -222,20 +345,35 @@ def __init__(self, build_target:
> BuildTargetConfiguration):
> >>          self.dpdk_version = None
> >>
> >>      def add_build_target_info(self, versions: BuildTargetInfo) -> None:
> >> +        """Add information about the build target gathered at runtime.
> >> +
> >> +        Args:
> >> +            versions: The additional information.
> >> +        """
> >>          self.compiler_version = versions.compiler_version
> >>          self.dpdk_version = versions.dpdk_version
> >>
> >>      def add_test_suite(self, test_suite_name: str) -> TestSuiteResult:
> >> +        """Add and return the inner result (test suite).
> >> +
> >> +        Returns:
> >> +            The test suite's result.
> >> +        """
> >>          test_suite_result = TestSuiteResult(test_suite_name)
> >>          self._inner_results.append(test_suite_result)
> >>          return test_suite_result
> >>
> >>
> >>  class ExecutionResult(BaseResult):
> >> -    """
> >> -    The execution specific result.
> >> -    The _inner_results list stores results of build targets in a given
> execution.
> >> -    Also stores the SUT node configuration.
> >> +    """The execution specific result.
> >> +
> >> +    The internal list stores the results of all build targets in a
> given execution.
> >> +
> >> +    Attributes:
> >> +        sut_node: The SUT node used in the execution.
> >> +        sut_os_name: The operating system of the SUT node.
> >> +        sut_os_version: The operating system version of the SUT node.
> >> +        sut_kernel_version: The operating system kernel version of the
> SUT node.
> >>      """
> >>
> >
> > I think these should be class variables as well.
> >
> >>
> >>      sut_node: NodeConfiguration
> >> @@ -244,36 +382,55 @@ class ExecutionResult(BaseResult):
> >>      sut_kernel_version: str
> >>
> >>      def __init__(self, sut_node: NodeConfiguration):
> >> +        """Extend the constructor with the `sut_node`'s config.
> >> +
> >> +        Args:
> >> +            sut_node: The SUT node's test run configuration used in
> the execution.
> >> +        """
> >>          super(ExecutionResult, self).__init__()
> >>          self.sut_node = sut_node
> >>
> >>      def add_build_target(
> >>          self, build_target: BuildTargetConfiguration
> >>      ) -> BuildTargetResult:
> >> +        """Add and return the inner result (build target).
> >> +
> >> +        Args:
> >> +            build_target: The build target's test run configuration.
> >> +
> >> +        Returns:
> >> +            The build target's result.
> >> +        """
> >>          build_target_result = BuildTargetResult(build_target)
> >>          self._inner_results.append(build_target_result)
> >>          return build_target_result
> >>
> >>      def add_sut_info(self, sut_info: NodeInfo) -> None:
> >> +        """Add SUT information gathered at runtime.
> >> +
> >> +        Args:
> >> +            sut_info: The additional SUT node information.
> >> +        """
> >>          self.sut_os_name = sut_info.os_name
> >>          self.sut_os_version = sut_info.os_version
> >>          self.sut_kernel_version = sut_info.kernel_version
> >>
> >>
> >>  class DTSResult(BaseResult):
> >> -    """
> >> -    Stores environment information and test results from a DTS run,
> which are:
> >> -    * Execution level information, such as SUT and TG hardware.
> >> -    * Build target level information, such as compiler, target OS and
> cpu.
> >> -    * Test suite results.
> >> -    * All errors that are caught and recorded during DTS execution.
> >> +    """Stores environment information and test results from a DTS run.
> >>
> >> -    The information is stored in nested objects.
> >> +        * Execution level information, such as testbed and the test
> suite list,
> >> +        * Build target level information, such as compiler, target OS
> and cpu,
> >> +        * Test suite and test case results,
> >> +        * All errors that are caught and recorded during DTS execution.
> >>
> >> -    The class is capable of computing the return code used to exit DTS
> with
> >> -    from the stored error.
> >> +    The information is stored hierarchically. This is the first level
> of the hierarchy
> >> +    and as such is where the data from the whole hierarchy is collated
> or processed.
> >>
> >> -    It also provides a brief statistical summary of passed/failed test
> cases.
> >> +    The internal list stores the results of all executions.
> >> +
> >> +    Attributes:
> >> +        dpdk_version: The DPDK version to record.
> >>      """
> >>
> >
> > I think this should be a class variable as well.
> >
>
> This is the only place where making this a class variable would work,
> but I don't see a reason for it. An instance variable works just as
> well.
>
> >>
> >>      dpdk_version: str | None
> >> @@ -284,6 +441,11 @@ class DTSResult(BaseResult):
> >>      _stats_filename: str
> >>
> >>      def __init__(self, logger: DTSLOG):
> >> +        """Extend the constructor with top-level specifics.
> >> +
> >> +        Args:
> >> +            logger: The logger instance the whole result will use.
> >> +        """
> >>          super(DTSResult, self).__init__()
> >>          self.dpdk_version = None
> >>          self._logger = logger
> >> @@ -293,21 +455,33 @@ def __init__(self, logger: DTSLOG):
> >>          self._stats_filename = os.path.join(SETTINGS.output_dir,
> "statistics.txt")
> >>
> >>      def add_execution(self, sut_node: NodeConfiguration) ->
> ExecutionResult:
> >> +        """Add and return the inner result (execution).
> >> +
> >> +        Args:
> >> +            sut_node: The SUT node's test run configuration.
> >> +
> >> +        Returns:
> >> +            The execution's result.
> >> +        """
> >>          execution_result = ExecutionResult(sut_node)
> >>          self._inner_results.append(execution_result)
> >>          return execution_result
> >>
> >>      def add_error(self, error: Exception) -> None:
> >> +        """Record an error that occurred outside any execution.
> >> +
> >> +        Args:
> >> +            error: The exception to record.
> >> +        """
> >>          self._errors.append(error)
> >>
> >>      def process(self) -> None:
> >> -        """
> >> -        Process the data after a DTS run.
> >> -        The data is added to nested objects during runtime and this
> parent object
> >> -        is not updated at that time. This requires us to process the
> nested data
> >> -        after it's all been gathered.
> >> +        """Process the data after a whole DTS run.
> >> +
> >> +        The data is added to inner objects during runtime and this
> object is not updated
> >> +        at that time. This requires us to process the inner data after
> it's all been gathered.
> >>
> >> -        The processing gathers all errors and the result statistics of
> test cases.
> >> +        The processing gathers all errors and the statistics of test
> case results.
> >>          """
> >>          self._errors += self.get_errors()
> >>          if self._errors and self._logger:
> >> @@ -321,8 +495,10 @@ def process(self) -> None:
> >>              stats_file.write(str(self._stats_result))
> >>
> >>      def get_return_code(self) -> int:
> >> -        """
> >> -        Go through all stored Exceptions and return the highest error
> code found.
> >> +        """Go through all stored Exceptions and return the final DTS
> error code.
> >> +
> >> +        Returns:
> >> +            The highest error code found.
> >>          """
> >>          for error in self._errors:
> >>              error_return_code = ErrorSeverity.GENERIC_ERR
> >> --
> >> 2.34.1
> >>
>

^ permalink raw reply	[flat|nested] 255+ messages in thread
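
The distinction discussed in this thread is easiest to see side by side. In
the Sphinx conventions used by this series, class variables get ``#:``
comments while instance attributes set in ``__init__`` are documented in the
class docstring's Attributes section. The class below is a minimal sketch
written for this comparison, not code from the patch.

    class ExampleResult:
        """Example of the two documentation styles.

        Attributes:
            error: Differs between instances, so it's documented here
                rather than with a ``#:`` comment.
        """

        #: A true class variable: a single value shared by all instances.
        max_errors: int = 10

        def __init__(self, error: Exception | None = None):
            self.error = error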

* Re: [PATCH v8 12/21] dts: interactive remote session docstring update
  2023-11-23 15:13                 ` [PATCH v8 12/21] dts: interactive " Juraj Linkeš
@ 2023-11-30 21:49                   ` Jeremy Spewock
  2023-12-04  9:50                     ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Jeremy Spewock @ 2023-11-30 21:49 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: thomas, Honnappa.Nagarahalli, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro, dev

On Thu, Nov 23, 2023 at 10:14 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
wrote:

> Format according to the Google format and PEP257, with slight
> deviations.
>
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>  .../interactive_remote_session.py             | 36 +++----
>  .../remote_session/interactive_shell.py       | 99 +++++++++++--------
>  dts/framework/remote_session/python_shell.py  | 26 ++++-
>  dts/framework/remote_session/testpmd_shell.py | 58 +++++++++--
>  4 files changed, 149 insertions(+), 70 deletions(-)
>
> diff --git a/dts/framework/remote_session/interactive_remote_session.py
> b/dts/framework/remote_session/interactive_remote_session.py
> index 098ded1bb0..1cc82e3377 100644
> --- a/dts/framework/remote_session/interactive_remote_session.py
> +++ b/dts/framework/remote_session/interactive_remote_session.py
> @@ -22,27 +22,23 @@
>  class InteractiveRemoteSession:
>      """SSH connection dedicated to interactive applications.
>
> -    This connection is created using paramiko and is a persistent
> connection to the
> -    host. This class defines methods for connecting to the node and
> configures this
> -    connection to send "keep alive" packets every 30 seconds. Because
> paramiko attempts
> -    to use SSH keys to establish a connection first, providing a password
> is optional.
> -    This session is utilized by InteractiveShells and cannot be
> interacted with
> -    directly.
> -
> -    Arguments:
> -        node_config: Configuration class for the node you are connecting
> to.
> -        _logger: Desired logger for this session to use.
> +    The connection is created using `paramiko <
> https://docs.paramiko.org/en/latest/>`_
> +    and is a persistent connection to the host. This class defines the
> methods for connecting
> +    to the node and configures the connection to send "keep alive"
> packets every 30 seconds.
> +    Because paramiko attempts to use SSH keys to establish a connection
> first, providing
> +    a password is optional. This session is utilized by InteractiveShells
> +    and cannot be interacted with directly.
>
>      Attributes:
> -        hostname: Hostname that will be used to initialize a connection
> to the node.
> -        ip: A subsection of hostname that removes the port for the
> connection if there
> +        hostname: The hostname that will be used to initialize a
> connection to the node.
> +        ip: A subsection of `hostname` that removes the port for the
> connection if there
>              is one. If there is no port, this will be the same as
> hostname.
> -        port: Port to use for the ssh connection. This will be extracted
> from the
> -            hostname if there is a port included, otherwise it will
> default to 22.
> +        port: Port to use for the ssh connection. This will be extracted
> from `hostname`
> +            if there is a port included, otherwise it will default to
> ``22``.
>          username: User to connect to the node with.
>          password: Password of the user connecting to the host. This will
> default to an
>              empty string if a password is not provided.
> -        session: Underlying paramiko connection.
> +        session: The underlying paramiko connection.
>
>      Raises:
>          SSHConnectionError: There is an error creating the SSH connection.
> @@ -58,9 +54,15 @@ class InteractiveRemoteSession:
>      _node_config: NodeConfiguration
>      _transport: Transport | None
>
> -    def __init__(self, node_config: NodeConfiguration, _logger: DTSLOG)
> -> None:
> +    def __init__(self, node_config: NodeConfiguration, logger: DTSLOG) ->
> None:
> +        """Connect to the node during initialization.
> +
> +        Args:
> +            node_config: The test run configuration of the node to
> connect to.
> +            logger: The logger instance this session will use.
> +        """
>          self._node_config = node_config
> -        self._logger = _logger
> +        self._logger = logger
>          self.hostname = node_config.hostname
>          self.username = node_config.user
>          self.password = node_config.password if node_config.password else
> ""
> diff --git a/dts/framework/remote_session/interactive_shell.py
> b/dts/framework/remote_session/interactive_shell.py
> index 4db19fb9b3..b158f963b6 100644
> --- a/dts/framework/remote_session/interactive_shell.py
> +++ b/dts/framework/remote_session/interactive_shell.py
> @@ -3,18 +3,20 @@
>
>  """Common functionality for interactive shell handling.
>
> -This base class, InteractiveShell, is meant to be extended by other
> classes that
> -contain functionality specific to that shell type. These derived classes
> will often
> -modify things like the prompt to expect or the arguments to pass into the
> application,
> -but still utilize the same method for sending a command and collecting
> output. How
> -this output is handled however is often application specific. If an
> application needs
> -elevated privileges to start it is expected that the method for gaining
> those
> -privileges is provided when initializing the class.
> +The base class, :class:`InteractiveShell`, is meant to be extended by
> subclasses that contain
> +functionality specific to that shell type. These subclasses will often
> modify things like
> +the prompt to expect or the arguments to pass into the application, but
> still utilize
> +the same method for sending a command and collecting output. How this
> output is handled however
> +is often application specific. If an application needs elevated
> privileges to start it is expected
> +that the method for gaining those privileges is provided when
> initializing the class.
> +
> +The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT`
> +environment variable configure the timeout of getting the output from command execution.
>  """
>
>  from abc import ABC
>  from pathlib import PurePath
> -from typing import Callable
> +from typing import Callable, ClassVar
>
>  from paramiko import Channel, SSHClient, channel  # type: ignore[import]
>
> @@ -30,28 +32,6 @@ class InteractiveShell(ABC):
>      and collecting input until reaching a certain prompt. All interactive
> applications
>      will use the same SSH connection, but each will create their own
> channel on that
>      session.
> -
> -    Arguments:
> -        interactive_session: The SSH session dedicated to interactive
> shells.
> -        logger: Logger used for displaying information in the console.
> -        get_privileged_command: Method for modifying a command to allow
> it to use
> -            elevated privileges. If this is None, the application will
> not be started
> -            with elevated privileges.
> -        app_args: Command line arguments to be passed to the application
> on startup.
> -        timeout: Timeout used for the SSH channel that is dedicated to
> this interactive
> -            shell. This timeout is for collecting output, so if reading
> from the buffer
> -            and no output is gathered within the timeout, an exception is
> thrown.
> -
> -    Attributes
> -        _default_prompt: Prompt to expect at the end of output when
> sending a command.
> -            This is often overridden by derived classes.
> -        _command_extra_chars: Extra characters to add to the end of every
> command
> -            before sending them. This is often overridden by derived
> classes and is
> -            most commonly an additional newline character.
> -        path: Path to the executable to start the interactive application.
> -        dpdk_app: Whether this application is a DPDK app. If it is, the
> build
> -            directory for DPDK on the node will be prepended to the path
> to the
> -            executable.
>      """
>
>      _interactive_session: SSHClient
> @@ -61,10 +41,22 @@ class InteractiveShell(ABC):
>      _logger: DTSLOG
>      _timeout: float
>      _app_args: str
> -    _default_prompt: str = ""
> -    _command_extra_chars: str = ""
> -    path: PurePath
> -    dpdk_app: bool = False
> +
> +    #: Prompt to expect at the end of output when sending a command.
> +    #: This is often overridden by subclasses.
> +    _default_prompt: ClassVar[str] = ""
> +
> +    #: Extra characters to add to the end of every command
> +    #: before sending them. This is often overridden by subclasses and is
> +    #: most commonly an additional newline character.
> +    _command_extra_chars: ClassVar[str] = ""
> +
> +    #: Path to the executable to start the interactive application.
> +    path: ClassVar[PurePath]
> +
> +    #: Whether this application is a DPDK app. If it is, the build
> directory
> +    #: for DPDK on the node will be prepended to the path to the
> executable.
> +    dpdk_app: ClassVar[bool] = False
>
>      def __init__(
>          self,
> @@ -74,6 +66,19 @@ def __init__(
>          app_args: str = "",
>          timeout: float = SETTINGS.timeout,
>      ) -> None:
> +        """Create an SSH channel during initialization.
> +
> +        Args:
> +            interactive_session: The SSH session dedicated to interactive
> shells.
> +            logger: The logger instance this session will use.
> +            get_privileged_command: A method for modifying a command to
> allow it to use
> +                elevated privileges. If :data:`None`, the application
> will not be started
> +                with elevated privileges.
> +            app_args: The command line arguments to be passed to the
> application on startup.
> +            timeout: The timeout used for the SSH channel that is
> dedicated to this interactive
> +                shell. This timeout is for collecting output, so if
> reading from the buffer
> +                and no output is gathered within the timeout, an
> exception is thrown.
> +        """
>          self._interactive_session = interactive_session
>          self._ssh_channel = self._interactive_session.invoke_shell()
>          self._stdin = self._ssh_channel.makefile_stdin("w")
> @@ -90,6 +95,10 @@ def _start_application(self, get_privileged_command:
> Callable[[str], str] | None
>
>          This method is often overridden by subclasses as their process for
>          starting may look different.
> +
> +        Args:
> +            get_privileged_command: A function (but could be any
> callable) that produces
> +                the version of the command with elevated privileges.
>          """
>          start_command = f"{self.path} {self._app_args}"
>          if get_privileged_command is not None:
> @@ -97,16 +106,24 @@ def _start_application(self, get_privileged_command:
> Callable[[str], str] | None
>          self.send_command(start_command)
>
>      def send_command(self, command: str, prompt: str | None = None) ->
> str:
> -        """Send a command and get all output before the expected ending
> string.
> +        """Send `command` and get all output before the expected ending
> string.
>
>          Lines that expect input are not included in the stdout buffer, so
> they cannot
> -        be used for expect. For example, if you were prompted to log into
> something
> -        with a username and password, you cannot expect "username:"
> because it won't
> -        yet be in the stdout buffer. A workaround for this could be
> consuming an
> -        extra newline character to force the current prompt into the
> stdout buffer.
> +        be used for expect.
> +
> +        Example:
> +            If you were prompted to log into something with a username
> and password,
> +            you cannot expect ``username:`` because it won't yet be in
> the stdout buffer.
> +            A workaround for this could be consuming an extra newline
> character to force
> +            the current `prompt` into the stdout buffer.
> +
> +        Args:
> +            command: The command to send.
> +            prompt: After sending the command, `send_command` will be
> expecting this string.
> +                If :data:`None`, will use the class's default prompt.
>
>          Returns:
> -            All output in the buffer before expected string
> +            All output in the buffer before expected string.
>          """
>          self._logger.info(f"Sending: '{command}'")
>          if prompt is None:
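
As an aside for readers, overriding the prompt looks roughly like this
(illustrative only, using the PythonShell further below; whether the
secondary "..." prompt lands in the stdout buffer depends on the REPL):

    out = python_shell.send_command("print('hi')")  # uses the default ">>>" prompt
    python_shell.send_command("for i in range(3):", prompt="...")
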
> @@ -124,8 +141,10 @@ def send_command(self, command: str, prompt: str |
> None = None) -> str:
>          return out
>
>      def close(self) -> None:
> +        """Properly free all resources."""
>          self._stdin.close()
>          self._ssh_channel.close()
>
>      def __del__(self) -> None:
> +        """Make sure the session is properly closed before deleting the
> object."""
>          self.close()
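
To illustrate the subclassing pattern the module docstring describes, here's a
hypothetical shell (MyAppShell and its prompt are invented; PythonShell below
is the real example):

    from pathlib import PurePath
    from typing import ClassVar

    from framework.remote_session.interactive_shell import InteractiveShell

    class MyAppShell(InteractiveShell):
        """Hypothetical interactive shell for an app with a 'myapp>' prompt."""

        #: Expect this prompt at the end of every command's output.
        _default_prompt: ClassVar[str] = "myapp>"
        #: An extra newline flushes the prompt into the stdout buffer.
        _command_extra_chars: ClassVar[str] = "\n"
        #: A system app, so dpdk_app stays False and the path isn't prefixed.
        path: ClassVar[PurePath] = PurePath("myapp")
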
> diff --git a/dts/framework/remote_session/python_shell.py
> b/dts/framework/remote_session/python_shell.py
> index cc3ad48a68..ccfd3783e8 100644
> --- a/dts/framework/remote_session/python_shell.py
> +++ b/dts/framework/remote_session/python_shell.py
> @@ -1,12 +1,32 @@
>  # SPDX-License-Identifier: BSD-3-Clause
>  # Copyright(c) 2023 PANTHEON.tech s.r.o.
>
> +"""Python interactive shell.
> +
> +Typical usage example in a TestSuite::
> +
> +    from framework.remote_session import PythonShell
> +    python_shell = self.tg_node.create_interactive_shell(
> +        PythonShell, timeout=5, privileged=True
> +    )
> +    python_shell.send_command("print('Hello World')")
> +    python_shell.close()
> +"""
> +
>  from pathlib import PurePath
> +from typing import ClassVar
>
>  from .interactive_shell import InteractiveShell
>
>
>  class PythonShell(InteractiveShell):
> -    _default_prompt: str = ">>>"
> -    _command_extra_chars: str = "\n"
> -    path: PurePath = PurePath("python3")
> +    """Python interactive shell."""
> +
> +    #: Python's prompt.
> +    _default_prompt: ClassVar[str] = ">>>"
> +
> +    #: This forces the prompt to appear after sending a command.
> +    _command_extra_chars: ClassVar[str] = "\n"
> +
> +    #: The Python executable.
> +    path: ClassVar[PurePath] = PurePath("python3")
> diff --git a/dts/framework/remote_session/testpmd_shell.py
> b/dts/framework/remote_session/testpmd_shell.py
> index 08ac311016..79481e845c 100644
> --- a/dts/framework/remote_session/testpmd_shell.py
> +++ b/dts/framework/remote_session/testpmd_shell.py
> @@ -1,41 +1,79 @@
>  # SPDX-License-Identifier: BSD-3-Clause
>  # Copyright(c) 2023 University of New Hampshire
>
>
Should you add to the copyright here, since you're adding comments?


> +"""Testpmd interactive shell.
> +
> +Typical usage example in a TestSuite::
> +
> +    testpmd_shell = self.sut_node.create_interactive_shell(
> +            TestPmdShell, privileged=True
> +        )
> +    devices = testpmd_shell.get_devices()
> +    for device in devices:
> +        print(device)
> +    testpmd_shell.close()
> +"""
> +
>  from pathlib import PurePath
> -from typing import Callable
> +from typing import Callable, ClassVar
>
>  from .interactive_shell import InteractiveShell
>
>
>  class TestPmdDevice(object):
> +    """The data of a device that testpmd can recognize.
> +
> +    Attributes:
> +        pci_address: The PCI address of the device.
> +    """
> +
>      pci_address: str
>
>      def __init__(self, pci_address_line: str):
> +        """Initialize the device from the testpmd output line string.
> +
> +        Args:
> +            pci_address_line: A line of testpmd output that contains a
> device.
> +        """
>          self.pci_address = pci_address_line.strip().split(": ")[1].strip()
>
>      def __str__(self) -> str:
> +        """The PCI address captures what the device is."""
>          return self.pci_address
>
>
>  class TestPmdShell(InteractiveShell):
> -    path: PurePath = PurePath("app", "dpdk-testpmd")
> -    dpdk_app: bool = True
> -    _default_prompt: str = "testpmd>"
> -    _command_extra_chars: str = "\n"  # We want to append an extra
> newline to every command
> +    """Testpmd interactive shell.
> +
> +    The testpmd shell users should never use
> +    the :meth:`~.interactive_shell.InteractiveShell.send_command` method directly, but rather
> +    call specialized methods. If there isn't one that satisfies a need,
> it should be added.
> +    """
> +
> +    #: The path to the testpmd executable.
> +    path: ClassVar[PurePath] = PurePath("app", "dpdk-testpmd")
> +
> +    #: Flag this as a DPDK app so that it's clear this is not a system
> app and
> +    #: needs to be looked in a specific path.
> +    dpdk_app: ClassVar[bool] = True
> +
> +    #: The testpmd's prompt.
> +    _default_prompt: ClassVar[str] = "testpmd>"
> +
> +    #: This forces the prompt to appear after sending a command.
> +    _command_extra_chars: ClassVar[str] = "\n"
>
>      def _start_application(self, get_privileged_command: Callable[[str],
> str] | None) -> None:
> -        """See "_start_application" in InteractiveShell."""
>          self._app_args += " -- -i"
>          super()._start_application(get_privileged_command)
>
>      def get_devices(self) -> list[TestPmdDevice]:
> -        """Get a list of device names that are known to testpmd
> +        """Get a list of device names that are known to testpmd.
>
> -        Uses the device info listed in testpmd and then parses the output
> to
> -        return only the names of the devices.
> +        Uses the device info listed in testpmd and then parses the output.
>
>          Returns:
> -            A list of strings representing device names (e.g.
> 0000:14:00.1)
> +            A list of devices.
>          """
>          dev_info: str = self.send_command("show device info all")
>          dev_list: list[TestPmdDevice] = []
> --
> 2.34.1
>
>
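
For what it's worth, the TestPmdDevice parsing is easy to follow with a
concrete line (the exact testpmd output format here is my assumption, based on
the 0000:14:00.1 example in the old docstring):

    dev = TestPmdDevice("Device name: 0000:14:00.1")
    str(dev)  # -> "0000:14:00.1", i.e. the text after ": ", stripped
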

[-- Attachment #2: Type: text/html, Size: 21199 bytes --]

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v8 00/21] dts: docstrings update
  2023-11-23 15:13               ` [PATCH v8 00/21] dts: docstrings update Juraj Linkeš
                                   ` (20 preceding siblings ...)
  2023-11-23 15:13                 ` [PATCH v8 21/21] dts: test suites " Juraj Linkeš
@ 2023-12-01 16:00                 ` Yoan Picchi
  2023-12-01 18:23                   ` Jeremy Spewock
  2023-12-04 10:24                 ` [PATCH v9 " Juraj Linkeš
  22 siblings, 1 reply; 255+ messages in thread
From: Yoan Picchi @ 2023-12-01 16:00 UTC (permalink / raw)
  To: Juraj Linkeš,
	thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	Luca.Vizzarro
  Cc: dev

On 11/23/23 15:13, Juraj Linkeš wrote:
> The first commit makes changes to the code. These code changes mainly
> change the structure of the code so that the actual API docs generation
> works. There are also some code changes which get reflected in the
> documentation, such as making functions/methods/attributes private or
> public.
> 
> The rest of the commits deal with the actual docstring documentation
> (from which the API docs are generated). The format of the docstrings
> is the Google format [0] with PEP257 [1] and some guidelines captured
> in the last commit of this group covering what the Google format
> doesn't.
> The docstring updates are split into many commits to make review
> possible. When accepted, they may be squashed.
> The docstrings have been composed in anticipation of [2], adhering to
> maximum line length of 100. We don't have a tool for automatic docstring
> formatting, hence the usage of 100 right away to save time.
> 
> NOTE: The logger.py module is not fully documented, as it's being
> refactored and the refactor will be submitted in the near future.
> Documenting it now seems unnecessary.
> 
> [0] https://google.github.io/styleguide/pyguide.html#s3.8.4-comments-in-classes
> [1] https://peps.python.org/pep-0257/
> [2] https://patches.dpdk.org/project/dpdk/list/?series=29844
> 
> v7:
> Split the series into docstrings and api docs generation and addressed
> comments.
> 
> v8:
> Addressed review comments, all of which were pretty minor - small
> grammatical changes, a little bit of rewording to remove confusion here
> and there, additional explanations and so on.
> 
> Juraj Linkeš (21):
>    dts: code adjustments for doc generation
>    dts: add docstring checker
>    dts: add basic developer docs
>    dts: exceptions docstring update
>    dts: settings docstring update
>    dts: logger and utils docstring update
>    dts: dts runner and main docstring update
>    dts: test suite docstring update
>    dts: test result docstring update
>    dts: config docstring update
>    dts: remote session docstring update
>    dts: interactive remote session docstring update
>    dts: port and virtual device docstring update
>    dts: cpu docstring update
>    dts: os session docstring update
>    dts: posix and linux sessions docstring update
>    dts: node docstring update
>    dts: sut and tg nodes docstring update
>    dts: base traffic generators docstring update
>    dts: scapy tg docstring update
>    dts: test suites docstring update
> 
>   doc/guides/tools/dts.rst                      |  73 +++
>   dts/framework/__init__.py                     |  12 +-
>   dts/framework/config/__init__.py              | 375 +++++++++++++---
>   dts/framework/config/types.py                 | 132 ++++++
>   dts/framework/dts.py                          | 162 +++++--
>   dts/framework/exception.py                    | 156 ++++---
>   dts/framework/logger.py                       |  72 ++-
>   dts/framework/remote_session/__init__.py      |  80 ++--
>   .../interactive_remote_session.py             |  36 +-
>   .../remote_session/interactive_shell.py       | 150 +++++++
>   dts/framework/remote_session/os_session.py    | 284 ------------
>   dts/framework/remote_session/python_shell.py  |  32 ++
>   .../remote_session/remote/__init__.py         |  27 --
>   .../remote/interactive_shell.py               | 131 ------
>   .../remote_session/remote/python_shell.py     |  12 -
>   .../remote_session/remote/remote_session.py   | 168 -------
>   .../remote_session/remote/testpmd_shell.py    |  45 --
>   .../remote_session/remote_session.py          | 230 ++++++++++
>   .../{remote => }/ssh_session.py               |  28 +-
>   dts/framework/remote_session/testpmd_shell.py |  83 ++++
>   dts/framework/settings.py                     | 188 ++++++--
>   dts/framework/test_result.py                  | 301 ++++++++++---
>   dts/framework/test_suite.py                   | 236 +++++++---
>   dts/framework/testbed_model/__init__.py       |  29 +-
>   dts/framework/testbed_model/{hw => }/cpu.py   | 209 ++++++---
>   dts/framework/testbed_model/hw/__init__.py    |  27 --
>   dts/framework/testbed_model/hw/port.py        |  60 ---
>   .../testbed_model/hw/virtual_device.py        |  16 -
>   .../linux_session.py                          |  70 ++-
>   dts/framework/testbed_model/node.py           | 214 ++++++---
>   dts/framework/testbed_model/os_session.py     | 422 ++++++++++++++++++
>   dts/framework/testbed_model/port.py           |  93 ++++
>   .../posix_session.py                          |  85 +++-
>   dts/framework/testbed_model/sut_node.py       | 238 ++++++----
>   dts/framework/testbed_model/tg_node.py        |  69 ++-
>   .../testbed_model/traffic_generator.py        |  72 ---
>   .../traffic_generator/__init__.py             |  43 ++
>   .../capturing_traffic_generator.py            |  49 +-
>   .../{ => traffic_generator}/scapy.py          | 110 +++--
>   .../traffic_generator/traffic_generator.py    |  85 ++++
>   dts/framework/testbed_model/virtual_device.py |  29 ++
>   dts/framework/utils.py                        | 122 ++---
>   dts/main.py                                   |  19 +-
>   dts/poetry.lock                               |  12 +-
>   dts/pyproject.toml                            |   6 +-
>   dts/tests/TestSuite_hello_world.py            |  16 +-
>   dts/tests/TestSuite_os_udp.py                 |  20 +-
>   dts/tests/TestSuite_smoke_tests.py            |  61 ++-
>   48 files changed, 3506 insertions(+), 1683 deletions(-)
>   create mode 100644 dts/framework/config/types.py
>   rename dts/framework/remote_session/{remote => }/interactive_remote_session.py (76%)
>   create mode 100644 dts/framework/remote_session/interactive_shell.py
>   delete mode 100644 dts/framework/remote_session/os_session.py
>   create mode 100644 dts/framework/remote_session/python_shell.py
>   delete mode 100644 dts/framework/remote_session/remote/__init__.py
>   delete mode 100644 dts/framework/remote_session/remote/interactive_shell.py
>   delete mode 100644 dts/framework/remote_session/remote/python_shell.py
>   delete mode 100644 dts/framework/remote_session/remote/remote_session.py
>   delete mode 100644 dts/framework/remote_session/remote/testpmd_shell.py
>   create mode 100644 dts/framework/remote_session/remote_session.py
>   rename dts/framework/remote_session/{remote => }/ssh_session.py (82%)
>   create mode 100644 dts/framework/remote_session/testpmd_shell.py
>   rename dts/framework/testbed_model/{hw => }/cpu.py (50%)
>   delete mode 100644 dts/framework/testbed_model/hw/__init__.py
>   delete mode 100644 dts/framework/testbed_model/hw/port.py
>   delete mode 100644 dts/framework/testbed_model/hw/virtual_device.py
>   rename dts/framework/{remote_session => testbed_model}/linux_session.py (77%)
>   create mode 100644 dts/framework/testbed_model/os_session.py
>   create mode 100644 dts/framework/testbed_model/port.py
>   rename dts/framework/{remote_session => testbed_model}/posix_session.py (73%)
>   delete mode 100644 dts/framework/testbed_model/traffic_generator.py
>   create mode 100644 dts/framework/testbed_model/traffic_generator/__init__.py
>   rename dts/framework/testbed_model/{ => traffic_generator}/capturing_traffic_generator.py (68%)
>   rename dts/framework/testbed_model/{ => traffic_generator}/scapy.py (71%)
>   create mode 100644 dts/framework/testbed_model/traffic_generator/traffic_generator.py
>   create mode 100644 dts/framework/testbed_model/virtual_device.py
> 
Reviewed-by: Yoan Picchi <yoan.picchi@arm.com>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v8 15/21] dts: os session docstring update
  2023-11-23 15:13                 ` [PATCH v8 15/21] dts: os session " Juraj Linkeš
@ 2023-12-01 17:33                   ` Jeremy Spewock
  2023-12-04  9:53                     ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Jeremy Spewock @ 2023-12-01 17:33 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: thomas, Honnappa.Nagarahalli, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro, dev

[-- Attachment #1: Type: text/plain, Size: 18401 bytes --]

On Thu, Nov 23, 2023 at 10:14 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
wrote:

> Format according to the Google format and PEP257, with slight
> deviations.
>
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>  dts/framework/testbed_model/os_session.py | 272 ++++++++++++++++------
>  1 file changed, 205 insertions(+), 67 deletions(-)
>
> diff --git a/dts/framework/testbed_model/os_session.py
> b/dts/framework/testbed_model/os_session.py
> index 76e595a518..cfdbd1c4bd 100644
> --- a/dts/framework/testbed_model/os_session.py
> +++ b/dts/framework/testbed_model/os_session.py
> @@ -2,6 +2,26 @@
>  # Copyright(c) 2023 PANTHEON.tech s.r.o.
>  # Copyright(c) 2023 University of New Hampshire
>
> +"""OS-aware remote session.
> +
> +DPDK supports multiple different operating systems, meaning it can run on
> these different operating
> +systems. This module defines the common API that OS-unaware layers use
> and translates the API into
> +OS-aware calls/utility usage.
> +
> +Note:
> +    Running commands with administrative privileges requires OS
> awareness. This is the only layer
> +    that's aware of OS differences, so this is where non-privileged
> command get converted
> +    to privileged commands.
> +
> +Example:
> +    A user wishes to remove a directory on a remote :class:`~.sut_node.SutNode`.
> +    The :class:`~.sut_node.SutNode` object isn't aware what OS the node
> is running - it delegates
> +    the OS translation logic to :attr:`~.node.Node.main_session`. The SUT
> node calls
> +    :meth:`~OSSession.remove_remote_dir` with a generic, OS-unaware path
> and
> +    the :attr:`~.node.Node.main_session` translates that to ``rm -rf`` if
> the node's OS is Linux
> +    and other commands for other OSs. It also translates the path to
> match the underlying OS.
> +"""
> +
>  from abc import ABC, abstractmethod
>  from collections.abc import Iterable
>  from ipaddress import IPv4Interface, IPv6Interface
> @@ -28,10 +48,16 @@
>
>
>  class OSSession(ABC):
> -    """
> -    The OS classes create a DTS node remote session and implement OS
> specific
> +    """OS-unaware to OS-aware translation API definition.
> +
> +    The OSSession classes create a remote session to a DTS node and
> implement OS specific
>      behavior. There a few control methods implemented by the base class,
> the rest need
> -    to be implemented by derived classes.
> +    to be implemented by subclasses.
> +
> +    Attributes:
> +        name: The name of the session.
> +        remote_session: The remote session maintaining the connection to
> the node.
> +        interactive_session: The interactive remote session maintaining
> the connection to the node.
>      """
>
>      _config: NodeConfiguration
> @@ -46,6 +72,15 @@ def __init__(
>          name: str,
>          logger: DTSLOG,
>      ):
> +        """Initialize the OS-aware session.
> +
> +        Connect to the node right away and also create an interactive
> remote session.
> +
> +        Args:
> +            node_config: The test run configuration of the node to
> connect to.
> +            name: The name of the session.
> +            logger: The logger instance this session will use.
> +        """
>          self._config = node_config
>          self.name = name
>          self._logger = logger
> @@ -53,15 +88,15 @@ def __init__(
>          self.interactive_session =
> create_interactive_session(node_config, logger)
>
>      def close(self, force: bool = False) -> None:
> -        """
> -        Close the remote session.
> +        """Close the underlying remote session.
> +
> +        Args:
> +            force: Force the closure of the connection.
>          """
>          self.remote_session.close(force)
>
>      def is_alive(self) -> bool:
> -        """
> -        Check whether the remote session is still responding.
> -        """
> +        """Check whether the underlying remote session is still
> responding."""
>          return self.remote_session.is_alive()
>
>      def send_command(
> @@ -72,10 +107,23 @@ def send_command(
>          verify: bool = False,
>          env: dict | None = None,
>      ) -> CommandResult:
> -        """
> -        An all-purpose API in case the command to be executed is already
> -        OS-agnostic, such as when the path to the executed command has
> been
> -        constructed beforehand.
> +        """An all-purpose API for OS-agnostic commands.
> +
> +        This can be used for an execution of a portable command that's
> executed the same way
> +        on all operating systems, such as Python.
> +
> +        The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT`
> +        environment variable configure the timeout of command execution.
> +
> +        Args:
> +            command: The command to execute.
> +            timeout: Wait at most this long in seconds for `command`
> execution to complete.
> +            privileged: Whether to run the command with administrative
> privileges.
> +            verify: If :data:`True`, will check the exit code of the
> command.
> +            env: A dictionary with environment variables to be used with
> the command execution.
> +
> +        Raises:
> +            RemoteCommandExecutionError: If verify is :data:`True` and
> the command failed.
>          """
>          if privileged:
>              command = self._get_privileged_command(command)
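
A usage sketch of this API from a test suite (the command itself is made up):

    # verify=True raises RemoteCommandExecutionError on a non-zero exit code
    result = self.sut_node.main_session.send_command(
        "python3 --version", timeout=15, privileged=False, verify=True
    )
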
> @@ -89,8 +137,20 @@ def create_interactive_shell(
>          privileged: bool,
>          app_args: str,
>      ) -> InteractiveShellType:
> -        """
> -        See "create_interactive_shell" in SutNode
> +        """Factory for interactive session handlers.
> +
> +        Instantiate `shell_cls` according to the remote OS specifics.
> +
> +        Args:
> +            shell_cls: The class of the shell.
> +            timeout: Timeout for reading output from the SSH channel. If
> you are
> +                reading from the buffer and don't receive any data within
> the timeout
> +                it will throw an error.
> +            privileged: Whether to run the shell with administrative
> privileges.
> +            app_args: The arguments to be passed to the application.
> +
> +        Returns:
> +            An instance of the desired interactive application shell.
>          """
>          return shell_cls(
>              self.interactive_session.session,
> @@ -114,27 +174,42 @@ def _get_privileged_command(command: str) -> str:
>
>      @abstractmethod
>      def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) ->
> PurePath:
> -        """
> -        Try to find DPDK remote dir in remote_dir.
> +        """Try to find DPDK directory in `remote_dir`.
> +
> +        The directory is the one which is created after the extraction of
> the tarball. The files
> +        are usually extracted into a directory starting with ``dpdk-``.
> +
> +        Returns:
> +            The absolute path of the DPDK remote directory, empty path if
> not found.
>          """
>
>      @abstractmethod
>      def get_remote_tmp_dir(self) -> PurePath:
> -        """
> -        Get the path of the temporary directory of the remote OS.
> +        """Get the path of the temporary directory of the remote OS.
> +
> +        Returns:
> +            The absolute path of the temporary directory.
>          """
>
>      @abstractmethod
>      def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
> -        """
> -        Create extra environment variables needed for the target
> architecture. Get
> -        information from the node if needed.
> +        """Create extra environment variables needed for the target
> architecture.
> +
> +        Different architectures may require different configuration, such
> as setting 32-bit CFLAGS.
> +
> +        Returns:
> +            A dictionary with keys as environment variables.
>          """
>
>      @abstractmethod
>      def join_remote_path(self, *args: str | PurePath) -> PurePath:
> -        """
> -        Join path parts using the path separator that fits the remote OS.
> +        """Join path parts using the path separator that fits the remote
> OS.
> +
> +        Args:
> +            args: Any number of paths to join.
> +
> +        Returns:
> +            The resulting joined path.
>          """
>
>      @abstractmethod
> @@ -143,13 +218,13 @@ def copy_from(
>          source_file: str | PurePath,
>          destination_file: str | PurePath,
>      ) -> None:
> -        """Copy a file from the remote Node to the local filesystem.
> +        """Copy a file from the remote node to the local filesystem.
>
> -        Copy source_file from the remote Node associated with this remote
> -        session to destination_file on the local filesystem.
> +        Copy `source_file` from the remote node associated with this
> remote
> +        session to `destination_file` on the local filesystem.
>
>          Args:
> -            source_file: the file on the remote Node.
> +            source_file: the file on the remote node.
>              destination_file: a file or directory path on the local
> filesystem.
>          """
>
> @@ -159,14 +234,14 @@ def copy_to(
>          source_file: str | PurePath,
>          destination_file: str | PurePath,
>      ) -> None:
> -        """Copy a file from local filesystem to the remote Node.
> +        """Copy a file from local filesystem to the remote node.
>
> -        Copy source_file from local filesystem to destination_file
> -        on the remote Node associated with this remote session.
> +        Copy `source_file` from local filesystem to `destination_file`
> +        on the remote node associated with this remote session.
>
>          Args:
>              source_file: the file on the local filesystem.
> -            destination_file: a file or directory path on the remote Node.
> +            destination_file: a file or directory path on the remote node.
>          """
>
>      @abstractmethod
> @@ -176,8 +251,12 @@ def remove_remote_dir(
>          recursive: bool = True,
>          force: bool = True,
>      ) -> None:
> -        """
> -        Remove remote directory, by default remove recursively and
> forcefully.
> +        """Remove remote directory, by default remove recursively and
> forcefully.
> +
> +        Args:
> +            remote_dir_path: The path of the directory to remove.
> +            recursive: If :data:`True`, also remove all contents inside
> the directory.
> +            force: If :data:`True`, ignore all warnings and try to remove
> at all costs.
>          """
>
>      @abstractmethod
> @@ -186,9 +265,12 @@ def extract_remote_tarball(
>          remote_tarball_path: str | PurePath,
>          expected_dir: str | PurePath | None = None,
>      ) -> None:
> -        """
> -        Extract remote tarball in place. If expected_dir is a non-empty
> string, check
> -        whether the dir exists after extracting the archive.
> +        """Extract remote tarball in its remote directory.
> +
> +        Args:
> +            remote_tarball_path: The path of the tarball on the remote
> node.
> +            expected_dir: If non-empty, check whether `expected_dir`
> exists after extracting
> +                the archive.
>          """
>
>      @abstractmethod
> @@ -201,69 +283,119 @@ def build_dpdk(
>          rebuild: bool = False,
>          timeout: float = SETTINGS.compile_timeout,
>      ) -> None:
> -        """
> -        Build DPDK in the input dir with specified environment variables
> and meson
> -        arguments.
> +        """Build DPDK on the remote node.
> +
> +        An extracted DPDK tarball must be present on the node. The build
> consists of two steps::
> +
> +            meson setup <meson args> remote_dpdk_dir remote_dpdk_build_dir
> +            ninja -C remote_dpdk_build_dir
> +
> +        The :option:`--compile-timeout` command line argument and the :envvar:`DTS_COMPILE_TIMEOUT`
> +        environment variable configure the timeout of DPDK build.
> +
> +        Args:
> +            env_vars: Use these environment variables then building DPDK.
>

I think this is meant to be "when building DPDK" instead.


> +            meson_args: Use these meson arguments when building DPDK.
> +            remote_dpdk_dir: The directory on the remote node where DPDK
> will be built.
> +            remote_dpdk_build_dir: The target build directory on the
> remote node.
> +            rebuild: If :data:`True`, do a subsequent build with ``meson
> configure`` instead
> +                of ``meson setup``.
> +            timeout: Wait at most this long in seconds for the build
> execution to complete.
>          """
>
>      @abstractmethod
>      def get_dpdk_version(self, version_path: str | PurePath) -> str:
> -        """
> -        Inspect DPDK version on the remote node from version_path.
> +        """Inspect the DPDK version on the remote node.
> +
> +        Args:
> +            version_path: The path to the VERSION file containing the
> DPDK version.
> +
> +        Returns:
> +            The DPDK version.
>          """
>
>      @abstractmethod
>      def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
> -        """
> -        Compose a list of LogicalCores present on the remote node.
> -        If use_first_core is False, the first physical core won't be used.
> +        r"""Get the list of :class:`~.cpu.LogicalCore`\s on the remote
> node.
> +
> +        Args:
> +            use_first_core: If :data:`False`, the first physical core
> won't be used.
> +
> +        Returns:
> +            The logical cores present on the node.
>          """
>
>      @abstractmethod
>      def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) ->
> None:
> -        """
> -        Kill and cleanup all DPDK apps identified by dpdk_prefix_list. If
> -        dpdk_prefix_list is empty, attempt to find running DPDK apps to
> kill and clean.
> +        """Kill and cleanup all DPDK apps.
> +
> +        Args:
> +            dpdk_prefix_list: Kill all apps identified by
> `dpdk_prefix_list`.
> +                If `dpdk_prefix_list` is empty, attempt to find running
> DPDK apps to kill and clean.
>          """
>
>      @abstractmethod
>      def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
> -        """
> -        Get the DPDK file prefix that will be used when running DPDK apps.
> +        """Make OS-specific modification to the DPDK file prefix.
> +
> +        Args:
> +           dpdk_prefix: The OS-unaware file prefix.
> +
> +        Returns:
> +            The OS-specific file prefix.
>          """
>
>      @abstractmethod
> -    def setup_hugepages(self, hugepage_amount: int, force_first_numa:
> bool) -> None:
> -        """
> -        Get the node's Hugepage Size, configure the specified amount of
> hugepages
> +    def setup_hugepages(self, hugepage_count: int, force_first_numa:
> bool) -> None:
> +        """Configure hugepages on the node.
> +
> +        Get the node's Hugepage Size, configure the specified count of
> hugepages
>          if needed and mount the hugepages if needed.
> -        If force_first_numa is True, configure hugepages just on the
> first socket.
> +
> +        Args:
> +            hugepage_count: Configure this many hugepages.
> +            force_first_numa:  If :data:`True`, configure hugepages just
> on the first numa node.
>          """
>
>      @abstractmethod
>      def get_compiler_version(self, compiler_name: str) -> str:
> -        """
> -        Get installed version of compiler used for DPDK
> +        """Get installed version of compiler used for DPDK.
> +
> +        Args:
> +            compiler_name: The name of the compiler executable.
> +
> +        Returns:
> +            The compiler's version.
>          """
>
>      @abstractmethod
>      def get_node_info(self) -> NodeInfo:
> -        """
> -        Collect information about the node
> +        """Collect additional information about the node.
> +
> +        Returns:
> +            Node information.
>          """
>
>      @abstractmethod
>      def update_ports(self, ports: list[Port]) -> None:
> -        """
> -        Get additional information about ports:
> -            Logical name (e.g. enp7s0) if applicable
> -            Mac address
> +        """Get additional information about ports from the operating
> system and update them.
> +
> +        The additional information is:
> +
> +            * Logical name (e.g. ``enp7s0``) if applicable,
> +            * Mac address.
> +
> +        Args:
> +            ports: The ports to update.
>          """
>
>      @abstractmethod
>      def configure_port_state(self, port: Port, enable: bool) -> None:
> -        """
> -        Enable/disable port.
> +        """Enable/disable `port` in the operating system.
> +
> +        Args:
> +            port: The port to configure.
> +            enable: If :data:`True`, enable the port, otherwise shut it
> down.
>          """
>
>      @abstractmethod
> @@ -273,12 +405,18 @@ def configure_port_ip_address(
>          port: Port,
>          delete: bool,
>      ) -> None:
> -        """
> -        Configure (add or delete) an IP address of the input port.
> +        """Configure an IP address on `port` in the operating system.
> +
> +        Args:
> +            address: The address to configure.
> +            port: The port to configure.
> +            delete: If :data:`True`, remove the IP address, otherwise
> configure it.
>          """
>
>      @abstractmethod
>      def configure_ipv4_forwarding(self, enable: bool) -> None:
> -        """
> -        Enable IPv4 forwarding in the underlying OS.
> +        """Enable IPv4 forwarding in the operating system.
> +
> +        Args:
> +            enable: If :data:`True`, enable the forwarding, otherwise
> disable it.
>          """
> --
> 2.34.1
>
>
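
The Example section of the module docstring translates into a sketch like this
(a hypothetical standalone helper, not the real LinuxSession code):

    def remove_remote_dir_linux(session, remote_dir_path, recursive=True, force=True):
        """Hypothetical Linux translation of OSSession.remove_remote_dir."""
        opts = ("r" if recursive else "") + ("f" if force else "")
        flag = f"-{opts} " if opts else ""
        session.send_command(f"rm {flag}{remote_dir_path}")
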

[-- Attachment #2: Type: text/html, Size: 22555 bytes --]

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v8 19/21] dts: base traffic generators docstring update
  2023-11-23 15:13                 ` [PATCH v8 19/21] dts: base traffic generators " Juraj Linkeš
@ 2023-12-01 18:05                   ` Jeremy Spewock
  2023-12-04 10:03                     ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Jeremy Spewock @ 2023-12-01 18:05 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: thomas, Honnappa.Nagarahalli, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro, dev

[-- Attachment #1: Type: text/plain, Size: 11286 bytes --]

On Thu, Nov 23, 2023 at 10:14 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
wrote:

> Format according to the Google format and PEP257, with slight
> deviations.
>
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>  .../traffic_generator/__init__.py             | 22 ++++++++-
>  .../capturing_traffic_generator.py            | 45 +++++++++++--------
>  .../traffic_generator/traffic_generator.py    | 33 ++++++++------
>  3 files changed, 67 insertions(+), 33 deletions(-)
>
> diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py
> b/dts/framework/testbed_model/traffic_generator/__init__.py
> index 52888d03fa..11e2bd7d97 100644
> --- a/dts/framework/testbed_model/traffic_generator/__init__.py
> +++ b/dts/framework/testbed_model/traffic_generator/__init__.py
> @@ -1,6 +1,19 @@
>  # SPDX-License-Identifier: BSD-3-Clause
>  # Copyright(c) 2023 PANTHEON.tech s.r.o.
>
> +"""DTS traffic generators.
> +
> +A traffic generator is capable of generating traffic and then monitor
> returning traffic.
> +All traffic generators must count the number of received packets. Some
> may additionally capture
> +individual packets.
> +
> +A traffic generator may be software running on generic hardware or it
> could be specialized hardware.
> +
> +The traffic generators that only count the number of received packets are
> suitable only for
> +performance testing. In functional testing, we need to be able to dissect
> each arrived packet
> +and a capturing traffic generator is required.
> +"""
> +
>  from framework.config import ScapyTrafficGeneratorConfig,
> TrafficGeneratorType
>  from framework.exception import ConfigurationError
>  from framework.testbed_model.node import Node
> @@ -12,8 +25,15 @@
>  def create_traffic_generator(
>      tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig
>  ) -> CapturingTrafficGenerator:
> -    """A factory function for creating traffic generator object from user
> config."""
> +    """The factory function for creating traffic generator objects from
> the test run configuration.
> +
> +    Args:
> +        tg_node: The traffic generator node where the created traffic
> generator will be running.
> +        traffic_generator_config: The traffic generator config.
>
> +    Returns:
> +        A traffic generator capable of capturing received packets.
> +    """
>      match traffic_generator_config.traffic_generator_type:
>          case TrafficGeneratorType.SCAPY:
>              return ScapyTrafficGenerator(tg_node,
> traffic_generator_config)
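
A usage sketch of the factory (the config object is assumed to come from the
test run configuration):

    tg = create_traffic_generator(tg_node, scapy_tg_config)
    assert tg.is_capturing  # the factory only returns capturing generators
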
> diff --git
> a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
> b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
> index 1fc7f98c05..0246590333 100644
> ---
> a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
> +++
> b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
> @@ -23,19 +23,21 @@
>
>
>  def _get_default_capture_name() -> str:
> -    """
> -    This is the function used for the default implementation of capture
> names.
> -    """
>      return str(uuid.uuid4())
>
>
>  class CapturingTrafficGenerator(TrafficGenerator):
>      """Capture packets after sending traffic.
>
> -    A mixin interface which enables a packet generator to declare that it
> can capture
> +    The intermediary interface which enables a packet generator to
> declare that it can capture
>      packets and return them to the user.
>
> +    Similarly to :class:`~.traffic_generator.TrafficGenerator`, this
> class exposes
> +    the public methods specific to capturing traffic generators and
> defines a private method
> +    that must implement the traffic generation and capturing logic in
> subclasses.
> +
>      The methods of capturing traffic generators obey the following
> workflow:
> +
>          1. send packets
>          2. capture packets
>          3. write the capture to a .pcap file
> @@ -44,6 +46,7 @@ class CapturingTrafficGenerator(TrafficGenerator):
>
>      @property
>      def is_capturing(self) -> bool:
> +        """This traffic generator can capture traffic."""
>          return True
>
>      def send_packet_and_capture(
> @@ -54,11 +57,12 @@ def send_packet_and_capture(
>          duration: float,
>          capture_name: str = _get_default_capture_name(),
>      ) -> list[Packet]:
> -        """Send a packet, return received traffic.
> +        """Send `packet` and capture received traffic.
> +
> +        Send `packet` on `send_port` and then return all traffic captured
> +        on `receive_port` for the given `duration`.
>
> -        Send a packet on the send_port and then return all traffic
> captured
> -        on the receive_port for the given duration. Also record the
> captured traffic
> -        in a pcap file.
> +        The captured traffic is recorded in the `capture_name`.pcap file.
>
>          Args:
>              packet: The packet to send.
> @@ -68,7 +72,7 @@ def send_packet_and_capture(
>              capture_name: The name of the .pcap file where to store the
> capture.
>
>          Returns:
> -             A list of received packets. May be empty if no packets are
> captured.
> +             The received packets. May be empty if no packets are
> captured.
>          """
>          return self.send_packets_and_capture(
>              [packet], send_port, receive_port, duration, capture_name
> @@ -82,11 +86,14 @@ def send_packets_and_capture(
>          duration: float,
>          capture_name: str = _get_default_capture_name(),
>      ) -> list[Packet]:
> -        """Send packets, return received traffic.
> +        """Send `packets` and capture received traffic.
>
> -        Send packets on the send_port and then return all traffic captured
> -        on the receive_port for the given duration. Also record the
> captured traffic
> -        in a pcap file.
> +        Send `packets` on `send_port` and then return all traffic captured
> +        on `receive_port` for the given `duration`.
> +
> +        The captured traffic is recorded in the `capture_name`.pcap file.
> The target directory
> +        can be configured with the :option:`--output-dir` command line argument or
> +        the :envvar:`DTS_OUTPUT_DIR` environment variable.
>
>          Args:
>              packets: The packets to send.
> @@ -96,7 +103,7 @@ def send_packets_and_capture(
>              capture_name: The name of the .pcap file where to store the
> capture.
>
>          Returns:
> -             A list of received packets. May be empty if no packets are
> captured.
> +             The received packets. May be empty if no packets are
> captured.
>          """
>          self._logger.debug(get_packet_summaries(packets))
>          self._logger.debug(
> @@ -121,10 +128,12 @@ def _send_packets_and_capture(
>          receive_port: Port,
>          duration: float,
>      ) -> list[Packet]:
> -        """
> -        The extended classes must implement this method which
> -        sends packets on send_port and receives packets on the
> receive_port
> -        for the specified duration. It must be able to handle no received
> packets.
> +        """The implementation of :method:`send_packets_and_capture`.
> +
> +        The subclasses must implement this method which sends `packets`
> on `send_port`
> +        and receives packets on `receive_port` for the specified
> `duration`.
> +
> +        It must be able to handle receiving no packets.
>          """
>
>      def _write_capture_from_packets(self, capture_name: str, packets:
> list[Packet]) -> None:
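
A minimal sketch of the subclass contract described above (the class name is
invented; ScapyTrafficGenerator is the real implementation):

    class NullCapturingTG(CapturingTrafficGenerator):
        def _send_packets(self, packets, port):
            pass  # would hand the packets to the underlying tool

        def _send_packets_and_capture(self, packets, send_port, receive_port, duration):
            return []  # must cope with capturing no packets at all
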
> diff --git
> a/dts/framework/testbed_model/traffic_generator/traffic_generator.py
> b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
> index 0d9902ddb7..5fb9824568 100644
> --- a/dts/framework/testbed_model/traffic_generator/traffic_generator.py
> +++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
> @@ -22,7 +22,8 @@
>  class TrafficGenerator(ABC):
>      """The base traffic generator.
>
> -    Defines the few basic methods that each traffic generator must
> implement.
> +    Exposes the common public methods of all traffic generators and
> defines private methods
> +    that must implement the traffic generation logic in subclasses.
>      """
>
>      _config: TrafficGeneratorConfig
> @@ -30,14 +31,20 @@ class TrafficGenerator(ABC):
>      _logger: DTSLOG
>
>      def __init__(self, tg_node: Node, config: TrafficGeneratorConfig):
> +        """Initialize the traffic generator.
> +
> +        Args:
> +            tg_node: The traffic generator node where the created traffic
> generator will be running.
> +            config: The traffic generator's test run configuration.
> +        """
>          self._config = config
>          self._tg_node = tg_node
>          self._logger = getLogger(f"{self._tg_node.name}
> {self._config.traffic_generator_type}")
>
>      def send_packet(self, packet: Packet, port: Port) -> None:
> -        """Send a packet and block until it is fully sent.
> +        """Send `packet` and block until it is fully sent.
>
> -        What fully sent means is defined by the traffic generator.
> +        Send `packet` on `port`, then wait until `packet` is fully sent.
>
>          Args:
>              packet: The packet to send.
> @@ -46,9 +53,9 @@ def send_packet(self, packet: Packet, port: Port) ->
> None:
>          self.send_packets([packet], port)
>
>      def send_packets(self, packets: list[Packet], port: Port) -> None:
> -        """Send packets and block until they are fully sent.
> +        """Send `packets` and block until they are fully sent.
>
> -        What fully sent means is defined by the traffic generator.
> +        Send `packets` on `port`, then wait until `packets` are fully
> sent.
>
>          Args:
>              packets: The packets to send.
> @@ -60,19 +67,17 @@ def send_packets(self, packets: list[Packet], port:
> Port) -> None:
>
>      @abstractmethod
>      def _send_packets(self, packets: list[Packet], port: Port) -> None:
> -        """
> -        The extended classes must implement this method which
> -        sends packets on send_port. The method should block until all
> packets
> -        are fully sent.
> +        """The implementation of :method:`send_packets`.
> +
> +        The subclasses must implement this method which sends `packets`
> on `port`.
> +        The method should block until all `packets` are fully sent.
> +
> +        What full sent means is defined by the traffic generator.
>          """
>

I think this should be "what fully sent means"


>      @property
>      def is_capturing(self) -> bool:
> -        """Whether this traffic generator can capture traffic.
> -
> -        Returns:
> -            True if the traffic generator can capture traffic, False
> otherwise.
> -        """
> +        """This traffic generator can't capture traffic."""
>          return False
>
>      @abstractmethod
> --
> 2.34.1
>
>

[-- Attachment #2: Type: text/html, Size: 13466 bytes --]

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v8 18/21] dts: sut and tg nodes docstring update
  2023-11-23 15:13                 ` [PATCH v8 18/21] dts: sut and tg nodes " Juraj Linkeš
@ 2023-12-01 18:06                   ` Jeremy Spewock
  2023-12-04 10:02                     ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Jeremy Spewock @ 2023-12-01 18:06 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: thomas, Honnappa.Nagarahalli, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro, dev

[-- Attachment #1: Type: text/plain, Size: 23415 bytes --]

On Thu, Nov 23, 2023 at 10:14 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
wrote:

> Format according to the Google format and PEP257, with slight
> deviations.
>
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>  dts/framework/testbed_model/sut_node.py | 230 ++++++++++++++++--------
>  dts/framework/testbed_model/tg_node.py  |  42 +++--
>  2 files changed, 176 insertions(+), 96 deletions(-)
>
> diff --git a/dts/framework/testbed_model/sut_node.py
> b/dts/framework/testbed_model/sut_node.py
> index 5ce9446dba..c4acea38d1 100644
> --- a/dts/framework/testbed_model/sut_node.py
> +++ b/dts/framework/testbed_model/sut_node.py
> @@ -3,6 +3,14 @@
>  # Copyright(c) 2023 PANTHEON.tech s.r.o.
>  # Copyright(c) 2023 University of New Hampshire
>
> +"""System under test (DPDK + hardware) node.
> +
> +A system under test (SUT) is the combination of DPDK
> +and the hardware we're testing with DPDK (NICs, crypto and other devices).
> +An SUT node is where this SUT runs.
> +"""
>

I think this should just be "A SUT node"


> +
> +
>  import os
>  import tarfile
>  import time
> @@ -26,6 +34,11 @@
>
>
>  class EalParameters(object):
> +    """The environment abstraction layer parameters.
> +
> +    The string representation can be created by converting the instance
> to a string.
> +    """
> +
>      def __init__(
>          self,
>          lcore_list: LogicalCoreList,
> @@ -35,21 +48,23 @@ def __init__(
>          vdevs: list[VirtualDevice],
>          other_eal_param: str,
>      ):
> -        """
> -        Generate eal parameters character string;
> -        :param lcore_list: the list of logical cores to use.
> -        :param memory_channels: the number of memory channels to use.
> -        :param prefix: set file prefix string, eg:
> -                        prefix='vf'
> -        :param no_pci: switch of disable PCI bus eg:
> -                        no_pci=True
> -        :param vdevs: virtual device list, eg:
> -                        vdevs=[
> -                            VirtualDevice('net_ring0'),
> -                            VirtualDevice('net_ring1')
> -                        ]
> -        :param other_eal_param: user defined DPDK eal parameters, eg:
> -                        other_eal_param='--single-file-segments'
> +        """Initialize the parameters according to inputs.
> +
> +        Process the parameters into the format used on the command line.
> +
> +        Args:
> +            lcore_list: The list of logical cores to use.
> +            memory_channels: The number of memory channels to use.
> +            prefix: Set the file prefix string with which to start DPDK,
> e.g.: ``prefix='vf'``.
> +            no_pci: Switch to disable PCI bus e.g.: ``no_pci=True``.
> +            vdevs: Virtual devices, e.g.::
> +
> +                vdevs=[
> +                    VirtualDevice('net_ring0'),
> +                    VirtualDevice('net_ring1')
> +                ]
> +            other_eal_param: user defined DPDK EAL parameters, e.g.:
> +                ``other_eal_param='--single-file-segments'``
>          """
>          self._lcore_list = f"-l {lcore_list}"
>          self._memory_channels = f"-n {memory_channels}"
> @@ -61,6 +76,7 @@ def __init__(
>          self._other_eal_param = other_eal_param
>
>      def __str__(self) -> str:
> +        """Create the EAL string."""
>          return (
>              f"{self._lcore_list} "
>              f"{self._memory_channels} "
> @@ -72,11 +88,21 @@ def __str__(self) -> str:
>
>
>  class SutNode(Node):
> -    """
> -    A class for managing connections to the System under Test, providing
> -    methods that retrieve the necessary information about the node (such
> as
> -    CPU, memory and NIC details) and configuration capabilities.
> -    Another key capability is building DPDK according to given build
> target.
> +    """The system under test node.
> +
> +    The SUT node extends :class:`Node` with DPDK specific features:
> +
> +        * DPDK build,
> +        * Gathering of DPDK build info,
> +        * The running of DPDK apps, interactively or one-time execution,
> +        * DPDK apps cleanup.
> +
> +    The :option:`--tarball` command line argument and the
> :envvar:`DTS_DPDK_TARBALL`
> +    environment variable configure the path to the DPDK tarball
> +    or the git commit ID, tag ID or tree ID to test.
> +
> +    Attributes:
> +        config: The SUT node configuration.
>      """
>
>      config: SutNodeConfiguration
> @@ -94,6 +120,11 @@ class SutNode(Node):
>      _path_to_devbind_script: PurePath | None
>
>      def __init__(self, node_config: SutNodeConfiguration):
> +        """Extend the constructor with SUT node specifics.
> +
> +        Args:
> +            node_config: The SUT node's test run configuration.
> +        """
>          super(SutNode, self).__init__(node_config)
>          self._dpdk_prefix_list = []
>          self._build_target_config = None
> @@ -113,6 +144,12 @@ def __init__(self, node_config: SutNodeConfiguration):
>
>      @property
>      def _remote_dpdk_dir(self) -> PurePath:
> +        """The remote DPDK dir.
> +
> +        This internal property should be set after extracting the DPDK
> tarball. If it's not set,
> +        that implies the DPDK setup step has been skipped, in which case
> we can guess where
> +        a previous build was located.
> +        """
>          if self.__remote_dpdk_dir is None:
>              self.__remote_dpdk_dir = self._guess_dpdk_remote_dir()
>          return self.__remote_dpdk_dir
> @@ -123,6 +160,11 @@ def _remote_dpdk_dir(self, value: PurePath) -> None:
>
>      @property
>      def remote_dpdk_build_dir(self) -> PurePath:
> +        """The remote DPDK build directory.
> +
> +        This is the directory where DPDK was built.
> +        We assume it was built in a subdirectory of the extracted tarball.
> +        """
>          if self._build_target_config:
>              return self.main_session.join_remote_path(
>                  self._remote_dpdk_dir, self._build_target_config.name
> @@ -132,18 +174,21 @@ def remote_dpdk_build_dir(self) -> PurePath:
>
>      @property
>      def dpdk_version(self) -> str:
> +        """Last built DPDK version."""
>          if self._dpdk_version is None:
>              self._dpdk_version =
> self.main_session.get_dpdk_version(self._remote_dpdk_dir)
>          return self._dpdk_version
>
>      @property
>      def node_info(self) -> NodeInfo:
> +        """Additional node information."""
>          if self._node_info is None:
>              self._node_info = self.main_session.get_node_info()
>          return self._node_info
>
>      @property
>      def compiler_version(self) -> str:
> +        """The node's compiler version."""
>          if self._compiler_version is None:
>              if self._build_target_config is not None:
>                  self._compiler_version =
> self.main_session.get_compiler_version(
> @@ -158,6 +203,7 @@ def compiler_version(self) -> str:
>
>      @property
>      def path_to_devbind_script(self) -> PurePath:
> +        """The path to the dpdk-devbind.py script on the node."""
>          if self._path_to_devbind_script is None:
>              self._path_to_devbind_script =
> self.main_session.join_remote_path(
>                  self._remote_dpdk_dir, "usertools", "dpdk-devbind.py"
> @@ -165,6 +211,11 @@ def path_to_devbind_script(self) -> PurePath:
>          return self._path_to_devbind_script
>
>      def get_build_target_info(self) -> BuildTargetInfo:
> +        """Get additional build target information.
> +
> +        Returns:
> +            The build target information.
> +        """
>          return BuildTargetInfo(
>              dpdk_version=self.dpdk_version,
> compiler_version=self.compiler_version
>          )
> @@ -173,8 +224,9 @@ def _guess_dpdk_remote_dir(self) -> PurePath:
>          return
> self.main_session.guess_dpdk_remote_dir(self._remote_tmp_dir)
>
>      def _set_up_build_target(self, build_target_config:
> BuildTargetConfiguration) -> None:
> -        """
> -        Setup DPDK on the SUT node.
> +        """Setup DPDK on the SUT node.
> +
> +        Additional build target setup steps on top of those in
> :class:`Node`.
>          """
>          # we want to ensure that dpdk_version and compiler_version is
> reset for new
>          # build targets
> @@ -186,16 +238,14 @@ def _set_up_build_target(self, build_target_config:
> BuildTargetConfiguration) ->
>          self.bind_ports_to_driver()
>
>      def _tear_down_build_target(self) -> None:
> -        """
> -        This method exists to be optionally overwritten by derived
> classes and
> -        is not decorated so that the derived class doesn't have to use
> the decorator.
> +        """Bind ports to the operating system drivers.
> +
> +        Additional build target teardown steps on top of those in
> :class:`Node`.
>          """
>          self.bind_ports_to_driver(for_dpdk=False)
>
>      def _configure_build_target(self, build_target_config:
> BuildTargetConfiguration) -> None:
> -        """
> -        Populate common environment variables and set build target config.
> -        """
> +        """Populate common environment variables and set build target
> config."""
>          self._env_vars = {}
>          self._build_target_config = build_target_config
>
>  self._env_vars.update(self.main_session.get_dpdk_build_env_vars(build_target_config.arch))
> @@ -207,9 +257,7 @@ def _configure_build_target(self, build_target_config:
> BuildTargetConfiguration)
>
>      @Node.skip_setup
>      def _copy_dpdk_tarball(self) -> None:
> -        """
> -        Copy to and extract DPDK tarball on the SUT node.
> -        """
> +        """Copy to and extract DPDK tarball on the SUT node."""
>          self._logger.info("Copying DPDK tarball to SUT.")
>          self.main_session.copy_to(SETTINGS.dpdk_tarball_path,
> self._remote_tmp_dir)
>
> @@ -238,8 +286,9 @@ def _copy_dpdk_tarball(self) -> None:
>
>      @Node.skip_setup
>      def _build_dpdk(self) -> None:
> -        """
> -        Build DPDK. Uses the already configured target. Assumes that the
> tarball has
> +        """Build DPDK.
> +
> +        Uses the already configured target. Assumes that the tarball has
>          already been copied to and extracted on the SUT node.
>          """
>          self.main_session.build_dpdk(
> @@ -250,15 +299,19 @@ def _build_dpdk(self) -> None:
>          )
>
>      def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str |
> bool) -> PurePath:
> -        """
> -        Build one or all DPDK apps. Requires DPDK to be already built on
> the SUT node.
> -        When app_name is 'all', build all example apps.
> -        When app_name is any other string, tries to build that example
> app.
> -        Return the directory path of the built app. If building all apps,
> return
> -        the path to the examples directory (where all apps reside).
> -        The meson_dpdk_args are keyword arguments
> -        found in meson_option.txt in root DPDK directory. Do not use -D
> with them,
> -        for example: enable_kmods=True.
> +        """Build one or all DPDK apps.
> +
> +        Requires DPDK to be already built on the SUT node.
> +
> +        Args:
> +            app_name: The name of the DPDK app to build.
> +                When `app_name` is ``all``, build all example apps.
> +            meson_dpdk_args: The arguments found in ``meson_options.txt``
> in the root DPDK directory.
> +                Do not use ``-D`` with them.
> +
> +        Returns:
> +            The directory path of the built app. If building all apps, the
> +            path to the examples directory (where all apps reside).
>          """
>          self.main_session.build_dpdk(
>              self._env_vars,
> @@ -277,9 +330,7 @@ def build_dpdk_app(self, app_name: str,
> **meson_dpdk_args: str | bool) -> PurePa
>          )
>
>      def kill_cleanup_dpdk_apps(self) -> None:
> -        """
> -        Kill all dpdk applications on the SUT. Cleanup hugepages.
> -        """
> +        """Kill all dpdk applications on the SUT, then clean up
> hugepages."""
>          if self._dpdk_kill_session and self._dpdk_kill_session.is_alive():
>              # we can use the session if it exists and responds
>
>  self._dpdk_kill_session.kill_cleanup_dpdk_apps(self._dpdk_prefix_list)
> @@ -298,33 +349,34 @@ def create_eal_parameters(
>          vdevs: list[VirtualDevice] | None = None,
>          other_eal_param: str = "",
>      ) -> "EalParameters":
> -        """
> -        Generate eal parameters character string;
> -        :param lcore_filter_specifier: a number of lcores/cores/sockets
> to use
> -                        or a list of lcore ids to use.
> -                        The default will select one lcore for each of two
> cores
> -                        on one socket, in ascending order of core ids.
> -        :param ascending_cores: True, use cores with the lowest numerical
> id first
> -                        and continue in ascending order. If False, start
> with the
> -                        highest id and continue in descending order. This
> ordering
> -                        affects which sockets to consider first as well.
> -        :param prefix: set file prefix string, eg:
> -                        prefix='vf'
> -        :param append_prefix_timestamp: if True, will append a timestamp
> to
> -                        DPDK file prefix.
> -        :param no_pci: switch of disable PCI bus eg:
> -                        no_pci=True
> -        :param vdevs: virtual device list, eg:
> -                        vdevs=[
> -                            VirtualDevice('net_ring0'),
> -                            VirtualDevice('net_ring1')
> -                        ]
> -        :param other_eal_param: user defined DPDK eal parameters, eg:
> -                        other_eal_param='--single-file-segments'
> -        :return: eal param string, eg:
> -                '-c 0xf -a 0000:88:00.0
> --file-prefix=dpdk_1112_20190809143420';
> -        """
> +        """Compose the EAL parameters.
> +
> +        Process the list of cores and the DPDK prefix and pass that along
> with
> +        the rest of the arguments.
>
> +        Args:
> +            lcore_filter_specifier: A number of lcores/cores/sockets to
> use
> +                or a list of lcore ids to use.
> +                The default will select one lcore for each of two cores
> +                on one socket, in ascending order of core ids.
> +            ascending_cores: Sort cores in ascending order (lowest to
> highest IDs).
> +                If :data:`False`, sort in descending order.
> +            prefix: Set the file prefix string with which to start DPDK,
> e.g.: ``prefix='vf'``.
> +            append_prefix_timestamp: If :data:`True`, will append a
> timestamp to DPDK file prefix.
> +            no_pci: Switch to disable the PCI bus, e.g.: ``no_pci=True``.
> +            vdevs: Virtual devices, e.g.::
> +
> +                vdevs=[
> +                    VirtualDevice('net_ring0'),
> +                    VirtualDevice('net_ring1')
> +                ]
> +            other_eal_param: User-defined DPDK EAL parameters, e.g.:
> +                ``other_eal_param='--single-file-segments'``.
> +
> +        Returns:
> +            An EAL param string, such as
> +            ``-c 0xf -a 0000:88:00.0
> --file-prefix=dpdk_1112_20190809143420``.
> +        """
>          lcore_list =
> LogicalCoreList(self.filter_lcores(lcore_filter_specifier, ascending_cores))
>
>          if append_prefix_timestamp:
> @@ -348,14 +400,29 @@ def create_eal_parameters(
>      def run_dpdk_app(
>          self, app_path: PurePath, eal_args: "EalParameters", timeout:
> float = 30
>      ) -> CommandResult:
> -        """
> -        Run DPDK application on the remote node.
> +        """Run DPDK application on the remote node.
> +
> +        The application is not run interactively - the command that
> starts the application
> +        is executed and then the call waits for it to finish execution.
> +
> +        Args:
> +            app_path: The remote path to the DPDK application.
> +            eal_args: EAL parameters to run the DPDK application with.
> +            timeout: Wait at most this long in seconds for `command`
> execution to complete.
> +
> +        Returns:
> +            The result of the DPDK app execution.
>          """
>          return self.main_session.send_command(
>              f"{app_path} {eal_args}", timeout, privileged=True,
> verify=True
>          )
>
>      def configure_ipv4_forwarding(self, enable: bool) -> None:
> +        """Enable/disable IPv4 forwarding on the node.
> +
> +        Args:
> +            enable: If :data:`True`, enable the forwarding, otherwise
> disable it.
> +        """
>          self.main_session.configure_ipv4_forwarding(enable)
>
>      def create_interactive_shell(
> @@ -365,9 +432,13 @@ def create_interactive_shell(
>          privileged: bool = False,
>          eal_parameters: EalParameters | str | None = None,
>      ) -> InteractiveShellType:
> -        """Factory method for creating a handler for an interactive
> session.
> +        """Extend the factory for interactive session handlers.
> +
> +        The extensions are SUT node specific:
>
> -        Instantiate shell_cls according to the remote OS specifics.
> +            * The default for `eal_parameters`,
> +            * The interactive shell path `shell_cls.path` is prepended
> with the path to the remote
> +              DPDK build directory for DPDK apps.
>
>          Args:
>              shell_cls: The class of the shell.
> @@ -377,9 +448,10 @@ def create_interactive_shell(
>              privileged: Whether to run the shell with administrative
> privileges.
>              eal_parameters: List of EAL parameters to use to launch the
> app. If this
>                  isn't provided or an empty string is passed, it will
> default to calling
> -                create_eal_parameters().
> +                :meth:`create_eal_parameters`.
> +
>          Returns:
> -            Instance of the desired interactive application.
> +            An instance of the desired interactive application shell.
>          """
>          if not eal_parameters:
>              eal_parameters = self.create_eal_parameters()
> @@ -396,8 +468,8 @@ def bind_ports_to_driver(self, for_dpdk: bool = True)
> -> None:
>          """Bind all ports on the SUT to a driver.
>
>          Args:
> -            for_dpdk: Boolean that, when True, binds ports to
> os_driver_for_dpdk
> -            or, when False, binds to os_driver. Defaults to True.
> +            for_dpdk: If :data:`True`, binds ports to os_driver_for_dpdk.
> +                If :data:`False`, binds to os_driver.
>          """
>          for port in self.ports:
>              driver = port.os_driver_for_dpdk if for_dpdk else
> port.os_driver
> diff --git a/dts/framework/testbed_model/tg_node.py
> b/dts/framework/testbed_model/tg_node.py
> index 8a8f0019f3..f269d4c585 100644
> --- a/dts/framework/testbed_model/tg_node.py
> +++ b/dts/framework/testbed_model/tg_node.py
> @@ -5,13 +5,8 @@
>
>  """Traffic generator node.
>
> -This is the node where the traffic generator resides.
> -The distinction between a node and a traffic generator is as follows:
> -A node is a host that DTS connects to. It could be a baremetal server,
> -a VM or a container.
> -A traffic generator is software running on the node.
> -A traffic generator node is a node running a traffic generator.
> -A node can be a traffic generator node as well as system under test node.
> +A traffic generator (TG) generates traffic that's sent towards the SUT
> node.
> +A TG node is where the TG runs.
>  """
>
>  from scapy.packet import Packet  # type: ignore[import]
> @@ -24,13 +19,16 @@
>
>
>  class TGNode(Node):
> -    """Manage connections to a node with a traffic generator.
> +    """The traffic generator node.
>
> -    Apart from basic node management capabilities, the Traffic Generator
> node has
> -    specialized methods for handling the traffic generator running on it.
> +    The TG node extends :class:`Node` with TG specific features:
>
> -    Arguments:
> -        node_config: The user configuration of the traffic generator node.
> +        * Traffic generator initialization,
> +        * The sending of traffic and receiving of packets,
> +        * The sending of traffic without receiving packets.
> +
> +    Not all traffic generators are capable of capturing traffic, which is
> why there
> +    must be a way to send traffic without that.
>
>      Attributes:
>          traffic_generator: The traffic generator running on the node.
> @@ -39,6 +37,13 @@ class TGNode(Node):
>      traffic_generator: CapturingTrafficGenerator
>
>      def __init__(self, node_config: TGNodeConfiguration):
> +        """Extend the constructor with TG node specifics.
> +
> +        Initialize the traffic generator on the TG node.
> +
> +        Args:
> +            node_config: The TG node's test run configuration.
> +        """
>          super(TGNode, self).__init__(node_config)
>          self.traffic_generator = create_traffic_generator(self,
> node_config.traffic_generator)
>          self._logger.info(f"Created node: {self.name}")
> @@ -50,17 +55,17 @@ def send_packet_and_capture(
>          receive_port: Port,
>          duration: float = 1,
>      ) -> list[Packet]:
> -        """Send a packet, return received traffic.
> +        """Send `packet`, return received traffic.
>
> -        Send a packet on the send_port and then return all traffic
> captured
> -        on the receive_port for the given duration. Also record the
> captured traffic
> +        Send `packet` on `send_port` and then return all traffic captured
> +        on `receive_port` for the given duration. Also record the
> captured traffic
>          in a pcap file.
>
>          Args:
>              packet: The packet to send.
>              send_port: The egress port on the TG node.
>              receive_port: The ingress port in the TG node.
> -            duration: Capture traffic for this amount of time after
> sending the packet.
> +            duration: Capture traffic for this amount of time after
> sending `packet`.
>
>          Returns:
>               A list of received packets. May be empty if no packets are
> captured.
> @@ -70,6 +75,9 @@ def send_packet_and_capture(
>          )
>
>      def close(self) -> None:
> -        """Free all resources used by the node"""
> +        """Free all resources used by the node.
> +
> +        This extends the superclass method with TG cleanup.
> +        """
>          self.traffic_generator.close()
>          super(TGNode, self).close()
> --
> 2.34.1
>
>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v8 20/21] dts: scapy tg docstring update
  2023-11-23 15:13                 ` [PATCH v8 20/21] dts: scapy tg " Juraj Linkeš
@ 2023-12-01 18:17                   ` Jeremy Spewock
  2023-12-04 10:07                     ` Juraj Linkeš
  0 siblings, 1 reply; 255+ messages in thread
From: Jeremy Spewock @ 2023-12-01 18:17 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: thomas, Honnappa.Nagarahalli, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro, dev

On Thu, Nov 23, 2023 at 10:14 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
wrote:

> Format according to the Google format and PEP257, with slight
> deviations.
>
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>  .../testbed_model/traffic_generator/scapy.py  | 91 +++++++++++--------
>  1 file changed, 54 insertions(+), 37 deletions(-)
>
> diff --git a/dts/framework/testbed_model/traffic_generator/scapy.py
> b/dts/framework/testbed_model/traffic_generator/scapy.py
> index c88cf28369..30ea3914ee 100644
> --- a/dts/framework/testbed_model/traffic_generator/scapy.py
> +++ b/dts/framework/testbed_model/traffic_generator/scapy.py
> @@ -2,14 +2,15 @@
>  # Copyright(c) 2022 University of New Hampshire
>  # Copyright(c) 2023 PANTHEON.tech s.r.o.
>
> -"""Scapy traffic generator.
> +"""The Scapy traffic generator.
>
> -Traffic generator used for functional testing, implemented using the
> Scapy library.
> +A traffic generator used for functional testing, implemented with
> +`the Scapy library <https://scapy.readthedocs.io/en/latest/>`_.
>  The traffic generator uses an XML-RPC server to run Scapy on the remote
> TG node.
>
> -The XML-RPC server runs in an interactive remote SSH session running
> Python console,
> -where we start the server. The communication with the server is
> facilitated with
> -a local server proxy.
> +The traffic generator uses the :mod:`xmlrpc.server` module to run an
> XML-RPC server
> +in an interactive remote Python SSH session. The communication with the
> server is facilitated
> +with a local server proxy from the :mod:`xmlrpc.client` module.
>  """
>
>  import inspect
> @@ -69,20 +70,20 @@ def scapy_send_packets_and_capture(
>      recv_iface: str,
>      duration: float,
>  ) -> list[bytes]:
> -    """RPC function to send and capture packets.
> +    """The RPC function to send and capture packets.
>
> -    The function is meant to be executed on the remote TG node.
> +    The function is meant to be executed on the remote TG node via the
> server proxy.
>
>
Should this maybe be "This function is meant" instead? I'm not completely
sure if it should be; I feel like it could go either way.


>      Args:
>          xmlrpc_packets: The packets to send. These need to be converted to
> -            xmlrpc.client.Binary before sending to the remote server.
> +            :class:`~xmlrpc.client.Binary` objects before sending to the
> remote server.
>          send_iface: The logical name of the egress interface.
>          recv_iface: The logical name of the ingress interface.
>          duration: Capture for this amount of time, in seconds.
>
>      Returns:
>          A list of bytes. Each item in the list represents one packet,
> which needs
> -            to be converted back upon transfer from the remote node.
> +        to be converted back upon transfer from the remote node.
>      """
>      scapy_packets = [scapy.all.Packet(packet.data) for packet in
> xmlrpc_packets]
>      sniffer = scapy.all.AsyncSniffer(
> @@ -96,19 +97,15 @@ def scapy_send_packets_and_capture(
>
>
>  def scapy_send_packets(xmlrpc_packets: list[xmlrpc.client.Binary],
> send_iface: str) -> None:
> -    """RPC function to send packets.
> +    """The RPC function to send packets.
>
> -    The function is meant to be executed on the remote TG node.
> -    It doesn't return anything, only sends packets.
> +    The function is meant to be executed on the remote TG node via the
> server proxy.
>

Same thing here. I don't think it matters that much since you refer to it
as being "the RPC function" for sending packets, but it feels like you are
referring instead to this specific function on this line.


> +    It only sends `xmlrpc_packets`, without capturing them.
>
>      Args:
>          xmlrpc_packets: The packets to send. These need to be converted to
> -            xmlrpc.client.Binary before sending to the remote server.
> +            :class:`~xmlrpc.client.Binary` objects before sending to the
> remote server.
>          send_iface: The logical name of the egress interface.
> -
> -    Returns:
> -        A list of bytes. Each item in the list represents one packet,
> which needs
> -            to be converted back upon transfer from the remote node.
>      """
>      scapy_packets = [scapy.all.Packet(packet.data) for packet in
> xmlrpc_packets]
>      scapy.all.sendp(scapy_packets, iface=send_iface, realtime=True,
> verbose=True)
> @@ -128,11 +125,19 @@ def scapy_send_packets(xmlrpc_packets:
> list[xmlrpc.client.Binary], send_iface: s
>
>
>  class QuittableXMLRPCServer(SimpleXMLRPCServer):
> -    """Basic XML-RPC server that may be extended
> -    by functions serializable by the marshal module.
> +    """Basic XML-RPC server.
> +
> +    The server may be augmented by functions serializable by the
> :mod:`marshal` module.
>      """
>
>      def __init__(self, *args, **kwargs):
> +        """Extend the XML-RPC server initialization.
> +
> +        Args:
> +            args: The positional arguments that will be passed to the
> superclass's constructor.
> +            kwargs: The keyword arguments that will be passed to the
> superclass's constructor.
> +                The `allow_none` argument will be set to :data:`True`.
> +        """
>          kwargs["allow_none"] = True
>          super().__init__(*args, **kwargs)
>          self.register_introspection_functions()
> @@ -140,13 +145,12 @@ def __init__(self, *args, **kwargs):
>          self.register_function(self.add_rpc_function)
>
>      def quit(self) -> None:
> +        """Quit the server."""
>          self._BaseServer__shutdown_request = True
>          return None
>
>      def add_rpc_function(self, name: str, function_bytes:
> xmlrpc.client.Binary) -> None:
> -        """Add a function to the server.
> -
> -        This is meant to be executed remotely.
> +        """Add a function to the server from the local server proxy.
>
>          Args:
>                name: The name of the function.
> @@ -157,6 +161,11 @@ def add_rpc_function(self, name: str, function_bytes:
> xmlrpc.client.Binary) -> N
>          self.register_function(function)
>
>      def serve_forever(self, poll_interval: float = 0.5) -> None:
> +        """Extend the superclass method with an additional print.
> +
> +        Once executed in the local server proxy, the print gives us a
> clear string to expect
> +        when starting the server. The print means the function was
> executed on the XML-RPC server.
> +        """
>          print("XMLRPC OK")
>          super().serve_forever(poll_interval)
>
> @@ -164,19 +173,12 @@ def serve_forever(self, poll_interval: float = 0.5)
> -> None:
>  class ScapyTrafficGenerator(CapturingTrafficGenerator):
>      """Provides access to scapy functions via an RPC interface.
>
> -    The traffic generator first starts an XML-RPC on the remote TG node.
> -    Then it populates the server with functions which use the Scapy
> library
> -    to send/receive traffic.
> -
> -    Any packets sent to the remote server are first converted to bytes.
> -    They are received as xmlrpc.client.Binary objects on the server side.
> -    When the server sends the packets back, they are also received as
> -    xmlrpc.client.Binary object on the client side, are converted back to
> Scapy
> -    packets and only then returned from the methods.
> +    The class extends the base with remote execution of scapy functions.
>

Same thing here if the above end up getting changed.


>
> -    Arguments:
> -        tg_node: The node where the traffic generator resides.
> -        config: The user configuration of the traffic generator.
> +    Any packets sent to the remote server are first converted to bytes.
> They are received as
> +    :class:`~xmlrpc.client.Binary` objects on the server side. When the
> server sends the packets
> +    back, they are also received as :class:`~xmlrpc.client.Binary`
> objects on the client side, are
> +    converted back to :class:`~scapy.packet.Packet` objects and only then
> returned from the methods.
>
>      Attributes:
>          session: The exclusive interactive remote session created by the
> Scapy
> @@ -190,6 +192,22 @@ class
> ScapyTrafficGenerator(CapturingTrafficGenerator):
>      _config: ScapyTrafficGeneratorConfig
>
>      def __init__(self, tg_node: Node, config:
> ScapyTrafficGeneratorConfig):
> +        """Extend the constructor with Scapy TG specifics.
> +
> +        The traffic generator first starts an XML-RPC server on the remote
> `tg_node`.
> +        Then it populates the server with functions which use the Scapy
> library
> +        to send/receive traffic:
> +
> +            * :func:`scapy_send_packets_and_capture`
> +            * :func:`scapy_send_packets`
> +
> +        To enable verbose logging from the xmlrpc client, use the
> :option:`--verbose`
> +        command line argument or the :envvar:`DTS_VERBOSE` environment
> variable.
> +
> +        Args:
> +            tg_node: The node where the traffic generator resides.
> +            config: The traffic generator's test run configuration.
> +        """
>          super().__init__(tg_node, config)
>
>          assert (
> @@ -231,10 +249,8 @@ def _start_xmlrpc_server_in_remote_python(self,
> listen_port: int) -> None:
>          # or class, so strip all lines containing only whitespace
>          src = "\n".join([line for line in src.splitlines() if not
> line.isspace() and line != ""])
>
> -        spacing = "\n" * 4
> -
>          # execute it in the python terminal
> -        self.session.send_command(spacing + src + spacing)
> +        self.session.send_command(src + "\n")
>          self.session.send_command(
>              f"server = QuittableXMLRPCServer(('0.0.0.0',
> {listen_port}));server.serve_forever()",
>              "XMLRPC OK",
> @@ -267,6 +283,7 @@ def _send_packets_and_capture(
>          return scapy_packets
>
>      def close(self) -> None:
> +        """Close the traffic generator."""
>          try:
>              self.rpc_server_proxy.quit()
>          except ConnectionRefusedError:
> --
> 2.34.1
>
>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v8 00/21] dts: docstrings update
  2023-12-01 16:00                 ` [PATCH v8 00/21] dts: docstrings update Yoan Picchi
@ 2023-12-01 18:23                   ` Jeremy Spewock
  0 siblings, 0 replies; 255+ messages in thread
From: Jeremy Spewock @ 2023-12-01 18:23 UTC (permalink / raw)
  To: Yoan Picchi
  Cc: Juraj Linkeš,
	thomas, Honnappa.Nagarahalli, probb, paul.szczepanek,
	Luca.Vizzarro, dev

Hey Juraj,

I looked through all the patches and left a few comments. All of the
comments I left, though, were very minor ones about spelling/grammar on a
few patches. Otherwise this all looks good to me.

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v8 12/21] dts: interactive remote session docstring update
  2023-11-30 21:49                   ` Jeremy Spewock
@ 2023-12-04  9:50                     ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-12-04  9:50 UTC (permalink / raw)
  To: Jeremy Spewock
  Cc: thomas, Honnappa.Nagarahalli, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro, dev

On Thu, Nov 30, 2023 at 10:50 PM Jeremy Spewock <jspewock@iol.unh.edu> wrote:
>
>
>
> On Thu, Nov 23, 2023 at 10:14 AM Juraj Linkeš <juraj.linkes@pantheon.tech> wrote:
>>
>> Format according to the Google format and PEP257, with slight
>> deviations.
>>
>> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
>> ---
>>  .../interactive_remote_session.py             | 36 +++----
>>  .../remote_session/interactive_shell.py       | 99 +++++++++++--------
>>  dts/framework/remote_session/python_shell.py  | 26 ++++-
>>  dts/framework/remote_session/testpmd_shell.py | 58 +++++++++--
>>  4 files changed, 149 insertions(+), 70 deletions(-)
>>
>> diff --git a/dts/framework/remote_session/interactive_remote_session.py b/dts/framework/remote_session/interactive_remote_session.py
>> index 098ded1bb0..1cc82e3377 100644
>> --- a/dts/framework/remote_session/interactive_remote_session.py
>> +++ b/dts/framework/remote_session/interactive_remote_session.py
>> @@ -22,27 +22,23 @@
>>  class InteractiveRemoteSession:
>>      """SSH connection dedicated to interactive applications.
>>
>> -    This connection is created using paramiko and is a persistent connection to the
>> -    host. This class defines methods for connecting to the node and configures this
>> -    connection to send "keep alive" packets every 30 seconds. Because paramiko attempts
>> -    to use SSH keys to establish a connection first, providing a password is optional.
>> -    This session is utilized by InteractiveShells and cannot be interacted with
>> -    directly.
>> -
>> -    Arguments:
>> -        node_config: Configuration class for the node you are connecting to.
>> -        _logger: Desired logger for this session to use.
>> +    The connection is created using `paramiko <https://docs.paramiko.org/en/latest/>`_
>> +    and is a persistent connection to the host. This class defines the methods for connecting
>> +    to the node and configures the connection to send "keep alive" packets every 30 seconds.
>> +    Because paramiko attempts to use SSH keys to establish a connection first, providing
>> +    a password is optional. This session is utilized by InteractiveShells
>> +    and cannot be interacted with directly.
>>
>>      Attributes:
>> -        hostname: Hostname that will be used to initialize a connection to the node.
>> -        ip: A subsection of hostname that removes the port for the connection if there
>> +        hostname: The hostname that will be used to initialize a connection to the node.
>> +        ip: A subsection of `hostname` that removes the port for the connection if there
>>              is one. If there is no port, this will be the same as hostname.
>> -        port: Port to use for the ssh connection. This will be extracted from the
>> -            hostname if there is a port included, otherwise it will default to 22.
>> +        port: Port to use for the ssh connection. This will be extracted from `hostname`
>> +            if there is a port included, otherwise it will default to ``22``.
>>          username: User to connect to the node with.
>>          password: Password of the user connecting to the host. This will default to an
>>              empty string if a password is not provided.
>> -        session: Underlying paramiko connection.
>> +        session: The underlying paramiko connection.
>>
>>      Raises:
>>          SSHConnectionError: There is an error creating the SSH connection.
>> @@ -58,9 +54,15 @@ class InteractiveRemoteSession:
>>      _node_config: NodeConfiguration
>>      _transport: Transport | None
>>
>> -    def __init__(self, node_config: NodeConfiguration, _logger: DTSLOG) -> None:
>> +    def __init__(self, node_config: NodeConfiguration, logger: DTSLOG) -> None:
>> +        """Connect to the node during initialization.
>> +
>> +        Args:
>> +            node_config: The test run configuration of the node to connect to.
>> +            logger: The logger instance this session will use.
>> +        """
>>          self._node_config = node_config
>> -        self._logger = _logger
>> +        self._logger = logger
>>          self.hostname = node_config.hostname
>>          self.username = node_config.user
>>          self.password = node_config.password if node_config.password else ""
>> diff --git a/dts/framework/remote_session/interactive_shell.py b/dts/framework/remote_session/interactive_shell.py
>> index 4db19fb9b3..b158f963b6 100644
>> --- a/dts/framework/remote_session/interactive_shell.py
>> +++ b/dts/framework/remote_session/interactive_shell.py
>> @@ -3,18 +3,20 @@
>>
>>  """Common functionality for interactive shell handling.
>>
>> -This base class, InteractiveShell, is meant to be extended by other classes that
>> -contain functionality specific to that shell type. These derived classes will often
>> -modify things like the prompt to expect or the arguments to pass into the application,
>> -but still utilize the same method for sending a command and collecting output. How
>> -this output is handled however is often application specific. If an application needs
>> -elevated privileges to start it is expected that the method for gaining those
>> -privileges is provided when initializing the class.
>> +The base class, :class:`InteractiveShell`, is meant to be extended by subclasses that contain
>> +functionality specific to that shell type. These subclasses will often modify things like
>> +the prompt to expect or the arguments to pass into the application, but still utilize
>> +the same method for sending a command and collecting output. How this output is handled, however,
>> +is often application specific. If an application needs elevated privileges to start, it is expected
>> +that the method for gaining those privileges is provided when initializing the class.
>> +
>> +The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT`
>> +environment variable configure the timeout of getting the output from command execution.
>>  """
>>
>>  from abc import ABC
>>  from pathlib import PurePath
>> -from typing import Callable
>> +from typing import Callable, ClassVar
>>
>>  from paramiko import Channel, SSHClient, channel  # type: ignore[import]
>>
>> @@ -30,28 +32,6 @@ class InteractiveShell(ABC):
>>      and collecting input until reaching a certain prompt. All interactive applications
>>      will use the same SSH connection, but each will create their own channel on that
>>      session.
>> -
>> -    Arguments:
>> -        interactive_session: The SSH session dedicated to interactive shells.
>> -        logger: Logger used for displaying information in the console.
>> -        get_privileged_command: Method for modifying a command to allow it to use
>> -            elevated privileges. If this is None, the application will not be started
>> -            with elevated privileges.
>> -        app_args: Command line arguments to be passed to the application on startup.
>> -        timeout: Timeout used for the SSH channel that is dedicated to this interactive
>> -            shell. This timeout is for collecting output, so if reading from the buffer
>> -            and no output is gathered within the timeout, an exception is thrown.
>> -
>> -    Attributes
>> -        _default_prompt: Prompt to expect at the end of output when sending a command.
>> -            This is often overridden by derived classes.
>> -        _command_extra_chars: Extra characters to add to the end of every command
>> -            before sending them. This is often overridden by derived classes and is
>> -            most commonly an additional newline character.
>> -        path: Path to the executable to start the interactive application.
>> -        dpdk_app: Whether this application is a DPDK app. If it is, the build
>> -            directory for DPDK on the node will be prepended to the path to the
>> -            executable.
>>      """
>>
>>      _interactive_session: SSHClient
>> @@ -61,10 +41,22 @@ class InteractiveShell(ABC):
>>      _logger: DTSLOG
>>      _timeout: float
>>      _app_args: str
>> -    _default_prompt: str = ""
>> -    _command_extra_chars: str = ""
>> -    path: PurePath
>> -    dpdk_app: bool = False
>> +
>> +    #: Prompt to expect at the end of output when sending a command.
>> +    #: This is often overridden by subclasses.
>> +    _default_prompt: ClassVar[str] = ""
>> +
>> +    #: Extra characters to add to the end of every command
>> +    #: before sending them. This is often overridden by subclasses and is
>> +    #: most commonly an additional newline character.
>> +    _command_extra_chars: ClassVar[str] = ""
>> +
>> +    #: Path to the executable to start the interactive application.
>> +    path: ClassVar[PurePath]
>> +
>> +    #: Whether this application is a DPDK app. If it is, the build directory
>> +    #: for DPDK on the node will be prepended to the path to the executable.
>> +    dpdk_app: ClassVar[bool] = False
>>
>>      def __init__(
>>          self,
>> @@ -74,6 +66,19 @@ def __init__(
>>          app_args: str = "",
>>          timeout: float = SETTINGS.timeout,
>>      ) -> None:
>> +        """Create an SSH channel during initialization.
>> +
>> +        Args:
>> +            interactive_session: The SSH session dedicated to interactive shells.
>> +            logger: The logger instance this session will use.
>> +            get_privileged_command: A method for modifying a command to allow it to use
>> +                elevated privileges. If :data:`None`, the application will not be started
>> +                with elevated privileges.
>> +            app_args: The command line arguments to be passed to the application on startup.
>> +            timeout: The timeout used for the SSH channel that is dedicated to this interactive
>> +                shell. This timeout is for collecting output, so if reading from the buffer
>> +                and no output is gathered within the timeout, an exception is thrown.
>> +        """
>>          self._interactive_session = interactive_session
>>          self._ssh_channel = self._interactive_session.invoke_shell()
>>          self._stdin = self._ssh_channel.makefile_stdin("w")
>> @@ -90,6 +95,10 @@ def _start_application(self, get_privileged_command: Callable[[str], str] | None
>>
>>          This method is often overridden by subclasses as their process for
>>          starting may look different.
>> +
>> +        Args:
>> +            get_privileged_command: A function (but could be any callable) that produces
>> +                the version of the command with elevated privileges.
>>          """
>>          start_command = f"{self.path} {self._app_args}"
>>          if get_privileged_command is not None:
>> @@ -97,16 +106,24 @@ def _start_application(self, get_privileged_command: Callable[[str], str] | None
>>          self.send_command(start_command)
>>
>>      def send_command(self, command: str, prompt: str | None = None) -> str:
>> -        """Send a command and get all output before the expected ending string.
>> +        """Send `command` and get all output before the expected ending string.
>>
>>          Lines that expect input are not included in the stdout buffer, so they cannot
>> -        be used for expect. For example, if you were prompted to log into something
>> -        with a username and password, you cannot expect "username:" because it won't
>> -        yet be in the stdout buffer. A workaround for this could be consuming an
>> -        extra newline character to force the current prompt into the stdout buffer.
>> +        be used for expect.
>> +
>> +        Example:
>> +            If you were prompted to log into something with a username and password,
>> +            you cannot expect ``username:`` because it won't yet be in the stdout buffer.
>> +            A workaround for this could be consuming an extra newline character to force
>> +            the current `prompt` into the stdout buffer.
>> +
>> +        Args:
>> +            command: The command to send.
>> +            prompt: After sending the command, `send_command` will be expecting this string.
>> +                If :data:`None`, will use the class's default prompt.
>>
>>          Returns:
>> -            All output in the buffer before expected string
>> +            All output in the buffer before expected string.
>>          """
>>          self._logger.info(f"Sending: '{command}'")
>>          if prompt is None:
>> @@ -124,8 +141,10 @@ def send_command(self, command: str, prompt: str | None = None) -> str:
>>          return out
>>
>>      def close(self) -> None:
>> +        """Properly free all resources."""
>>          self._stdin.close()
>>          self._ssh_channel.close()
>>
>>      def __del__(self) -> None:
>> +        """Make sure the session is properly closed before deleting the object."""
>>          self.close()
>> diff --git a/dts/framework/remote_session/python_shell.py b/dts/framework/remote_session/python_shell.py
>> index cc3ad48a68..ccfd3783e8 100644
>> --- a/dts/framework/remote_session/python_shell.py
>> +++ b/dts/framework/remote_session/python_shell.py
>> @@ -1,12 +1,32 @@
>>  # SPDX-License-Identifier: BSD-3-Clause
>>  # Copyright(c) 2023 PANTHEON.tech s.r.o.
>>
>> +"""Python interactive shell.
>> +
>> +Typical usage example in a TestSuite::
>> +
>> +    from framework.remote_session import PythonShell
>> +    python_shell = self.tg_node.create_interactive_shell(
>> +        PythonShell, timeout=5, privileged=True
>> +    )
>> +    python_shell.send_command("print('Hello World')")
>> +    python_shell.close()
>> +"""
>> +
>>  from pathlib import PurePath
>> +from typing import ClassVar
>>
>>  from .interactive_shell import InteractiveShell
>>
>>
>>  class PythonShell(InteractiveShell):
>> -    _default_prompt: str = ">>>"
>> -    _command_extra_chars: str = "\n"
>> -    path: PurePath = PurePath("python3")
>> +    """Python interactive shell."""
>> +
>> +    #: Python's prompt.
>> +    _default_prompt: ClassVar[str] = ">>>"
>> +
>> +    #: This forces the prompt to appear after sending a command.
>> +    _command_extra_chars: ClassVar[str] = "\n"
>> +
>> +    #: The Python executable.
>> +    path: ClassVar[PurePath] = PurePath("python3")
>> diff --git a/dts/framework/remote_session/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py
>> index 08ac311016..79481e845c 100644
>> --- a/dts/framework/remote_session/testpmd_shell.py
>> +++ b/dts/framework/remote_session/testpmd_shell.py
>> @@ -1,41 +1,79 @@
>>  # SPDX-License-Identifier: BSD-3-Clause
>>  # Copyright(c) 2023 University of New Hampshire
>>
>
> Should you add to the copyright here for adding comments?
>

I'll add it, as it sounds fine to me (it is a real contribution), but
I actually don't know.

>>
>> +"""Testpmd interactive shell.
>> +
>> +Typical usage example in a TestSuite::
>> +
>> +    testpmd_shell = self.sut_node.create_interactive_shell(
>> +            TestPmdShell, privileged=True
>> +        )
>> +    devices = testpmd_shell.get_devices()
>> +    for device in devices:
>> +        print(device)
>> +    testpmd_shell.close()
>> +"""
>> +
>>  from pathlib import PurePath
>> -from typing import Callable
>> +from typing import Callable, ClassVar
>>
>>  from .interactive_shell import InteractiveShell
>>
>>
>>  class TestPmdDevice(object):
>> +    """The data of a device that testpmd can recognize.
>> +
>> +    Attributes:
>> +        pci_address: The PCI address of the device.
>> +    """
>> +
>>      pci_address: str
>>
>>      def __init__(self, pci_address_line: str):
>> +        """Initialize the device from the testpmd output line string.
>> +
>> +        Args:
>> +            pci_address_line: A line of testpmd output that contains a device.
>> +        """
>>          self.pci_address = pci_address_line.strip().split(": ")[1].strip()
>>
>>      def __str__(self) -> str:
>> +        """The PCI address captures what the device is."""
>>          return self.pci_address
>>
>>
>>  class TestPmdShell(InteractiveShell):
>> -    path: PurePath = PurePath("app", "dpdk-testpmd")
>> -    dpdk_app: bool = True
>> -    _default_prompt: str = "testpmd>"
>> -    _command_extra_chars: str = "\n"  # We want to append an extra newline to every command
>> +    """Testpmd interactive shell.
>> +
>> +    Users of the testpmd shell should never use
>> +    the :meth:`~.interactive_shell.InteractiveShell.send_command` method directly, but rather
>> +    call specialized methods. If there isn't one that satisfies a need, it should be added.
>> +    """
>> +
>> +    #: The path to the testpmd executable.
>> +    path: ClassVar[PurePath] = PurePath("app", "dpdk-testpmd")
>> +
>> +    #: Flag this as a DPDK app so that it's clear this is not a system app and
>> +    #: needs to be looked for in a specific path.
>> +    dpdk_app: ClassVar[bool] = True
>> +
>> +    #: The testpmd's prompt.
>> +    _default_prompt: ClassVar[str] = "testpmd>"
>> +
>> +    #: This forces the prompt to appear after sending a command.
>> +    _command_extra_chars: ClassVar[str] = "\n"
>>
>>      def _start_application(self, get_privileged_command: Callable[[str], str] | None) -> None:
>> -        """See "_start_application" in InteractiveShell."""
>>          self._app_args += " -- -i"
>>          super()._start_application(get_privileged_command)
>>
>>      def get_devices(self) -> list[TestPmdDevice]:
>> -        """Get a list of device names that are known to testpmd
>> +        """Get a list of device names that are known to testpmd.
>>
>> -        Uses the device info listed in testpmd and then parses the output to
>> -        return only the names of the devices.
>> +        Uses the device info listed in testpmd and then parses the output.
>>
>>          Returns:
>> -            A list of strings representing device names (e.g. 0000:14:00.1)
>> +            A list of devices.
>>          """
>>          dev_info: str = self.send_command("show device info all")
>>          dev_list: list[TestPmdDevice] = []
>> --
>> 2.34.1
>>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v8 15/21] dts: os session docstring update
  2023-12-01 17:33                   ` Jeremy Spewock
@ 2023-12-04  9:53                     ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-12-04  9:53 UTC (permalink / raw)
  To: Jeremy Spewock
  Cc: thomas, Honnappa.Nagarahalli, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro, dev

>> diff --git a/dts/framework/testbed_model/os_session.py b/dts/framework/testbed_model/os_session.py
>> index 76e595a518..cfdbd1c4bd 100644
>> --- a/dts/framework/testbed_model/os_session.py
>> +++ b/dts/framework/testbed_model/os_session.py
<snip>
>> @@ -201,69 +283,119 @@ def build_dpdk(
>>          rebuild: bool = False,
>>          timeout: float = SETTINGS.compile_timeout,
>>      ) -> None:
>> -        """
>> -        Build DPDK in the input dir with specified environment variables and meson
>> -        arguments.
>> +        """Build DPDK on the remote node.
>> +
>> +        An extracted DPDK tarball must be present on the node. The build consists of two steps::
>> +
>> +            meson setup <meson args> remote_dpdk_dir remote_dpdk_build_dir
>> +            ninja -C remote_dpdk_build_dir
>> +
>> +        The :option:`--compile-timeout` command line argument and the :envvar:`DTS_COMPILE_TIMEOUT`
>> +        environment variable configure the timeout of DPDK build.
>> +
>> +        Args:
>> +            env_vars: Use these environment variables then building DPDK.
>
>
> I think this is meant to be "when building DPDK" instead.
>

Yes, good catch.
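
To make the docstring's two steps concrete, with the environment
variables applied, the composed commands would look roughly like this
(a sketch with made-up values, not the actual implementation - the real
code may pass the environment through the session API instead of a
shell prefix):

    # Illustrative values only.
    env_vars = {"CC": "gcc", "PKG_CONFIG_PATH": "/opt/dpdk/lib/pkgconfig"}
    exports = " ".join(f"{key}={value}" for key, value in env_vars.items())

    # The two steps from the docstring, with the environment prepended:
    setup_cmd = f"{exports} meson setup -Denable_kmods=true remote_dpdk_dir remote_dpdk_build_dir"
    build_cmd = f"{exports} ninja -C remote_dpdk_build_dir"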

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v8 18/21] dts: sut and tg nodes docstring update
  2023-12-01 18:06                   ` Jeremy Spewock
@ 2023-12-04 10:02                     ` Juraj Linkeš
  2023-12-04 11:02                       ` Bruce Richardson
  0 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-12-04 10:02 UTC (permalink / raw)
  To: Jeremy Spewock
  Cc: thomas, Honnappa.Nagarahalli, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro, dev

On Fri, Dec 1, 2023 at 7:06 PM Jeremy Spewock <jspewock@iol.unh.edu> wrote:
>
>
>
> On Thu, Nov 23, 2023 at 10:14 AM Juraj Linkeš <juraj.linkes@pantheon.tech> wrote:
>>
>> Format according to the Google format and PEP257, with slight
>> deviations.
>>
>> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
>> ---
>>  dts/framework/testbed_model/sut_node.py | 230 ++++++++++++++++--------
>>  dts/framework/testbed_model/tg_node.py  |  42 +++--
>>  2 files changed, 176 insertions(+), 96 deletions(-)
>>
>> diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
>> index 5ce9446dba..c4acea38d1 100644
>> --- a/dts/framework/testbed_model/sut_node.py
>> +++ b/dts/framework/testbed_model/sut_node.py
>> @@ -3,6 +3,14 @@
>>  # Copyright(c) 2023 PANTHEON.tech s.r.o.
>>  # Copyright(c) 2023 University of New Hampshire
>>
>> +"""System under test (DPDK + hardware) node.
>> +
>> +A system under test (SUT) is the combination of DPDK
>> +and the hardware we're testing with DPDK (NICs, crypto and other devices).
>> +An SUT node is where this SUT runs.
>> +"""
>
>
> I think this should just be "A SUT node".
>

I always spell it out, which is why I used "an" (an es, ju:, ti: node).
From what I understand, the article is based on how the word is
pronounced. If it's an initialism (it's spelled out), we should use "an"
and if it's an abbreviation (pronounced as the whole word), we should
use "a". It always made sense to me as an initialism - I think that's
the common usage.

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v8 19/21] dts: base traffic generators docstring update
  2023-12-01 18:05                   ` Jeremy Spewock
@ 2023-12-04 10:03                     ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-12-04 10:03 UTC (permalink / raw)
  To: Jeremy Spewock
  Cc: thomas, Honnappa.Nagarahalli, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro, dev

On Fri, Dec 1, 2023 at 7:05 PM Jeremy Spewock <jspewock@iol.unh.edu> wrote:
>
>
>
> On Thu, Nov 23, 2023 at 10:14 AM Juraj Linkeš <juraj.linkes@pantheon.tech> wrote:
>>
>> Format according to the Google format and PEP257, with slight
>> deviations.
>>
>> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
>> ---
>>  .../traffic_generator/__init__.py             | 22 ++++++++-
>>  .../capturing_traffic_generator.py            | 45 +++++++++++--------
>>  .../traffic_generator/traffic_generator.py    | 33 ++++++++------
>>  3 files changed, 67 insertions(+), 33 deletions(-)
>>
>> diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
>> index 52888d03fa..11e2bd7d97 100644
>> --- a/dts/framework/testbed_model/traffic_generator/__init__.py
>> +++ b/dts/framework/testbed_model/traffic_generator/__init__.py
<snip>
>> @@ -60,19 +67,17 @@ def send_packets(self, packets: list[Packet], port: Port) -> None:
>>
>>      @abstractmethod
>>      def _send_packets(self, packets: list[Packet], port: Port) -> None:
>> -        """
>> -        The extended classes must implement this method which
>> -        sends packets on send_port. The method should block until all packets
>> -        are fully sent.
>> +        """The implementation of :method:`send_packets`.
>> +
>> +        The subclasses must implement this method which sends `packets` on `port`.
>> +        The method should block until all `packets` are fully sent.
>> +
>> +        What full sent means is defined by the traffic generator.
>>          """
>
>
> I think this should be "what fully sent means".
>

Thanks, Yoan also caught this.
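
Since the wording tripped up two reviewers, maybe it helps to spell out
the split: the send_packets/_send_packets pair is the usual
template-method pattern. A minimal self-contained sketch (simplified
names and types, not the real DTS classes):

    from abc import ABC, abstractmethod

    class TrafficGeneratorSketch(ABC):
        """Stand-in for the real base class, for illustration only."""

        def send_packets(self, packets: list[bytes], port: str) -> None:
            # The public method does the common work (logging, in DTS)
            # and then delegates to the subclass-specific implementation.
            print(f"Sending {len(packets)} packets on {port}.")
            self._send_packets(packets, port)

        @abstractmethod
        def _send_packets(self, packets: list[bytes], port: str) -> None:
            """Subclasses implement the actual send and block until done."""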

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v8 20/21] dts: scapy tg docstring update
  2023-12-01 18:17                   ` Jeremy Spewock
@ 2023-12-04 10:07                     ` Juraj Linkeš
  0 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-12-04 10:07 UTC (permalink / raw)
  To: Jeremy Spewock
  Cc: thomas, Honnappa.Nagarahalli, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro, dev

On Fri, Dec 1, 2023 at 7:18 PM Jeremy Spewock <jspewock@iol.unh.edu> wrote:
>
>
>
> On Thu, Nov 23, 2023 at 10:14 AM Juraj Linkeš <juraj.linkes@pantheon.tech> wrote:
>>
>> Format according to the Google format and PEP257, with slight
>> deviations.
>>
>> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
>> ---
>>  .../testbed_model/traffic_generator/scapy.py  | 91 +++++++++++--------
>>  1 file changed, 54 insertions(+), 37 deletions(-)
>>
>> diff --git a/dts/framework/testbed_model/traffic_generator/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py
>> index c88cf28369..30ea3914ee 100644
>> --- a/dts/framework/testbed_model/traffic_generator/scapy.py
>> +++ b/dts/framework/testbed_model/traffic_generator/scapy.py
>> @@ -2,14 +2,15 @@
>>  # Copyright(c) 2022 University of New Hampshire
>>  # Copyright(c) 2023 PANTHEON.tech s.r.o.
>>
>> -"""Scapy traffic generator.
>> +"""The Scapy traffic generator.
>>
>> -Traffic generator used for functional testing, implemented using the Scapy library.
>> +A traffic generator used for functional testing, implemented with
>> +`the Scapy library <https://scapy.readthedocs.io/en/latest/>`_.
>>  The traffic generator uses an XML-RPC server to run Scapy on the remote TG node.
>>
>> -The XML-RPC server runs in an interactive remote SSH session running Python console,
>> -where we start the server. The communication with the server is facilitated with
>> -a local server proxy.
>> +The traffic generator uses the :mod:`xmlrpc.server` module to run an XML-RPC server
>> +in an interactive remote Python SSH session. The communication with the server is facilitated
>> +with a local server proxy from the :mod:`xmlrpc.client` module.
>>  """
>>
>>  import inspect
>> @@ -69,20 +70,20 @@ def scapy_send_packets_and_capture(
>>      recv_iface: str,
>>      duration: float,
>>  ) -> list[bytes]:
>> -    """RPC function to send and capture packets.
>> +    """The RPC function to send and capture packets.
>>
>> -    The function is meant to be executed on the remote TG node.
>> +    The function is meant to be executed on the remote TG node via the server proxy.
>>
>
> Should this maybe be "This function is meant" instead? I'm not completely sure if it should be, I feel like it might be able to go either way.
>

There is something to this. It's a bit more explicit and as such less
confusing, which feels better, so I'll change it in all three
instances.

>>
>>      Args:
>>          xmlrpc_packets: The packets to send. These need to be converted to
>> -            xmlrpc.client.Binary before sending to the remote server.
>> +            :class:`~xmlrpc.client.Binary` objects before sending to the remote server.
>>          send_iface: The logical name of the egress interface.
>>          recv_iface: The logical name of the ingress interface.
>>          duration: Capture for this amount of time, in seconds.
>>
>>      Returns:
>>          A list of bytes. Each item in the list represents one packet, which needs
>> -            to be converted back upon transfer from the remote node.
>> +        to be converted back upon transfer from the remote node.
>>      """
>>      scapy_packets = [scapy.all.Packet(packet.data) for packet in xmlrpc_packets]
>>      sniffer = scapy.all.AsyncSniffer(
>> @@ -96,19 +97,15 @@ def scapy_send_packets_and_capture(
>>
>>
>>  def scapy_send_packets(xmlrpc_packets: list[xmlrpc.client.Binary], send_iface: str) -> None:
>> -    """RPC function to send packets.
>> +    """The RPC function to send packets.
>>
>> -    The function is meant to be executed on the remote TG node.
>> -    It doesn't return anything, only sends packets.
>> +    The function is meant to be executed on the remote TG node via the server proxy.
>
>
> Same thing here. I don't think it matters that much since you refer to it as being "the RPC function" for sending packets, but it feels like you are referring instead to this specific function on this line.
>
>>
>> +    It only sends `xmlrpc_packets`, without capturing them.
>>
>>      Args:
>>          xmlrpc_packets: The packets to send. These need to be converted to
>> -            xmlrpc.client.Binary before sending to the remote server.
>> +            :class:`~xmlrpc.client.Binary` objects before sending to the remote server.
>>          send_iface: The logical name of the egress interface.
>> -
>> -    Returns:
>> -        A list of bytes. Each item in the list represents one packet, which needs
>> -            to be converted back upon transfer from the remote node.
>>      """
>>      scapy_packets = [scapy.all.Packet(packet.data) for packet in xmlrpc_packets]
>>      scapy.all.sendp(scapy_packets, iface=send_iface, realtime=True, verbose=True)
>> @@ -128,11 +125,19 @@ def scapy_send_packets(xmlrpc_packets: list[xmlrpc.client.Binary], send_iface: s
>>
>>
>>  class QuittableXMLRPCServer(SimpleXMLRPCServer):
>> -    """Basic XML-RPC server that may be extended
>> -    by functions serializable by the marshal module.
>> +    """Basic XML-RPC server.
>> +
>> +    The server may be augmented by functions serializable by the :mod:`marshal` module.
>>      """
>>
>>      def __init__(self, *args, **kwargs):
>> +        """Extend the XML-RPC server initialization.
>> +
>> +        Args:
>> +            args: The positional arguments that will be passed to the superclass's constructor.
>> +            kwargs: The keyword arguments that will be passed to the superclass's constructor.
>> +                The `allow_none` argument will be set to :data:`True`.
>> +        """
>>          kwargs["allow_none"] = True
>>          super().__init__(*args, **kwargs)
>>          self.register_introspection_functions()
>> @@ -140,13 +145,12 @@ def __init__(self, *args, **kwargs):
>>          self.register_function(self.add_rpc_function)
>>
>>      def quit(self) -> None:
>> +        """Quit the server."""
>>          self._BaseServer__shutdown_request = True
>>          return None
>>
>>      def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> None:
>> -        """Add a function to the server.
>> -
>> -        This is meant to be executed remotely.
>> +        """Add a function to the server from the local server proxy.
>>
>>          Args:
>>                name: The name of the function.
>> @@ -157,6 +161,11 @@ def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> N
>>          self.register_function(function)
>>
>>      def serve_forever(self, poll_interval: float = 0.5) -> None:
>> +        """Extend the superclass method with an additional print.
>> +
>> +        Once executed in the local server proxy, the print gives us a clear string to expect
>> +        when starting the server. The print means the function was executed on the XML-RPC server.
>> +        """
>>          print("XMLRPC OK")
>>          super().serve_forever(poll_interval)
>>
>> @@ -164,19 +173,12 @@ def serve_forever(self, poll_interval: float = 0.5) -> None:
>>  class ScapyTrafficGenerator(CapturingTrafficGenerator):
>>      """Provides access to scapy functions via an RPC interface.
>>
>> -    The traffic generator first starts an XML-RPC on the remote TG node.
>> -    Then it populates the server with functions which use the Scapy library
>> -    to send/receive traffic.
>> -
>> -    Any packets sent to the remote server are first converted to bytes.
>> -    They are received as xmlrpc.client.Binary objects on the server side.
>> -    When the server sends the packets back, they are also received as
>> -    xmlrpc.client.Binary object on the client side, are converted back to Scapy
>> -    packets and only then returned from the methods.
>> +    The class extends the base with remote execution of scapy functions.
>
>
> Same thing here if the above end up getting changed.
>
>>
>>
>> -    Arguments:
>> -        tg_node: The node where the traffic generator resides.
>> -        config: The user configuration of the traffic generator.
>> +    Any packets sent to the remote server are first converted to bytes. They are received as
>> +    :class:`~xmlrpc.client.Binary` objects on the server side. When the server sends the packets
>> +    back, they are also received as :class:`~xmlrpc.client.Binary` objects on the client side, are
>> +    converted back to :class:`~scapy.packet.Packet` objects and only then returned from the methods.
>>
>>      Attributes:
>>          session: The exclusive interactive remote session created by the Scapy
>> @@ -190,6 +192,22 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator):
>>      _config: ScapyTrafficGeneratorConfig
>>
>>      def __init__(self, tg_node: Node, config: ScapyTrafficGeneratorConfig):
>> +        """Extend the constructor with Scapy TG specifics.
>> +
>> +        The traffic generator first starts an XML-RPC on the remote `tg_node`.
>> +        Then it populates the server with functions which use the Scapy library
>> +        to send/receive traffic:
>> +
>> +            * :func:`scapy_send_packets_and_capture`
>> +            * :func:`scapy_send_packets`
>> +
>> +        To enable verbose logging from the xmlrpc client, use the :option:`--verbose`
>> +        command line argument or the :envvar:`DTS_VERBOSE` environment variable.
>> +
>> +        Args:
>> +            tg_node: The node where the traffic generator resides.
>> +            config: The traffic generator's test run configuration.
>> +        """
>>          super().__init__(tg_node, config)
>>
>>          assert (
>> @@ -231,10 +249,8 @@ def _start_xmlrpc_server_in_remote_python(self, listen_port: int) -> None:
>>          # or class, so strip all lines containing only whitespace
>>          src = "\n".join([line for line in src.splitlines() if not line.isspace() and line != ""])
>>
>> -        spacing = "\n" * 4
>> -
>>          # execute it in the python terminal
>> -        self.session.send_command(spacing + src + spacing)
>> +        self.session.send_command(src + "\n")
>>          self.session.send_command(
>>              f"server = QuittableXMLRPCServer(('0.0.0.0', {listen_port}));server.serve_forever()",
>>              "XMLRPC OK",
>> @@ -267,6 +283,7 @@ def _send_packets_and_capture(
>>          return scapy_packets
>>
>>      def close(self) -> None:
>> +        """Close the traffic generator."""
>>          try:
>>              self.rpc_server_proxy.quit()
>>          except ConnectionRefusedError:
>> --
>> 2.34.1
>>
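
For readers unfamiliar with the two standard library modules named in the
module docstring above, a minimal sketch of the server/proxy pairing (the
port number and function are illustrative, not DTS code):

    # server side -- in DTS this runs in the remote Python shell on the TG node
    from xmlrpc.server import SimpleXMLRPCServer

    def echo(binary_packet):
        # bytes arrive wrapped in xmlrpc.client.Binary; .data unwraps them
        return binary_packet.data

    server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
    server.register_function(echo)
    server.serve_forever()

    # client side -- in DTS this is the local server proxy
    import xmlrpc.client

    proxy = xmlrpc.client.ServerProxy("http://localhost:8000")
    result = proxy.echo(xmlrpc.client.Binary(b"\x00\x01"))
    assert result.data == b"\x00\x01"  # received as Binary on the client, too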

^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v9 00/21] dts: docstrings update
  2023-11-23 15:13               ` [PATCH v8 00/21] dts: docstrings update Juraj Linkeš
                                   ` (21 preceding siblings ...)
  2023-12-01 16:00                 ` [PATCH v8 00/21] dts: docstrings update Yoan Picchi
@ 2023-12-04 10:24                 ` Juraj Linkeš
  2023-12-04 10:24                   ` [PATCH v9 01/21] dts: code adjustments for doc generation Juraj Linkeš
                                     ` (21 more replies)
  22 siblings, 22 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-12-04 10:24 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

The first commit makes changes to the code, mainly restructuring it so
that the actual API docs generation works. Some of the code changes also
get reflected in the documentation, such as making
functions/methods/attributes private or public.

The rest of the commits deal with the actual docstring documentation
(from which the API docs are generated). The format of the docstrings
is the Google format [0] with PEP257 [1], plus some guidelines covering
what the Google format doesn't, captured in the last commit of this
group.
The docstring updates are split into many commits to make review
possible. When accepted, they may be squashed.
The docstrings have been composed in anticipation of [2], adhering to
a maximum line length of 100. We don't have a tool for automatic
docstring formatting, hence the use of 100 right away to save time.
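
For illustration, a docstring in this format looks roughly as follows
(a made-up function, not one from the series):

    def send_packets(packets: list[Packet], port: Port) -> None:
        """Send packets on the given port.

        The summary line above follows PEP257; this longer description
        is separated from it by a blank line and wrapped at 100 characters.

        Args:
            packets: The packets to send.
            port: The port to send the packets on.

        Raises:
            ValueError: If `packets` is empty.
        """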

NOTE: The logger.py module is not fully documented, as it's being
refactored and the refactor will be submitted in the near future.
Documenting it now seems unnecessary.

[0] https://google.github.io/styleguide/pyguide.html#s3.8.4-comments-in-classes
[1] https://peps.python.org/pep-0257/
[2] https://patches.dpdk.org/project/dpdk/list/?series=29844

v7:
Split the series into docstrings and api docs generation and addressed
comments.

v8:
Addressed review comments, all of which were pretty minor - small
grammatical changes, a little bit of rewording to remove confusion here
and there, additional explanations and so on.

v9:
Addressed review comments, again all minor grammar fixes.

Juraj Linkeš (21):
  dts: code adjustments for doc generation
  dts: add docstring checker
  dts: add basic developer docs
  dts: exceptions docstring update
  dts: settings docstring update
  dts: logger and utils docstring update
  dts: dts runner and main docstring update
  dts: test suite docstring update
  dts: test result docstring update
  dts: config docstring update
  dts: remote session docstring update
  dts: interactive remote session docstring update
  dts: port and virtual device docstring update
  dts: cpu docstring update
  dts: os session docstring update
  dts: posix and linux sessions docstring update
  dts: node docstring update
  dts: sut and tg nodes docstring update
  dts: base traffic generators docstring update
  dts: scapy tg docstring update
  dts: test suites docstring update

 doc/guides/tools/dts.rst                      |  73 +++
 dts/framework/__init__.py                     |  12 +-
 dts/framework/config/__init__.py              | 375 +++++++++++++---
 dts/framework/config/types.py                 | 132 ++++++
 dts/framework/dts.py                          | 162 +++++--
 dts/framework/exception.py                    | 156 ++++---
 dts/framework/logger.py                       |  72 ++-
 dts/framework/remote_session/__init__.py      |  80 ++--
 .../interactive_remote_session.py             |  36 +-
 .../remote_session/interactive_shell.py       | 150 +++++++
 dts/framework/remote_session/os_session.py    | 284 ------------
 dts/framework/remote_session/python_shell.py  |  32 ++
 .../remote_session/remote/__init__.py         |  27 --
 .../remote/interactive_shell.py               | 131 ------
 .../remote_session/remote/python_shell.py     |  12 -
 .../remote_session/remote/remote_session.py   | 168 -------
 .../remote_session/remote/testpmd_shell.py    |  45 --
 .../remote_session/remote_session.py          | 230 ++++++++++
 .../{remote => }/ssh_session.py               |  28 +-
 dts/framework/remote_session/testpmd_shell.py |  84 ++++
 dts/framework/settings.py                     | 188 ++++++--
 dts/framework/test_result.py                  | 301 ++++++++++---
 dts/framework/test_suite.py                   | 236 +++++++---
 dts/framework/testbed_model/__init__.py       |  29 +-
 dts/framework/testbed_model/{hw => }/cpu.py   | 209 ++++++---
 dts/framework/testbed_model/hw/__init__.py    |  27 --
 dts/framework/testbed_model/hw/port.py        |  60 ---
 .../testbed_model/hw/virtual_device.py        |  16 -
 .../linux_session.py                          |  70 ++-
 dts/framework/testbed_model/node.py           | 214 ++++++---
 dts/framework/testbed_model/os_session.py     | 422 ++++++++++++++++++
 dts/framework/testbed_model/port.py           |  93 ++++
 .../posix_session.py                          |  85 +++-
 dts/framework/testbed_model/sut_node.py       | 238 ++++++----
 dts/framework/testbed_model/tg_node.py        |  69 ++-
 .../traffic_generator/__init__.py             |  43 ++
 .../capturing_traffic_generator.py            |  49 +-
 .../{ => traffic_generator}/scapy.py          | 110 +++--
 .../traffic_generator.py                      |  47 +-
 dts/framework/testbed_model/virtual_device.py |  29 ++
 dts/framework/utils.py                        | 122 ++---
 dts/main.py                                   |  19 +-
 dts/poetry.lock                               |  12 +-
 dts/pyproject.toml                            |   6 +-
 dts/tests/TestSuite_hello_world.py            |  16 +-
 dts/tests/TestSuite_os_udp.py                 |  20 +-
 dts/tests/TestSuite_smoke_tests.py            |  61 ++-
 47 files changed, 3452 insertions(+), 1628 deletions(-)
 create mode 100644 dts/framework/config/types.py
 rename dts/framework/remote_session/{remote => }/interactive_remote_session.py (76%)
 create mode 100644 dts/framework/remote_session/interactive_shell.py
 delete mode 100644 dts/framework/remote_session/os_session.py
 create mode 100644 dts/framework/remote_session/python_shell.py
 delete mode 100644 dts/framework/remote_session/remote/__init__.py
 delete mode 100644 dts/framework/remote_session/remote/interactive_shell.py
 delete mode 100644 dts/framework/remote_session/remote/python_shell.py
 delete mode 100644 dts/framework/remote_session/remote/remote_session.py
 delete mode 100644 dts/framework/remote_session/remote/testpmd_shell.py
 create mode 100644 dts/framework/remote_session/remote_session.py
 rename dts/framework/remote_session/{remote => }/ssh_session.py (82%)
 create mode 100644 dts/framework/remote_session/testpmd_shell.py
 rename dts/framework/testbed_model/{hw => }/cpu.py (50%)
 delete mode 100644 dts/framework/testbed_model/hw/__init__.py
 delete mode 100644 dts/framework/testbed_model/hw/port.py
 delete mode 100644 dts/framework/testbed_model/hw/virtual_device.py
 rename dts/framework/{remote_session => testbed_model}/linux_session.py (77%)
 create mode 100644 dts/framework/testbed_model/os_session.py
 create mode 100644 dts/framework/testbed_model/port.py
 rename dts/framework/{remote_session => testbed_model}/posix_session.py (73%)
 create mode 100644 dts/framework/testbed_model/traffic_generator/__init__.py
 rename dts/framework/testbed_model/{ => traffic_generator}/capturing_traffic_generator.py (68%)
 rename dts/framework/testbed_model/{ => traffic_generator}/scapy.py (71%)
 rename dts/framework/testbed_model/{ => traffic_generator}/traffic_generator.py (51%)
 create mode 100644 dts/framework/testbed_model/virtual_device.py

-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v9 01/21] dts: code adjustments for doc generation
  2023-12-04 10:24                 ` [PATCH v9 " Juraj Linkeš
@ 2023-12-04 10:24                   ` Juraj Linkeš
  2023-12-04 10:24                   ` [PATCH v9 02/21] dts: add docstring checker Juraj Linkeš
                                     ` (20 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-12-04 10:24 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

The standard Python tool for generating API documentation, Sphinx,
imports modules one-by-one when generating the documentation. This
requires code changes:
* properly guarding argument parsing in the if __name__ == '__main__'
  block (see the sketch after this list),
* the logger used by the DTS runner underwent the same treatment so that
  it doesn't create log files outside of a DTS run,
* however, DTS uses the arguments to construct an object holding global
  variables. The defaults for the global variables needed to be moved
  out of argument parsing and defined elsewhere,
* importing the remote_session module from framework resulted in
  circular imports because of one module trying to import another
  module. This is fixed by reorganizing the code,
* some code reorganization was done because the resulting structure
  makes more sense, improving documentation clarity.
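
A minimal sketch of the guard from the first point (simplified; the exact
wiring in dts/main.py may differ):

    from framework import dts, settings

    def main() -> None:
        # parse arguments and environment variables only when DTS actually
        # runs, not when Sphinx merely imports the module
        settings.SETTINGS = settings.get_settings()
        dts.run_all()

    if __name__ == "__main__":
        main()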

There are some other changes which are documentation related:
* added missing type annotations so they appear in the generated docs,
* reordered arguments in some methods,
* removed superfluous arguments and attributes,
* changed functions/methods/attributes from private to public and vice-versa.

All of the above appear in the generated documentation, and with them
the documentation is improved.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/config/__init__.py              |  8 +-
 dts/framework/dts.py                          | 31 +++++--
 dts/framework/exception.py                    | 54 +++++-------
 dts/framework/remote_session/__init__.py      | 41 +++++----
 .../interactive_remote_session.py             |  0
 .../{remote => }/interactive_shell.py         |  0
 .../{remote => }/python_shell.py              |  0
 .../remote_session/remote/__init__.py         | 27 ------
 .../{remote => }/remote_session.py            |  0
 .../{remote => }/ssh_session.py               | 12 +--
 .../{remote => }/testpmd_shell.py             |  0
 dts/framework/settings.py                     | 85 +++++++++++--------
 dts/framework/test_result.py                  |  4 +-
 dts/framework/test_suite.py                   |  7 +-
 dts/framework/testbed_model/__init__.py       | 12 +--
 dts/framework/testbed_model/{hw => }/cpu.py   | 13 +++
 dts/framework/testbed_model/hw/__init__.py    | 27 ------
 .../linux_session.py                          |  6 +-
 dts/framework/testbed_model/node.py           | 23 +++--
 .../os_session.py                             | 22 ++---
 dts/framework/testbed_model/{hw => }/port.py  |  0
 .../posix_session.py                          |  4 +-
 dts/framework/testbed_model/sut_node.py       |  8 +-
 dts/framework/testbed_model/tg_node.py        | 29 +------
 .../traffic_generator/__init__.py             | 23 +++++
 .../capturing_traffic_generator.py            |  4 +-
 .../{ => traffic_generator}/scapy.py          | 19 ++---
 .../traffic_generator.py                      | 14 ++-
 .../testbed_model/{hw => }/virtual_device.py  |  0
 dts/framework/utils.py                        | 40 +++------
 dts/main.py                                   |  9 +-
 31 files changed, 244 insertions(+), 278 deletions(-)
 rename dts/framework/remote_session/{remote => }/interactive_remote_session.py (100%)
 rename dts/framework/remote_session/{remote => }/interactive_shell.py (100%)
 rename dts/framework/remote_session/{remote => }/python_shell.py (100%)
 delete mode 100644 dts/framework/remote_session/remote/__init__.py
 rename dts/framework/remote_session/{remote => }/remote_session.py (100%)
 rename dts/framework/remote_session/{remote => }/ssh_session.py (91%)
 rename dts/framework/remote_session/{remote => }/testpmd_shell.py (100%)
 rename dts/framework/testbed_model/{hw => }/cpu.py (95%)
 delete mode 100644 dts/framework/testbed_model/hw/__init__.py
 rename dts/framework/{remote_session => testbed_model}/linux_session.py (97%)
 rename dts/framework/{remote_session => testbed_model}/os_session.py (95%)
 rename dts/framework/testbed_model/{hw => }/port.py (100%)
 rename dts/framework/{remote_session => testbed_model}/posix_session.py (98%)
 create mode 100644 dts/framework/testbed_model/traffic_generator/__init__.py
 rename dts/framework/testbed_model/{ => traffic_generator}/capturing_traffic_generator.py (98%)
 rename dts/framework/testbed_model/{ => traffic_generator}/scapy.py (95%)
 rename dts/framework/testbed_model/{ => traffic_generator}/traffic_generator.py (81%)
 rename dts/framework/testbed_model/{hw => }/virtual_device.py (100%)

diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 9b32cf0532..ef25a463c0 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -17,6 +17,7 @@
 import warlock  # type: ignore[import]
 import yaml
 
+from framework.exception import ConfigurationError
 from framework.settings import SETTINGS
 from framework.utils import StrEnum
 
@@ -89,7 +90,7 @@ class TrafficGeneratorConfig:
     traffic_generator_type: TrafficGeneratorType
 
     @staticmethod
-    def from_dict(d: dict):
+    def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig":
         # This looks useless now, but is designed to allow expansion to traffic
         # generators that require more configuration later.
         match TrafficGeneratorType(d["type"]):
@@ -97,6 +98,8 @@ def from_dict(d: dict):
                 return ScapyTrafficGeneratorConfig(
                     traffic_generator_type=TrafficGeneratorType.SCAPY
                 )
+            case _:
+                raise ConfigurationError(f'Unknown traffic generator type "{d["type"]}".')
 
 
 @dataclass(slots=True, frozen=True)
@@ -314,6 +317,3 @@ def load_config() -> Configuration:
     config: dict[str, Any] = warlock.model_factory(schema, name="_Config")(config_data)
     config_obj: Configuration = Configuration.from_dict(dict(config))
     return config_obj
-
-
-CONFIGURATION = load_config()
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index 25d6942d81..356368ef10 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -6,19 +6,19 @@
 import sys
 
 from .config import (
-    CONFIGURATION,
     BuildTargetConfiguration,
     ExecutionConfiguration,
     TestSuiteConfig,
+    load_config,
 )
 from .exception import BlockingTestSuiteError
 from .logger import DTSLOG, getLogger
 from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result
 from .test_suite import get_test_suites
 from .testbed_model import SutNode, TGNode
-from .utils import check_dts_python_version
 
-dts_logger: DTSLOG = getLogger("DTSRunner")
+# dummy defaults to satisfy linters
+dts_logger: DTSLOG = None  # type: ignore[assignment]
 result: DTSResult = DTSResult(dts_logger)
 
 
@@ -30,14 +30,18 @@ def run_all() -> None:
     global dts_logger
     global result
 
+    # create a regular DTS logger and create a new result with it
+    dts_logger = getLogger("DTSRunner")
+    result = DTSResult(dts_logger)
+
     # check the python version of the server that run dts
-    check_dts_python_version()
+    _check_dts_python_version()
 
     sut_nodes: dict[str, SutNode] = {}
     tg_nodes: dict[str, TGNode] = {}
     try:
         # for all Execution sections
-        for execution in CONFIGURATION.executions:
+        for execution in load_config().executions:
             sut_node = sut_nodes.get(execution.system_under_test_node.name)
             tg_node = tg_nodes.get(execution.traffic_generator_node.name)
 
@@ -82,6 +86,23 @@ def run_all() -> None:
     _exit_dts()
 
 
+def _check_dts_python_version() -> None:
+    def RED(text: str) -> str:
+        return f"\u001B[31;1m{str(text)}\u001B[0m"
+
+    if sys.version_info.major < 3 or (sys.version_info.major == 3 and sys.version_info.minor < 10):
+        print(
+            RED(
+                (
+                    "WARNING: DTS execution node's python version is lower than "
+                    "python 3.10, which is deprecated and will not work in future releases."
+                )
+            ),
+            file=sys.stderr,
+        )
+        print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
+
+
 def _run_execution(
     sut_node: SutNode,
     tg_node: TGNode,
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index b362e42924..151e4d3aa9 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -42,19 +42,14 @@ class SSHTimeoutError(DTSError):
     Command execution timeout.
     """
 
-    command: str
-    output: str
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
+    _command: str
 
-    def __init__(self, command: str, output: str):
-        self.command = command
-        self.output = output
+    def __init__(self, command: str):
+        self._command = command
 
     def __str__(self) -> str:
-        return f"TIMEOUT on {self.command}"
-
-    def get_output(self) -> str:
-        return self.output
+        return f"TIMEOUT on {self._command}"
 
 
 class SSHConnectionError(DTSError):
@@ -62,18 +57,18 @@ class SSHConnectionError(DTSError):
     SSH connection error.
     """
 
-    host: str
-    errors: list[str]
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
+    _host: str
+    _errors: list[str]
 
     def __init__(self, host: str, errors: list[str] | None = None):
-        self.host = host
-        self.errors = [] if errors is None else errors
+        self._host = host
+        self._errors = [] if errors is None else errors
 
     def __str__(self) -> str:
-        message = f"Error trying to connect with {self.host}."
-        if self.errors:
-            message += f" Errors encountered while retrying: {', '.join(self.errors)}"
+        message = f"Error trying to connect with {self._host}."
+        if self._errors:
+            message += f" Errors encountered while retrying: {', '.join(self._errors)}"
 
         return message
 
@@ -84,14 +79,14 @@ class SSHSessionDeadError(DTSError):
     It can no longer be used.
     """
 
-    host: str
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
+    _host: str
 
     def __init__(self, host: str):
-        self.host = host
+        self._host = host
 
     def __str__(self) -> str:
-        return f"SSH session with {self.host} has died"
+        return f"SSH session with {self._host} has died"
 
 
 class ConfigurationError(DTSError):
@@ -107,16 +102,16 @@ class RemoteCommandExecutionError(DTSError):
     Raised when a command executed on a Node returns a non-zero exit status.
     """
 
-    command: str
-    command_return_code: int
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
+    command: str
+    _command_return_code: int
 
     def __init__(self, command: str, command_return_code: int):
         self.command = command
-        self.command_return_code = command_return_code
+        self._command_return_code = command_return_code
 
     def __str__(self) -> str:
-        return f"Command {self.command} returned a non-zero exit code: {self.command_return_code}"
+        return f"Command {self.command} returned a non-zero exit code: {self._command_return_code}"
 
 
 class RemoteDirectoryExistsError(DTSError):
@@ -140,22 +135,15 @@ class TestCaseVerifyError(DTSError):
     Used in test cases to verify the expected behavior.
     """
 
-    value: str
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR
 
-    def __init__(self, value: str):
-        self.value = value
-
-    def __str__(self) -> str:
-        return repr(self.value)
-
 
 class BlockingTestSuiteError(DTSError):
-    suite_name: str
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR
+    _suite_name: str
 
     def __init__(self, suite_name: str) -> None:
-        self.suite_name = suite_name
+        self._suite_name = suite_name
 
     def __str__(self) -> str:
-        return f"Blocking suite {self.suite_name} failed."
+        return f"Blocking suite {self._suite_name} failed."
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index 6124417bd7..5e7ddb2b05 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -12,27 +12,24 @@
 
 # pylama:ignore=W0611
 
-from framework.config import OS, NodeConfiguration
-from framework.exception import ConfigurationError
+from framework.config import NodeConfiguration
 from framework.logger import DTSLOG
 
-from .linux_session import LinuxSession
-from .os_session import InteractiveShellType, OSSession
-from .remote import (
-    CommandResult,
-    InteractiveRemoteSession,
-    InteractiveShell,
-    PythonShell,
-    RemoteSession,
-    SSHSession,
-    TestPmdDevice,
-    TestPmdShell,
-)
-
-
-def create_session(node_config: NodeConfiguration, name: str, logger: DTSLOG) -> OSSession:
-    match node_config.os:
-        case OS.linux:
-            return LinuxSession(node_config, name, logger)
-        case _:
-            raise ConfigurationError(f"Unsupported OS {node_config.os}")
+from .interactive_remote_session import InteractiveRemoteSession
+from .interactive_shell import InteractiveShell
+from .python_shell import PythonShell
+from .remote_session import CommandResult, RemoteSession
+from .ssh_session import SSHSession
+from .testpmd_shell import TestPmdShell
+
+
+def create_remote_session(
+    node_config: NodeConfiguration, name: str, logger: DTSLOG
+) -> RemoteSession:
+    return SSHSession(node_config, name, logger)
+
+
+def create_interactive_session(
+    node_config: NodeConfiguration, logger: DTSLOG
+) -> InteractiveRemoteSession:
+    return InteractiveRemoteSession(node_config, logger)
diff --git a/dts/framework/remote_session/remote/interactive_remote_session.py b/dts/framework/remote_session/interactive_remote_session.py
similarity index 100%
rename from dts/framework/remote_session/remote/interactive_remote_session.py
rename to dts/framework/remote_session/interactive_remote_session.py
diff --git a/dts/framework/remote_session/remote/interactive_shell.py b/dts/framework/remote_session/interactive_shell.py
similarity index 100%
rename from dts/framework/remote_session/remote/interactive_shell.py
rename to dts/framework/remote_session/interactive_shell.py
diff --git a/dts/framework/remote_session/remote/python_shell.py b/dts/framework/remote_session/python_shell.py
similarity index 100%
rename from dts/framework/remote_session/remote/python_shell.py
rename to dts/framework/remote_session/python_shell.py
diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
deleted file mode 100644
index 06403691a5..0000000000
--- a/dts/framework/remote_session/remote/__init__.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2023 PANTHEON.tech s.r.o.
-# Copyright(c) 2023 University of New Hampshire
-
-# pylama:ignore=W0611
-
-from framework.config import NodeConfiguration
-from framework.logger import DTSLOG
-
-from .interactive_remote_session import InteractiveRemoteSession
-from .interactive_shell import InteractiveShell
-from .python_shell import PythonShell
-from .remote_session import CommandResult, RemoteSession
-from .ssh_session import SSHSession
-from .testpmd_shell import TestPmdDevice, TestPmdShell
-
-
-def create_remote_session(
-    node_config: NodeConfiguration, name: str, logger: DTSLOG
-) -> RemoteSession:
-    return SSHSession(node_config, name, logger)
-
-
-def create_interactive_session(
-    node_config: NodeConfiguration, logger: DTSLOG
-) -> InteractiveRemoteSession:
-    return InteractiveRemoteSession(node_config, logger)
diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote_session.py
similarity index 100%
rename from dts/framework/remote_session/remote/remote_session.py
rename to dts/framework/remote_session/remote_session.py
diff --git a/dts/framework/remote_session/remote/ssh_session.py b/dts/framework/remote_session/ssh_session.py
similarity index 91%
rename from dts/framework/remote_session/remote/ssh_session.py
rename to dts/framework/remote_session/ssh_session.py
index 1a7ee649ab..a467033a13 100644
--- a/dts/framework/remote_session/remote/ssh_session.py
+++ b/dts/framework/remote_session/ssh_session.py
@@ -18,9 +18,7 @@
     SSHException,
 )
 
-from framework.config import NodeConfiguration
 from framework.exception import SSHConnectionError, SSHSessionDeadError, SSHTimeoutError
-from framework.logger import DTSLOG
 
 from .remote_session import CommandResult, RemoteSession
 
@@ -45,14 +43,6 @@ class SSHSession(RemoteSession):
 
     session: Connection
 
-    def __init__(
-        self,
-        node_config: NodeConfiguration,
-        session_name: str,
-        logger: DTSLOG,
-    ):
-        super(SSHSession, self).__init__(node_config, session_name, logger)
-
     def _connect(self) -> None:
         errors = []
         retry_attempts = 10
@@ -111,7 +101,7 @@ def _send_command(self, command: str, timeout: float, env: dict | None) -> Comma
 
         except CommandTimedOut as e:
             self._logger.exception(e)
-            raise SSHTimeoutError(command, e.result.stderr) from e
+            raise SSHTimeoutError(command) from e
 
         return CommandResult(self.name, command, output.stdout, output.stderr, output.return_code)
 
diff --git a/dts/framework/remote_session/remote/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py
similarity index 100%
rename from dts/framework/remote_session/remote/testpmd_shell.py
rename to dts/framework/remote_session/testpmd_shell.py
diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 974793a11a..25b5dcff22 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -6,7 +6,7 @@
 import argparse
 import os
 from collections.abc import Callable, Iterable, Sequence
-from dataclasses import dataclass
+from dataclasses import dataclass, field
 from pathlib import Path
 from typing import Any, TypeVar
 
@@ -22,8 +22,8 @@ def __init__(
             option_strings: Sequence[str],
             dest: str,
             nargs: str | int | None = None,
-            const: str | None = None,
-            default: str = None,
+            const: bool | None = None,
+            default: Any = None,
             type: Callable[[str], _T | argparse.FileType | None] = None,
             choices: Iterable[_T] | None = None,
             required: bool = False,
@@ -32,6 +32,12 @@ def __init__(
         ) -> None:
             env_var_value = os.environ.get(env_var)
             default = env_var_value or default
+            if const is not None:
+                nargs = 0
+                default = const if env_var_value else default
+                type = None
+                choices = None
+                metavar = None
             super(_EnvironmentArgument, self).__init__(
                 option_strings,
                 dest,
@@ -52,22 +58,28 @@ def __call__(
             values: Any,
             option_string: str = None,
         ) -> None:
-            setattr(namespace, self.dest, values)
+            if self.const is not None:
+                setattr(namespace, self.dest, self.const)
+            else:
+                setattr(namespace, self.dest, values)
 
     return _EnvironmentArgument
 
 
-@dataclass(slots=True, frozen=True)
-class _Settings:
-    config_file_path: str
-    output_dir: str
-    timeout: float
-    verbose: bool
-    skip_setup: bool
-    dpdk_tarball_path: Path
-    compile_timeout: float
-    test_cases: list
-    re_run: int
+@dataclass(slots=True)
+class Settings:
+    config_file_path: Path = Path(__file__).parent.parent.joinpath("conf.yaml")
+    output_dir: str = "output"
+    timeout: float = 15
+    verbose: bool = False
+    skip_setup: bool = False
+    dpdk_tarball_path: Path | str = "dpdk.tar.xz"
+    compile_timeout: float = 1200
+    test_cases: list[str] = field(default_factory=list)
+    re_run: int = 0
+
+
+SETTINGS: Settings = Settings()
 
 
 def _get_parser() -> argparse.ArgumentParser:
@@ -80,7 +92,8 @@ def _get_parser() -> argparse.ArgumentParser:
     parser.add_argument(
         "--config-file",
         action=_env_arg("DTS_CFG_FILE"),
-        default="conf.yaml",
+        default=SETTINGS.config_file_path,
+        type=Path,
         help="[DTS_CFG_FILE] configuration file that describes the test cases, SUTs and targets.",
     )
 
@@ -88,7 +101,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--output-dir",
         "--output",
         action=_env_arg("DTS_OUTPUT_DIR"),
-        default="output",
+        default=SETTINGS.output_dir,
         help="[DTS_OUTPUT_DIR] Output directory where dts logs and results are saved.",
     )
 
@@ -96,7 +109,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "-t",
         "--timeout",
         action=_env_arg("DTS_TIMEOUT"),
-        default=15,
+        default=SETTINGS.timeout,
         type=float,
         help="[DTS_TIMEOUT] The default timeout for all DTS operations except for compiling DPDK.",
     )
@@ -105,8 +118,9 @@ def _get_parser() -> argparse.ArgumentParser:
         "-v",
         "--verbose",
         action=_env_arg("DTS_VERBOSE"),
-        default="N",
-        help="[DTS_VERBOSE] Set to 'Y' to enable verbose output, logging all messages "
+        default=SETTINGS.verbose,
+        const=True,
+        help="[DTS_VERBOSE] Specify to enable verbose output, logging all messages "
         "to the console.",
     )
 
@@ -114,8 +128,8 @@ def _get_parser() -> argparse.ArgumentParser:
         "-s",
         "--skip-setup",
         action=_env_arg("DTS_SKIP_SETUP"),
-        default="N",
-        help="[DTS_SKIP_SETUP] Set to 'Y' to skip all setup steps on SUT and TG nodes.",
+        const=True,
+        help="[DTS_SKIP_SETUP] Specify to skip all setup steps on SUT and TG nodes.",
     )
 
     parser.add_argument(
@@ -123,7 +137,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--snapshot",
         "--git-ref",
         action=_env_arg("DTS_DPDK_TARBALL"),
-        default="dpdk.tar.xz",
+        default=SETTINGS.dpdk_tarball_path,
         type=Path,
         help="[DTS_DPDK_TARBALL] Path to DPDK source code tarball or a git commit ID, "
         "tag ID or tree ID to test. To test local changes, first commit them, "
@@ -133,7 +147,7 @@ def _get_parser() -> argparse.ArgumentParser:
     parser.add_argument(
         "--compile-timeout",
         action=_env_arg("DTS_COMPILE_TIMEOUT"),
-        default=1200,
+        default=SETTINGS.compile_timeout,
         type=float,
         help="[DTS_COMPILE_TIMEOUT] The timeout for compiling DPDK.",
     )
@@ -150,7 +164,7 @@ def _get_parser() -> argparse.ArgumentParser:
         "--re-run",
         "--re_run",
         action=_env_arg("DTS_RERUN"),
-        default=0,
+        default=SETTINGS.re_run,
         type=int,
         help="[DTS_RERUN] Re-run each test case the specified amount of times "
         "if a test failure occurs",
@@ -159,21 +173,20 @@ def _get_parser() -> argparse.ArgumentParser:
     return parser
 
 
-def _get_settings() -> _Settings:
+def get_settings() -> Settings:
     parsed_args = _get_parser().parse_args()
-    return _Settings(
+    return Settings(
         config_file_path=parsed_args.config_file,
         output_dir=parsed_args.output_dir,
         timeout=parsed_args.timeout,
-        verbose=(parsed_args.verbose == "Y"),
-        skip_setup=(parsed_args.skip_setup == "Y"),
-        dpdk_tarball_path=Path(DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir))
-        if not os.path.exists(parsed_args.tarball)
-        else Path(parsed_args.tarball),
+        verbose=parsed_args.verbose,
+        skip_setup=parsed_args.skip_setup,
+        dpdk_tarball_path=Path(
+            Path(DPDKGitTarball(parsed_args.tarball, parsed_args.output_dir))
+            if not os.path.exists(parsed_args.tarball)
+            else Path(parsed_args.tarball)
+        ),
         compile_timeout=parsed_args.compile_timeout,
-        test_cases=parsed_args.test_cases.split(",") if parsed_args.test_cases else [],
+        test_cases=(parsed_args.test_cases.split(",") if parsed_args.test_cases else []),
         re_run=parsed_args.re_run,
     )
-
-
-SETTINGS: _Settings = _get_settings()
diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
index 4c2e7e2418..57090feb04 100644
--- a/dts/framework/test_result.py
+++ b/dts/framework/test_result.py
@@ -246,7 +246,7 @@ def add_build_target(self, build_target: BuildTargetConfiguration) -> BuildTarge
         self._inner_results.append(build_target_result)
         return build_target_result
 
-    def add_sut_info(self, sut_info: NodeInfo):
+    def add_sut_info(self, sut_info: NodeInfo) -> None:
         self.sut_os_name = sut_info.os_name
         self.sut_os_version = sut_info.os_version
         self.sut_kernel_version = sut_info.kernel_version
@@ -289,7 +289,7 @@ def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult:
         self._inner_results.append(execution_result)
         return execution_result
 
-    def add_error(self, error) -> None:
+    def add_error(self, error: Exception) -> None:
         self._errors.append(error)
 
     def process(self) -> None:
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index 4a7907ec33..f9e66e814a 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -11,7 +11,7 @@
 import re
 from ipaddress import IPv4Interface, IPv6Interface, ip_interface
 from types import MethodType
-from typing import Union
+from typing import Any, Union
 
 from scapy.layers.inet import IP  # type: ignore[import]
 from scapy.layers.l2 import Ether  # type: ignore[import]
@@ -26,8 +26,7 @@
 from .logger import DTSLOG, getLogger
 from .settings import SETTINGS
 from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult
-from .testbed_model import SutNode, TGNode
-from .testbed_model.hw.port import Port, PortLink
+from .testbed_model import Port, PortLink, SutNode, TGNode
 from .utils import get_packet_summaries
 
 
@@ -426,7 +425,7 @@ def _execute_test_case(
 
 
 def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
-    def is_test_suite(object) -> bool:
+    def is_test_suite(object: Any) -> bool:
         try:
             if issubclass(object, TestSuite) and object is not TestSuite:
                 return True
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index 5cbb859e47..8ced05653b 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -9,15 +9,9 @@
 
 # pylama:ignore=W0611
 
-from .hw import (
-    LogicalCore,
-    LogicalCoreCount,
-    LogicalCoreCountFilter,
-    LogicalCoreList,
-    LogicalCoreListFilter,
-    VirtualDevice,
-    lcore_filter,
-)
+from .cpu import LogicalCoreCount, LogicalCoreCountFilter, LogicalCoreList
 from .node import Node
+from .port import Port, PortLink
 from .sut_node import SutNode
 from .tg_node import TGNode
+from .virtual_device import VirtualDevice
diff --git a/dts/framework/testbed_model/hw/cpu.py b/dts/framework/testbed_model/cpu.py
similarity index 95%
rename from dts/framework/testbed_model/hw/cpu.py
rename to dts/framework/testbed_model/cpu.py
index cbc5fe7fff..1b392689f5 100644
--- a/dts/framework/testbed_model/hw/cpu.py
+++ b/dts/framework/testbed_model/cpu.py
@@ -262,3 +262,16 @@ def filter(self) -> list[LogicalCore]:
             )
 
         return filtered_lcores
+
+
+def lcore_filter(
+    core_list: list[LogicalCore],
+    filter_specifier: LogicalCoreCount | LogicalCoreList,
+    ascending: bool,
+) -> LogicalCoreFilter:
+    if isinstance(filter_specifier, LogicalCoreList):
+        return LogicalCoreListFilter(core_list, filter_specifier, ascending)
+    elif isinstance(filter_specifier, LogicalCoreCount):
+        return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
+    else:
+        raise ValueError(f"Unsupported filter {filter_specifier}")
diff --git a/dts/framework/testbed_model/hw/__init__.py b/dts/framework/testbed_model/hw/__init__.py
deleted file mode 100644
index 88ccac0b0e..0000000000
--- a/dts/framework/testbed_model/hw/__init__.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2023 PANTHEON.tech s.r.o.
-
-# pylama:ignore=W0611
-
-from .cpu import (
-    LogicalCore,
-    LogicalCoreCount,
-    LogicalCoreCountFilter,
-    LogicalCoreFilter,
-    LogicalCoreList,
-    LogicalCoreListFilter,
-)
-from .virtual_device import VirtualDevice
-
-
-def lcore_filter(
-    core_list: list[LogicalCore],
-    filter_specifier: LogicalCoreCount | LogicalCoreList,
-    ascending: bool,
-) -> LogicalCoreFilter:
-    if isinstance(filter_specifier, LogicalCoreList):
-        return LogicalCoreListFilter(core_list, filter_specifier, ascending)
-    elif isinstance(filter_specifier, LogicalCoreCount):
-        return LogicalCoreCountFilter(core_list, filter_specifier, ascending)
-    else:
-        raise ValueError(f"Unsupported filter r{filter_specifier}")
diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/testbed_model/linux_session.py
similarity index 97%
rename from dts/framework/remote_session/linux_session.py
rename to dts/framework/testbed_model/linux_session.py
index fd877fbfae..055765ba2d 100644
--- a/dts/framework/remote_session/linux_session.py
+++ b/dts/framework/testbed_model/linux_session.py
@@ -9,10 +9,10 @@
 from typing_extensions import NotRequired
 
 from framework.exception import RemoteCommandExecutionError
-from framework.testbed_model import LogicalCore
-from framework.testbed_model.hw.port import Port
 from framework.utils import expand_range
 
+from .cpu import LogicalCore
+from .port import Port
 from .posix_session import PosixSession
 
 
@@ -64,7 +64,7 @@ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
             lcores.append(LogicalCore(lcore, core, socket, node))
         return lcores
 
-    def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
+    def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         return dpdk_prefix
 
     def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index ef700d8114..b313b5ad54 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -12,23 +12,26 @@
 from typing import Any, Callable, Type, Union
 
 from framework.config import (
+    OS,
     BuildTargetConfiguration,
     ExecutionConfiguration,
     NodeConfiguration,
 )
+from framework.exception import ConfigurationError
 from framework.logger import DTSLOG, getLogger
-from framework.remote_session import InteractiveShellType, OSSession, create_session
 from framework.settings import SETTINGS
 
-from .hw import (
+from .cpu import (
     LogicalCore,
     LogicalCoreCount,
     LogicalCoreList,
     LogicalCoreListFilter,
-    VirtualDevice,
     lcore_filter,
 )
-from .hw.port import Port
+from .linux_session import LinuxSession
+from .os_session import InteractiveShellType, OSSession
+from .port import Port
+from .virtual_device import VirtualDevice
 
 
 class Node(ABC):
@@ -168,9 +171,9 @@ def create_interactive_shell(
 
         return self.main_session.create_interactive_shell(
             shell_cls,
-            app_args,
             timeout,
             privileged,
+            app_args,
         )
 
     def filter_lcores(
@@ -201,7 +204,7 @@ def _get_remote_cpus(self) -> None:
         self._logger.info("Getting CPU information.")
         self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
 
-    def _setup_hugepages(self):
+    def _setup_hugepages(self) -> None:
         """
         Setup hugepages on the Node. Different architectures can supply different
         amounts of memory for hugepages and numa-based hugepage allocation may need
@@ -245,3 +248,11 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
             return lambda *args: None
         else:
             return func
+
+
+def create_session(node_config: NodeConfiguration, name: str, logger: DTSLOG) -> OSSession:
+    match node_config.os:
+        case OS.linux:
+            return LinuxSession(node_config, name, logger)
+        case _:
+            raise ConfigurationError(f"Unsupported OS {node_config.os}")
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/testbed_model/os_session.py
similarity index 95%
rename from dts/framework/remote_session/os_session.py
rename to dts/framework/testbed_model/os_session.py
index 8a709eac1c..76e595a518 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/testbed_model/os_session.py
@@ -10,19 +10,19 @@
 
 from framework.config import Architecture, NodeConfiguration, NodeInfo
 from framework.logger import DTSLOG
-from framework.remote_session.remote import InteractiveShell
-from framework.settings import SETTINGS
-from framework.testbed_model import LogicalCore
-from framework.testbed_model.hw.port import Port
-from framework.utils import MesonArgs
-
-from .remote import (
+from framework.remote_session import (
     CommandResult,
     InteractiveRemoteSession,
+    InteractiveShell,
     RemoteSession,
     create_interactive_session,
     create_remote_session,
 )
+from framework.settings import SETTINGS
+from framework.utils import MesonArgs
+
+from .cpu import LogicalCore
+from .port import Port
 
 InteractiveShellType = TypeVar("InteractiveShellType", bound=InteractiveShell)
 
@@ -85,9 +85,9 @@ def send_command(
     def create_interactive_shell(
         self,
         shell_cls: Type[InteractiveShellType],
-        eal_parameters: str,
         timeout: float,
         privileged: bool,
+        app_args: str,
     ) -> InteractiveShellType:
         """
         See "create_interactive_shell" in SutNode
@@ -96,7 +96,7 @@ def create_interactive_shell(
             self.interactive_session.session,
             self._logger,
             self._get_privileged_command if privileged else None,
-            eal_parameters,
+            app_args,
             timeout,
         )
 
@@ -113,7 +113,7 @@ def _get_privileged_command(command: str) -> str:
         """
 
     @abstractmethod
-    def guess_dpdk_remote_dir(self, remote_dir) -> PurePath:
+    def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePath:
         """
         Try to find DPDK remote dir in remote_dir.
         """
@@ -227,7 +227,7 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
         """
 
     @abstractmethod
-    def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
+    def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         """
         Get the DPDK file prefix that will be used when running DPDK apps.
         """
diff --git a/dts/framework/testbed_model/hw/port.py b/dts/framework/testbed_model/port.py
similarity index 100%
rename from dts/framework/testbed_model/hw/port.py
rename to dts/framework/testbed_model/port.py
diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/testbed_model/posix_session.py
similarity index 98%
rename from dts/framework/remote_session/posix_session.py
rename to dts/framework/testbed_model/posix_session.py
index a29e2e8280..5657cc0bc9 100644
--- a/dts/framework/remote_session/posix_session.py
+++ b/dts/framework/testbed_model/posix_session.py
@@ -32,7 +32,7 @@ def combine_short_options(**opts: bool) -> str:
 
         return ret_opts
 
-    def guess_dpdk_remote_dir(self, remote_dir) -> PurePosixPath:
+    def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePosixPath:
         remote_guess = self.join_remote_path(remote_dir, "dpdk-*")
         result = self.send_command(f"ls -d {remote_guess} | tail -1")
         return PurePosixPath(result.stdout)
@@ -207,7 +207,7 @@ def _remove_dpdk_runtime_dirs(self, dpdk_runtime_dirs: Iterable[str | PurePath])
         for dpdk_runtime_dir in dpdk_runtime_dirs:
             self.remove_remote_dir(dpdk_runtime_dir)
 
-    def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
+    def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
         return ""
 
     def get_compiler_version(self, compiler_name: str) -> str:
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 7f75043bd3..5ce9446dba 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -15,12 +15,14 @@
     NodeInfo,
     SutNodeConfiguration,
 )
-from framework.remote_session import CommandResult, InteractiveShellType, OSSession
+from framework.remote_session import CommandResult
 from framework.settings import SETTINGS
 from framework.utils import MesonArgs
 
-from .hw import LogicalCoreCount, LogicalCoreList, VirtualDevice
+from .cpu import LogicalCoreCount, LogicalCoreList
 from .node import Node
+from .os_session import InteractiveShellType, OSSession
+from .virtual_device import VirtualDevice
 
 
 class EalParameters(object):
@@ -293,7 +295,7 @@ def create_eal_parameters(
         prefix: str = "dpdk",
         append_prefix_timestamp: bool = True,
         no_pci: bool = False,
-        vdevs: list[VirtualDevice] = None,
+        vdevs: list[VirtualDevice] | None = None,
         other_eal_param: str = "",
     ) -> "EalParameters":
         """
diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
index 79a55663b5..8a8f0019f3 100644
--- a/dts/framework/testbed_model/tg_node.py
+++ b/dts/framework/testbed_model/tg_node.py
@@ -16,16 +16,11 @@
 
 from scapy.packet import Packet  # type: ignore[import]
 
-from framework.config import (
-    ScapyTrafficGeneratorConfig,
-    TGNodeConfiguration,
-    TrafficGeneratorType,
-)
-from framework.exception import ConfigurationError
-
-from .capturing_traffic_generator import CapturingTrafficGenerator
-from .hw.port import Port
+from framework.config import TGNodeConfiguration
+
 from .node import Node
+from .port import Port
+from .traffic_generator import CapturingTrafficGenerator, create_traffic_generator
 
 
 class TGNode(Node):
@@ -78,19 +73,3 @@ def close(self) -> None:
         """Free all resources used by the node"""
         self.traffic_generator.close()
         super(TGNode, self).close()
-
-
-def create_traffic_generator(
-    tg_node: TGNode, traffic_generator_config: ScapyTrafficGeneratorConfig
-) -> CapturingTrafficGenerator:
-    """A factory function for creating traffic generator object from user config."""
-
-    from .scapy import ScapyTrafficGenerator
-
-    match traffic_generator_config.traffic_generator_type:
-        case TrafficGeneratorType.SCAPY:
-            return ScapyTrafficGenerator(tg_node, traffic_generator_config)
-        case _:
-            raise ConfigurationError(
-                f"Unknown traffic generator: {traffic_generator_config.traffic_generator_type}"
-            )
diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
new file mode 100644
index 0000000000..52888d03fa
--- /dev/null
+++ b/dts/framework/testbed_model/traffic_generator/__init__.py
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType
+from framework.exception import ConfigurationError
+from framework.testbed_model.node import Node
+
+from .capturing_traffic_generator import CapturingTrafficGenerator
+from .scapy import ScapyTrafficGenerator
+
+
+def create_traffic_generator(
+    tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig
+) -> CapturingTrafficGenerator:
+    """A factory function for creating traffic generator object from user config."""
+
+    match traffic_generator_config.traffic_generator_type:
+        case TrafficGeneratorType.SCAPY:
+            return ScapyTrafficGenerator(tg_node, traffic_generator_config)
+        case _:
+            raise ConfigurationError(
+                "Unknown traffic generator: {traffic_generator_config.traffic_generator_type}"
+            )
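
The match-based dispatch above is the pattern to extend when a new traffic
generator type is added. A self-contained sketch (all names hypothetical,
implementations simplified to strings) of the same factory shape:

    from enum import Enum, auto

    class GeneratorType(Enum):
        SCAPY = auto()

    def make_generator(generator_type: GeneratorType) -> str:
        # Map each known enum member to its implementation; anything else
        # is a configuration error, as in create_traffic_generator().
        match generator_type:
            case GeneratorType.SCAPY:
                return "ScapyTrafficGenerator"
            case _:
                raise ValueError(f"Unknown traffic generator: {generator_type}")

    assert make_generator(GeneratorType.SCAPY) == "ScapyTrafficGenerator"
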
diff --git a/dts/framework/testbed_model/capturing_traffic_generator.py b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
similarity index 98%
rename from dts/framework/testbed_model/capturing_traffic_generator.py
rename to dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
index e6512061d7..1fc7f98c05 100644
--- a/dts/framework/testbed_model/capturing_traffic_generator.py
+++ b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
@@ -16,9 +16,9 @@
 from scapy.packet import Packet  # type: ignore[import]
 
 from framework.settings import SETTINGS
+from framework.testbed_model.port import Port
 from framework.utils import get_packet_summaries
 
-from .hw.port import Port
 from .traffic_generator import TrafficGenerator
 
 
@@ -127,7 +127,7 @@ def _send_packets_and_capture(
         for the specified duration. It must be able to handle no received packets.
         """
 
-    def _write_capture_from_packets(self, capture_name: str, packets: list[Packet]):
+    def _write_capture_from_packets(self, capture_name: str, packets: list[Packet]) -> None:
         file_name = f"{SETTINGS.output_dir}/{capture_name}.pcap"
         self._logger.debug(f"Writing packets to {file_name}.")
         scapy.utils.wrpcap(file_name, packets)
diff --git a/dts/framework/testbed_model/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py
similarity index 95%
rename from dts/framework/testbed_model/scapy.py
rename to dts/framework/testbed_model/traffic_generator/scapy.py
index 9083e92b3d..c88cf28369 100644
--- a/dts/framework/testbed_model/scapy.py
+++ b/dts/framework/testbed_model/traffic_generator/scapy.py
@@ -24,16 +24,15 @@
 from scapy.packet import Packet  # type: ignore[import]
 
 from framework.config import OS, ScapyTrafficGeneratorConfig
-from framework.logger import DTSLOG, getLogger
 from framework.remote_session import PythonShell
 from framework.settings import SETTINGS
+from framework.testbed_model.node import Node
+from framework.testbed_model.port import Port
 
 from .capturing_traffic_generator import (
     CapturingTrafficGenerator,
     _get_default_capture_name,
 )
-from .hw.port import Port
-from .tg_node import TGNode
 
 """
 ========= BEGIN RPC FUNCTIONS =========
@@ -144,7 +143,7 @@ def quit(self) -> None:
         self._BaseServer__shutdown_request = True
         return None
 
-    def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary):
+    def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> None:
         """Add a function to the server.
 
         This is meant to be executed remotely.
@@ -189,13 +188,9 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator):
     session: PythonShell
     rpc_server_proxy: xmlrpc.client.ServerProxy
     _config: ScapyTrafficGeneratorConfig
-    _tg_node: TGNode
-    _logger: DTSLOG
 
-    def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig):
-        self._config = config
-        self._tg_node = tg_node
-        self._logger = getLogger(f"{self._tg_node.name} {self._config.traffic_generator_type}")
+    def __init__(self, tg_node: Node, config: ScapyTrafficGeneratorConfig):
+        super().__init__(tg_node, config)
 
         assert (
             self._tg_node.config.os == OS.linux
@@ -229,7 +224,7 @@ def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig):
             function_bytes = marshal.dumps(function.__code__)
             self.rpc_server_proxy.add_rpc_function(function.__name__, function_bytes)
 
-    def _start_xmlrpc_server_in_remote_python(self, listen_port: int):
+    def _start_xmlrpc_server_in_remote_python(self, listen_port: int) -> None:
         # load the source of the function
         src = inspect.getsource(QuittableXMLRPCServer)
         # Lines with only whitespace break the repl if in the middle of a function
@@ -271,7 +266,7 @@ def _send_packets_and_capture(
         scapy_packets = [Ether(packet.data) for packet in xmlrpc_packets]
         return scapy_packets
 
-    def close(self):
+    def close(self) -> None:
         try:
             self.rpc_server_proxy.quit()
         except ConnectionRefusedError:
diff --git a/dts/framework/testbed_model/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
similarity index 81%
rename from dts/framework/testbed_model/traffic_generator.py
rename to dts/framework/testbed_model/traffic_generator/traffic_generator.py
index 28c35d3ce4..0d9902ddb7 100644
--- a/dts/framework/testbed_model/traffic_generator.py
+++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
@@ -12,11 +12,12 @@
 
 from scapy.packet import Packet  # type: ignore[import]
 
-from framework.logger import DTSLOG
+from framework.config import TrafficGeneratorConfig
+from framework.logger import DTSLOG, getLogger
+from framework.testbed_model.node import Node
+from framework.testbed_model.port import Port
 from framework.utils import get_packet_summaries
 
-from .hw.port import Port
-
 
 class TrafficGenerator(ABC):
     """The base traffic generator.
@@ -24,8 +25,15 @@ class TrafficGenerator(ABC):
     Defines the few basic methods that each traffic generator must implement.
     """
 
+    _config: TrafficGeneratorConfig
+    _tg_node: Node
     _logger: DTSLOG
 
+    def __init__(self, tg_node: Node, config: TrafficGeneratorConfig):
+        self._config = config
+        self._tg_node = tg_node
+        self._logger = getLogger(f"{self._tg_node.name} {self._config.traffic_generator_type}")
+
     def send_packet(self, packet: Packet, port: Port) -> None:
         """Send a packet and block until it is fully sent.
 
diff --git a/dts/framework/testbed_model/hw/virtual_device.py b/dts/framework/testbed_model/virtual_device.py
similarity index 100%
rename from dts/framework/testbed_model/hw/virtual_device.py
rename to dts/framework/testbed_model/virtual_device.py
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index d098d364ff..a0f2173949 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -7,7 +7,6 @@
 import json
 import os
 import subprocess
-import sys
 from enum import Enum
 from pathlib import Path
 from subprocess import SubprocessError
@@ -16,31 +15,7 @@
 
 from .exception import ConfigurationError
 
-
-class StrEnum(Enum):
-    @staticmethod
-    def _generate_next_value_(name: str, start: int, count: int, last_values: object) -> str:
-        return name
-
-    def __str__(self) -> str:
-        return self.name
-
-
-REGEX_FOR_PCI_ADDRESS = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/"
-
-
-def check_dts_python_version() -> None:
-    if sys.version_info.major < 3 or (sys.version_info.major == 3 and sys.version_info.minor < 10):
-        print(
-            RED(
-                (
-                    "WARNING: DTS execution node's python version is lower than"
-                    "python 3.10, is deprecated and will not work in future releases."
-                )
-            ),
-            file=sys.stderr,
-        )
-        print(RED("Please use Python >= 3.10 instead"), file=sys.stderr)
+REGEX_FOR_PCI_ADDRESS: str = "/[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}.[0-9]{1}/"
 
 
 def expand_range(range_str: str) -> list[int]:
@@ -61,7 +36,7 @@ def expand_range(range_str: str) -> list[int]:
     return expanded_range
 
 
-def get_packet_summaries(packets: list[Packet]):
+def get_packet_summaries(packets: list[Packet]) -> str:
     if len(packets) == 1:
         packet_summaries = packets[0].summary()
     else:
@@ -69,8 +44,13 @@ def get_packet_summaries(packets: list[Packet]):
     return f"Packet contents: \n{packet_summaries}"
 
 
-def RED(text: str) -> str:
-    return f"\u001B[31;1m{str(text)}\u001B[0m"
+class StrEnum(Enum):
+    @staticmethod
+    def _generate_next_value_(name: str, start: int, count: int, last_values: object) -> str:
+        return name
+
+    def __str__(self) -> str:
+        return self.name
 
 
 class MesonArgs(object):
@@ -215,5 +195,5 @@ def _delete_tarball(self) -> None:
         if self._tarball_path and os.path.exists(self._tarball_path):
             os.remove(self._tarball_path)
 
-    def __fspath__(self):
+    def __fspath__(self) -> str:
         return str(self._tarball_path)
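
The relocated StrEnum keeps its behavior: auto() members take their own name
as their value and stringify to it. A short usage sketch (hypothetical Color
enum, assuming the dts directory is on the Python path):

    from enum import auto

    from framework.utils import StrEnum

    class Color(StrEnum):
        RED = auto()
        GREEN = auto()

    # _generate_next_value_ makes auto() produce the member name itself.
    assert Color.RED.value == "RED"
    # __str__ returns the member name as well.
    assert str(Color.GREEN) == "GREEN"
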
diff --git a/dts/main.py b/dts/main.py
index 43311fa847..5d4714b0c3 100755
--- a/dts/main.py
+++ b/dts/main.py
@@ -10,10 +10,17 @@
 
 import logging
 
-from framework import dts
+from framework import settings
 
 
 def main() -> None:
+    """Set DTS settings, then run DTS.
+
+    The DTS settings are taken from the command line arguments and the environment variables.
+    """
+    settings.SETTINGS = settings.get_settings()
+    from framework import dts
+
     dts.run_all()
 
 
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v9 02/21] dts: add docstring checker
  2023-12-04 10:24                 ` [PATCH v9 " Juraj Linkeš
  2023-12-04 10:24                   ` [PATCH v9 01/21] dts: code adjustments for doc generation Juraj Linkeš
@ 2023-12-04 10:24                   ` Juraj Linkeš
  2023-12-04 10:24                   ` [PATCH v9 03/21] dts: add basic developer docs Juraj Linkeš
                                     ` (19 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-12-04 10:24 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Python docstrings are the in-code way to document the code. The
docstring checker of choice is pydocstyle which we're executing from
Pylama, but the latest versions are not compatible due to [0],
so pin the pydocstyle version to the latest working version.

[0] https://github.com/klen/pylama/issues/232

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/poetry.lock    | 12 ++++++------
 dts/pyproject.toml |  6 +++++-
 2 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/dts/poetry.lock b/dts/poetry.lock
index f7b3b6d602..a734fa71f0 100644
--- a/dts/poetry.lock
+++ b/dts/poetry.lock
@@ -489,20 +489,20 @@ files = [
 
 [[package]]
 name = "pydocstyle"
-version = "6.3.0"
+version = "6.1.1"
 description = "Python docstring style checker"
 optional = false
 python-versions = ">=3.6"
 files = [
-    {file = "pydocstyle-6.3.0-py3-none-any.whl", hash = "sha256:118762d452a49d6b05e194ef344a55822987a462831ade91ec5c06fd2169d019"},
-    {file = "pydocstyle-6.3.0.tar.gz", hash = "sha256:7ce43f0c0ac87b07494eb9c0b462c0b73e6ff276807f204d6b53edc72b7e44e1"},
+    {file = "pydocstyle-6.1.1-py3-none-any.whl", hash = "sha256:6987826d6775056839940041beef5c08cc7e3d71d63149b48e36727f70144dc4"},
+    {file = "pydocstyle-6.1.1.tar.gz", hash = "sha256:1d41b7c459ba0ee6c345f2eb9ae827cab14a7533a88c5c6f7e94923f72df92dc"},
 ]
 
 [package.dependencies]
-snowballstemmer = ">=2.2.0"
+snowballstemmer = "*"
 
 [package.extras]
-toml = ["tomli (>=1.2.3)"]
+toml = ["toml"]
 
 [[package]]
 name = "pyflakes"
@@ -837,4 +837,4 @@ jsonschema = ">=4,<5"
 [metadata]
 lock-version = "2.0"
 python-versions = "^3.10"
-content-hash = "0b1e4a1cb8323e17e5ee5951c97e74bde6e60d0413d7b25b1803d5b2bab39639"
+content-hash = "3501e97b3dadc19fe8ae179fe21b1edd2488001da9a8e86ff2bca0b86b99b89b"
diff --git a/dts/pyproject.toml b/dts/pyproject.toml
index 980ac3c7db..37a692d655 100644
--- a/dts/pyproject.toml
+++ b/dts/pyproject.toml
@@ -25,6 +25,7 @@ PyYAML = "^6.0"
 types-PyYAML = "^6.0.8"
 fabric = "^2.7.1"
 scapy = "^2.5.0"
+pydocstyle = "6.1.1"
 
 [tool.poetry.group.dev.dependencies]
 mypy = "^0.961"
@@ -39,10 +40,13 @@ requires = ["poetry-core>=1.0.0"]
 build-backend = "poetry.core.masonry.api"
 
 [tool.pylama]
-linters = "mccabe,pycodestyle,pyflakes"
+linters = "mccabe,pycodestyle,pydocstyle,pyflakes"
 format = "pylint"
 max_line_length = 100
 
+[tool.pylama.linter.pydocstyle]
+convention = "google"
+
 [tool.mypy]
 python_version = "3.10"
 enable_error_code = ["ignore-without-code"]
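
As an illustration of what the google convention configured above asks for,
a docstring shaped like this (hypothetical function, not part of the patch)
is the target style: a one-line summary, a blank line, then sections such as
Args and Returns with every parameter described:

    def add(a: int, b: int) -> int:
        """Return the sum of two integers.

        Args:
            a: The first addend.
            b: The second addend.

        Returns:
            The sum of `a` and `b`.
        """
        return a + b
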
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v9 03/21] dts: add basic developer docs
  2023-12-04 10:24                 ` [PATCH v9 " Juraj Linkeš
  2023-12-04 10:24                   ` [PATCH v9 01/21] dts: code adjustments for doc generation Juraj Linkeš
  2023-12-04 10:24                   ` [PATCH v9 02/21] dts: add docstring checker Juraj Linkeš
@ 2023-12-04 10:24                   ` Juraj Linkeš
  2023-12-04 10:24                   ` [PATCH v9 04/21] dts: exceptions docstring update Juraj Linkeš
                                     ` (18 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-12-04 10:24 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Expand the framework contribution guidelines and add how to document the
code with Python docstrings.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 doc/guides/tools/dts.rst | 73 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 73 insertions(+)

diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
index 32c18ee472..cd771a428c 100644
--- a/doc/guides/tools/dts.rst
+++ b/doc/guides/tools/dts.rst
@@ -264,6 +264,65 @@ which can be changed with the ``--output-dir`` command line argument.
 The results contain basic statistics of passed/failed test cases and DPDK version.
 
 
+Contributing to DTS
+-------------------
+
+There are two areas of contribution: The DTS framework and DTS test suites.
+
+The framework contains the logic needed to run test cases, such as connecting to nodes,
+running DPDK apps and collecting results.
+
+The test cases call APIs from the framework to test their scenarios. Adding test cases may
+require adding code to the framework as well.
+
+
+Framework Coding Guidelines
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+When adding code to the DTS framework, pay attention to the rest of the code
+and try not to divert much from it. The :ref:`DTS developer tools <dts_dev_tools>` will issue
+warnings when some of the basics are not met.
+
+The code must be properly documented with docstrings. The style must conform to
+the `Google style <https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings>`_.
+See an example of the style
+`here <https://www.sphinx-doc.org/en/master/usage/extensions/example_google.html>`_.
+For cases which are not covered by the Google style, refer
+to `PEP 257 <https://peps.python.org/pep-0257/>`_. There are some cases which are not covered by
+the two style guides, where we deviate or where some additional clarification is helpful:
+
+   * The __init__() methods of classes are documented separately from the docstring of the class
+     itself.
+   * The docstrings of implemented abstract methods should refer to the superclass's definition
+     if there's no deviation.
+   * Instance variables/attributes should be documented in the docstring of the class
+     in the ``Attributes:`` section.
+   * The dataclasses.dataclass decorator changes how the attributes are processed. The dataclass
+     attributes which result in instance variables/attributes should also be recorded
+     in the ``Attributes:`` section.
+   * Class variables/attributes, on the other hand, should be documented with ``#:`` above
+     the type annotated line. The description may be omitted if the meaning is obvious.
+   * The Enum and TypedDict also process the attributes in particular ways and should be documented
+     with ``#:`` as well. This is mainly so that the autogenerated docs contain the assigned value.
+   * When referencing a parameter of a function or a method in their docstring, don't use
+     any articles and put the parameter into single backticks. This mimics the style of
+     `Python's documentation <https://docs.python.org/3/index.html>`_.
+   * When specifying a value, use double backticks::
+
+        def foo(greet: bool) -> None:
+            """Demonstration of single and double backticks.
+
+            `greet` controls whether ``Hello World`` is printed.
+
+            Args:
+               greet: Whether to print the ``Hello World`` message.
+            """
+            if greet:
+               print(f"Hello World")
+
+   * The docstring maximum line length is the same as the code maximum line length.
+
+
 How To Write a Test Suite
 -------------------------
 
@@ -293,6 +352,18 @@ There are four types of methods that comprise a test suite:
    | These methods don't need to be implemented if there's no need for them in a test suite.
     In that case, nothing will happen when they're executed.
 
+#. **Configuration, traffic and other logic**
+
+   The ``TestSuite`` class contains a variety of methods for anything that
+   a test suite setup, a teardown, or a test case may need to do.
+
+   The test suites also frequently use a DPDK app, such as testpmd, in interactive mode
+   and use the interactive shell instances directly.
+
+   These are the two main ways to call the framework logic in test suites. If there's any
+   functionality or logic missing from the framework, it should be implemented so that
+   the test suites can use one of these two ways.
+
 #. **Test case verification**
 
    Test case verification should be done with the ``verify`` method, which records the result.
@@ -308,6 +379,8 @@ There are four types of methods that comprise a test suite:
    and used by the test suite via the ``sut_node`` field.
 
 
+.. _dts_dev_tools:
+
 DTS Developer Tools
 -------------------
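
A minimal sketch (hypothetical class, not part of this patch) combining
several of the conventions listed in the guidelines above: a Google-style
class docstring with an Attributes: section for instance attributes, ``#:``
comments for class variables, and a separately documented __init__():

    from typing import ClassVar

    class Worker:
        """One-line summary of the class.

        Attributes:
            name: The worker's name, an instance attribute.
        """

        #: A class variable documented with a ``#:`` comment.
        default_retries: ClassVar[int] = 3

        name: str

        def __init__(self, name: str):
            """Document __init__() separately from the class docstring.

            Args:
                name: The worker's name.
            """
            self.name = name
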
 
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v9 04/21] dts: exceptions docstring update
  2023-12-04 10:24                 ` [PATCH v9 " Juraj Linkeš
                                     ` (2 preceding siblings ...)
  2023-12-04 10:24                   ` [PATCH v9 03/21] dts: add basic developer docs Juraj Linkeš
@ 2023-12-04 10:24                   ` Juraj Linkeš
  2023-12-04 10:24                   ` [PATCH v9 05/21] dts: settings " Juraj Linkeš
                                     ` (17 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-12-04 10:24 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/__init__.py  |  12 ++++-
 dts/framework/exception.py | 106 +++++++++++++++++++++++++------------
 2 files changed, 83 insertions(+), 35 deletions(-)

diff --git a/dts/framework/__init__.py b/dts/framework/__init__.py
index d551ad4bf0..662e6ccad2 100644
--- a/dts/framework/__init__.py
+++ b/dts/framework/__init__.py
@@ -1,3 +1,13 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022 University of New Hampshire
+
+"""Libraries and utilities for running DPDK Test Suite (DTS).
+
+The various modules in the DTS framework offer:
+
+* Connections to nodes, both interactive and non-interactive,
+* A straightforward way to add support for different operating systems of remote nodes,
+* Test suite setup, execution and teardown, along with test case setup, execution and teardown,
+* Pre-test suite setup and post-test suite teardown.
+"""
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index 151e4d3aa9..658eee2c38 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -3,8 +3,10 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
-"""
-User-defined exceptions used across the framework.
+"""DTS exceptions.
+
+The exceptions all have different severities expressed as an integer.
+The highest severity of all raised exceptions is used as the exit code of DTS.
 """
 
 from enum import IntEnum, unique
@@ -13,59 +15,79 @@
 
 @unique
 class ErrorSeverity(IntEnum):
-    """
-    The severity of errors that occur during DTS execution.
+    """The severity of errors that occur during DTS execution.
+
     All exceptions are caught and the most severe error is used as return code.
     """
 
+    #:
     NO_ERR = 0
+    #:
     GENERIC_ERR = 1
+    #:
     CONFIG_ERR = 2
+    #:
     REMOTE_CMD_EXEC_ERR = 3
+    #:
     SSH_ERR = 4
+    #:
     DPDK_BUILD_ERR = 10
+    #:
     TESTCASE_VERIFY_ERR = 20
+    #:
     BLOCKING_TESTSUITE_ERR = 25
 
 
 class DTSError(Exception):
-    """
-    The base exception from which all DTS exceptions are derived.
-    Stores error severity.
+    """The base exception from which all DTS exceptions are subclassed.
+
+    Do not use this exception, only use subclassed exceptions.
     """
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.GENERIC_ERR
 
 
 class SSHTimeoutError(DTSError):
-    """
-    Command execution timeout.
-    """
+    """The SSH execution of a command timed out."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
     _command: str
 
     def __init__(self, command: str):
+        """Define the meaning of the first argument.
+
+        Args:
+            command: The executed command.
+        """
         self._command = command
 
     def __str__(self) -> str:
-        return f"TIMEOUT on {self._command}"
+        """Add some context to the string representation."""
+        return f"{self._command} execution timed out."
 
 
 class SSHConnectionError(DTSError):
-    """
-    SSH connection error.
-    """
+    """An unsuccessful SSH connection."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
     _host: str
     _errors: list[str]
 
     def __init__(self, host: str, errors: list[str] | None = None):
+        """Define the meaning of the first two arguments.
+
+        Args:
+            host: The hostname to which we're trying to connect.
+            errors: Any errors that occurred during the connection attempt.
+        """
         self._host = host
         self._errors = [] if errors is None else errors
 
     def __str__(self) -> str:
+        """Include the errors in the string representation."""
         message = f"Error trying to connect with {self._host}."
         if self._errors:
             message += f" Errors encountered while retrying: {', '.join(self._errors)}"
@@ -74,76 +96,92 @@ def __str__(self) -> str:
 
 
 class SSHSessionDeadError(DTSError):
-    """
-    SSH session is not alive.
-    It can no longer be used.
-    """
+    """The SSH session is no longer alive."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
     _host: str
 
     def __init__(self, host: str):
+        """Define the meaning of the first argument.
+
+        Args:
+            host: The hostname of the disconnected node.
+        """
         self._host = host
 
     def __str__(self) -> str:
-        return f"SSH session with {self._host} has died"
+        """Add some context to the string representation."""
+        return f"SSH session with {self._host} has died."
 
 
 class ConfigurationError(DTSError):
-    """
-    Raised when an invalid configuration is encountered.
-    """
+    """An invalid configuration."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.CONFIG_ERR
 
 
 class RemoteCommandExecutionError(DTSError):
-    """
-    Raised when a command executed on a Node returns a non-zero exit status.
-    """
+    """An unsuccessful execution of a remote command."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
+    #: The executed command.
     command: str
     _command_return_code: int
 
     def __init__(self, command: str, command_return_code: int):
+        """Define the meaning of the first two arguments.
+
+        Args:
+            command: The executed command.
+            command_return_code: The return code of the executed command.
+        """
         self.command = command
         self._command_return_code = command_return_code
 
     def __str__(self) -> str:
+        """Include both the command and return code in the string representation."""
         return f"Command {self.command} returned a non-zero exit code: {self._command_return_code}"
 
 
 class RemoteDirectoryExistsError(DTSError):
-    """
-    Raised when a remote directory to be created already exists.
-    """
+    """A directory that exists on a remote node."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.REMOTE_CMD_EXEC_ERR
 
 
 class DPDKBuildError(DTSError):
-    """
-    Raised when DPDK build fails for any reason.
-    """
+    """A DPDK build failure."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.DPDK_BUILD_ERR
 
 
 class TestCaseVerifyError(DTSError):
-    """
-    Used in test cases to verify the expected behavior.
-    """
+    """A test case failure."""
 
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.TESTCASE_VERIFY_ERR
 
 
 class BlockingTestSuiteError(DTSError):
+    """A failure in a blocking test suite."""
+
+    #:
     severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR
     _suite_name: str
 
     def __init__(self, suite_name: str) -> None:
+        """Define the meaning of the first argument.
+
+        Args:
+            suite_name: The blocking test suite.
+        """
         self._suite_name = suite_name
 
     def __str__(self) -> str:
+        """Add some context to the string representation."""
         return f"Blocking suite {self._suite_name} failed."
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v9 05/21] dts: settings docstring update
  2023-12-04 10:24                 ` [PATCH v9 " Juraj Linkeš
                                     ` (3 preceding siblings ...)
  2023-12-04 10:24                   ` [PATCH v9 04/21] dts: exceptions docstring update Juraj Linkeš
@ 2023-12-04 10:24                   ` Juraj Linkeš
  2023-12-04 10:24                   ` [PATCH v9 06/21] dts: logger and utils " Juraj Linkeš
                                     ` (16 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-12-04 10:24 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/settings.py | 103 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 102 insertions(+), 1 deletion(-)

diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 25b5dcff22..41f98e8519 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -3,6 +3,72 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022 University of New Hampshire
 
+"""Environment variables and command line arguments parsing.
+
+This is a simple module utilizing the built-in argparse module to parse command line arguments,
+augment them with values from environment variables and make them available across the framework.
+
+The command line value takes precedence, followed by the environment variable value,
+followed by the default value defined in this module.
+
+The command line arguments along with the supported environment variables are:
+
+.. option:: --config-file
+.. envvar:: DTS_CFG_FILE
+
+    The path to the YAML test run configuration file.
+
+.. option:: --output-dir, --output
+.. envvar:: DTS_OUTPUT_DIR
+
+    The directory where DTS logs and results are saved.
+
+.. option:: --compile-timeout
+.. envvar:: DTS_COMPILE_TIMEOUT
+
+    The timeout for compiling DPDK.
+
+.. option:: -t, --timeout
+.. envvar:: DTS_TIMEOUT
+
+    The timeout for all DTS operations except for compiling DPDK.
+
+.. option:: -v, --verbose
+.. envvar:: DTS_VERBOSE
+
+    Set to any value to enable logging everything to the console.
+
+.. option:: -s, --skip-setup
+.. envvar:: DTS_SKIP_SETUP
+
+    Set to any value to skip building DPDK.
+
+.. option:: --tarball, --snapshot, --git-ref
+.. envvar:: DTS_DPDK_TARBALL
+
+    The path to a DPDK tarball, git commit ID, tag ID or tree ID to test.
+
+.. option:: --test-cases
+.. envvar:: DTS_TESTCASES
+
+    A comma-separated list of test cases to execute. Unknown test cases will be silently ignored.
+
+.. option:: --re-run, --re_run
+.. envvar:: DTS_RERUN
+
+    Re-run each test case this many times in case of a failure.
+
+The module provides one key module-level variable:
+
+Attributes:
+    SETTINGS: The module level variable storing framework-wide DTS settings.
+
+Typical usage example::
+
+  from framework.settings import SETTINGS
+  foo = SETTINGS.foo
+"""
+
 import argparse
 import os
 from collections.abc import Callable, Iterable, Sequence
@@ -16,6 +82,23 @@
 
 
 def _env_arg(env_var: str) -> Any:
+    """A helper method augmenting the argparse Action with environment variables.
+
+    If the supplied environment variable is defined, then the default value
+    of the argument is modified. This satisfies the priority order of
+    command line argument > environment variable > default value.
+
+    Arguments with no values (flags) should be defined using the const keyword argument
+    (True or False). When the argument is specified, it will be set to const, if not specified,
+    the default will be stored (possibly modified by the corresponding environment variable).
+
+    Other arguments work the same as default argparse arguments, that is using
+    the default 'store' action.
+
+    Returns:
+          The modified argparse.Action.
+    """
+
     class _EnvironmentArgument(argparse.Action):
         def __init__(
             self,
@@ -68,14 +151,28 @@ def __call__(
 
 @dataclass(slots=True)
 class Settings:
+    """Default framework-wide user settings.
+
+    The defaults may be modified at the start of the run.
+    """
+
+    #:
     config_file_path: Path = Path(__file__).parent.parent.joinpath("conf.yaml")
+    #:
     output_dir: str = "output"
+    #:
     timeout: float = 15
+    #:
     verbose: bool = False
+    #:
     skip_setup: bool = False
+    #:
     dpdk_tarball_path: Path | str = "dpdk.tar.xz"
+    #:
     compile_timeout: float = 1200
+    #:
     test_cases: list[str] = field(default_factory=list)
+    #:
     re_run: int = 0
 
 
@@ -166,7 +263,7 @@ def _get_parser() -> argparse.ArgumentParser:
         action=_env_arg("DTS_RERUN"),
         default=SETTINGS.re_run,
         type=int,
-        help="[DTS_RERUN] Re-run each test case the specified amount of times "
+        help="[DTS_RERUN] Re-run each test case the specified number of times "
         "if a test failure occurs",
     )
 
@@ -174,6 +271,10 @@ def _get_parser() -> argparse.ArgumentParser:
 
 
 def get_settings() -> Settings:
+    """Create new settings with inputs from the user.
+
+    The inputs are taken from the command line and from environment variables.
+    """
     parsed_args = _get_parser().parse_args()
     return Settings(
         config_file_path=parsed_args.config_file,
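
The precedence documented above (command line > environment variable >
default) reduces to a standard argparse idiom. A stripped-down sketch
(hypothetical, much simpler than the actual _EnvironmentArgument action):

    import argparse
    import os

    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--output-dir",
        # The environment variable, if set, replaces the hardcoded default;
        # an explicit command line value still overrides both.
        default=os.environ.get("DTS_OUTPUT_DIR", "output"),
    )

    args = parser.parse_args([])  # no CLI value given, so env var or default wins
    print(args.output_dir)
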
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v9 06/21] dts: logger and utils docstring update
  2023-12-04 10:24                 ` [PATCH v9 " Juraj Linkeš
                                     ` (4 preceding siblings ...)
  2023-12-04 10:24                   ` [PATCH v9 05/21] dts: settings " Juraj Linkeš
@ 2023-12-04 10:24                   ` Juraj Linkeš
  2023-12-04 10:24                   ` [PATCH v9 07/21] dts: dts runner and main " Juraj Linkeš
                                     ` (15 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-12-04 10:24 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/logger.py | 72 ++++++++++++++++++++++-----------
 dts/framework/utils.py  | 88 +++++++++++++++++++++++++++++------------
 2 files changed, 113 insertions(+), 47 deletions(-)

diff --git a/dts/framework/logger.py b/dts/framework/logger.py
index bb2991e994..cfa6e8cd72 100644
--- a/dts/framework/logger.py
+++ b/dts/framework/logger.py
@@ -3,9 +3,9 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
-"""
-DTS logger module with several log level. DTS framework and TestSuite logs
-are saved in different log files.
+"""DTS logger module.
+
+DTS framework and TestSuite logs are saved in different log files.
 """
 
 import logging
@@ -18,19 +18,21 @@
 stream_fmt = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
 
 
-class LoggerDictType(TypedDict):
-    logger: "DTSLOG"
-    name: str
-    node: str
-
+class DTSLOG(logging.LoggerAdapter):
+    """DTS logger adapter class for framework and testsuites.
 
-# List for saving all using loggers
-Loggers: list[LoggerDictType] = []
+    The :option:`--verbose` command line argument and the :envvar:`DTS_VERBOSE` environment
+    variable control the verbosity of output. If enabled, all messages will be emitted to the
+    console.
 
+    The :option:`--output` command line argument and the :envvar:`DTS_OUTPUT_DIR` environment
+    variable modify the directory where the logs will be stored.
 
-class DTSLOG(logging.LoggerAdapter):
-    """
-    DTS log class for framework and testsuite.
+    Attributes:
+        node: The additional identifier. Currently unused.
+        sh: The handler which emits logs to console.
+        fh: The handler which emits logs to a file.
+        verbose_fh: Just like fh, but logs with a different, more verbose format.
     """
 
     _logger: logging.Logger
@@ -40,6 +42,15 @@ class DTSLOG(logging.LoggerAdapter):
     verbose_fh: logging.FileHandler
 
     def __init__(self, logger: logging.Logger, node: str = "suite"):
+        """Extend the constructor with additional handlers.
+
+        One handler logs to the console, the other one to a file, with either a regular or verbose
+        format.
+
+        Args:
+            logger: The logger from which to create the logger adapter.
+            node: An additional identifier. Currently unused.
+        """
         self._logger = logger
         # 1 means log everything, this will be used by file handlers if their level
         # is not set
@@ -92,26 +103,43 @@ def __init__(self, logger: logging.Logger, node: str = "suite"):
         super(DTSLOG, self).__init__(self._logger, dict(node=self.node))
 
     def logger_exit(self) -> None:
-        """
-        Remove stream handler and logfile handler.
-        """
+        """Remove the stream handler and the logfile handler."""
         for handler in (self.sh, self.fh, self.verbose_fh):
             handler.flush()
             self._logger.removeHandler(handler)
 
 
+class _LoggerDictType(TypedDict):
+    logger: DTSLOG
+    name: str
+    node: str
+
+
+# List for saving all loggers in use
+_Loggers: list[_LoggerDictType] = []
+
+
 def getLogger(name: str, node: str = "suite") -> DTSLOG:
+    """Get DTS logger adapter identified by name and node.
+
+    An existing logger will be returned if one with the exact name and node already exists.
+    A new one will be created and stored otherwise.
+
+    Args:
+        name: The name of the logger.
+        node: An additional identifier for the logger.
+
+    Returns:
+        A logger uniquely identified by both name and node.
     """
-    Get logger handler and if there's no handler for specified Node will create one.
-    """
-    global Loggers
+    global _Loggers
     # return saved logger
-    logger: LoggerDictType
-    for logger in Loggers:
+    logger: _LoggerDictType
+    for logger in _Loggers:
         if logger["name"] == name and logger["node"] == node:
             return logger["logger"]
 
     # return new logger
     dts_logger: DTSLOG = DTSLOG(logging.getLogger(name), node)
-    Loggers.append({"logger": dts_logger, "name": name, "node": node})
+    _Loggers.append({"logger": dts_logger, "name": name, "node": node})
     return dts_logger
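
A short usage sketch of the caching behavior documented in getLogger()
(assuming the dts directory is on the Python path and the configured output
directory exists for the file handlers): repeated calls with the same name
and node return the same adapter instance:

    from framework.logger import getLogger

    log_a = getLogger("smoke_tests")
    log_b = getLogger("smoke_tests")
    assert log_a is log_b

    log_a.info("Emitted to the console and to the log files.")
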
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index a0f2173949..cc5e458cc8 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -3,6 +3,16 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
+"""Various utility classes and functions.
+
+These are used in multiple modules across the framework. They're here because
+they provide some non-specific functionality, greatly simplify imports or just don't
+fit elsewhere.
+
+Attributes:
+    REGEX_FOR_PCI_ADDRESS: The regex representing a PCI address, e.g. ``0000:00:08.0``.
+"""
+
 import atexit
 import json
 import os
@@ -19,12 +29,20 @@
 
 
 def expand_range(range_str: str) -> list[int]:
-    """
-    Process range string into a list of integers. There are two possible formats:
-    n - a single integer
-    n-m - a range of integers
+    """Process `range_str` into a list of integers.
+
+    There are two possible formats of `range_str`:
+
+        * ``n`` - a single integer,
+        * ``n-m`` - a range of integers.
 
-    The returned range includes both n and m. Empty string returns an empty list.
+    The returned range includes both ``n`` and ``m``. Empty string returns an empty list.
+
+    Args:
+        range_str: The range to expand.
+
+    Returns:
+        All the numbers from the range.
     """
     expanded_range: list[int] = []
     if range_str:
@@ -37,6 +55,14 @@ def expand_range(range_str: str) -> list[int]:
 
 
 def get_packet_summaries(packets: list[Packet]) -> str:
+    """Format a string summary from `packets`.
+
+    Args:
+        packets: The packets to format.
+
+    Returns:
+        The summary of `packets`.
+    """
     if len(packets) == 1:
         packet_summaries = packets[0].summary()
     else:
@@ -45,27 +71,36 @@ def get_packet_summaries(packets: list[Packet]) -> str:
 
 
 class StrEnum(Enum):
+    """Enum with members stored as strings."""
+
     @staticmethod
     def _generate_next_value_(name: str, start: int, count: int, last_values: object) -> str:
         return name
 
     def __str__(self) -> str:
+        """The string representation is the name of the member."""
         return self.name
 
 
 class MesonArgs(object):
-    """
-    Aggregate the arguments needed to build DPDK:
-    default_library: Default library type, Meson allows "shared", "static" and "both".
-               Defaults to None, in which case the argument won't be used.
-    Keyword arguments: The arguments found in meson_options.txt in root DPDK directory.
-               Do not use -D with them, for example:
-               meson_args = MesonArgs(enable_kmods=True).
-    """
+    """Aggregate the arguments needed to build DPDK."""
 
     _default_library: str
 
     def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
+        """Initialize the meson arguments.
+
+        Args:
+            default_library: The default library type, Meson supports ``shared``, ``static`` and
+                ``both``. Defaults to :data:`None`, in which case the argument won't be used.
+            dpdk_args: The arguments found in ``meson_options.txt`` in root DPDK directory.
+                Do not use ``-D`` with them.
+
+        Example:
+            ::
+
+                meson_args = MesonArgs(enable_kmods=True)
+        """
         self._default_library = f"--default-library={default_library}" if default_library else ""
         self._dpdk_args = " ".join(
             (
@@ -75,6 +110,7 @@ def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
         )
 
     def __str__(self) -> str:
+        """The actual args."""
         return " ".join(f"{self._default_library} {self._dpdk_args}".split())
 
 
@@ -96,24 +132,14 @@ class _TarCompressionFormat(StrEnum):
 
 
 class DPDKGitTarball(object):
-    """Create a compressed tarball of DPDK from the repository.
-
-    The DPDK version is specified with git object git_ref.
-    The tarball will be compressed with _TarCompressionFormat,
-    which must be supported by the DTS execution environment.
-    The resulting tarball will be put into output_dir.
+    """Compressed tarball of DPDK from the repository.
 
-    The class supports the os.PathLike protocol,
+    The class supports the :class:`os.PathLike` protocol,
     which is used to get the Path of the tarball::
 
         from pathlib import Path
         tarball = DPDKGitTarball("HEAD", "output")
         tarball_path = Path(tarball)
-
-    Arguments:
-        git_ref: A git commit ID, tag ID or tree ID.
-        output_dir: The directory where to put the resulting tarball.
-        tar_compression_format: The compression format to use.
     """
 
     _git_ref: str
@@ -128,6 +154,17 @@ def __init__(
         output_dir: str,
         tar_compression_format: _TarCompressionFormat = _TarCompressionFormat.xz,
     ):
+        """Create the tarball during initialization.
+
+        The DPDK version is specified with `git_ref`. The tarball will be compressed with
+        `tar_compression_format`, which must be supported by the DTS execution environment.
+        The resulting tarball will be put into `output_dir`.
+
+        Args:
+            git_ref: A git commit ID, tag ID or tree ID.
+            output_dir: The directory where to put the resulting tarball.
+            tar_compression_format: The compression format to use.
+        """
         self._git_ref = git_ref
         self._tar_compression_format = tar_compression_format
 
@@ -196,4 +233,5 @@ def _delete_tarball(self) -> None:
             os.remove(self._tarball_path)
 
     def __fspath__(self) -> str:
+        """The os.PathLike protocol implementation."""
         return str(self._tarball_path)
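
The two expand_range() formats documented above, as a quick usage sketch
(assuming the dts directory is on the Python path):

    from framework.utils import expand_range

    # Both endpoints of a range are included; an empty string yields [].
    assert expand_range("3") == [3]
    assert expand_range("0-3") == [0, 1, 2, 3]
    assert expand_range("") == []
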
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v9 07/21] dts: dts runner and main docstring update
  2023-12-04 10:24                 ` [PATCH v9 " Juraj Linkeš
                                     ` (5 preceding siblings ...)
  2023-12-04 10:24                   ` [PATCH v9 06/21] dts: logger and utils " Juraj Linkeš
@ 2023-12-04 10:24                   ` Juraj Linkeš
  2023-12-04 10:24                   ` [PATCH v9 08/21] dts: test suite " Juraj Linkeš
                                     ` (14 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-12-04 10:24 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/dts.py | 131 ++++++++++++++++++++++++++++++++++++-------
 dts/main.py          |  10 ++--
 2 files changed, 116 insertions(+), 25 deletions(-)

diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index 356368ef10..e16d4578a0 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -3,6 +3,33 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
+r"""Test suite runner module.
+
+A DTS run is split into stages:
+
+    #. Execution stage,
+    #. Build target stage,
+    #. Test suite stage,
+    #. Test case stage.
+
+The module is responsible for running tests on testbeds defined in the test run configuration.
+Each setup or teardown of each stage is recorded in a :class:`~.test_result.DTSResult` or
+one of its subclasses. The test case results are also recorded.
+
+If an error occurs, the current stage is aborted, the error is recorded and the run continues in
+the next iteration of the same stage. The return code is the highest `severity` of all
+:class:`~.exception.DTSError`\s.
+
+Example:
+    An error occurs in a build target setup. The current build target is aborted and the run
+    continues with the next build target. If the errored build target was the last one in the given
+    execution, the next execution begins.
+
+Attributes:
+    dts_logger: The logger instance used in this module.
+    result: The top level result used in the module.
+"""
+
 import sys
 
 from .config import (
@@ -23,9 +50,38 @@
 
 
 def run_all() -> None:
-    """
-    The main process of DTS. Runs all build targets in all executions from the main
-    config file.
+    """Run all build targets in all executions from the test run configuration.
+
+    Before running test suites, executions and build targets are first set up.
+    The executions and build targets defined in the test run configuration are iterated over.
+    The executions define which tests to run and where to run them and build targets define
+    the DPDK build setup.
+
+    The tests suites are set up for each execution/build target tuple and each scheduled
+    test case within the test suite is set up, executed and torn down. After all test cases
+    have been executed, the test suite is torn down and the next build target will be tested.
+
+    All the nested steps look like this:
+
+        #. Execution setup
+
+            #. Build target setup
+
+                #. Test suite setup
+
+                    #. Test case setup
+                    #. Test case logic
+                    #. Test case teardown
+
+                #. Test suite teardown
+
+            #. Build target teardown
+
+        #. Execution teardown
+
+    The test cases are filtered according to the specification in the test run configuration and
+    the :option:`--test-cases` command line argument or
+    the :envvar:`DTS_TESTCASES` environment variable.
     """
     global dts_logger
     global result
@@ -87,6 +143,8 @@ def run_all() -> None:
 
 
 def _check_dts_python_version() -> None:
+    """Check the required Python version - v3.10."""
+
     def RED(text: str) -> str:
         return f"\u001B[31;1m{str(text)}\u001B[0m"
 
@@ -109,9 +167,16 @@ def _run_execution(
     execution: ExecutionConfiguration,
     result: DTSResult,
 ) -> None:
-    """
-    Run the given execution. This involves running the execution setup as well as
-    running all build targets in the given execution.
+    """Run the given execution.
+
+    This involves running the execution setup as well as running all build targets
+    in the given execution. After that, execution teardown is run.
+
+    Args:
+        sut_node: The execution's SUT node.
+        tg_node: The execution's TG node.
+        execution: An execution's test run configuration.
+        result: The top level result object.
     """
     dts_logger.info(f"Running execution with SUT '{execution.system_under_test_node.name}'.")
     execution_result = result.add_execution(sut_node.config)
@@ -144,8 +209,18 @@ def _run_build_target(
     execution: ExecutionConfiguration,
     execution_result: ExecutionResult,
 ) -> None:
-    """
-    Run the given build target.
+    """Run the given build target.
+
+    This involves running the build target setup as well as running all test suites
+    in the given execution the build target is defined in.
+    After that, build target teardown is run.
+
+    Args:
+        sut_node: The execution's SUT node.
+        tg_node: The execution's TG node.
+        build_target: A build target's test run configuration.
+        execution: The build target's execution's test run configuration.
+        execution_result: The execution level result object associated with the execution.
     """
     dts_logger.info(f"Running build target '{build_target.name}'.")
     build_target_result = execution_result.add_build_target(build_target)
@@ -177,10 +252,20 @@ def _run_all_suites(
     execution: ExecutionConfiguration,
     build_target_result: BuildTargetResult,
 ) -> None:
-    """
-    Use the given build_target to run execution's test suites
-    with possibly only a subset of test cases.
-    If no subset is specified, run all test cases.
+    """Run the execution's (possibly a subset) test suites using the current build target.
+
+    The function assumes the build target we're testing has already been built on the SUT node.
+    The current build target thus corresponds to the current DPDK build present on the SUT node.
+
+    If a blocking test suite (such as the smoke test suite) fails, the rest of the test suites
+    in the current build target won't be executed.
+
+    Args:
+        sut_node: The execution's SUT node.
+        tg_node: The execution's TG node.
+        execution: The execution's test run configuration associated with the current build target.
+        build_target_result: The build target level result object associated
+            with the current build target.
     """
     end_build_target = False
     if not execution.skip_smoke_tests:
@@ -206,16 +291,22 @@ def _run_single_suite(
     build_target_result: BuildTargetResult,
     test_suite_config: TestSuiteConfig,
 ) -> None:
-    """Runs a single test suite.
+    """Run all test suite in a single test suite module.
+
+    The function assumes the build target we're testing has already been built on the SUT node.
+    The current build target thus corresponds to the current DPDK build present on the SUT node.
 
     Args:
-        sut_node: Node to run tests on.
-        execution: Execution the test case belongs to.
-        build_target_result: Build target configuration test case is run on
-        test_suite_config: Test suite configuration
+        sut_node: The execution's SUT node.
+        tg_node: The execution's TG node.
+        execution: The execution's test run configuration associated with the current build target.
+        build_target_result: The build target level result object associated
+            with the current build target.
+        test_suite_config: Test suite test run configuration specifying the test suite module
+            and possibly a subset of test cases of test suites in that module.
 
     Raises:
-        BlockingTestSuiteError: If a test suite that was marked as blocking fails.
+        BlockingTestSuiteError: If a blocking test suite fails.
     """
     try:
         full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}"
@@ -239,9 +330,7 @@ def _run_single_suite(
 
 
 def _exit_dts() -> None:
-    """
-    Process all errors and exit with the proper exit code.
-    """
+    """Process all errors and exit with the proper exit code."""
     result.process()
 
     if dts_logger:
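
A self-contained sketch (all data hypothetical) of the stage nesting that
run_all()'s docstring describes: each stage wraps the next, and in the real
runner an error aborts only the current iteration of its own stage:

    executions = [
        {"build_targets": ["gcc-static"], "test_suites": ["hello_world"]},
    ]

    for execution in executions:
        print("execution setup")
        for build_target in execution["build_targets"]:
            print(f"build target setup: {build_target}")
            for suite in execution["test_suites"]:
                print(f"test suite run: {suite}")
            print("build target teardown")
        print("execution teardown")
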
diff --git a/dts/main.py b/dts/main.py
index 5d4714b0c3..b856ba86be 100755
--- a/dts/main.py
+++ b/dts/main.py
@@ -1,12 +1,10 @@
 #!/usr/bin/env python3
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2010-2014 Intel Corporation
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022 University of New Hampshire
 
-"""
-A test framework for testing DPDK.
-"""
+"""The DTS executable."""
 
 import logging
 
@@ -17,6 +15,10 @@ def main() -> None:
     """Set DTS settings, then run DTS.
 
     The DTS settings are taken from the command line arguments and the environment variables.
+    The settings object is stored in the module-level variable settings.SETTINGS which the entire
+    framework uses. After importing the module (or the variable), any changes to the variable are
+    not going to be reflected without a re-import. This means that the SETTINGS variable must
+    be modified before the settings module is imported anywhere else in the framework.
     """
     settings.SETTINGS = settings.get_settings()
     from framework import dts
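
The import-order caveat in main()'s docstring comes down to how Python binds
names: a name imported early keeps pointing at the old object even after the
module-level variable is reassigned. A standalone sketch (module built
in-line purely for illustration):

    import types

    settings = types.ModuleType("settings")
    settings.SETTINGS = "default"

    early_binding = settings.SETTINGS  # like `from settings import SETTINGS` done too early
    settings.SETTINGS = "from command line"  # what main() does before importing dts

    print(early_binding)      # default - the stale, pre-change value
    print(settings.SETTINGS)  # from command line - what late importers see
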
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v9 08/21] dts: test suite docstring update
  2023-12-04 10:24                 ` [PATCH v9 " Juraj Linkeš
                                     ` (6 preceding siblings ...)
  2023-12-04 10:24                   ` [PATCH v9 07/21] dts: dts runner and main " Juraj Linkeš
@ 2023-12-04 10:24                   ` Juraj Linkeš
  2023-12-04 10:24                   ` [PATCH v9 09/21] dts: test result " Juraj Linkeš
                                     ` (13 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-12-04 10:24 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/test_suite.py | 231 +++++++++++++++++++++++++++---------
 1 file changed, 175 insertions(+), 56 deletions(-)

diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index f9e66e814a..dfb391ffbd 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -2,8 +2,19 @@
 # Copyright(c) 2010-2014 Intel Corporation
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
-Base class for creating DTS test cases.
+"""Features common to all test suites.
+
+The module defines the :class:`TestSuite` class which doesn't contain any test cases, and as such
+must be extended by subclasses which add test cases. The :class:`TestSuite` contains the basics
+needed by subclasses:
+
+    * Test suite and test case execution flow,
+    * Testbed (SUT, TG) configuration,
+    * Packet sending and verification,
+    * Test case verification.
+
+The module also defines a function, :func:`get_test_suites`,
+for gathering test suites from a Python module.
 """
 
 import importlib
@@ -11,7 +22,7 @@
 import re
 from ipaddress import IPv4Interface, IPv6Interface, ip_interface
 from types import MethodType
-from typing import Any, Union
+from typing import Any, ClassVar, Union
 
 from scapy.layers.inet import IP  # type: ignore[import]
 from scapy.layers.l2 import Ether  # type: ignore[import]
@@ -31,25 +42,44 @@
 
 
 class TestSuite(object):
-    """
-    The base TestSuite class provides methods for handling basic flow of a test suite:
-    * test case filtering and collection
-    * test suite setup/cleanup
-    * test setup/cleanup
-    * test case execution
-    * error handling and results storage
-    Test cases are implemented by derived classes. Test cases are all methods
-    starting with test_, further divided into performance test cases
-    (starting with test_perf_) and functional test cases (all other test cases).
-    By default, all test cases will be executed. A list of testcase str names
-    may be specified in conf.yaml or on the command line
-    to filter which test cases to run.
-    The methods named [set_up|tear_down]_[suite|test_case] should be overridden
-    in derived classes if the appropriate suite/test case fixtures are needed.
+    """The base class with methods for handling the basic flow of a test suite.
+
+        * Test case filtering and collection,
+        * Test suite setup/cleanup,
+        * Test setup/cleanup,
+        * Test case execution,
+        * Error handling and results storage.
+
+    Test cases are implemented by subclasses. Test cases are all methods starting with ``test_``,
+    further divided into performance test cases (starting with ``test_perf_``)
+    and functional test cases (all other test cases).
+
+    By default, all test cases will be executed. A list of test case names may be specified
+    in the YAML test run configuration file and in the :option:`--test-cases` command line argument
+    or in the :envvar:`DTS_TESTCASES` environment variable to filter which test cases to run.
+    The union of both lists will be used. Any unknown test cases from either list
+    will be silently ignored.
+
+    If the :option:`--re-run` command line argument or the :envvar:`DTS_RERUN` environment variable
+    is set, a failed test case will be executed again until it passes
+    or fails that many times in addition to the first failure.
+
+    The methods named ``[set_up|tear_down]_[suite|test_case]`` should be overridden in subclasses
+    if the appropriate test suite/test case fixtures are needed.
+
+    The test suite is aware of the testbed (the SUT and TG) it's running on. From this, it can
+    properly choose the IP addresses and other configuration that must be tailored to the testbed.
+
+    Attributes:
+        sut_node: The SUT node where the test suite is running.
+        tg_node: The TG node where the test suite is running.
     """
 
     sut_node: SutNode
-    is_blocking = False
+    tg_node: TGNode
+    #: Whether the test suite is blocking. A failure of a blocking test suite
+    #: will block the execution of all subsequent test suites in the current build target.
+    is_blocking: ClassVar[bool] = False
     _logger: DTSLOG
     _test_cases_to_run: list[str]
     _func: bool
@@ -72,6 +102,20 @@ def __init__(
         func: bool,
         build_target_result: BuildTargetResult,
     ):
+        """Initialize the test suite testbed information and basic configuration.
+
+        Process what test cases to run, create the associated
+        :class:`~.test_result.TestSuiteResult`, find links between ports
+        and set up default IP addresses to be used when configuring them.
+
+        Args:
+            sut_node: The SUT node where the test suite will run.
+            tg_node: The TG node where the test suite will run.
+            test_cases: The list of test cases to execute.
+                If empty, all test cases will be executed.
+            func: Whether to run functional tests.
+            build_target_result: The build target result this test suite is run in.
+        """
         self.sut_node = sut_node
         self.tg_node = tg_node
         self._logger = getLogger(self.__class__.__name__)
@@ -95,6 +139,7 @@ def __init__(
         self._tg_ip_address_ingress = ip_interface("192.168.101.3/24")
 
     def _process_links(self) -> None:
+        """Construct links between SUT and TG ports."""
         for sut_port in self.sut_node.ports:
             for tg_port in self.tg_node.ports:
                 if (sut_port.identifier, sut_port.peer) == (
@@ -104,27 +149,42 @@ def _process_links(self) -> None:
                     self._port_links.append(PortLink(sut_port=sut_port, tg_port=tg_port))
 
     def set_up_suite(self) -> None:
-        """
-        Set up test fixtures common to all test cases; this is done before
-        any test case is run.
+        """Set up test fixtures common to all test cases.
+
+        This is done before any test case has been run.
         """
 
     def tear_down_suite(self) -> None:
-        """
-        Tear down the previously created test fixtures common to all test cases.
+        """Tear down the previously created test fixtures common to all test cases.
+
+        This is done after all tests have been run.
         """
 
     def set_up_test_case(self) -> None:
-        """
-        Set up test fixtures before each test case.
+        """Set up test fixtures before each test case.
+
+        This is done before *each* test case.
         """
 
     def tear_down_test_case(self) -> None:
-        """
-        Tear down the previously created test fixtures after each test case.
+        """Tear down the previously created test fixtures after each test case.
+
+        This is done after *each* test case.
         """
 
     def configure_testbed_ipv4(self, restore: bool = False) -> None:
+        """Configure IPv4 addresses on all testbed ports.
+
+        The configured ports are:
+
+        * SUT ingress port,
+        * SUT egress port,
+        * TG ingress port,
+        * TG egress port.
+
+        Args:
+            restore: If :data:`True`, will remove the configuration instead.
+        """
         delete = True if restore else False
         enable = False if restore else True
         self._configure_ipv4_forwarding(enable)
@@ -149,11 +209,17 @@ def _configure_ipv4_forwarding(self, enable: bool) -> None:
         self.sut_node.configure_ipv4_forwarding(enable)
 
     def send_packet_and_capture(self, packet: Packet, duration: float = 1) -> list[Packet]:
-        """
-        Send a packet through the appropriate interface and
-        receive on the appropriate interface.
-        Modify the packet with l3/l2 addresses corresponding
-        to the testbed and desired traffic.
+        """Send and receive `packet` using the associated TG.
+
+        Send `packet` through the appropriate interface and receive on the appropriate interface.
+        Modify the packet with L3/L2 addresses corresponding to the testbed and desired traffic.
+
+        Args:
+            packet: The packet to send.
+            duration: Capture traffic for this amount of time after sending `packet`.
+
+        Returns:
+            A list of received packets.
         """
         packet = self._adjust_addresses(packet)
         return self.tg_node.send_packet_and_capture(
@@ -161,13 +227,26 @@ def send_packet_and_capture(self, packet: Packet, duration: float = 1) -> list[P
         )
 
     def get_expected_packet(self, packet: Packet) -> Packet:
+        """Inject the proper L2/L3 addresses into `packet`.
+
+        Args:
+            packet: The packet to modify.
+
+        Returns:
+            `packet` with injected L2/L3 addresses.
+        """
         return self._adjust_addresses(packet, expected=True)
 
     def _adjust_addresses(self, packet: Packet, expected: bool = False) -> Packet:
-        """
+        """L2 and L3 address additions in both directions.
+
         Assumptions:
-            Two links between SUT and TG, one link is TG -> SUT,
-            the other SUT -> TG.
+            Two links between SUT and TG, one link is TG -> SUT, the other SUT -> TG.
+
+        Args:
+            packet: The packet to modify.
+            expected: If :data:`True`, the direction is SUT -> TG,
+                otherwise the direction is TG -> SUT.
         """
         if expected:
             # The packet enters the TG from SUT
@@ -193,6 +272,19 @@ def _adjust_addresses(self, packet: Packet, expected: bool = False) -> Packet:
         return Ether(packet.build())
 
     def verify(self, condition: bool, failure_description: str) -> None:
+        """Verify `condition` and handle failures.
+
+        When `condition` is :data:`False`, raise an exception and log the last 10 commands
+        executed on both the SUT and TG.
+
+        Args:
+            condition: The condition to check.
+            failure_description: A short description of the failure
+                that will be stored in the raised exception.
+
+        Raises:
+            TestCaseVerifyError: `condition` is :data:`False`.
+        """
         if not condition:
             self._fail_test_case_verify(failure_description)
 
@@ -206,6 +298,19 @@ def _fail_test_case_verify(self, failure_description: str) -> None:
         raise TestCaseVerifyError(failure_description)
 
     def verify_packets(self, expected_packet: Packet, received_packets: list[Packet]) -> None:
+        """Verify that `expected_packet` has been received.
+
+        Go through `received_packets` and check that `expected_packet` is among them.
+        If not, raise an exception and log the last 10 commands
+        executed on both the SUT and TG.
+
+        Args:
+            expected_packet: The packet we're expecting to receive.
+            received_packets: The packets where we're looking for `expected_packet`.
+
+        Raises:
+            TestCaseVerifyError: `expected_packet` is not among `received_packets`.
+        """
         for received_packet in received_packets:
             if self._compare_packets(expected_packet, received_packet):
                 break
@@ -280,10 +385,14 @@ def _verify_l3_packet(self, received_packet: IP, expected_packet: IP) -> bool:
         return True
 
     def run(self) -> None:
-        """
-        Setup, execute and teardown the whole suite.
-        Suite execution consists of running all test cases scheduled to be executed.
-        A test cast run consists of setup, execution and teardown of said test case.
+        """Set up, execute and tear down the whole suite.
+
+        Test suite execution consists of running all test cases scheduled to be executed.
+        A test case run consists of setup, execution and teardown of said test case.
+
+        Record the setup and the teardown and handle failures.
+
+        The list of scheduled test cases is constructed when creating the :class:`TestSuite` object.
         """
         test_suite_name = self.__class__.__name__
 
@@ -315,9 +424,7 @@ def run(self) -> None:
                 raise BlockingTestSuiteError(test_suite_name)
 
     def _execute_test_suite(self) -> None:
-        """
-        Execute all test cases scheduled to be executed in this suite.
-        """
+        """Execute all test cases scheduled to be executed in this suite."""
         if self._func:
             for test_case_method in self._get_functional_test_cases():
                 test_case_name = test_case_method.__name__
@@ -334,14 +441,18 @@ def _execute_test_suite(self) -> None:
                     self._run_test_case(test_case_method, test_case_result)
 
     def _get_functional_test_cases(self) -> list[MethodType]:
-        """
-        Get all functional test cases.
+        """Get all functional test cases defined in this TestSuite.
+
+        Returns:
+            The list of functional test cases of this TestSuite.
         """
         return self._get_test_cases(r"test_(?!perf_)")
 
     def _get_test_cases(self, test_case_regex: str) -> list[MethodType]:
-        """
-        Return a list of test cases matching test_case_regex.
+        """Return a list of test cases matching test_case_regex.
+
+        Returns:
+            The list of test cases matching test_case_regex of this TestSuite.
         """
         self._logger.debug(f"Searching for test cases in {self.__class__.__name__}.")
         filtered_test_cases = []
@@ -353,9 +464,7 @@ def _get_test_cases(self, test_case_regex: str) -> list[MethodType]:
         return filtered_test_cases
 
     def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool:
-        """
-        Check whether the test case should be executed.
-        """
+        """Check whether the test case should be scheduled to be executed."""
         match = bool(re.match(test_case_regex, test_case_name))
         if self._test_cases_to_run:
             return match and test_case_name in self._test_cases_to_run
@@ -365,9 +474,9 @@ def _should_be_executed(self, test_case_name: str, test_case_regex: str) -> bool
     def _run_test_case(
         self, test_case_method: MethodType, test_case_result: TestCaseResult
     ) -> None:
-        """
-        Setup, execute and teardown a test case in this suite.
-        Exceptions are caught and recorded in logs and results.
+        """Setup, execute and teardown a test case in this suite.
+
+        Record the result of the setup and the teardown and handle failures.
         """
         test_case_name = test_case_method.__name__
 
@@ -402,9 +511,7 @@ def _run_test_case(
     def _execute_test_case(
         self, test_case_method: MethodType, test_case_result: TestCaseResult
     ) -> None:
-        """
-        Execute one test case and handle failures.
-        """
+        """Execute one test case, record the result and handle failures."""
         test_case_name = test_case_method.__name__
         try:
             self._logger.info(f"Starting test case execution: {test_case_name}")
@@ -425,6 +532,18 @@ def _execute_test_case(
 
 
 def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
+    r"""Find all :class:`TestSuite`\s in a Python module.
+
+    Args:
+        testsuite_module_path: The path to the Python module.
+
+    Returns:
+        The list of :class:`TestSuite`\s found within the Python module.
+
+    Raises:
+        ConfigurationError: The test suite module was not found.
+    """
+
     def is_test_suite(object: Any) -> bool:
         try:
             if issubclass(object, TestSuite) and object is not TestSuite:
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
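
To illustrate the conventions documented in this patch, a minimal sketch
of a test suite subclass follows. It is not part of the patch; the class
and test case names are hypothetical.

    from framework.test_suite import TestSuite


    class TestExample(TestSuite):
        """Both test cases below are functional (no ``test_perf_`` prefix)."""

        def set_up_suite(self) -> None:
            # Runs once, before any test case in this suite.
            self.configure_testbed_ipv4()

        def tear_down_suite(self) -> None:
            # Runs once, after all test cases have finished.
            self.configure_testbed_ipv4(restore=True)

        def test_ipv4_configuration(self) -> None:
            # verify() raises TestCaseVerifyError when the condition is
            # False, recording the test case as failed.
            self.verify(True, "example condition failed")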

* [PATCH v9 09/21] dts: test result docstring update
  2023-12-04 10:24                 ` [PATCH v9 " Juraj Linkeš
                                     ` (7 preceding siblings ...)
  2023-12-04 10:24                   ` [PATCH v9 08/21] dts: test suite " Juraj Linkeš
@ 2023-12-04 10:24                   ` Juraj Linkeš
  2023-12-04 10:24                   ` [PATCH v9 10/21] dts: config " Juraj Linkeš
                                     ` (12 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-12-04 10:24 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/test_result.py | 297 ++++++++++++++++++++++++++++-------
 1 file changed, 239 insertions(+), 58 deletions(-)

diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
index 57090feb04..4467749a9d 100644
--- a/dts/framework/test_result.py
+++ b/dts/framework/test_result.py
@@ -2,8 +2,25 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
-"""
-Generic result container and reporters
+r"""Record and process DTS results.
+
+The results are recorded in a hierarchical manner:
+
+    * :class:`DTSResult` contains
+    * :class:`ExecutionResult` contains
+    * :class:`BuildTargetResult` contains
+    * :class:`TestSuiteResult` contains
+    * :class:`TestCaseResult`
+
+Each result may contain multiple lower level results, e.g. there are multiple
+:class:`TestSuiteResult`\s in a :class:`BuildTargetResult`.
+The results have common parts, such as setup and teardown results, captured in :class:`BaseResult`,
+which also defines some common behaviors in its methods.
+
+Each result class has its own idiosyncrasies which it implements in overridden methods.
+
+The :option:`--output` command line argument and the :envvar:`DTS_OUTPUT_DIR` environment
+variable modify the directory where the files with results will be stored.
 """
 
 import os.path
@@ -26,26 +43,34 @@
 
 
 class Result(Enum):
-    """
-    An Enum defining the possible states that
-    a setup, a teardown or a test case may end up in.
-    """
+    """The possible states that a setup, a teardown or a test case may end up in."""
 
+    #:
     PASS = auto()
+    #:
     FAIL = auto()
+    #:
     ERROR = auto()
+    #:
     SKIP = auto()
 
     def __bool__(self) -> bool:
+        """Only PASS is True."""
         return self is self.PASS
 
 
 class FixtureResult(object):
-    """
-    A record that stored the result of a setup or a teardown.
-    The default is FAIL because immediately after creating the object
-    the setup of the corresponding stage will be executed, which also guarantees
-    the execution of teardown.
+    """A record that stores the result of a setup or a teardown.
+
+    :attr:`~Result.FAIL` is a sensible default since it prevents false positives (which could happen
+    if the default was :attr:`~Result.PASS`).
+
+    Preventing false positives or other false results is preferable since a failure
+    is most likely to be investigated (other false results may not be investigated at all).
+
+    Attributes:
+        result: The associated result.
+        error: The error in case of a failure.
     """
 
     result: Result
@@ -56,21 +81,37 @@ def __init__(
         result: Result = Result.FAIL,
         error: Exception | None = None,
     ):
+        """Initialize the constructor with the fixture result and store a possible error.
+
+        Args:
+            result: The result to store.
+            error: The error which happened when a failure occurred.
+        """
         self.result = result
         self.error = error
 
     def __bool__(self) -> bool:
+        """A wrapper around the stored :class:`Result`."""
         return bool(self.result)
 
 
 class Statistics(dict):
-    """
-    A helper class used to store the number of test cases by its result
-    along a few other basic information.
-    Using a dict provides a convenient way to format the data.
+    """How many test cases ended in which result state along some other basic information.
+
+    Subclassing :class:`dict` provides a convenient way to format the data.
+
+    The data are stored in the following keys:
+
+    * **PASS RATE** (:class:`int`) -- The percentage of passed test cases.
+    * **DPDK VERSION** (:class:`str`) -- The tested DPDK version.
     """
 
     def __init__(self, dpdk_version: str | None):
+        """Extend the constructor with keys in which the data are stored.
+
+        Args:
+            dpdk_version: The version of tested DPDK.
+        """
         super(Statistics, self).__init__()
         for result in Result:
             self[result.name] = 0
@@ -78,8 +119,17 @@ def __init__(self, dpdk_version: str | None):
         self["DPDK VERSION"] = dpdk_version
 
     def __iadd__(self, other: Result) -> "Statistics":
-        """
-        Add a Result to the final count.
+        """Add a Result to the final count.
+
+        Example:
+            stats: Statistics = Statistics(dpdk_version=None)  # empty Statistics
+            stats += Result.PASS  # add a Result to `stats`
+
+        Args:
+            other: The Result to add to this statistics object.
+
+        Returns:
+            The modified statistics object.
         """
         self[other.name] += 1
         self["PASS RATE"] = (
@@ -88,9 +138,7 @@ def __iadd__(self, other: Result) -> "Statistics":
         return self
 
     def __str__(self) -> str:
-        """
-        Provide a string representation of the data.
-        """
+        """Each line contains the formatted key = value pair."""
         stats_str = ""
         for key, value in self.items():
             stats_str += f"{key:<12} = {value}\n"
@@ -100,10 +148,16 @@ def __str__(self) -> str:
 
 
 class BaseResult(object):
-    """
-    The Base class for all results. Stores the results of
-    the setup and teardown portions of the corresponding stage
-    and a list of results from each inner stage in _inner_results.
+    """Common data and behavior of DTS results.
+
+    Stores the results of the setup and teardown portions of the corresponding stage.
+    The hierarchical nature of DTS results is captured recursively in an internal list.
+    A stage is any level in this hierarchy (pre-execution or the top-most level,
+    execution, build target, test suite and test case).
+
+    Attributes:
+        setup_result: The result of the setup of the particular stage.
+        teardown_result: The result of the teardown of the particular stage.
     """
 
     setup_result: FixtureResult
@@ -111,15 +165,28 @@ class BaseResult(object):
     _inner_results: MutableSequence["BaseResult"]
 
     def __init__(self):
+        """Initialize the constructor."""
         self.setup_result = FixtureResult()
         self.teardown_result = FixtureResult()
         self._inner_results = []
 
     def update_setup(self, result: Result, error: Exception | None = None) -> None:
+        """Store the setup result.
+
+        Args:
+            result: The result of the setup.
+            error: The error that occurred in case of a failure.
+        """
         self.setup_result.result = result
         self.setup_result.error = error
 
     def update_teardown(self, result: Result, error: Exception | None = None) -> None:
+        """Store the teardown result.
+
+        Args:
+            result: The result of the teardown.
+            error: The error that occurred in case of a failure.
+        """
         self.teardown_result.result = result
         self.teardown_result.error = error
 
@@ -137,27 +204,55 @@ def _get_inner_errors(self) -> list[Exception]:
         ]
 
     def get_errors(self) -> list[Exception]:
+        """Compile errors from the whole result hierarchy.
+
+        Returns:
+            The errors from setup, teardown and all errors found in the whole result hierarchy.
+        """
         return self._get_setup_teardown_errors() + self._get_inner_errors()
 
     def add_stats(self, statistics: Statistics) -> None:
+        """Collate stats from the whole result hierarchy.
+
+        Args:
+            statistics: The :class:`Statistics` object where the stats will be collated.
+        """
         for inner_result in self._inner_results:
             inner_result.add_stats(statistics)
 
 
 class TestCaseResult(BaseResult, FixtureResult):
-    """
-    The test case specific result.
-    Stores the result of the actual test case.
-    Also stores the test case name.
+    r"""The test case specific result.
+
+    Stores the result of the actual test case. This is done by adding an extra superclass
+    in :class:`FixtureResult`. The setup and teardown results are :class:`FixtureResult`\s and
+    the class is itself a record of the test case.
+
+    Attributes:
+        test_case_name: The test case name.
     """
 
     test_case_name: str
 
     def __init__(self, test_case_name: str):
+        """Extend the constructor with `test_case_name`.
+
+        Args:
+            test_case_name: The test case's name.
+        """
         super(TestCaseResult, self).__init__()
         self.test_case_name = test_case_name
 
     def update(self, result: Result, error: Exception | None = None) -> None:
+        """Update the test case result.
+
+        This updates the result of the test case itself and doesn't affect
+        the results of the setup and teardown steps in any way.
+
+        Args:
+            result: The result of the test case.
+            error: The error that occurred in case of a failure.
+        """
         self.result = result
         self.error = error
 
@@ -167,36 +262,64 @@ def _get_inner_errors(self) -> list[Exception]:
         return []
 
     def add_stats(self, statistics: Statistics) -> None:
+        r"""Add the test case result to statistics.
+
+        The base method goes through the hierarchy recursively and this method is here to stop
+        the recursion, as the :class:`TestCaseResult`\s are the leaves of the hierarchy tree.
+
+        Args:
+            statistics: The :class:`Statistics` object where the stats will be added.
+        """
         statistics += self.result
 
     def __bool__(self) -> bool:
+        """The test case passed only if setup, teardown and the test case itself passed."""
         return bool(self.setup_result) and bool(self.teardown_result) and bool(self.result)
 
 
 class TestSuiteResult(BaseResult):
-    """
-    The test suite specific result.
-    The _inner_results list stores results of test cases in a given test suite.
-    Also stores the test suite name.
+    """The test suite specific result.
+
+    The internal list stores the results of all test cases in a given test suite.
+
+    Attributes:
+        suite_name: The test suite name.
     """
 
     suite_name: str
 
     def __init__(self, suite_name: str):
+        """Extend the constructor with `suite_name`.
+
+        Args:
+            suite_name: The test suite's name.
+        """
         super(TestSuiteResult, self).__init__()
         self.suite_name = suite_name
 
     def add_test_case(self, test_case_name: str) -> TestCaseResult:
+        """Add and return the inner result (test case).
+
+        Returns:
+            The test case's result.
+        """
         test_case_result = TestCaseResult(test_case_name)
         self._inner_results.append(test_case_result)
         return test_case_result
 
 
 class BuildTargetResult(BaseResult):
-    """
-    The build target specific result.
-    The _inner_results list stores results of test suites in a given build target.
-    Also stores build target specifics, such as compiler used to build DPDK.
+    """The build target specific result.
+
+    The internal list stores the results of all test suites in a given build target.
+
+    Attributes:
+        arch: The DPDK build target architecture.
+        os: The DPDK build target operating system.
+        cpu: The DPDK build target CPU.
+        compiler: The DPDK build target compiler.
+        compiler_version: The DPDK build target compiler version.
+        dpdk_version: The built DPDK version.
     """
 
     arch: Architecture
@@ -207,6 +330,11 @@ class BuildTargetResult(BaseResult):
     dpdk_version: str | None
 
     def __init__(self, build_target: BuildTargetConfiguration):
+        """Extend the constructor with the `build_target`'s build target config.
+
+        Args:
+            build_target: The build target's test run configuration.
+        """
         super(BuildTargetResult, self).__init__()
         self.arch = build_target.arch
         self.os = build_target.os
@@ -216,20 +344,35 @@ def __init__(self, build_target: BuildTargetConfiguration):
         self.dpdk_version = None
 
     def add_build_target_info(self, versions: BuildTargetInfo) -> None:
+        """Add information about the build target gathered at runtime.
+
+        Args:
+            versions: The additional information.
+        """
         self.compiler_version = versions.compiler_version
         self.dpdk_version = versions.dpdk_version
 
     def add_test_suite(self, test_suite_name: str) -> TestSuiteResult:
+        """Add and return the inner result (test suite).
+
+        Returns:
+            The test suite's result.
+        """
         test_suite_result = TestSuiteResult(test_suite_name)
         self._inner_results.append(test_suite_result)
         return test_suite_result
 
 
 class ExecutionResult(BaseResult):
-    """
-    The execution specific result.
-    The _inner_results list stores results of build targets in a given execution.
-    Also stores the SUT node configuration.
+    """The execution specific result.
+
+    The internal list stores the results of all build targets in a given execution.
+
+    Attributes:
+        sut_node: The SUT node used in the execution.
+        sut_os_name: The operating system of the SUT node.
+        sut_os_version: The operating system version of the SUT node.
+        sut_kernel_version: The operating system kernel version of the SUT node.
     """
 
     sut_node: NodeConfiguration
@@ -238,34 +381,53 @@ class ExecutionResult(BaseResult):
     sut_kernel_version: str
 
     def __init__(self, sut_node: NodeConfiguration):
+        """Extend the constructor with the `sut_node`'s config.
+
+        Args:
+            sut_node: The SUT node's test run configuration used in the execution.
+        """
         super(ExecutionResult, self).__init__()
         self.sut_node = sut_node
 
     def add_build_target(self, build_target: BuildTargetConfiguration) -> BuildTargetResult:
+        """Add and return the inner result (build target).
+
+        Args:
+            build_target: The build target's test run configuration.
+
+        Returns:
+            The build target's result.
+        """
         build_target_result = BuildTargetResult(build_target)
         self._inner_results.append(build_target_result)
         return build_target_result
 
     def add_sut_info(self, sut_info: NodeInfo) -> None:
+        """Add SUT information gathered at runtime.
+
+        Args:
+            sut_info: The additional SUT node information.
+        """
         self.sut_os_name = sut_info.os_name
         self.sut_os_version = sut_info.os_version
         self.sut_kernel_version = sut_info.kernel_version
 
 
 class DTSResult(BaseResult):
-    """
-    Stores environment information and test results from a DTS run, which are:
-    * Execution level information, such as SUT and TG hardware.
-    * Build target level information, such as compiler, target OS and cpu.
-    * Test suite results.
-    * All errors that are caught and recorded during DTS execution.
+    """Stores environment information and test results from a DTS run.
 
-    The information is stored in nested objects.
+        * Execution level information, such as the testbed and the test suite list,
+        * Build target level information, such as compiler, target OS and cpu,
+        * Test suite and test case results,
+        * All errors that are caught and recorded during DTS execution.
 
-    The class is capable of computing the return code used to exit DTS with
-    from the stored error.
+    The information is stored hierarchically. This is the first level of the hierarchy
+    and as such is where the data from the whole hierarchy are collated or processed.
 
-    It also provides a brief statistical summary of passed/failed test cases.
+    The internal list stores the results of all executions.
+
+    Attributes:
+        dpdk_version: The DPDK version to record.
     """
 
     dpdk_version: str | None
@@ -276,6 +438,11 @@ class DTSResult(BaseResult):
     _stats_filename: str
 
     def __init__(self, logger: DTSLOG):
+        """Extend the constructor with top-level specifics.
+
+        Args:
+            logger: The logger instance the whole result will use.
+        """
         super(DTSResult, self).__init__()
         self.dpdk_version = None
         self._logger = logger
@@ -285,21 +452,33 @@ def __init__(self, logger: DTSLOG):
         self._stats_filename = os.path.join(SETTINGS.output_dir, "statistics.txt")
 
     def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult:
+        """Add and return the inner result (execution).
+
+        Args:
+            sut_node: The SUT node's test run configuration.
+
+        Returns:
+            The execution's result.
+        """
         execution_result = ExecutionResult(sut_node)
         self._inner_results.append(execution_result)
         return execution_result
 
     def add_error(self, error: Exception) -> None:
+        """Record an error that occurred outside any execution.
+
+        Args:
+            error: The exception to record.
+        """
         self._errors.append(error)
 
     def process(self) -> None:
-        """
-        Process the data after a DTS run.
-        The data is added to nested objects during runtime and this parent object
-        is not updated at that time. This requires us to process the nested data
-        after it's all been gathered.
+        """Process the data after a whole DTS run.
+
+        The data is added to inner objects during runtime and this object is not updated
+        at that time. This requires us to process the inner data after it's all been gathered.
 
-        The processing gathers all errors and the result statistics of test cases.
+        The processing gathers all errors and the statistics of test case results.
         """
         self._errors += self.get_errors()
         if self._errors and self._logger:
@@ -313,8 +492,10 @@ def process(self) -> None:
             stats_file.write(str(self._stats_result))
 
     def get_return_code(self) -> int:
-        """
-        Go through all stored Exceptions and return the highest error code found.
+        """Go through all stored Exceptions and return the final DTS error code.
+
+        Returns:
+            The highest error code found.
         """
         for error in self._errors:
             error_return_code = ErrorSeverity.GENERIC_ERR
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
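
The Example in the Statistics.__iadd__ docstring above can be expanded
into a small, self-contained sketch; the DPDK version string used here is
hypothetical.

    from framework.test_result import Result, Statistics

    # Statistics requires the tested DPDK version (which may be None).
    stats = Statistics(dpdk_version="23.11.0")
    stats += Result.PASS  # __iadd__ bumps the count and the PASS RATE
    stats += Result.FAIL
    print(stats)  # one formatted "key = value" line per entry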

* [PATCH v9 10/21] dts: config docstring update
  2023-12-04 10:24                 ` [PATCH v9 " Juraj Linkeš
                                     ` (8 preceding siblings ...)
  2023-12-04 10:24                   ` [PATCH v9 09/21] dts: test result " Juraj Linkeš
@ 2023-12-04 10:24                   ` Juraj Linkeš
  2023-12-04 10:24                   ` [PATCH v9 11/21] dts: remote session " Juraj Linkeš
                                     ` (11 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-12-04 10:24 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/config/__init__.py | 369 ++++++++++++++++++++++++++-----
 dts/framework/config/types.py    | 132 +++++++++++
 2 files changed, 444 insertions(+), 57 deletions(-)
 create mode 100644 dts/framework/config/types.py

diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index ef25a463c0..62eded7f04 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -3,8 +3,34 @@
 # Copyright(c) 2022-2023 University of New Hampshire
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
-Yaml config parsing methods
+"""Testbed configuration and test suite specification.
+
+This package offers classes that hold real-time information about the testbed and the test run
+configuration describing the tested testbed, along with a loader function, :func:`load_config`,
+which loads the YAML test run configuration file
+and validates it according to :download:`the schema <conf_yaml_schema.json>`.
+
+The YAML test run configuration file is parsed into a dictionary, parts of which are used throughout
+this package. The allowed keys and types inside this dictionary are defined in
+the :doc:`types <framework.config.types>` module.
+
+The test run configuration has two main sections:
+
+    * The :class:`ExecutionConfiguration` which defines what tests are going to be run
+      and how DPDK will be built. It also references the testbed where these tests and DPDK
+      are going to be run,
+    * The nodes of the testbed are defined in the other section,
+      a :class:`list` of :class:`NodeConfiguration` objects.
+
+The real-time information about the testbed is gathered at runtime.
+
+The classes defined in this package make heavy use of :mod:`dataclasses`.
+All of them use slots and are frozen:
+
+    * Slots enable some optimizations by pre-allocating space for the defined
+      attributes in the underlying data structure,
+    * Frozen makes the object immutable. This enables further optimizations,
+      and makes it thread safe should we ever want to move in that direction.
 """
 
 import json
@@ -12,11 +38,20 @@
 import pathlib
 from dataclasses import dataclass
 from enum import auto, unique
-from typing import Any, TypedDict, Union
+from typing import Union
 
 import warlock  # type: ignore[import]
 import yaml
 
+from framework.config.types import (
+    BuildTargetConfigDict,
+    ConfigurationDict,
+    ExecutionConfigDict,
+    NodeConfigDict,
+    PortConfigDict,
+    TestSuiteConfigDict,
+    TrafficGeneratorConfigDict,
+)
 from framework.exception import ConfigurationError
 from framework.settings import SETTINGS
 from framework.utils import StrEnum
@@ -24,55 +59,97 @@
 
 @unique
 class Architecture(StrEnum):
+    r"""The supported architectures of :class:`~framework.testbed_model.node.Node`\s."""
+
+    #:
     i686 = auto()
+    #:
     x86_64 = auto()
+    #:
     x86_32 = auto()
+    #:
     arm64 = auto()
+    #:
     ppc64le = auto()
 
 
 @unique
 class OS(StrEnum):
+    r"""The supported operating systems of :class:`~framework.testbed_model.node.Node`\s."""
+
+    #:
     linux = auto()
+    #:
     freebsd = auto()
+    #:
     windows = auto()
 
 
 @unique
 class CPUType(StrEnum):
+    r"""The supported CPUs of :class:`~framework.testbed_model.node.Node`\s."""
+
+    #:
     native = auto()
+    #:
     armv8a = auto()
+    #:
     dpaa2 = auto()
+    #:
     thunderx = auto()
+    #:
     xgene1 = auto()
 
 
 @unique
 class Compiler(StrEnum):
+    r"""The supported compilers of :class:`~framework.testbed_model.node.Node`\s."""
+
+    #:
     gcc = auto()
+    #:
     clang = auto()
+    #:
     icc = auto()
+    #:
     msvc = auto()
 
 
 @unique
 class TrafficGeneratorType(StrEnum):
+    """The supported traffic generators."""
+
+    #:
     SCAPY = auto()
 
 
-# Slots enables some optimizations, by pre-allocating space for the defined
-# attributes in the underlying data structure.
-#
-# Frozen makes the object immutable. This enables further optimizations,
-# and makes it thread safe should we every want to move in that direction.
 @dataclass(slots=True, frozen=True)
 class HugepageConfiguration:
+    r"""The hugepage configuration of :class:`~framework.testbed_model.node.Node`\s.
+
+    Attributes:
+        amount: The number of hugepages.
+        force_first_numa: If :data:`True`, the hugepages will be configured on the first NUMA node.
+    """
+
     amount: int
     force_first_numa: bool
 
 
 @dataclass(slots=True, frozen=True)
 class PortConfig:
+    r"""The port configuration of :class:`~framework.testbed_model.node.Node`\s.
+
+    Attributes:
+        node: The :class:`~framework.testbed_model.node.Node` where this port exists.
+        pci: The PCI address of the port.
+        os_driver_for_dpdk: The operating system driver name for use with DPDK.
+        os_driver: The operating system driver name when the operating system controls the port.
+        peer_node: The :class:`~framework.testbed_model.node.Node` of the port
+            connected to this port.
+        peer_pci: The PCI address of the port connected to this port.
+    """
+
     node: str
     pci: str
     os_driver_for_dpdk: str
@@ -81,18 +158,44 @@ class PortConfig:
     peer_pci: str
 
     @staticmethod
-    def from_dict(node: str, d: dict) -> "PortConfig":
+    def from_dict(node: str, d: PortConfigDict) -> "PortConfig":
+        """A convenience method that creates the object from fewer inputs.
+
+        Args:
+            node: The node where this port exists.
+            d: The configuration dictionary.
+
+        Returns:
+            The port configuration instance.
+        """
         return PortConfig(node=node, **d)
 
 
 @dataclass(slots=True, frozen=True)
 class TrafficGeneratorConfig:
+    """The configuration of traffic generators.
+
+    The class will be expanded when more configuration is needed.
+
+    Attributes:
+        traffic_generator_type: The type of the traffic generator.
+    """
+
     traffic_generator_type: TrafficGeneratorType
 
     @staticmethod
-    def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig":
-        # This looks useless now, but is designed to allow expansion to traffic
-        # generators that require more configuration later.
+    def from_dict(d: TrafficGeneratorConfigDict) -> "ScapyTrafficGeneratorConfig":
+        """A convenience method that produces traffic generator config of the proper type.
+
+        Args:
+            d: The configuration dictionary.
+
+        Returns:
+            The traffic generator configuration instance.
+
+        Raises:
+            ConfigurationError: An unknown traffic generator type was encountered.
+        """
         match TrafficGeneratorType(d["type"]):
             case TrafficGeneratorType.SCAPY:
                 return ScapyTrafficGeneratorConfig(
@@ -104,11 +207,31 @@ def from_dict(d: dict) -> "ScapyTrafficGeneratorConfig":
 
 @dataclass(slots=True, frozen=True)
 class ScapyTrafficGeneratorConfig(TrafficGeneratorConfig):
+    """Scapy traffic generator specific configuration."""
+
     pass
 
 
 @dataclass(slots=True, frozen=True)
 class NodeConfiguration:
+    r"""The configuration of :class:`~framework.testbed_model.node.Node`\s.
+
+    Attributes:
+        name: The name of the :class:`~framework.testbed_model.node.Node`.
+        hostname: The hostname of the :class:`~framework.testbed_model.node.Node`.
+            Can be an IP or a domain name.
+        user: The name of the user used to connect to
+            the :class:`~framework.testbed_model.node.Node`.
+        password: The password of the user. The use of passwords is heavily discouraged.
+            Please use keys instead.
+        arch: The architecture of the :class:`~framework.testbed_model.node.Node`.
+        os: The operating system of the :class:`~framework.testbed_model.node.Node`.
+        lcores: A comma delimited list of logical cores to use when running DPDK.
+        use_first_core: If :data:`True`, the first logical core won't be used.
+        hugepages: An optional hugepage configuration.
+        ports: The ports that can be used in testing.
+    """
+
     name: str
     hostname: str
     user: str
@@ -121,55 +244,89 @@ class NodeConfiguration:
     ports: list[PortConfig]
 
     @staticmethod
-    def from_dict(d: dict) -> Union["SutNodeConfiguration", "TGNodeConfiguration"]:
-        hugepage_config = d.get("hugepages")
-        if hugepage_config:
-            if "force_first_numa" not in hugepage_config:
-                hugepage_config["force_first_numa"] = False
-            hugepage_config = HugepageConfiguration(**hugepage_config)
-
-        common_config = {
-            "name": d["name"],
-            "hostname": d["hostname"],
-            "user": d["user"],
-            "password": d.get("password"),
-            "arch": Architecture(d["arch"]),
-            "os": OS(d["os"]),
-            "lcores": d.get("lcores", "1"),
-            "use_first_core": d.get("use_first_core", False),
-            "hugepages": hugepage_config,
-            "ports": [PortConfig.from_dict(d["name"], port) for port in d["ports"]],
-        }
-
+    def from_dict(
+        d: NodeConfigDict,
+    ) -> Union["SutNodeConfiguration", "TGNodeConfiguration"]:
+        """A convenience method that processes the inputs before creating a specialized instance.
+
+        Args:
+            d: The configuration dictionary.
+
+        Returns:
+            Either an SUT or TG configuration instance.
+        """
+        hugepage_config = None
+        if "hugepages" in d:
+            hugepage_config_dict = d["hugepages"]
+            if "force_first_numa" not in hugepage_config_dict:
+                hugepage_config_dict["force_first_numa"] = False
+            hugepage_config = HugepageConfiguration(**hugepage_config_dict)
+
+        # The calls here are duplicated because Mypy doesn't properly support
+        # dictionary unpacking with TypedDicts
         if "traffic_generator" in d:
             return TGNodeConfiguration(
+                name=d["name"],
+                hostname=d["hostname"],
+                user=d["user"],
+                password=d.get("password"),
+                arch=Architecture(d["arch"]),
+                os=OS(d["os"]),
+                lcores=d.get("lcores", "1"),
+                use_first_core=d.get("use_first_core", False),
+                hugepages=hugepage_config,
+                ports=[PortConfig.from_dict(d["name"], port) for port in d["ports"]],
                 traffic_generator=TrafficGeneratorConfig.from_dict(d["traffic_generator"]),
-                **common_config,
             )
         else:
             return SutNodeConfiguration(
-                memory_channels=d.get("memory_channels", 1), **common_config
+                name=d["name"],
+                hostname=d["hostname"],
+                user=d["user"],
+                password=d.get("password"),
+                arch=Architecture(d["arch"]),
+                os=OS(d["os"]),
+                lcores=d.get("lcores", "1"),
+                use_first_core=d.get("use_first_core", False),
+                hugepages=hugepage_config,
+                ports=[PortConfig.from_dict(d["name"], port) for port in d["ports"]],
+                memory_channels=d.get("memory_channels", 1),
             )
 
 
 @dataclass(slots=True, frozen=True)
 class SutNodeConfiguration(NodeConfiguration):
+    """:class:`~framework.testbed_model.sut_node.SutNode` specific configuration.
+
+    Attributes:
+        memory_channels: The number of memory channels to use when running DPDK.
+    """
+
     memory_channels: int
 
 
 @dataclass(slots=True, frozen=True)
 class TGNodeConfiguration(NodeConfiguration):
+    """:class:`~framework.testbed_model.tg_node.TGNode` specific configuration.
+
+    Attributes:
+        traffic_generator: The configuration of the traffic generator present on the TG node.
+    """
+
     traffic_generator: ScapyTrafficGeneratorConfig
 
 
 @dataclass(slots=True, frozen=True)
 class NodeInfo:
-    """Class to hold important versions within the node.
-
-    This class, unlike the NodeConfiguration class, cannot be generated at the start.
-    This is because we need to initialize a connection with the node before we can
-    collect the information needed in this class. Therefore, it cannot be a part of
-    the configuration class above.
+    """Supplemental node information.
+
+    Attributes:
+        os_name: The name of the running operating system of
+            the :class:`~framework.testbed_model.node.Node`.
+        os_version: The version of the running operating system of
+            the :class:`~framework.testbed_model.node.Node`.
+        kernel_version: The kernel version of the running operating system of
+            the :class:`~framework.testbed_model.node.Node`.
     """
 
     os_name: str
@@ -179,6 +336,20 @@ class NodeInfo:
 
 @dataclass(slots=True, frozen=True)
 class BuildTargetConfiguration:
+    """DPDK build configuration.
+
+    The configuration used for building DPDK.
+
+    Attributes:
+        arch: The target architecture to build for.
+        os: The target OS to build for.
+        cpu: The target CPU to build for.
+        compiler: The compiler executable to use.
+        compiler_wrapper: This string will be put in front of the compiler when
+            executing the build. Useful for adding wrapper commands, such as ``ccache``.
+        name: The name of the build target, constructed from the other attributes.
+    """
+
     arch: Architecture
     os: OS
     cpu: CPUType
@@ -187,7 +358,18 @@ class BuildTargetConfiguration:
     name: str
 
     @staticmethod
-    def from_dict(d: dict) -> "BuildTargetConfiguration":
+    def from_dict(d: BuildTargetConfigDict) -> "BuildTargetConfiguration":
+        r"""A convenience method that processes the inputs before creating an instance.
+
+        `arch`, `os`, `cpu` and `compiler` are converted to :class:`Enum`\s and
+        `name` is constructed from `arch`, `os`, `cpu` and `compiler`.
+
+        Args:
+            d: The configuration dictionary.
+
+        Returns:
+            The build target configuration instance.
+        """
         return BuildTargetConfiguration(
             arch=Architecture(d["arch"]),
             os=OS(d["os"]),
@@ -200,23 +382,29 @@ def from_dict(d: dict) -> "BuildTargetConfiguration":
 
 @dataclass(slots=True, frozen=True)
 class BuildTargetInfo:
-    """Class to hold important versions within the build target.
+    """Various versions and other information about a build target.
 
-    This is very similar to the NodeInfo class, it just instead holds information
-    for the build target.
+    Attributes:
+        dpdk_version: The DPDK version that was built.
+        compiler_version: The version of the compiler used to build DPDK.
     """
 
     dpdk_version: str
     compiler_version: str
 
 
-class TestSuiteConfigDict(TypedDict):
-    suite: str
-    cases: list[str]
-
-
 @dataclass(slots=True, frozen=True)
 class TestSuiteConfig:
+    """Test suite configuration.
+
+    Information about a single test suite to be executed.
+
+    Attributes:
+        test_suite: The name of the test suite module without the starting ``TestSuite_``.
+        test_cases: The names of test cases from this test suite to execute.
+            If empty, all test cases will be executed.
+    """
+
     test_suite: str
     test_cases: list[str]
 
@@ -224,6 +412,14 @@ class TestSuiteConfig:
     def from_dict(
         entry: str | TestSuiteConfigDict,
     ) -> "TestSuiteConfig":
+        """Create an instance from two different types.
+
+        Args:
+            entry: Either a suite name or a dictionary containing the config.
+
+        Returns:
+            The test suite configuration instance.
+        """
         if isinstance(entry, str):
             return TestSuiteConfig(test_suite=entry, test_cases=[])
         elif isinstance(entry, dict):
@@ -234,19 +430,49 @@ def from_dict(
 
 @dataclass(slots=True, frozen=True)
 class ExecutionConfiguration:
+    """The configuration of an execution.
+
+    The configuration contains testbed information, what tests to execute
+    and with what DPDK build.
+
+    Attributes:
+        build_targets: A list of DPDK builds to test.
+        perf: Whether to run performance tests.
+        func: Whether to run functional tests.
+        skip_smoke_tests: Whether to skip smoke tests.
+        test_suites: The names of test suites and/or test cases to execute.
+        system_under_test_node: The SUT node to use in this execution.
+        traffic_generator_node: The TG node to use in this execution.
+        vdevs: The names of virtual devices to test.
+    """
+
     build_targets: list[BuildTargetConfiguration]
     perf: bool
     func: bool
+    skip_smoke_tests: bool
     test_suites: list[TestSuiteConfig]
     system_under_test_node: SutNodeConfiguration
     traffic_generator_node: TGNodeConfiguration
     vdevs: list[str]
-    skip_smoke_tests: bool
 
     @staticmethod
     def from_dict(
-        d: dict, node_map: dict[str, Union[SutNodeConfiguration | TGNodeConfiguration]]
+        d: ExecutionConfigDict,
+        node_map: dict[str, Union[SutNodeConfiguration | TGNodeConfiguration]],
     ) -> "ExecutionConfiguration":
+        """A convenience method that processes the inputs before creating an instance.
+
+        The build target and the test suite config are transformed into their respective objects.
+        SUT and TG configurations are taken from `node_map`. The other (:class:`bool`) attributes
+        are just stored.
+
+        Args:
+            d: The configuration dictionary.
+            node_map: A dictionary mapping node names to their config objects.
+
+        Returns:
+            The execution configuration instance.
+        """
         build_targets: list[BuildTargetConfiguration] = list(
             map(BuildTargetConfiguration.from_dict, d["build_targets"])
         )
@@ -283,10 +509,31 @@ def from_dict(
 
 @dataclass(slots=True, frozen=True)
 class Configuration:
+    """DTS testbed and test configuration.
+
+    The node configuration is not stored in this object. Rather, all used node configurations
+    are stored inside the execution configuration where the nodes are actually used.
+
+    Attributes:
+        executions: Execution configurations.
+    """
+
     executions: list[ExecutionConfiguration]
 
     @staticmethod
-    def from_dict(d: dict) -> "Configuration":
+    def from_dict(d: ConfigurationDict) -> "Configuration":
+        """A convenience method that processes the inputs before creating an instance.
+
+        Build target and test suite config are transformed into their respective objects.
+        SUT and TG configurations are taken from the node map constructed from `d`.
+        The other (:class:`bool`) attributes are just stored.
+
+        Args:
+            d: The configuration dictionary.
+
+        Returns:
+            The whole configuration instance.
+        """
         nodes: list[Union[SutNodeConfiguration | TGNodeConfiguration]] = list(
             map(NodeConfiguration.from_dict, d["nodes"])
         )
@@ -303,9 +550,17 @@ def from_dict(d: dict) -> "Configuration":
 
 
 def load_config() -> Configuration:
-    """
-    Loads the configuration file and the configuration file schema,
-    validates the configuration file, and creates a configuration object.
+    """Load DTS test run configuration from a file.
+
+    Load the YAML test run configuration file
+    and :download:`the configuration file schema <conf_yaml_schema.json>`,
+    validate the test run configuration file, and create a test run configuration object.
+
+    The YAML test run configuration file is specified in the :option:`--config-file` command line
+    argument or the :envvar:`DTS_CFG_FILE` environment variable.
+
+    Returns:
+        The parsed test run configuration.
     """
     with open(SETTINGS.config_file_path, "r") as f:
         config_data = yaml.safe_load(f)
@@ -314,6 +569,6 @@ def load_config() -> Configuration:
 
     with open(schema_path, "r") as f:
         schema = json.load(f)
-    config: dict[str, Any] = warlock.model_factory(schema, name="_Config")(config_data)
-    config_obj: Configuration = Configuration.from_dict(dict(config))
+    config = warlock.model_factory(schema, name="_Config")(config_data)
+    config_obj: Configuration = Configuration.from_dict(dict(config))  # type: ignore[arg-type]
     return config_obj
diff --git a/dts/framework/config/types.py b/dts/framework/config/types.py
new file mode 100644
index 0000000000..1927910d88
--- /dev/null
+++ b/dts/framework/config/types.py
@@ -0,0 +1,132 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+"""Configuration dictionary contents specification.
+
+These type definitions serve as documentation of the configuration dictionary contents.
+
+The definitions use the built-in :class:`~typing.TypedDict` construct.
+"""
+
+from typing import TypedDict
+
+
+class PortConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    pci: str
+    #:
+    os_driver_for_dpdk: str
+    #:
+    os_driver: str
+    #:
+    peer_node: str
+    #:
+    peer_pci: str
+
+
+class TrafficGeneratorConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    type: str
+
+
+class HugepageConfigurationDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    amount: int
+    #:
+    force_first_numa: bool
+
+
+class NodeConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    hugepages: HugepageConfigurationDict
+    #:
+    name: str
+    #:
+    hostname: str
+    #:
+    user: str
+    #:
+    password: str
+    #:
+    arch: str
+    #:
+    os: str
+    #:
+    lcores: str
+    #:
+    use_first_core: bool
+    #:
+    ports: list[PortConfigDict]
+    #:
+    memory_channels: int
+    #:
+    traffic_generator: TrafficGeneratorConfigDict
+
+
+class BuildTargetConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    arch: str
+    #:
+    os: str
+    #:
+    cpu: str
+    #:
+    compiler: str
+    #:
+    compiler_wrapper: str
+
+
+class TestSuiteConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    suite: str
+    #:
+    cases: list[str]
+
+
+class ExecutionSUTConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    node_name: str
+    #:
+    vdevs: list[str]
+
+
+class ExecutionConfigDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    build_targets: list[BuildTargetConfigDict]
+    #:
+    perf: bool
+    #:
+    func: bool
+    #:
+    skip_smoke_tests: bool
+    #:
+    test_suites: TestSuiteConfigDict
+    #:
+    system_under_test_node: ExecutionSUTConfigDict
+    #:
+    traffic_generator_node: str
+
+
+class ConfigurationDict(TypedDict):
+    """Allowed keys and values."""
+
+    #:
+    nodes: list[NodeConfigDict]
+    #:
+    executions: list[ExecutionConfigDict]
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
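
The two input forms accepted by TestSuiteConfig.from_dict in this patch
can be sketched as follows; the suite and test case names are
hypothetical, and the dictionary keys are those of TestSuiteConfigDict.

    from framework.config import TestSuiteConfig

    # A bare string selects a whole test suite; all its test cases will run.
    all_cases = TestSuiteConfig.from_dict("hello_world")

    # A dictionary narrows the selection to specific test cases.
    some_cases = TestSuiteConfig.from_dict(
        {"suite": "hello_world", "cases": ["hello_world_single_core"]}
    )

    # The config dataclasses are created with frozen=True, so mutating
    # an instance raises dataclasses.FrozenInstanceError.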

* [PATCH v9 11/21] dts: remote session docstring update
  2023-12-04 10:24                 ` [PATCH v9 " Juraj Linkeš
                                     ` (9 preceding siblings ...)
  2023-12-04 10:24                   ` [PATCH v9 10/21] dts: config " Juraj Linkeš
@ 2023-12-04 10:24                   ` Juraj Linkeš
  2023-12-04 10:24                   ` [PATCH v9 12/21] dts: interactive " Juraj Linkeš
                                     ` (10 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-12-04 10:24 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/remote_session/__init__.py      |  39 +++++-
 .../remote_session/remote_session.py          | 130 +++++++++++++-----
 dts/framework/remote_session/ssh_session.py   |  16 +--
 3 files changed, 137 insertions(+), 48 deletions(-)

diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index 5e7ddb2b05..51a01d6b5e 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -2,12 +2,14 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
-"""
-The package provides modules for managing remote connections to a remote host (node),
-differentiated by OS.
-The package provides a factory function, create_session, that returns the appropriate
-remote connection based on the passed configuration. The differences are in the
-underlying transport protocol (e.g. SSH) and remote OS (e.g. Linux).
+"""Remote interactive and non-interactive sessions.
+
+This package provides modules for managing remote connections to a remote host (node).
+
+The non-interactive sessions send commands and return their output and exit code.
+
+The interactive sessions open an interactive shell which is continuously open,
+allowing it to send and receive data within that particular shell.
 """
 
 # pylama:ignore=W0611
@@ -26,10 +28,35 @@
 def create_remote_session(
     node_config: NodeConfiguration, name: str, logger: DTSLOG
 ) -> RemoteSession:
+    """Factory for non-interactive remote sessions.
+
+    The function returns an SSH session, but will be extended if support
+    for other protocols is added.
+
+    Args:
+        node_config: The test run configuration of the node to connect to.
+        name: The name of the session.
+        logger: The logger instance this session will use.
+
+    Returns:
+        The SSH remote session.
+    """
     return SSHSession(node_config, name, logger)
 
 
 def create_interactive_session(
     node_config: NodeConfiguration, logger: DTSLOG
 ) -> InteractiveRemoteSession:
+    """Factory for interactive remote sessions.
+
+    The function returns an interactive SSH session, but will be extended if support
+    for other protocols is added.
+
+    Args:
+        node_config: The test run configuration of the node to connect to.
+        logger: The logger instance this session will use.
+
+    Returns:
+        The interactive SSH remote session.
+    """
     return InteractiveRemoteSession(node_config, logger)
diff --git a/dts/framework/remote_session/remote_session.py b/dts/framework/remote_session/remote_session.py
index 719f7d1ef7..2059f9a981 100644
--- a/dts/framework/remote_session/remote_session.py
+++ b/dts/framework/remote_session/remote_session.py
@@ -3,6 +3,13 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
+"""Base remote session.
+
+This module contains the abstract base class for remote sessions and defines
+the structure of the result of a command execution.
+"""
+
+
 import dataclasses
 from abc import ABC, abstractmethod
 from pathlib import PurePath
@@ -15,8 +22,14 @@
 
 @dataclasses.dataclass(slots=True, frozen=True)
 class CommandResult:
-    """
-    The result of remote execution of a command.
+    """The result of remote execution of a command.
+
+    Attributes:
+        name: The name of the session that executed the command.
+        command: The executed command.
+        stdout: The standard output the command produced.
+        stderr: The standard error output the command produced.
+        return_code: The return code the command exited with.
     """
 
     name: str
@@ -26,6 +39,7 @@ class CommandResult:
     return_code: int
 
     def __str__(self) -> str:
+        """Format the command outputs."""
         return (
             f"stdout: '{self.stdout}'\n"
             f"stderr: '{self.stderr}'\n"
@@ -34,13 +48,24 @@ def __str__(self) -> str:
 
 
 class RemoteSession(ABC):
-    """
-    The base class for defining which methods must be implemented in order to connect
-    to a remote host (node) and maintain a remote session. The derived classes are
-    supposed to implement/use some underlying transport protocol (e.g. SSH) to
-    implement the methods. On top of that, it provides some basic services common to
-    all derived classes, such as keeping history and logging what's being executed
-    on the remote node.
+    """Non-interactive remote session.
+
+    The abstract methods must be implemented in order to connect to a remote host (node)
+    and maintain a remote session.
+    The subclasses must use (or implement) some underlying transport protocol (e.g. SSH)
+    to implement the methods. On top of that, it provides some basic services common to all
+    subclasses, such as keeping history and logging what's being executed on the remote node.
+
+    Attributes:
+        name: The name of the session.
+        hostname: The node's hostname. Could be an IP (possibly with port, separated by a colon)
+            or a domain name.
+        ip: The IP address of the node or a domain name, whichever was used in `hostname`.
+        port: The port of the node, if given in `hostname`.
+        username: The username used in the connection.
+        password: The password used in the connection. Most frequently empty,
+            as the use of passwords is discouraged.
+        history: The executed commands during this session.
     """
 
     name: str
@@ -59,6 +84,16 @@ def __init__(
         session_name: str,
         logger: DTSLOG,
     ):
+        """Connect to the node during initialization.
+
+        Args:
+            node_config: The test run configuration of the node to connect to.
+            session_name: The name of the session.
+            logger: The logger instance this session will use.
+
+        Raises:
+            SSHConnectionError: If the connection to the node was not successful.
+        """
         self._node_config = node_config
 
         self.name = session_name
@@ -79,8 +114,13 @@ def __init__(
 
     @abstractmethod
     def _connect(self) -> None:
-        """
-        Create connection to assigned node.
+        """Create a connection to the node.
+
+        The implementation must assign the established session to self.session.
+
+        The implementation must catch all exceptions and convert them to an SSHConnectionError.
+
+        The implementation may optionally implement retry attempts.
         """
 
     def send_command(
@@ -90,11 +130,24 @@ def send_command(
         verify: bool = False,
         env: dict | None = None,
     ) -> CommandResult:
-        """
-        Send a command to the connected node using optional env vars
-        and return CommandResult.
-        If verify is True, check the return code of the executed command
-        and raise a RemoteCommandExecutionError if the command failed.
+        """Send `command` to the connected node.
+
+        The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT`
+        environment variable configure the timeout of command execution.
+
+        Args:
+            command: The command to execute.
+            timeout: Wait at most this long in seconds for `command` execution to complete.
+            verify: If :data:`True`, will check the exit code of `command`.
+            env: A dictionary with environment variables to be used with `command` execution.
+
+        Raises:
+            SSHSessionDeadError: If the session isn't alive when sending `command`.
+            SSHTimeoutError: If `command` execution timed out.
+            RemoteCommandExecutionError: If verify is :data:`True` and `command` execution failed.
+
+        Returns:
+            The output of the command along with the return code.
         """
         self._logger.info(f"Sending: '{command}'" + (f" with env vars: '{env}'" if env else ""))
         result = self._send_command(command, timeout, env)
@@ -111,29 +164,38 @@ def send_command(
 
     @abstractmethod
     def _send_command(self, command: str, timeout: float, env: dict | None) -> CommandResult:
-        """
-        Use the underlying protocol to execute the command using optional env vars
-        and return CommandResult.
+        """Send a command to the connected node.
+
+        The implementation must execute the command remotely with `env` environment variables
+        and return the result.
+
+        The implementation must catch all exceptions and raise:
+
+            * SSHSessionDeadError if the session is not alive,
+            * SSHTimeoutError if the command execution times out.
         """
 
     def close(self, force: bool = False) -> None:
-        """
-        Close the remote session and free all used resources.
+        """Close the remote session and free all used resources.
+
+        Args:
+            force: Force the closure of the connection. This may not clean up all resources.
         """
         self._logger.logger_exit()
         self._close(force)
 
     @abstractmethod
     def _close(self, force: bool = False) -> None:
-        """
-        Execute protocol specific steps needed to close the session properly.
+        """Protocol specific steps needed to close the session properly.
+
+        Args:
+            force: Force the closure of the connection. This may not clean up all resources.
+                This doesn't have to be implemented in the overloaded method.
         """
 
     @abstractmethod
     def is_alive(self) -> bool:
-        """
-        Check whether the remote session is still responding.
-        """
+        """Check whether the remote session is still responding."""
 
     @abstractmethod
     def copy_from(
@@ -143,12 +205,12 @@ def copy_from(
     ) -> None:
         """Copy a file from the remote Node to the local filesystem.
 
-        Copy source_file from the remote Node associated with this remote
-        session to destination_file on the local filesystem.
+        Copy `source_file` from the remote Node associated with this remote session
+        to `destination_file` on the local filesystem.
 
         Args:
-            source_file: the file on the remote Node.
-            destination_file: a file or directory path on the local filesystem.
+            source_file: The file on the remote Node.
+            destination_file: A file or directory path on the local filesystem.
         """
 
     @abstractmethod
@@ -159,10 +221,10 @@ def copy_to(
     ) -> None:
         """Copy a file from local filesystem to the remote Node.
 
-        Copy source_file from local filesystem to destination_file
-        on the remote Node associated with this remote session.
+        Copy `source_file` from local filesystem to `destination_file` on the remote Node
+        associated with this remote session.
 
         Args:
-            source_file: the file on the local filesystem.
-            destination_file: a file or directory path on the remote Node.
+            source_file: The file on the local filesystem.
+            destination_file: A file or directory path on the remote Node.
         """
diff --git a/dts/framework/remote_session/ssh_session.py b/dts/framework/remote_session/ssh_session.py
index a467033a13..782220092c 100644
--- a/dts/framework/remote_session/ssh_session.py
+++ b/dts/framework/remote_session/ssh_session.py
@@ -1,6 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""SSH remote session."""
+
 import socket
 import traceback
 from pathlib import PurePath
@@ -26,13 +28,8 @@
 class SSHSession(RemoteSession):
     """A persistent SSH connection to a remote Node.
 
-    The connection is implemented with the Fabric Python library.
-
-    Args:
-        node_config: The configuration of the Node to connect to.
-        session_name: The name of the session.
-        logger: The logger used for logging.
-            This should be passed from the parent OSSession.
+    The connection is implemented with
+    `the Fabric Python library <https://docs.fabfile.org/en/latest/>`_.
 
     Attributes:
         session: The underlying Fabric SSH connection.
@@ -78,6 +75,7 @@ def _connect(self) -> None:
             raise SSHConnectionError(self.hostname, errors)
 
     def is_alive(self) -> bool:
+        """Overrides :meth:`~.remote_session.RemoteSession.is_alive`."""
         return self.session.is_connected
 
     def _send_command(self, command: str, timeout: float, env: dict | None) -> CommandResult:
@@ -85,7 +83,7 @@ def _send_command(self, command: str, timeout: float, env: dict | None) -> Comma
 
         Args:
             command: The command to execute.
-            timeout: Wait at most this many seconds for the execution to complete.
+            timeout: Wait at most this long in seconds for the command execution to complete.
             env: Extra environment variables that will be used in command execution.
 
         Raises:
@@ -110,6 +108,7 @@ def copy_from(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
+        """Overrides :meth:`~.remote_session.RemoteSession.copy_from`."""
         self.session.get(str(destination_file), str(source_file))
 
     def copy_to(
@@ -117,6 +116,7 @@ def copy_to(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
+        """Overrides :meth:`~.remote_session.RemoteSession.copy_to`."""
         self.session.put(str(source_file), str(destination_file))
 
     def _close(self, force: bool = False) -> None:
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
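
Putting the documented pieces together, a typical non-interactive flow looks roughly like this (a sketch: `node_config` and `logger` are placeholders that would come from the test run configuration and the DTS logging setup):

    from framework.exception import RemoteCommandExecutionError
    from framework.remote_session import create_remote_session

    session = create_remote_session(node_config, "example-session", logger)
    try:
        # verify=True raises RemoteCommandExecutionError on a non-zero exit code.
        result = session.send_command("uname -a", timeout=15, verify=True)
        print(result)  # CommandResult.__str__ formats stdout, stderr and the return code
    except RemoteCommandExecutionError:
        ...  # the command ran but failed; handle or re-raise
    finally:
        session.close()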

* [PATCH v9 12/21] dts: interactive remote session docstring update
  2023-12-04 10:24                 ` [PATCH v9 " Juraj Linkeš
                                     ` (10 preceding siblings ...)
  2023-12-04 10:24                   ` [PATCH v9 11/21] dts: remote session " Juraj Linkeš
@ 2023-12-04 10:24                   ` Juraj Linkeš
  2023-12-04 10:24                   ` [PATCH v9 13/21] dts: port and virtual device " Juraj Linkeš
                                     ` (9 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-12-04 10:24 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 .../interactive_remote_session.py             | 36 +++----
 .../remote_session/interactive_shell.py       | 99 +++++++++++--------
 dts/framework/remote_session/python_shell.py  | 26 ++++-
 dts/framework/remote_session/testpmd_shell.py | 59 +++++++++--
 4 files changed, 150 insertions(+), 70 deletions(-)

diff --git a/dts/framework/remote_session/interactive_remote_session.py b/dts/framework/remote_session/interactive_remote_session.py
index 098ded1bb0..1cc82e3377 100644
--- a/dts/framework/remote_session/interactive_remote_session.py
+++ b/dts/framework/remote_session/interactive_remote_session.py
@@ -22,27 +22,23 @@
 class InteractiveRemoteSession:
     """SSH connection dedicated to interactive applications.
 
-    This connection is created using paramiko and is a persistent connection to the
-    host. This class defines methods for connecting to the node and configures this
-    connection to send "keep alive" packets every 30 seconds. Because paramiko attempts
-    to use SSH keys to establish a connection first, providing a password is optional.
-    This session is utilized by InteractiveShells and cannot be interacted with
-    directly.
-
-    Arguments:
-        node_config: Configuration class for the node you are connecting to.
-        _logger: Desired logger for this session to use.
+    The connection is created using `paramiko <https://docs.paramiko.org/en/latest/>`_
+    and is a persistent connection to the host. This class defines the methods for connecting
+    to the node and configures the connection to send "keep alive" packets every 30 seconds.
+    Because paramiko attempts to use SSH keys to establish a connection first, providing
+    a password is optional. This session is utilized by InteractiveShells
+    and cannot be interacted with directly.
 
     Attributes:
-        hostname: Hostname that will be used to initialize a connection to the node.
-        ip: A subsection of hostname that removes the port for the connection if there
+        hostname: The hostname that will be used to initialize a connection to the node.
+        ip: A subsection of `hostname` that removes the port for the connection if there
             is one. If there is no port, this will be the same as hostname.
-        port: Port to use for the ssh connection. This will be extracted from the
-            hostname if there is a port included, otherwise it will default to 22.
+        port: Port to use for the ssh connection. This will be extracted from `hostname`
+            if there is a port included, otherwise it will default to ``22``.
         username: User to connect to the node with.
         password: Password of the user connecting to the host. This will default to an
             empty string if a password is not provided.
-        session: Underlying paramiko connection.
+        session: The underlying paramiko connection.
 
     Raises:
         SSHConnectionError: There is an error creating the SSH connection.
@@ -58,9 +54,15 @@ class InteractiveRemoteSession:
     _node_config: NodeConfiguration
     _transport: Transport | None
 
-    def __init__(self, node_config: NodeConfiguration, _logger: DTSLOG) -> None:
+    def __init__(self, node_config: NodeConfiguration, logger: DTSLOG) -> None:
+        """Connect to the node during initialization.
+
+        Args:
+            node_config: The test run configuration of the node to connect to.
+            logger: The logger instance this session will use.
+        """
         self._node_config = node_config
-        self._logger = _logger
+        self._logger = logger
         self.hostname = node_config.hostname
         self.username = node_config.user
         self.password = node_config.password if node_config.password else ""
diff --git a/dts/framework/remote_session/interactive_shell.py b/dts/framework/remote_session/interactive_shell.py
index 4db19fb9b3..b158f963b6 100644
--- a/dts/framework/remote_session/interactive_shell.py
+++ b/dts/framework/remote_session/interactive_shell.py
@@ -3,18 +3,20 @@
 
 """Common functionality for interactive shell handling.
 
-This base class, InteractiveShell, is meant to be extended by other classes that
-contain functionality specific to that shell type. These derived classes will often
-modify things like the prompt to expect or the arguments to pass into the application,
-but still utilize the same method for sending a command and collecting output. How
-this output is handled however is often application specific. If an application needs
-elevated privileges to start it is expected that the method for gaining those
-privileges is provided when initializing the class.
+The base class, :class:`InteractiveShell`, is meant to be extended by subclasses that contain
+functionality specific to that shell type. These subclasses will often modify things like
+the prompt to expect or the arguments to pass into the application, but still utilize
+the same method for sending a command and collecting output. How this output is handled, however,
+is often application specific. If an application needs elevated privileges to start, it is expected
+that the method for gaining those privileges is provided when initializing the class.
+
+The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT`
+environment variable configure the timeout of getting the output from command execution.
 """
 
 from abc import ABC
 from pathlib import PurePath
-from typing import Callable
+from typing import Callable, ClassVar
 
 from paramiko import Channel, SSHClient, channel  # type: ignore[import]
 
@@ -30,28 +32,6 @@ class InteractiveShell(ABC):
     and collecting input until reaching a certain prompt. All interactive applications
     will use the same SSH connection, but each will create their own channel on that
     session.
-
-    Arguments:
-        interactive_session: The SSH session dedicated to interactive shells.
-        logger: Logger used for displaying information in the console.
-        get_privileged_command: Method for modifying a command to allow it to use
-            elevated privileges. If this is None, the application will not be started
-            with elevated privileges.
-        app_args: Command line arguments to be passed to the application on startup.
-        timeout: Timeout used for the SSH channel that is dedicated to this interactive
-            shell. This timeout is for collecting output, so if reading from the buffer
-            and no output is gathered within the timeout, an exception is thrown.
-
-    Attributes
-        _default_prompt: Prompt to expect at the end of output when sending a command.
-            This is often overridden by derived classes.
-        _command_extra_chars: Extra characters to add to the end of every command
-            before sending them. This is often overridden by derived classes and is
-            most commonly an additional newline character.
-        path: Path to the executable to start the interactive application.
-        dpdk_app: Whether this application is a DPDK app. If it is, the build
-            directory for DPDK on the node will be prepended to the path to the
-            executable.
     """
 
     _interactive_session: SSHClient
@@ -61,10 +41,22 @@ class InteractiveShell(ABC):
     _logger: DTSLOG
     _timeout: float
     _app_args: str
-    _default_prompt: str = ""
-    _command_extra_chars: str = ""
-    path: PurePath
-    dpdk_app: bool = False
+
+    #: Prompt to expect at the end of output when sending a command.
+    #: This is often overridden by subclasses.
+    _default_prompt: ClassVar[str] = ""
+
+    #: Extra characters to add to the end of every command
+    #: before sending them. This is often overridden by subclasses and is
+    #: most commonly an additional newline character.
+    _command_extra_chars: ClassVar[str] = ""
+
+    #: Path to the executable to start the interactive application.
+    path: ClassVar[PurePath]
+
+    #: Whether this application is a DPDK app. If it is, the build directory
+    #: for DPDK on the node will be prepended to the path to the executable.
+    dpdk_app: ClassVar[bool] = False
 
     def __init__(
         self,
@@ -74,6 +66,19 @@ def __init__(
         app_args: str = "",
         timeout: float = SETTINGS.timeout,
     ) -> None:
+        """Create an SSH channel during initialization.
+
+        Args:
+            interactive_session: The SSH session dedicated to interactive shells.
+            logger: The logger instance this session will use.
+            get_privileged_command: A method for modifying a command to allow it to use
+                elevated privileges. If :data:`None`, the application will not be started
+                with elevated privileges.
+            app_args: The command line arguments to be passed to the application on startup.
+            timeout: The timeout used for the SSH channel that is dedicated to this interactive
+                shell. This timeout is for collecting output, so if reading from the buffer
+                and no output is gathered within the timeout, an exception is raised.
+        """
         self._interactive_session = interactive_session
         self._ssh_channel = self._interactive_session.invoke_shell()
         self._stdin = self._ssh_channel.makefile_stdin("w")
@@ -90,6 +95,10 @@ def _start_application(self, get_privileged_command: Callable[[str], str] | None
 
         This method is often overridden by subclasses as their process for
         starting may look different.
+
+        Args:
+            get_privileged_command: A function (but could be any callable) that produces
+                the version of the command with elevated privileges.
         """
         start_command = f"{self.path} {self._app_args}"
         if get_privileged_command is not None:
@@ -97,16 +106,24 @@ def _start_application(self, get_privileged_command: Callable[[str], str] | None
         self.send_command(start_command)
 
     def send_command(self, command: str, prompt: str | None = None) -> str:
-        """Send a command and get all output before the expected ending string.
+        """Send `command` and get all output before the expected ending string.
 
         Lines that expect input are not included in the stdout buffer, so they cannot
-        be used for expect. For example, if you were prompted to log into something
-        with a username and password, you cannot expect "username:" because it won't
-        yet be in the stdout buffer. A workaround for this could be consuming an
-        extra newline character to force the current prompt into the stdout buffer.
+        be used for expect.
+
+        Example:
+            If you were prompted to log into something with a username and password,
+            you cannot expect ``username:`` because it won't yet be in the stdout buffer.
+            A workaround for this could be consuming an extra newline character to force
+            the current `prompt` into the stdout buffer.
+
+        Args:
+            command: The command to send.
+            prompt: After sending the command, `send_command` will be expecting this string.
+                If :data:`None`, will use the class's default prompt.
 
         Returns:
-            All output in the buffer before expected string
+            All output in the buffer before expected string.
         """
         self._logger.info(f"Sending: '{command}'")
         if prompt is None:
@@ -124,8 +141,10 @@ def send_command(self, command: str, prompt: str | None = None) -> str:
         return out
 
     def close(self) -> None:
+        """Properly free all resources."""
         self._stdin.close()
         self._ssh_channel.close()
 
     def __del__(self) -> None:
+        """Make sure the session is properly closed before deleting the object."""
         self.close()
diff --git a/dts/framework/remote_session/python_shell.py b/dts/framework/remote_session/python_shell.py
index cc3ad48a68..ccfd3783e8 100644
--- a/dts/framework/remote_session/python_shell.py
+++ b/dts/framework/remote_session/python_shell.py
@@ -1,12 +1,32 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""Python interactive shell.
+
+Typical usage example in a TestSuite::
+
+    from framework.remote_session import PythonShell
+    python_shell = self.tg_node.create_interactive_shell(
+        PythonShell, timeout=5, privileged=True
+    )
+    python_shell.send_command("print('Hello World')")
+    python_shell.close()
+"""
+
 from pathlib import PurePath
+from typing import ClassVar
 
 from .interactive_shell import InteractiveShell
 
 
 class PythonShell(InteractiveShell):
-    _default_prompt: str = ">>>"
-    _command_extra_chars: str = "\n"
-    path: PurePath = PurePath("python3")
+    """Python interactive shell."""
+
+    #: Python's prompt.
+    _default_prompt: ClassVar[str] = ">>>"
+
+    #: This forces the prompt to appear after sending a command.
+    _command_extra_chars: ClassVar[str] = "\n"
+
+    #: The Python executable.
+    path: ClassVar[PurePath] = PurePath("python3")
diff --git a/dts/framework/remote_session/testpmd_shell.py b/dts/framework/remote_session/testpmd_shell.py
index 08ac311016..0184cc2e71 100644
--- a/dts/framework/remote_session/testpmd_shell.py
+++ b/dts/framework/remote_session/testpmd_shell.py
@@ -1,41 +1,80 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 University of New Hampshire
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+"""Testpmd interactive shell.
+
+Typical usage example in a TestSuite::
+
+    testpmd_shell = self.sut_node.create_interactive_shell(
+        TestPmdShell, privileged=True
+    )
+    devices = testpmd_shell.get_devices()
+    for device in devices:
+        print(device)
+    testpmd_shell.close()
+"""
 
 from pathlib import PurePath
-from typing import Callable
+from typing import Callable, ClassVar
 
 from .interactive_shell import InteractiveShell
 
 
 class TestPmdDevice(object):
+    """The data of a device that testpmd can recognize.
+
+    Attributes:
+        pci_address: The PCI address of the device.
+    """
+
     pci_address: str
 
     def __init__(self, pci_address_line: str):
+        """Initialize the device from the testpmd output line string.
+
+        Args:
+            pci_address_line: A line of testpmd output that contains a device.
+        """
         self.pci_address = pci_address_line.strip().split(": ")[1].strip()
 
     def __str__(self) -> str:
+        """The PCI address captures what the device is."""
         return self.pci_address
 
 
 class TestPmdShell(InteractiveShell):
-    path: PurePath = PurePath("app", "dpdk-testpmd")
-    dpdk_app: bool = True
-    _default_prompt: str = "testpmd>"
-    _command_extra_chars: str = "\n"  # We want to append an extra newline to every command
+    """Testpmd interactive shell.
+
+    The testpmd shell users should never use
+    the :meth:`~.interactive_shell.InteractiveShell.send_command` method directly, but rather
+    call specialized methods. If there isn't one that satisfies a need, it should be added.
+    """
+
+    #: The path to the testpmd executable.
+    path: ClassVar[PurePath] = PurePath("app", "dpdk-testpmd")
+
+    #: Flag this as a DPDK app so that it's clear this is not a system app and
+    #: needs to be looked for in a specific path.
+    dpdk_app: ClassVar[bool] = True
+
+    #: The testpmd's prompt.
+    _default_prompt: ClassVar[str] = "testpmd>"
+
+    #: This forces the prompt to appear after sending a command.
+    _command_extra_chars: ClassVar[str] = "\n"
 
     def _start_application(self, get_privileged_command: Callable[[str], str] | None) -> None:
-        """See "_start_application" in InteractiveShell."""
         self._app_args += " -- -i"
         super()._start_application(get_privileged_command)
 
     def get_devices(self) -> list[TestPmdDevice]:
-        """Get a list of device names that are known to testpmd
+        """Get a list of device names that are known to testpmd.
 
-        Uses the device info listed in testpmd and then parses the output to
-        return only the names of the devices.
+        Uses the device info listed in testpmd and then parses the output.
 
         Returns:
-            A list of strings representing device names (e.g. 0000:14:00.1)
+            A list of devices.
         """
         dev_info: str = self.send_command("show device info all")
         dev_list: list[TestPmdDevice] = []
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
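
The ClassVar pattern introduced in this patch keeps new shells small: a subclass only overrides the class variables (and, where needed, _start_application). A hypothetical subclass, not part of this series, mirroring PythonShell:

    from pathlib import PurePath
    from typing import ClassVar

    from framework.remote_session.interactive_shell import InteractiveShell

    class GdbShell(InteractiveShell):
        """A hypothetical GDB interactive shell."""

        #: GDB's prompt.
        _default_prompt: ClassVar[str] = "(gdb)"

        #: This forces the prompt to appear after sending a command.
        _command_extra_chars: ClassVar[str] = "\n"

        #: The GDB executable.
        path: ClassVar[PurePath] = PurePath("gdb")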

* [PATCH v9 13/21] dts: port and virtual device docstring update
  2023-12-04 10:24                 ` [PATCH v9 " Juraj Linkeš
                                     ` (11 preceding siblings ...)
  2023-12-04 10:24                   ` [PATCH v9 12/21] dts: interactive " Juraj Linkeš
@ 2023-12-04 10:24                   ` Juraj Linkeš
  2023-12-04 10:24                   ` [PATCH v9 14/21] dts: cpu " Juraj Linkeš
                                     ` (8 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-12-04 10:24 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/__init__.py       | 17 ++++--
 dts/framework/testbed_model/port.py           | 53 +++++++++++++++----
 dts/framework/testbed_model/virtual_device.py | 17 +++++-
 3 files changed, 72 insertions(+), 15 deletions(-)

diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index 8ced05653b..6086512ca2 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -2,9 +2,20 @@
 # Copyright(c) 2022-2023 University of New Hampshire
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
-This package contains the classes used to model the physical traffic generator,
-system under test and any other components that need to be interacted with.
+"""Testbed modelling.
+
+This package defines the testbed elements DTS works with:
+
+    * A system under test node: :class:`~.sut_node.SutNode`,
+    * A traffic generator node: :class:`~.tg_node.TGNode`,
+    * The ports of network interface cards (NICs) present on nodes: :class:`~.port.Port`,
+    * The logical cores of CPUs present on nodes: :class:`~.cpu.LogicalCore`,
+    * The virtual devices that can be created on nodes: :class:`~.virtual_device.VirtualDevice`,
+    * The operating systems running on nodes: :class:`~.linux_session.LinuxSession`
+      and :class:`~.posix_session.PosixSession`.
+
+DTS needs to be able to connect to nodes and understand some of the hardware present on these nodes
+to properly build and test DPDK.
 """
 
 # pylama:ignore=W0611
diff --git a/dts/framework/testbed_model/port.py b/dts/framework/testbed_model/port.py
index 680c29bfe3..817405bea4 100644
--- a/dts/framework/testbed_model/port.py
+++ b/dts/framework/testbed_model/port.py
@@ -2,6 +2,13 @@
 # Copyright(c) 2022 University of New Hampshire
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""NIC port model.
+
+Basic port information, such as location (the ports are identified by their PCI address on a node),
+drivers and address.
+"""
+
+
 from dataclasses import dataclass
 
 from framework.config import PortConfig
@@ -9,24 +16,35 @@
 
 @dataclass(slots=True, frozen=True)
 class PortIdentifier:
+    """The port identifier.
+
+    Attributes:
+        node: The node where the port resides.
+        pci: The PCI address of the port on `node`.
+    """
+
     node: str
     pci: str
 
 
 @dataclass(slots=True)
 class Port:
-    """
-    identifier: The PCI address of the port on a node.
-
-    os_driver: The driver used by this port when the OS is controlling it.
-        Example: i40e
-    os_driver_for_dpdk: The driver the device must be bound to for DPDK to use it,
-        Example: vfio-pci.
+    """Physical port on a node.
 
-    Note: os_driver and os_driver_for_dpdk may be the same thing.
-        Example: mlx5_core
+    The ports are identified by the node they're on and their PCI addresses. The port on the other
+    side of the connection is also captured here.
+    Each port is serviced by a driver, which may be different for the operating system (`os_driver`)
+    and for DPDK (`os_driver_for_dpdk`). For some devices, they are the same, e.g.: ``mlx5_core``.
 
-    peer: The identifier of a port this port is connected with.
+    Attributes:
+        identifier: The node name and PCI address that identify the port.
+        os_driver: The operating system driver name when the operating system controls the port,
+            e.g.: ``i40e``.
+        os_driver_for_dpdk: The operating system driver name for use with DPDK, e.g.: ``vfio-pci``.
+        peer: The identifier of a port this port is connected with.
+            The `peer` is on a different node.
+        mac_address: The MAC address of the port.
+        logical_name: The logical name of the port. Must be discovered.
     """
 
     identifier: PortIdentifier
@@ -37,6 +55,12 @@ class Port:
     logical_name: str = ""
 
     def __init__(self, node_name: str, config: PortConfig):
+        """Initialize the port from `node_name` and `config`.
+
+        Args:
+            node_name: The name of the port's node.
+            config: The test run configuration of the port.
+        """
         self.identifier = PortIdentifier(
             node=node_name,
             pci=config.pci,
@@ -47,14 +71,23 @@ def __init__(self, node_name: str, config: PortConfig):
 
     @property
     def node(self) -> str:
+        """The node where the port resides."""
         return self.identifier.node
 
     @property
     def pci(self) -> str:
+        """The PCI address of the port."""
         return self.identifier.pci
 
 
 @dataclass(slots=True, frozen=True)
 class PortLink:
+    """The physical, cabled connection between the ports.
+
+    Attributes:
+        sut_port: The port on the SUT node connected to `tg_port`.
+        tg_port: The port on the TG node connected to `sut_port`.
+    """
+
     sut_port: Port
     tg_port: Port
diff --git a/dts/framework/testbed_model/virtual_device.py b/dts/framework/testbed_model/virtual_device.py
index eb664d9f17..e9b5e9c3be 100644
--- a/dts/framework/testbed_model/virtual_device.py
+++ b/dts/framework/testbed_model/virtual_device.py
@@ -1,16 +1,29 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""Virtual devices model.
+
+Alongside support for physical hardware, DPDK can create various virtual devices.
+"""
+
 
 class VirtualDevice(object):
-    """
-    Base class for virtual devices used by DPDK.
+    """Base class for virtual devices used by DPDK.
+
+    Attributes:
+        name: The name of the virtual device.
     """
 
     name: str
 
     def __init__(self, name: str):
+        """Initialize the virtual device.
+
+        Args:
+            name: The name of the virtual device.
+        """
         self.name = name
 
     def __str__(self) -> str:
+        """This corresponds to the name used for DPDK devices."""
         return self.name
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
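
A short sketch tying the dataclasses above together (the values are illustrative; frozen dataclasses make the identifiers immutable):

    from framework.testbed_model.port import PortIdentifier
    from framework.testbed_model.virtual_device import VirtualDevice

    ident = PortIdentifier(node="sut1", pci="0000:00:08.0")
    print(ident.pci)     # "0000:00:08.0"
    # ident.pci = "..."  # would raise dataclasses.FrozenInstanceError

    # str() of a VirtualDevice is the name used for DPDK devices,
    # e.g. in the --vdev EAL argument.
    vdev = VirtualDevice("crypto_openssl")
    print(vdev)          # "crypto_openssl"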

* [PATCH v9 14/21] dts: cpu docstring update
  2023-12-04 10:24                 ` [PATCH v9 " Juraj Linkeš
                                     ` (12 preceding siblings ...)
  2023-12-04 10:24                   ` [PATCH v9 13/21] dts: port and virtual device " Juraj Linkeš
@ 2023-12-04 10:24                   ` Juraj Linkeš
  2023-12-04 10:24                   ` [PATCH v9 15/21] dts: os session " Juraj Linkeš
                                     ` (7 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-12-04 10:24 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/cpu.py | 196 +++++++++++++++++++++--------
 1 file changed, 144 insertions(+), 52 deletions(-)

diff --git a/dts/framework/testbed_model/cpu.py b/dts/framework/testbed_model/cpu.py
index 1b392689f5..9e33b2825d 100644
--- a/dts/framework/testbed_model/cpu.py
+++ b/dts/framework/testbed_model/cpu.py
@@ -1,6 +1,22 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""CPU core representation and filtering.
+
+This module provides a unified representation of logical CPU cores along
+with filtering capabilities.
+
+When symmetric multiprocessing (SMP or multithreading) is enabled on a server,
+the physical CPU cores are split into logical CPU cores with different IDs.
+
+:class:`LogicalCoreCountFilter` filters by the number of logical cores. It's possible to specify
+the socket from which to filter the number of logical cores. It's also possible to not use all
+logical CPU cores from each physical core (e.g. only the first logical core of each physical core).
+
+:class:`LogicalCoreListFilter` filters by logical core IDs. This mostly checks that
+the logical cores are actually present on the server.
+"""
+
 import dataclasses
 from abc import ABC, abstractmethod
 from collections.abc import Iterable, ValuesView
@@ -11,9 +27,17 @@
 
 @dataclass(slots=True, frozen=True)
 class LogicalCore(object):
-    """
-    Representation of a CPU core. A physical core is represented in OS
-    by multiple logical cores (lcores) if CPU multithreading is enabled.
+    """Representation of a logical CPU core.
+
+    A physical core is represented in OS by multiple logical cores (lcores)
+    if CPU multithreading is enabled. When multithreading is disabled, their IDs are the same.
+
+    Attributes:
+        lcore: The logical core ID of a CPU core. It's the same as `core` with
+            disabled multithreading.
+        core: The physical core ID of a CPU core.
+        socket: The physical socket ID where the CPU resides.
+        node: The NUMA node ID where the CPU resides.
     """
 
     lcore: int
@@ -22,27 +46,36 @@ class LogicalCore(object):
     node: int
 
     def __int__(self) -> int:
+        """The CPU is best represented by the logical core, as that's what we configure in EAL."""
         return self.lcore
 
 
 class LogicalCoreList(object):
-    """
-    Convert these options into a list of logical core ids.
-    lcore_list=[LogicalCore1, LogicalCore2] - a list of LogicalCores
-    lcore_list=[0,1,2,3] - a list of int indices
-    lcore_list=['0','1','2-3'] - a list of str indices; ranges are supported
-    lcore_list='0,1,2-3' - a comma delimited str of indices; ranges are supported
-
-    The class creates a unified format used across the framework and allows
-    the user to use either a str representation (using str(instance) or directly
-    in f-strings) or a list representation (by accessing instance.lcore_list).
-    Empty lcore_list is allowed.
+    r"""A unified way to store :class:`LogicalCore`\s.
+
+    Create a unified format used across the framework and allow the user to use
+    either a :class:`str` representation (using ``str(instance)`` or directly in f-strings)
+    or a :class:`list` representation (by accessing the `lcore_list` property,
+    which stores logical core IDs).
     """
 
     _lcore_list: list[int]
     _lcore_str: str
 
     def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str):
+        """Process `lcore_list`, then sort.
+
+        There are four supported logical core list formats::
+
+            lcore_list=[LogicalCore1, LogicalCore2]  # a list of LogicalCores
+            lcore_list=[0,1,2,3]        # a list of int indices
+            lcore_list=['0','1','2-3']  # a list of str indices; ranges are supported
+            lcore_list='0,1,2-3'        # a comma delimited str of indices; ranges are supported
+
+        Args:
+            lcore_list: Various ways to represent multiple logical cores.
+                Empty `lcore_list` is allowed.
+        """
         self._lcore_list = []
         if isinstance(lcore_list, str):
             lcore_list = lcore_list.split(",")
@@ -58,6 +91,7 @@ def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | str):
 
     @property
     def lcore_list(self) -> list[int]:
+        """The logical core IDs."""
         return self._lcore_list
 
     def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]:
@@ -83,28 +117,30 @@ def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> list[str]:
         return formatted_core_list
 
     def __str__(self) -> str:
+        """The consecutive ranges of logical core IDs."""
         return self._lcore_str
 
 
 @dataclasses.dataclass(slots=True, frozen=True)
 class LogicalCoreCount(object):
-    """
-    Define the number of logical cores to use.
-    If sockets is not None, socket_count is ignored.
-    """
+    """Define the number of logical cores per physical cores per sockets."""
 
+    #: Use this many logical cores per each physical core.
     lcores_per_core: int = 1
+    #: Use this many physical cores per each socket.
     cores_per_socket: int = 2
+    #: Use this many sockets.
     socket_count: int = 1
+    #: Use exactly these sockets. This takes precedence over `socket_count`,
+    #: so when `sockets` is not :data:`None`, `socket_count` is ignored.
     sockets: list[int] | None = None
 
 
 class LogicalCoreFilter(ABC):
-    """
-    Filter according to the input filter specifier. Each filter needs to be
-    implemented in a derived class.
-    This class only implements operations common to all filters, such as sorting
-    the list to be filtered beforehand.
+    """Common filtering class.
+
+    Each filter needs to be implemented in a subclass. This base class sorts the list of cores
+    and defines the filtering method, which must be implemented by subclasses.
     """
 
     _filter_specifier: LogicalCoreCount | LogicalCoreList
@@ -116,6 +152,17 @@ def __init__(
         filter_specifier: LogicalCoreCount | LogicalCoreList,
         ascending: bool = True,
     ):
+        """Filter according to the input filter specifier.
+
+        The input `lcore_list` is copied and sorted by physical core before filtering.
+        The list is copied so that the original is left intact.
+
+        Args:
+            lcore_list: The logical CPU cores to filter.
+            filter_specifier: Filter cores from `lcore_list` according to this filter.
+            ascending: Sort cores in ascending order (lowest to highest IDs). If :data:`False`,
+                sort in descending order.
+        """
         self._filter_specifier = filter_specifier
 
         # sorting by core is needed in case hyperthreading is enabled
@@ -124,31 +171,45 @@ def __init__(
 
     @abstractmethod
     def filter(self) -> list[LogicalCore]:
-        """
-        Use self._filter_specifier to filter self._lcores_to_filter
-        and return the list of filtered LogicalCores.
-        self._lcores_to_filter is a sorted copy of the original list,
-        so it may be modified.
+        r"""Filter the cores.
+
+        Use `self._filter_specifier` to filter `self._lcores_to_filter` and return
+        the filtered :class:`LogicalCore`\s.
+        `self._lcores_to_filter` is a sorted copy of the original list, so it may be modified.
+
+        Returns:
+            The filtered cores.
         """
 
 
 class LogicalCoreCountFilter(LogicalCoreFilter):
-    """
+    """Filter cores by specified counts.
+
     Filter the input list of LogicalCores according to specified rules:
-    Use cores from the specified number of sockets or from the specified socket ids.
-    If sockets is specified, it takes precedence over socket_count.
-    From each of those sockets, use only cores_per_socket of cores.
-    And for each core, use lcores_per_core of logical cores. Hypertheading
-    must be enabled for this to take effect.
-    If ascending is True, use cores with the lowest numerical id first
-    and continue in ascending order. If False, start with the highest
-    id and continue in descending order. This ordering affects which
-    sockets to consider first as well.
+
+        * The input `filter_specifier` is :class:`LogicalCoreCount`,
+        * Use cores from the specified number of sockets or from the specified socket ids,
+        * If `sockets` is specified, it takes precedence over `socket_count`,
+        * From each of those sockets, use only `cores_per_socket` of cores,
+        * And for each core, use `lcores_per_core` of logical cores. Hyperthreading
+          must be enabled for this to take effect.
     """
 
     _filter_specifier: LogicalCoreCount
 
     def filter(self) -> list[LogicalCore]:
+        """Filter the cores according to :class:`LogicalCoreCount`.
+
+        Start by filtering the allowed sockets. The cores matching the allowed sockets are returned.
+        The cores of each socket are stored in separate lists.
+
+        Then filter the allowed physical cores from those lists of cores per socket. When filtering
+        physical cores, store the desired number of logical cores per physical core which then
+        together constitute the final filtered list.
+
+        Returns:
+            The filtered cores.
+        """
         sockets_to_filter = self._filter_sockets(self._lcores_to_filter)
         filtered_lcores = []
         for socket_to_filter in sockets_to_filter:
@@ -158,24 +219,37 @@ def filter(self) -> list[LogicalCore]:
     def _filter_sockets(
         self, lcores_to_filter: Iterable[LogicalCore]
     ) -> ValuesView[list[LogicalCore]]:
-        """
-        Remove all lcores that don't match the specified socket(s).
-        If self._filter_specifier.sockets is not None, keep lcores from those sockets,
-        otherwise keep lcores from the first
-        self._filter_specifier.socket_count sockets.
+        """Filter a list of cores per each allowed socket.
+
+        The sockets may be specified in two ways, either a number or a specific list of sockets.
+        In case of a specific list, we just need to return the cores from those sockets.
+        If filtering a number of cores, we need to go through all cores and note which sockets
+        appear and only filter from the first n that appear.
+
+        Args:
+            lcores_to_filter: The cores to filter. These must be sorted by the physical core.
+
+        Returns:
+            A list of lists of logical CPU cores. Each list contains cores from one socket.
         """
         allowed_sockets: set[int] = set()
         socket_count = self._filter_specifier.socket_count
         if self._filter_specifier.sockets:
+            # when sockets in filter is specified, the sockets are already set
             socket_count = len(self._filter_specifier.sockets)
             allowed_sockets = set(self._filter_specifier.sockets)
 
+        # filter socket_count sockets from all sockets by checking the socket of each CPU
         filtered_lcores: dict[int, list[LogicalCore]] = {}
         for lcore in lcores_to_filter:
             if not self._filter_specifier.sockets:
+                # this is when sockets is not set, so we do the actual filtering
+                # when it is set, allowed_sockets is already defined and can't be changed
                 if len(allowed_sockets) < socket_count:
+                    # allowed_sockets is a set, so adding an existing socket won't re-add it
                     allowed_sockets.add(lcore.socket)
             if lcore.socket in allowed_sockets:
+                # separate lcores into sockets; this makes it easier in further processing
                 if lcore.socket in filtered_lcores:
                     filtered_lcores[lcore.socket].append(lcore)
                 else:
@@ -192,12 +266,13 @@ def _filter_sockets(
     def _filter_cores_from_socket(
         self, lcores_to_filter: Iterable[LogicalCore]
     ) -> list[LogicalCore]:
-        """
-        Keep only the first self._filter_specifier.cores_per_socket cores.
-        In multithreaded environments, keep only
-        the first self._filter_specifier.lcores_per_core lcores of those cores.
-        """
+        """Filter a list of cores from the given socket.
+
+        Go through the cores and note how many logical cores per physical core have been filtered.
 
+        Returns:
+            The filtered logical CPU cores.
+        """
         # no need to use ordered dict, from Python3.7 the dict
         # insertion order is preserved (LIFO).
         lcore_count_per_core_map: dict[int, int] = {}
@@ -238,15 +313,21 @@ def _filter_cores_from_socket(
 
 
 class LogicalCoreListFilter(LogicalCoreFilter):
-    """
-    Filter the input list of Logical Cores according to the input list of
-    lcore indices.
-    An empty LogicalCoreList won't filter anything.
+    """Filter the logical CPU cores by logical CPU core IDs.
+
+    This is a simple filter that looks at logical CPU IDs and only filters those that match.
+
+    The input filter is :class:`LogicalCoreList`. An empty LogicalCoreList won't filter anything.
     """
 
     _filter_specifier: LogicalCoreList
 
     def filter(self) -> list[LogicalCore]:
+        """Filter based on logical CPU core ID.
+
+        Returns:
+            The filtered logical CPU cores.
+        """
         if not len(self._filter_specifier.lcore_list):
             return self._lcores_to_filter
 
@@ -269,6 +350,17 @@ def lcore_filter(
     filter_specifier: LogicalCoreCount | LogicalCoreList,
     ascending: bool,
 ) -> LogicalCoreFilter:
+    """Factory for providing the filter that corresponds to `filter_specifier`.
+
+    Args:
+        core_list: The logical CPU cores to filter.
+        filter_specifier: The filter to use.
+        ascending: Sort cores in ascending order (lowest to highest IDs). If :data:`False`,
+            sort in descending order.
+
+    Returns:
+        The filter that corresponds to `filter_specifier`.
+    """
     if isinstance(filter_specifier, LogicalCoreList):
         return LogicalCoreListFilter(core_list, filter_specifier, ascending)
     elif isinstance(filter_specifier, LogicalCoreCount):
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
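
To make the accepted input formats concrete, a standalone sketch (assuming a DTS checkout on the import path; the IDs are illustrative):

    from framework.testbed_model.cpu import LogicalCoreList

    # All of these normalize to the same internal list of logical core IDs.
    print(LogicalCoreList([0, 1, 2, 3]).lcore_list)       # [0, 1, 2, 3]
    print(LogicalCoreList(["0", "1", "2-3"]).lcore_list)  # [0, 1, 2, 3]
    print(LogicalCoreList("0,1,2-3").lcore_list)          # [0, 1, 2, 3]

    # str() collapses consecutive IDs back into ranges.
    print(str(LogicalCoreList("0,1,2-3")))                # e.g. "0-3"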

* [PATCH v9 15/21] dts: os session docstring update
  2023-12-04 10:24                 ` [PATCH v9 " Juraj Linkeš
                                     ` (13 preceding siblings ...)
  2023-12-04 10:24                   ` [PATCH v9 14/21] dts: cpu " Juraj Linkeš
@ 2023-12-04 10:24                   ` Juraj Linkeš
  2023-12-04 10:24                   ` [PATCH v9 16/21] dts: posix and linux sessions " Juraj Linkeš
                                     ` (6 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-12-04 10:24 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/os_session.py | 272 ++++++++++++++++------
 1 file changed, 205 insertions(+), 67 deletions(-)

diff --git a/dts/framework/testbed_model/os_session.py b/dts/framework/testbed_model/os_session.py
index 76e595a518..ac6bb5e112 100644
--- a/dts/framework/testbed_model/os_session.py
+++ b/dts/framework/testbed_model/os_session.py
@@ -2,6 +2,26 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+"""OS-aware remote session.
+
+DPDK supports multiple operating systems, meaning it can run on any of them. This module defines
+the common API that OS-unaware layers use and translates the API into
+OS-aware calls/utility usage.
+
+Note:
+    Running commands with administrative privileges requires OS awareness. This is the only layer
+    that's aware of OS differences, so this is where non-privileged commands get converted
+    to privileged commands.
+
+Example:
+    A user wishes to remove a directory on a remote :class:`~.sut_node.SutNode`.
+    The :class:`~.sut_node.SutNode` object isn't aware what OS the node is running - it delegates
+    the OS translation logic to :attr:`~.node.Node.main_session`. The SUT node calls
+    :meth:`~OSSession.remove_remote_dir` with a generic, OS-unaware path and
+    the :attr:`~.node.Node.main_session` translates that to ``rm -rf`` if the node's OS is Linux
+    and other commands for other OSs. It also translates the path to match the underlying OS.
+"""
+
 from abc import ABC, abstractmethod
 from collections.abc import Iterable
 from ipaddress import IPv4Interface, IPv6Interface
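
The Example in the module docstring above corresponds to a single call in test code (a sketch; `sut_node` stands for a connected SutNode instance):

    # The caller stays OS-unaware; main_session picks the OS-specific command,
    # e.g. `rm -rf` on Linux, and adapts the path to the remote OS.
    sut_node.main_session.remove_remote_dir("/tmp/dpdk-test")
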
@@ -28,10 +48,16 @@
 
 
 class OSSession(ABC):
-    """
-    The OS classes create a DTS node remote session and implement OS specific
+    """OS-unaware to OS-aware translation API definition.
+
+    The OSSession classes create a remote session to a DTS node and implement OS specific
     behavior. There are a few control methods implemented by the base class; the rest need
-    to be implemented by derived classes.
+    to be implemented by subclasses.
+
+    Attributes:
+        name: The name of the session.
+        remote_session: The remote session maintaining the connection to the node.
+        interactive_session: The interactive remote session maintaining the connection to the node.
     """
 
     _config: NodeConfiguration
@@ -46,6 +72,15 @@ def __init__(
         name: str,
         logger: DTSLOG,
     ):
+        """Initialize the OS-aware session.
+
+        Connect to the node right away and also create an interactive remote session.
+
+        Args:
+            node_config: The test run configuration of the node to connect to.
+            name: The name of the session.
+            logger: The logger instance this session will use.
+        """
         self._config = node_config
         self.name = name
         self._logger = logger
@@ -53,15 +88,15 @@ def __init__(
         self.interactive_session = create_interactive_session(node_config, logger)
 
     def close(self, force: bool = False) -> None:
-        """
-        Close the remote session.
+        """Close the underlying remote session.
+
+        Args:
+            force: Force the closure of the connection.
         """
         self.remote_session.close(force)
 
     def is_alive(self) -> bool:
-        """
-        Check whether the remote session is still responding.
-        """
+        """Check whether the underlying remote session is still responding."""
         return self.remote_session.is_alive()
 
     def send_command(
@@ -72,10 +107,23 @@ def send_command(
         verify: bool = False,
         env: dict | None = None,
     ) -> CommandResult:
-        """
-        An all-purpose API in case the command to be executed is already
-        OS-agnostic, such as when the path to the executed command has been
-        constructed beforehand.
+        """An all-purpose API for OS-agnostic commands.
+
+        This can be used to execute a portable command that runs the same way
+        on all operating systems, such as ``python``.
+
+        The :option:`--timeout` command line argument and the :envvar:`DTS_TIMEOUT`
+        environment variable configure the timeout of command execution.
+
+        Args:
+            command: The command to execute.
+            timeout: Wait at most this long in seconds for `command` execution to complete.
+            privileged: Whether to run the command with administrative privileges.
+            verify: If :data:`True`, will check the exit code of the command.
+            env: A dictionary with environment variables to be used with the command execution.
+
+        Raises:
+            RemoteCommandExecutionError: If `verify` is :data:`True` and the command failed.
         """
         if privileged:
             command = self._get_privileged_command(command)
@@ -89,8 +137,20 @@ def create_interactive_shell(
         privileged: bool,
         app_args: str,
     ) -> InteractiveShellType:
-        """
-        See "create_interactive_shell" in SutNode
+        """Factory for interactive session handlers.
+
+        Instantiate `shell_cls` according to the remote OS specifics.
+
+        Args:
+            shell_cls: The class of the shell.
+            timeout: Timeout for reading output from the SSH channel. If no data is
+                received from the buffer within the timeout, an exception is raised.
+            privileged: Whether to run the shell with administrative privileges.
+            app_args: The arguments to be passed to the application.
+
+        Returns:
+            An instance of the desired interactive application shell.
         """
         return shell_cls(
             self.interactive_session.session,
@@ -114,27 +174,42 @@ def _get_privileged_command(command: str) -> str:
 
     @abstractmethod
     def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePath:
-        """
-        Try to find DPDK remote dir in remote_dir.
+        """Try to find DPDK directory in `remote_dir`.
+
+        The directory is the one created by extracting the tarball. The files are usually
+        extracted into a directory starting with ``dpdk-``.
+
+        Returns:
+            The absolute path of the DPDK remote directory, or an empty path if not found.
         """
 
     @abstractmethod
     def get_remote_tmp_dir(self) -> PurePath:
-        """
-        Get the path of the temporary directory of the remote OS.
+        """Get the path of the temporary directory of the remote OS.
+
+        Returns:
+            The absolute path of the temporary directory.
         """
 
     @abstractmethod
     def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
-        """
-        Create extra environment variables needed for the target architecture. Get
-        information from the node if needed.
+        """Create extra environment variables needed for the target architecture.
+
+        Different architectures may require different configuration, such as setting 32-bit CFLAGS.
+
+        Returns:
+            A dictionary mapping environment variable names to their values.
         """
 
     @abstractmethod
     def join_remote_path(self, *args: str | PurePath) -> PurePath:
-        """
-        Join path parts using the path separator that fits the remote OS.
+        """Join path parts using the path separator that fits the remote OS.
+
+        Args:
+            args: Any number of paths to join.
+
+        Returns:
+            The resulting joined path.
         """
 
     @abstractmethod
@@ -143,13 +218,13 @@ def copy_from(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
-        """Copy a file from the remote Node to the local filesystem.
+        """Copy a file from the remote node to the local filesystem.
 
-        Copy source_file from the remote Node associated with this remote
-        session to destination_file on the local filesystem.
+        Copy `source_file` from the remote node associated with this remote
+        session to `destination_file` on the local filesystem.
 
         Args:
-            source_file: the file on the remote Node.
+            source_file: the file on the remote node.
             destination_file: a file or directory path on the local filesystem.
         """
 
@@ -159,14 +234,14 @@ def copy_to(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
-        """Copy a file from local filesystem to the remote Node.
+        """Copy a file from local filesystem to the remote node.
 
-        Copy source_file from local filesystem to destination_file
-        on the remote Node associated with this remote session.
+        Copy `source_file` from local filesystem to `destination_file`
+        on the remote node associated with this remote session.
 
         Args:
             source_file: the file on the local filesystem.
-            destination_file: a file or directory path on the remote Node.
+            destination_file: a file or directory path on the remote node.
         """
 
     @abstractmethod
@@ -176,8 +251,12 @@ def remove_remote_dir(
         recursive: bool = True,
         force: bool = True,
     ) -> None:
-        """
-        Remove remote directory, by default remove recursively and forcefully.
+        """Remove remote directory, by default remove recursively and forcefully.
+
+        Args:
+            remote_dir_path: The path of the directory to remove.
+            recursive: If :data:`True`, also remove all contents inside the directory.
+            force: If :data:`True`, ignore all warnings and try to remove at all costs.
         """
 
     @abstractmethod
@@ -186,9 +265,12 @@ def extract_remote_tarball(
         remote_tarball_path: str | PurePath,
         expected_dir: str | PurePath | None = None,
     ) -> None:
-        """
-        Extract remote tarball in place. If expected_dir is a non-empty string, check
-        whether the dir exists after extracting the archive.
+        """Extract remote tarball in its remote directory.
+
+        Args:
+            remote_tarball_path: The path of the tarball on the remote node.
+            expected_dir: If non-empty, check whether `expected_dir` exists after extracting
+                the archive.
         """
 
     @abstractmethod
@@ -201,69 +283,119 @@ def build_dpdk(
         rebuild: bool = False,
         timeout: float = SETTINGS.compile_timeout,
     ) -> None:
-        """
-        Build DPDK in the input dir with specified environment variables and meson
-        arguments.
+        """Build DPDK on the remote node.
+
+        An extracted DPDK tarball must be present on the node. The build consists of two steps::
+
+            meson setup <meson args> remote_dpdk_dir remote_dpdk_build_dir
+            ninja -C remote_dpdk_build_dir
+
+        The :option:`--compile-timeout` command line argument and the :envvar:`DTS_COMPILE_TIMEOUT`
+        environment variable configure the timeout of DPDK build.
+
+        Args:
+            env_vars: Use these environment variables when building DPDK.
+            meson_args: Use these meson arguments when building DPDK.
+            remote_dpdk_dir: The directory on the remote node where DPDK will be built.
+            remote_dpdk_build_dir: The target build directory on the remote node.
+            rebuild: If :data:`True`, do a subsequent build with ``meson configure`` instead
+                of ``meson setup``.
+            timeout: Wait at most this long in seconds for the build execution to complete.
         """
 
     @abstractmethod
     def get_dpdk_version(self, version_path: str | PurePath) -> str:
-        """
-        Inspect DPDK version on the remote node from version_path.
+        """Inspect the DPDK version on the remote node.
+
+        Args:
+            version_path: The path to the VERSION file containing the DPDK version.
+
+        Returns:
+            The DPDK version.
         """
 
     @abstractmethod
     def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
-        """
-        Compose a list of LogicalCores present on the remote node.
-        If use_first_core is False, the first physical core won't be used.
+        r"""Get the list of :class:`~.cpu.LogicalCore`\s on the remote node.
+
+        Args:
+            use_first_core: If :data:`False`, the first physical core won't be used.
+
+        Returns:
+            The logical cores present on the node.
         """
 
     @abstractmethod
     def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
-        """
-        Kill and cleanup all DPDK apps identified by dpdk_prefix_list. If
-        dpdk_prefix_list is empty, attempt to find running DPDK apps to kill and clean.
+        """Kill and cleanup all DPDK apps.
+
+        Args:
+            dpdk_prefix_list: Kill all apps identified by `dpdk_prefix_list`.
+                If `dpdk_prefix_list` is empty, attempt to find running DPDK apps to kill and clean.
         """
 
     @abstractmethod
     def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
-        """
-        Get the DPDK file prefix that will be used when running DPDK apps.
+        """Make OS-specific modification to the DPDK file prefix.
+
+        Args:
+            dpdk_prefix: The OS-unaware file prefix.
+
+        Returns:
+            The OS-specific file prefix.
         """
 
     @abstractmethod
-    def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
-        """
-        Get the node's Hugepage Size, configure the specified amount of hugepages
+    def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
+        """Configure hugepages on the node.
+
+        Get the node's hugepage size, configure the specified count of hugepages
         if needed and mount the hugepages if needed.
-        If force_first_numa is True, configure hugepages just on the first socket.
+
+        Args:
+            hugepage_count: Configure this many hugepages.
+            force_first_numa: If :data:`True`, configure hugepages just on the first NUMA node.
         """
 
     @abstractmethod
     def get_compiler_version(self, compiler_name: str) -> str:
-        """
-        Get installed version of compiler used for DPDK
+        """Get installed version of compiler used for DPDK.
+
+        Args:
+            compiler_name: The name of the compiler executable.
+
+        Returns:
+            The compiler's version.
         """
 
     @abstractmethod
     def get_node_info(self) -> NodeInfo:
-        """
-        Collect information about the node
+        """Collect additional information about the node.
+
+        Returns:
+            Node information.
         """
 
     @abstractmethod
     def update_ports(self, ports: list[Port]) -> None:
-        """
-        Get additional information about ports:
-            Logical name (e.g. enp7s0) if applicable
-            Mac address
+        """Get additional information about ports from the operating system and update them.
+
+        The additional information is:
+
+            * Logical name (e.g. ``enp7s0``) if applicable,
+            * MAC address.
+
+        Args:
+            ports: The ports to update.
         """
 
     @abstractmethod
     def configure_port_state(self, port: Port, enable: bool) -> None:
-        """
-        Enable/disable port.
+        """Enable/disable `port` in the operating system.
+
+        Args:
+            port: The port to configure.
+            enable: If :data:`True`, enable the port, otherwise shut it down.
         """
 
     @abstractmethod
@@ -273,12 +405,18 @@ def configure_port_ip_address(
         port: Port,
         delete: bool,
     ) -> None:
-        """
-        Configure (add or delete) an IP address of the input port.
+        """Configure an IP address on `port` in the operating system.
+
+        Args:
+            address: The address to configure.
+            port: The port to configure.
+            delete: If :data:`True`, remove the IP address, otherwise add it.
         """
 
     @abstractmethod
     def configure_ipv4_forwarding(self, enable: bool) -> None:
-        """
-        Enable IPv4 forwarding in the underlying OS.
+        """Enable IPv4 forwarding in the operating system.
+
+        Args:
+            enable: If :data:`True`, enable the forwarding, otherwise disable it.
         """
-- 
2.34.1
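
The privilege handling documented in ``send_command`` above can be shown in isolation.
This is a minimal, runnable sketch, not the DTS implementation; ``OSSession`` and
``LinuxSession`` below are simplified stand-ins, with only the ``sudo -- sh -c`` form
taken from the real ``LinuxSession`` (see the next patch)::

    from abc import ABC, abstractmethod

    class OSSession(ABC):
        @staticmethod
        @abstractmethod
        def _get_privileged_command(command: str) -> str:
            """Modify `command` so that it runs with administrative privileges."""

        def send_command(self, command: str, privileged: bool = False) -> str:
            if privileged:
                # The only OS-aware layer: translate to a privileged invocation.
                command = self._get_privileged_command(command)
            return f"would execute: {command}"  # stand-in for the remote session

    class LinuxSession(OSSession):
        @staticmethod
        def _get_privileged_command(command: str) -> str:
            return f"sudo -- sh -c '{command}'"

    print(LinuxSession().send_command("rm -rf /tmp/dpdk-tmp", privileged=True))
    # would execute: sudo -- sh -c 'rm -rf /tmp/dpdk-tmp'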



* [PATCH v9 16/21] dts: posix and linux sessions docstring update
  2023-12-04 10:24                 ` [PATCH v9 " Juraj Linkeš
                                     ` (14 preceding siblings ...)
  2023-12-04 10:24                   ` [PATCH v9 15/21] dts: os session " Juraj Linkeš
@ 2023-12-04 10:24                   ` Juraj Linkeš
  2023-12-04 10:24                   ` [PATCH v9 17/21] dts: node " Juraj Linkeš
                                     ` (5 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-12-04 10:24 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/linux_session.py | 64 +++++++++++-----
 dts/framework/testbed_model/posix_session.py | 81 +++++++++++++++++---
 2 files changed, 114 insertions(+), 31 deletions(-)

diff --git a/dts/framework/testbed_model/linux_session.py b/dts/framework/testbed_model/linux_session.py
index 055765ba2d..0ab59cef85 100644
--- a/dts/framework/testbed_model/linux_session.py
+++ b/dts/framework/testbed_model/linux_session.py
@@ -2,6 +2,13 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+"""Linux OS translator.
+
+Translate OS-unaware calls into Linux calls/utilities. Most Linux distributions are largely
+compliant with POSIX standards, so this module only implements the parts that aren't already
+covered by the intermediate POSIX module.
+"""
+
 import json
 from ipaddress import IPv4Interface, IPv6Interface
 from typing import TypedDict, Union
@@ -17,43 +24,52 @@
 
 
 class LshwConfigurationOutput(TypedDict):
+    """The relevant parts of ``lshw``'s ``configuration`` section."""
+
+    #:
     link: str
 
 
 class LshwOutput(TypedDict):
-    """
-    A model of the relevant information from json lshw output, e.g.:
-    {
-    ...
-    "businfo" : "pci@0000:08:00.0",
-    "logicalname" : "enp8s0",
-    "version" : "00",
-    "serial" : "52:54:00:59:e1:ac",
-    ...
-    "configuration" : {
-      ...
-      "link" : "yes",
-      ...
-    },
-    ...
+    """A model of the relevant information from ``lshw``'s json output.
+
+    Example:
+        ::
+
+            {
+            ...
+            "businfo" : "pci@0000:08:00.0",
+            "logicalname" : "enp8s0",
+            "version" : "00",
+            "serial" : "52:54:00:59:e1:ac",
+            ...
+            "configuration" : {
+              ...
+              "link" : "yes",
+              ...
+            },
+            ...
     """
 
+    #:
     businfo: str
+    #:
     logicalname: NotRequired[str]
+    #:
     serial: NotRequired[str]
+    #:
     configuration: LshwConfigurationOutput
 
 
 class LinuxSession(PosixSession):
-    """
-    The implementation of non-Posix compliant parts of Linux remote sessions.
-    """
+    """The implementation of non-Posix compliant parts of Linux."""
 
     @staticmethod
     def _get_privileged_command(command: str) -> str:
         return f"sudo -- sh -c '{command}'"
 
     def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
+        """Overrides :meth:`~.os_session.OSSession.get_remote_cpus`."""
         cpu_info = self.send_command("lscpu -p=CPU,CORE,SOCKET,NODE|grep -v \\#").stdout
         lcores = []
         for cpu_line in cpu_info.splitlines():
@@ -65,18 +81,20 @@ def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
         return lcores
 
     def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
+        """Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`."""
         return dpdk_prefix
 
-    def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
+    def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> None:
+        """Overrides :meth:`~.os_session.OSSession.setup_hugepages`."""
         self._logger.info("Getting Hugepage information.")
         hugepage_size = self._get_hugepage_size()
         hugepages_total = self._get_hugepages_total()
         self._numa_nodes = self._get_numa_nodes()
 
-        if force_first_numa or hugepages_total != hugepage_amount:
+        if force_first_numa or hugepages_total != hugepage_count:
             # when forcing numa, we need to clear existing hugepages regardless
             # of size, so they can be moved to the first numa node
-            self._configure_huge_pages(hugepage_amount, hugepage_size, force_first_numa)
+            self._configure_huge_pages(hugepage_count, hugepage_size, force_first_numa)
         else:
             self._logger.info("Hugepages already configured.")
         self._mount_huge_pages()
@@ -132,6 +150,7 @@ def _configure_huge_pages(self, amount: int, size: int, force_first_numa: bool)
         self.send_command(f"echo {amount} | tee {hugepage_config_path}", privileged=True)
 
     def update_ports(self, ports: list[Port]) -> None:
+        """Overrides :meth:`~.os_session.OSSession.update_ports`."""
         self._logger.debug("Gathering port info.")
         for port in ports:
             assert port.node == self.name, "Attempted to gather port info on the wrong node"
@@ -161,6 +180,7 @@ def _update_port_attr(self, port: Port, attr_value: str | None, attr_name: str)
             )
 
     def configure_port_state(self, port: Port, enable: bool) -> None:
+        """Overrides :meth:`~.os_session.OSSession.configure_port_state`."""
         state = "up" if enable else "down"
         self.send_command(f"ip link set dev {port.logical_name} {state}", privileged=True)
 
@@ -170,6 +190,7 @@ def configure_port_ip_address(
         port: Port,
         delete: bool,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.configure_port_ip_address`."""
         command = "del" if delete else "add"
         self.send_command(
             f"ip address {command} {address} dev {port.logical_name}",
@@ -178,5 +199,6 @@ def configure_port_ip_address(
         )
 
     def configure_ipv4_forwarding(self, enable: bool) -> None:
+        """Overrides :meth:`~.os_session.OSSession.configure_ipv4_forwarding`."""
         state = 1 if enable else 0
         self.send_command(f"sysctl -w net.ipv4.ip_forward={state}", privileged=True)
diff --git a/dts/framework/testbed_model/posix_session.py b/dts/framework/testbed_model/posix_session.py
index 5657cc0bc9..d279bb8b53 100644
--- a/dts/framework/testbed_model/posix_session.py
+++ b/dts/framework/testbed_model/posix_session.py
@@ -2,6 +2,15 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+"""POSIX compliant OS translator.
+
+Translates OS-unaware calls into POSIX compliant calls/utilities. POSIX is a set of standards
+for portability between Unix operating systems, which not all Linux distributions
+(or the tools most frequently bundled with said distributions) adhere to. Most Linux
+distributions are largely compliant, though.
+This intermediate module implements the common parts of mostly POSIX compliant distributions.
+"""
+
 import re
 from collections.abc import Iterable
 from pathlib import PurePath, PurePosixPath
@@ -15,13 +24,21 @@
 
 
 class PosixSession(OSSession):
-    """
-    An intermediary class implementing the Posix compliant parts of
-    Linux and other OS remote sessions.
-    """
+    """An intermediary class implementing the POSIX standard."""
 
     @staticmethod
     def combine_short_options(**opts: bool) -> str:
+        """Combine shell options into one argument.
+
+        These are options such as ``-x``, ``-v``, ``-f`` which are combined into ``-xvf``.
+
+        Args:
+            opts: The keys are option names (usually one letter) and the bool values indicate
+                whether to include the option in the resulting argument.
+
+        Returns:
+            The options combined into one argument.
+        """
         ret_opts = ""
         for opt, include in opts.items():
             if include:
@@ -33,17 +50,19 @@ def combine_short_options(**opts: bool) -> str:
         return ret_opts
 
     def guess_dpdk_remote_dir(self, remote_dir: str | PurePath) -> PurePosixPath:
+        """Overrides :meth:`~.os_session.OSSession.guess_dpdk_remote_dir`."""
         remote_guess = self.join_remote_path(remote_dir, "dpdk-*")
         result = self.send_command(f"ls -d {remote_guess} | tail -1")
         return PurePosixPath(result.stdout)
 
     def get_remote_tmp_dir(self) -> PurePosixPath:
+        """Overrides :meth:`~.os_session.OSSession.get_remote_tmp_dir`."""
         return PurePosixPath("/tmp")
 
     def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
-        """
-        Create extra environment variables needed for i686 arch build. Get information
-        from the node if needed.
+        """Overrides :meth:`~.os_session.OSSession.get_dpdk_build_env_vars`.
+
+        Supported architecture: ``i686``.
         """
         env_vars = {}
         if arch == Architecture.i686:
@@ -63,6 +82,7 @@ def get_dpdk_build_env_vars(self, arch: Architecture) -> dict:
         return env_vars
 
     def join_remote_path(self, *args: str | PurePath) -> PurePosixPath:
+        """Overrides :meth:`~.os_session.OSSession.join_remote_path`."""
         return PurePosixPath(*args)
 
     def copy_from(
@@ -70,6 +90,7 @@ def copy_from(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.copy_from`."""
         self.remote_session.copy_from(source_file, destination_file)
 
     def copy_to(
@@ -77,6 +98,7 @@ def copy_to(
         source_file: str | PurePath,
         destination_file: str | PurePath,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.copy_to`."""
         self.remote_session.copy_to(source_file, destination_file)
 
     def remove_remote_dir(
@@ -85,6 +107,7 @@ def remove_remote_dir(
         recursive: bool = True,
         force: bool = True,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.remove_remote_dir`."""
         opts = PosixSession.combine_short_options(r=recursive, f=force)
         self.send_command(f"rm{opts} {remote_dir_path}")
 
@@ -93,6 +116,7 @@ def extract_remote_tarball(
         remote_tarball_path: str | PurePath,
         expected_dir: str | PurePath | None = None,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.extract_remote_tarball`."""
         self.send_command(
             f"tar xfm {remote_tarball_path} -C {PurePosixPath(remote_tarball_path).parent}",
             60,
@@ -109,6 +133,7 @@ def build_dpdk(
         rebuild: bool = False,
         timeout: float = SETTINGS.compile_timeout,
     ) -> None:
+        """Overrides :meth:`~.os_session.OSSession.build_dpdk`."""
         try:
             if rebuild:
                 # reconfigure, then build
@@ -138,10 +163,12 @@ def build_dpdk(
             raise DPDKBuildError(f"DPDK build failed when doing '{e.command}'.")
 
     def get_dpdk_version(self, build_dir: str | PurePath) -> str:
+        """Overrides :meth:`~.os_session.OSSession.get_dpdk_version`."""
         out = self.send_command(f"cat {self.join_remote_path(build_dir, 'VERSION')}", verify=True)
         return out.stdout
 
     def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
+        """Overrides :meth:`~.os_session.OSSession.kill_cleanup_dpdk_apps`."""
         self._logger.info("Cleaning up DPDK apps.")
         dpdk_runtime_dirs = self._get_dpdk_runtime_dirs(dpdk_prefix_list)
         if dpdk_runtime_dirs:
@@ -153,6 +180,14 @@ def kill_cleanup_dpdk_apps(self, dpdk_prefix_list: Iterable[str]) -> None:
             self._remove_dpdk_runtime_dirs(dpdk_runtime_dirs)
 
     def _get_dpdk_runtime_dirs(self, dpdk_prefix_list: Iterable[str]) -> list[PurePosixPath]:
+        """Find runtime directories DPDK apps are currently using.
+
+        Args:
+              dpdk_prefix_list: The prefixes DPDK apps were started with.
+
+        Returns:
+            The paths of DPDK apps' runtime dirs.
+        """
         prefix = PurePosixPath("/var", "run", "dpdk")
         if not dpdk_prefix_list:
             remote_prefixes = self._list_remote_dirs(prefix)
@@ -164,9 +199,13 @@ def _get_dpdk_runtime_dirs(self, dpdk_prefix_list: Iterable[str]) -> list[PurePo
         return [PurePosixPath(prefix, dpdk_prefix) for dpdk_prefix in dpdk_prefix_list]
 
     def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None:
-        """
-        Return a list of directories of the remote_dir.
-        If remote_path doesn't exist, return None.
+        """Contents of remote_path.
+
+        Args:
+            remote_path: List the contents of this path.
+
+        Returns:
+            The directories in `remote_path`, or :data:`None` if `remote_path` doesn't exist.
         """
         out = self.send_command(f"ls -l {remote_path} | awk '/^d/ {{print $NF}}'").stdout
         if "No such file or directory" in out:
@@ -175,6 +214,17 @@ def _list_remote_dirs(self, remote_path: str | PurePath) -> list[str] | None:
             return out.splitlines()
 
     def _get_dpdk_pids(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> list[int]:
+        """Find PIDs of running DPDK apps.
+
+        Look at each "config" file found in dpdk_runtime_dirs and find the PIDs of processes
+        that opened those file.
+
+        Args:
+            dpdk_runtime_dirs: The paths of DPDK apps' runtime dirs.
+
+        Returns:
+            The PIDs of running DPDK apps.
+        """
         pids = []
         pid_regex = r"p(\d+)"
         for dpdk_runtime_dir in dpdk_runtime_dirs:
@@ -193,6 +243,14 @@ def _remote_files_exists(self, remote_path: PurePath) -> bool:
         return not result.return_code
 
     def _check_dpdk_hugepages(self, dpdk_runtime_dirs: Iterable[str | PurePath]) -> None:
+        """Check there aren't any leftover hugepages.
+
+        If any hugepages are found, emit a warning. Hugepage usage is read from the
+        "hugepage_info" file in each of `dpdk_runtime_dirs`.
+
+        Args:
+            dpdk_runtime_dirs: The paths of DPDK apps' runtime dirs.
+        """
         for dpdk_runtime_dir in dpdk_runtime_dirs:
             hugepage_info = PurePosixPath(dpdk_runtime_dir, "hugepage_info")
             if self._remote_files_exists(hugepage_info):
@@ -208,9 +266,11 @@ def _remove_dpdk_runtime_dirs(self, dpdk_runtime_dirs: Iterable[str | PurePath])
             self.remove_remote_dir(dpdk_runtime_dir)
 
     def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
+        """Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`."""
         return ""
 
     def get_compiler_version(self, compiler_name: str) -> str:
+        """Overrides :meth:`~.os_session.OSSession.get_compiler_version`."""
         match compiler_name:
             case "gcc":
                 return self.send_command(
@@ -228,6 +288,7 @@ def get_compiler_version(self, compiler_name: str) -> str:
                 raise ValueError(f"Unknown compiler {compiler_name}")
 
     def get_node_info(self) -> NodeInfo:
+        """Overrides :meth:`~.os_session.OSSession.get_node_info`."""
         os_release_info = self.send_command(
             "awk -F= '$1 ~ /^NAME$|^VERSION$/ {print $2}' /etc/os-release",
             SETTINGS.timeout,
-- 
2.34.1
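
``combine_short_options`` above is small enough for a short usage sketch. The body below is
reconstructed rather than copied verbatim, but it's consistent with the docstring and with the
usage in ``remove_remote_dir`` (a leading space and dash are emitted only when at least one
option is included)::

    def combine_short_options(**opts: bool) -> str:
        """Combine short shell options (e.g. r=True, f=True) into ' -rf'."""
        ret_opts = ""
        for opt, include in opts.items():
            if include:
                ret_opts = f"{ret_opts}{opt}"
        if ret_opts:
            ret_opts = f" -{ret_opts}"
        return ret_opts

    print(f"rm{combine_short_options(r=True, f=True)} /tmp/dpdk-testdir")
    # rm -rf /tmp/dpdk-testdir
    print(f"rm{combine_short_options(r=False, f=False)} /tmp/some-file")
    # rm /tmp/some-file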



* [PATCH v9 17/21] dts: node docstring update
  2023-12-04 10:24                 ` [PATCH v9 " Juraj Linkeš
                                     ` (15 preceding siblings ...)
  2023-12-04 10:24                   ` [PATCH v9 16/21] dts: posix and linux sessions " Juraj Linkeš
@ 2023-12-04 10:24                   ` Juraj Linkeš
  2023-12-04 10:24                   ` [PATCH v9 18/21] dts: sut and tg nodes " Juraj Linkeš
                                     ` (4 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-12-04 10:24 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/node.py | 191 +++++++++++++++++++---------
 1 file changed, 131 insertions(+), 60 deletions(-)

diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index b313b5ad54..1a55fadf78 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -3,8 +3,13 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
-"""
-A node is a generic host that DTS connects to and manages.
+"""Common functionality for node management.
+
+A node is any host/server DTS connects to.
+
+The base class, :class:`Node`, provides features common to all nodes and is supposed
+to be extended by subclasses with features specific to each node type.
+The :func:`~Node.skip_setup` decorator can be used without subclassing.
 """
 
 from abc import ABC
@@ -35,10 +40,22 @@
 
 
 class Node(ABC):
-    """
-    Basic class for node management. This class implements methods that
-    manage a node, such as information gathering (of CPU/PCI/NIC) and
-    environment setup.
+    """The base class for node management.
+
+    It shouldn't be instantiated, but rather subclassed.
+    It implements common methods to manage any node:
+
+        * Connection to the node,
+        * Hugepages setup.
+
+    Attributes:
+        main_session: The primary OS-aware remote session used to communicate with the node.
+        config: The node configuration.
+        name: The name of the node.
+        lcores: The list of logical cores that DTS can use on the node.
+            It's derived from logical cores present on the node and the test run configuration.
+        ports: The ports of this node specified in the test run configuration.
+        virtual_devices: The virtual devices used on the node.
     """
 
     main_session: OSSession
@@ -52,6 +69,17 @@ class Node(ABC):
     virtual_devices: list[VirtualDevice]
 
     def __init__(self, node_config: NodeConfiguration):
+        """Connect to the node and gather info during initialization.
+
+        Extra gathered information:
+
+        * The list of available logical CPUs. This is then filtered by
+          the ``lcores`` configuration in the YAML test run configuration file,
+        * Information about ports from the YAML test run configuration file.
+
+        Args:
+            node_config: The node's test run configuration.
+        """
         self.config = node_config
         self.name = node_config.name
         self._logger = getLogger(self.name)
@@ -60,7 +88,7 @@ def __init__(self, node_config: NodeConfiguration):
         self._logger.info(f"Connected to node: {self.name}")
 
         self._get_remote_cpus()
-        # filter the node lcores according to user config
+        # filter the node lcores according to the test run configuration
         self.lcores = LogicalCoreListFilter(
             self.lcores, LogicalCoreList(self.config.lcores)
         ).filter()
@@ -76,9 +104,14 @@ def _init_ports(self) -> None:
             self.configure_port_state(port)
 
     def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
-        """
-        Perform the execution setup that will be done for each execution
-        this node is part of.
+        """Execution setup steps.
+
+        Configure hugepages and call :meth:`_set_up_execution` where
+        the rest of the configuration steps (if any) are implemented.
+
+        Args:
+            execution_config: The execution test run configuration according to which
+                the setup steps will be taken.
         """
         self._setup_hugepages()
         self._set_up_execution(execution_config)
@@ -87,54 +120,70 @@ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
             self.virtual_devices.append(VirtualDevice(vdev))
 
     def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional execution setup steps for subclasses.
+
+        Subclasses should override this if they need additional execution setup steps.
         """
 
     def tear_down_execution(self) -> None:
-        """
-        Perform the execution teardown that will be done after each execution
-        this node is part of concludes.
+        """Execution teardown steps.
+
+        There are currently no execution teardown steps common to all DTS node types.
         """
         self.virtual_devices = []
         self._tear_down_execution()
 
     def _tear_down_execution(self) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional execution teardown steps for subclasses.
+
+        Subclasses should override this if they need additional execution teardown steps.
         """
 
     def set_up_build_target(self, build_target_config: BuildTargetConfiguration) -> None:
-        """
-        Perform the build target setup that will be done for each build target
-        tested on this node.
+        """Build target setup steps.
+
+        There are currently no build target setup steps common to all DTS node types.
+
+        Args:
+            build_target_config: The build target test run configuration according to which
+                the setup steps will be taken.
         """
         self._set_up_build_target(build_target_config)
 
     def _set_up_build_target(self, build_target_config: BuildTargetConfiguration) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional build target setup steps for subclasses.
+
+        Subclasses should override this if they need additional build target setup steps.
         """
 
     def tear_down_build_target(self) -> None:
-        """
-        Perform the build target teardown that will be done after each build target
-        tested on this node.
+        """Build target teardown steps.
+
+        There are currently no build target teardown steps common to all DTS node types.
         """
         self._tear_down_build_target()
 
     def _tear_down_build_target(self) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Optional additional build target teardown steps for subclasses.
+
+        Subclasses should override this if they need additional build target teardown steps.
         """
 
     def create_session(self, name: str) -> OSSession:
-        """
-        Create and return a new OSSession tailored to the remote OS.
+        """Create and return a new OS-aware remote session.
+
+        The returned session won't be used by the node creating it. The session must be used by
+        the caller. The session will be maintained for the entire lifecycle of the node object,
+        at the end of which the session will be cleaned up automatically.
+
+        Note:
+            Any number of these supplementary sessions may be created.
+
+        Args:
+            name: The name of the session.
+
+        Returns:
+            A new OS-aware remote session.
         """
         session_name = f"{self.name} {name}"
         connection = create_session(
@@ -152,19 +201,19 @@ def create_interactive_shell(
         privileged: bool = False,
         app_args: str = "",
     ) -> InteractiveShellType:
-        """Create a handler for an interactive session.
+        """Factory for interactive session handlers.
 
-        Instantiate shell_cls according to the remote OS specifics.
+        Instantiate `shell_cls` according to the remote OS specifics.
 
         Args:
             shell_cls: The class of the shell.
-            timeout: Timeout for reading output from the SSH channel. If you are
-                reading from the buffer and don't receive any data within the timeout
-                it will throw an error.
+            timeout: Timeout for reading output from the SSH channel. If no data is received
+                from the buffer within the timeout, an exception is raised.
             privileged: Whether to run the shell with administrative privileges.
             app_args: The arguments to be passed to the application.
+
         Returns:
-            Instance of the desired interactive application.
+            An instance of the desired interactive application shell.
         """
         if not shell_cls.dpdk_app:
             shell_cls.path = self.main_session.join_remote_path(shell_cls.path)
@@ -181,14 +230,22 @@ def filter_lcores(
         filter_specifier: LogicalCoreCount | LogicalCoreList,
         ascending: bool = True,
     ) -> list[LogicalCore]:
-        """
-        Filter the LogicalCores found on the Node according to
-        a LogicalCoreCount or a LogicalCoreList.
+        """Filter the node's logical cores that DTS can use.
+
+        Logical cores that DTS can use are the ones that are present on the node, but filtered
+        according to the test run configuration. The `filter_specifier` will filter cores from
+        those logical cores.
+
+        Args:
+            filter_specifier: Two different filters can be used, one that specifies the number
+                of logical cores per core, cores per socket and the number of sockets,
+                and another one that specifies a logical core list.
+            ascending: If :data:`True`, use cores with the lowest numerical id first and continue
+                in ascending order. If :data:`False`, start with the highest id and continue
+                in descending order. This ordering affects which sockets to consider first as well.
 
-        If ascending is True, use cores with the lowest numerical id first
-        and continue in ascending order. If False, start with the highest
-        id and continue in descending order. This ordering affects which
-        sockets to consider first as well.
+        Returns:
+            The filtered logical cores.
         """
         self._logger.debug(f"Filtering {filter_specifier} from {self.lcores}.")
         return lcore_filter(
@@ -198,17 +255,14 @@ def filter_lcores(
         ).filter()
 
     def _get_remote_cpus(self) -> None:
-        """
-        Scan CPUs in the remote OS and store a list of LogicalCores.
-        """
+        """Scan CPUs in the remote OS and store a list of LogicalCores."""
         self._logger.info("Getting CPU information.")
         self.lcores = self.main_session.get_remote_cpus(self.config.use_first_core)
 
     def _setup_hugepages(self) -> None:
-        """
-        Setup hugepages on the Node. Different architectures can supply different
-        amounts of memory for hugepages and numa-based hugepage allocation may need
-        to be considered.
+        """Setup hugepages on the node.
+
+        Configure the hugepages only if they're specified in the node's test run configuration.
         """
         if self.config.hugepages:
             self.main_session.setup_hugepages(
@@ -216,8 +270,11 @@ def _setup_hugepages(self) -> None:
             )
 
     def configure_port_state(self, port: Port, enable: bool = True) -> None:
-        """
-        Enable/disable port.
+        """Enable/disable `port`.
+
+        Args:
+            port: The port to enable/disable.
+            enable: :data:`True` to enable, :data:`False` to disable.
         """
         self.main_session.configure_port_state(port, enable)
 
@@ -227,15 +284,17 @@ def configure_port_ip_address(
         port: Port,
         delete: bool = False,
     ) -> None:
-        """
-        Configure the IP address of a port on this node.
+        """Add an IP address to `port` on this node.
+
+        Args:
+            address: The IP address with mask in CIDR format. Can be either IPv4 or IPv6.
+            port: The port to which to add the address.
+            delete: If :data:`True`, will delete the address from the port instead of adding it.
         """
         self.main_session.configure_port_ip_address(address, port, delete)
 
     def close(self) -> None:
-        """
-        Close all connections and free other resources.
-        """
+        """Close all connections and free other resources."""
         if self.main_session:
             self.main_session.close()
         for session in self._other_sessions:
@@ -244,6 +303,11 @@ def close(self) -> None:
 
     @staticmethod
     def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
+        """Skip the decorated function.
+
+        The :option:`--skip-setup` command line argument and the :envvar:`DTS_SKIP_SETUP`
+        environment variable enable the decorator.
+        """
         if SETTINGS.skip_setup:
             return lambda *args: None
         else:
@@ -251,6 +315,13 @@ def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
 
 
 def create_session(node_config: NodeConfiguration, name: str, logger: DTSLOG) -> OSSession:
+    """Factory for OS-aware sessions.
+
+    Args:
+        node_config: The test run configuration of the node to connect to.
+        name: The name of the session.
+        logger: The logger instance this session will use.
+    """
     match node_config.os:
         case OS.linux:
             return LinuxSession(node_config, name, logger)
-- 
2.34.1
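
The ``skip_setup`` decorator documented above is compact enough to demonstrate standalone.
A minimal sketch, assuming a simplified ``_Settings`` stand-in for the DTS settings object::

    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class _Settings:
        skip_setup: bool = True  # normally driven by --skip-setup / DTS_SKIP_SETUP

    SETTINGS = _Settings()

    def skip_setup(func: Callable[..., Any]) -> Callable[..., Any]:
        """Return a no-op in place of `func` when setup steps are skipped."""
        if SETTINGS.skip_setup:
            return lambda *args: None
        return func

    @skip_setup
    def _copy_dpdk_tarball() -> None:
        print("copying DPDK tarball")

    _copy_dpdk_tarball()  # prints nothing while skip_setup is True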



* [PATCH v9 18/21] dts: sut and tg nodes docstring update
  2023-12-04 10:24                 ` [PATCH v9 " Juraj Linkeš
                                     ` (16 preceding siblings ...)
  2023-12-04 10:24                   ` [PATCH v9 17/21] dts: node " Juraj Linkeš
@ 2023-12-04 10:24                   ` Juraj Linkeš
  2023-12-04 10:24                   ` [PATCH v9 19/21] dts: base traffic generators " Juraj Linkeš
                                     ` (3 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-12-04 10:24 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/framework/testbed_model/sut_node.py | 230 ++++++++++++++++--------
 dts/framework/testbed_model/tg_node.py  |  42 +++--
 2 files changed, 176 insertions(+), 96 deletions(-)

diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 5ce9446dba..c4acea38d1 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -3,6 +3,14 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+"""System under test (DPDK + hardware) node.
+
+A system under test (SUT) is the combination of DPDK
+and the hardware we're testing with DPDK (NICs, crypto and other devices).
+An SUT node is where this SUT runs.
+"""
+
+
 import os
 import tarfile
 import time
@@ -26,6 +34,11 @@
 
 
 class EalParameters(object):
+    """The environment abstraction layer parameters.
+
+    Converting an instance to a string produces the EAL parameter string for the command line.
+    """
+
     def __init__(
         self,
         lcore_list: LogicalCoreList,
@@ -35,21 +48,23 @@ def __init__(
         vdevs: list[VirtualDevice],
         other_eal_param: str,
     ):
-        """
-        Generate eal parameters character string;
-        :param lcore_list: the list of logical cores to use.
-        :param memory_channels: the number of memory channels to use.
-        :param prefix: set file prefix string, eg:
-                        prefix='vf'
-        :param no_pci: switch of disable PCI bus eg:
-                        no_pci=True
-        :param vdevs: virtual device list, eg:
-                        vdevs=[
-                            VirtualDevice('net_ring0'),
-                            VirtualDevice('net_ring1')
-                        ]
-        :param other_eal_param: user defined DPDK eal parameters, eg:
-                        other_eal_param='--single-file-segments'
+        """Initialize the parameters according to inputs.
+
+        Process the parameters into the format used on the command line.
+
+        Args:
+            lcore_list: The list of logical cores to use.
+            memory_channels: The number of memory channels to use.
+            prefix: Set the file prefix string with which to start DPDK, e.g.: ``prefix='vf'``.
+            no_pci: Switch to disable the PCI bus, e.g.: ``no_pci=True``.
+            vdevs: Virtual devices, e.g.::
+
+                vdevs=[
+                    VirtualDevice('net_ring0'),
+                    VirtualDevice('net_ring1')
+                ]
+            other_eal_param: User-defined DPDK EAL parameters, e.g.:
+                ``other_eal_param='--single-file-segments'``.
         """
         self._lcore_list = f"-l {lcore_list}"
         self._memory_channels = f"-n {memory_channels}"
@@ -61,6 +76,7 @@ def __init__(
         self._other_eal_param = other_eal_param
 
     def __str__(self) -> str:
+        """Create the EAL string."""
         return (
             f"{self._lcore_list} "
             f"{self._memory_channels} "
@@ -72,11 +88,21 @@ def __str__(self) -> str:
 
 
 class SutNode(Node):
-    """
-    A class for managing connections to the System under Test, providing
-    methods that retrieve the necessary information about the node (such as
-    CPU, memory and NIC details) and configuration capabilities.
-    Another key capability is building DPDK according to given build target.
+    """The system under test node.
+
+    The SUT node extends :class:`Node` with DPDK specific features:
+
+        * DPDK build,
+        * Gathering of DPDK build info,
+        * The running of DPDK apps, interactively or as one-shot executions,
+        * DPDK apps cleanup.
+
+    The :option:`--tarball` command line argument and the :envvar:`DTS_DPDK_TARBALL`
+    environment variable configure the path to the DPDK tarball
+    or the git commit ID, tag ID or tree ID to test.
+
+    Attributes:
+        config: The SUT node configuration.
     """
 
     config: SutNodeConfiguration
@@ -94,6 +120,11 @@ class SutNode(Node):
     _path_to_devbind_script: PurePath | None
 
     def __init__(self, node_config: SutNodeConfiguration):
+        """Extend the constructor with SUT node specifics.
+
+        Args:
+            node_config: The SUT node's test run configuration.
+        """
         super(SutNode, self).__init__(node_config)
         self._dpdk_prefix_list = []
         self._build_target_config = None
@@ -113,6 +144,12 @@ def __init__(self, node_config: SutNodeConfiguration):
 
     @property
     def _remote_dpdk_dir(self) -> PurePath:
+        """The remote DPDK dir.
+
+        This internal property should be set after extracting the DPDK tarball. If it's not set,
+        that implies the DPDK setup step has been skipped, in which case we can guess where
+        a previous build was located.
+        """
         if self.__remote_dpdk_dir is None:
             self.__remote_dpdk_dir = self._guess_dpdk_remote_dir()
         return self.__remote_dpdk_dir
@@ -123,6 +160,11 @@ def _remote_dpdk_dir(self, value: PurePath) -> None:
 
     @property
     def remote_dpdk_build_dir(self) -> PurePath:
+        """The remote DPDK build directory.
+
+        This is the directory where DPDK was built.
+        We assume it was built in a subdirectory of the extracted tarball.
+        """
         if self._build_target_config:
             return self.main_session.join_remote_path(
                 self._remote_dpdk_dir, self._build_target_config.name
@@ -132,18 +174,21 @@ def remote_dpdk_build_dir(self) -> PurePath:
 
     @property
     def dpdk_version(self) -> str:
+        """Last built DPDK version."""
         if self._dpdk_version is None:
             self._dpdk_version = self.main_session.get_dpdk_version(self._remote_dpdk_dir)
         return self._dpdk_version
 
     @property
     def node_info(self) -> NodeInfo:
+        """Additional node information."""
         if self._node_info is None:
             self._node_info = self.main_session.get_node_info()
         return self._node_info
 
     @property
     def compiler_version(self) -> str:
+        """The node's compiler version."""
         if self._compiler_version is None:
             if self._build_target_config is not None:
                 self._compiler_version = self.main_session.get_compiler_version(
@@ -158,6 +203,7 @@ def compiler_version(self) -> str:
 
     @property
     def path_to_devbind_script(self) -> PurePath:
+        """The path to the dpdk-devbind.py script on the node."""
         if self._path_to_devbind_script is None:
             self._path_to_devbind_script = self.main_session.join_remote_path(
                 self._remote_dpdk_dir, "usertools", "dpdk-devbind.py"
@@ -165,6 +211,11 @@ def path_to_devbind_script(self) -> PurePath:
         return self._path_to_devbind_script
 
     def get_build_target_info(self) -> BuildTargetInfo:
+        """Get additional build target information.
+
+        Returns:
+            The build target information.
+        """
         return BuildTargetInfo(
             dpdk_version=self.dpdk_version, compiler_version=self.compiler_version
         )
@@ -173,8 +224,9 @@ def _guess_dpdk_remote_dir(self) -> PurePath:
         return self.main_session.guess_dpdk_remote_dir(self._remote_tmp_dir)
 
     def _set_up_build_target(self, build_target_config: BuildTargetConfiguration) -> None:
-        """
-        Setup DPDK on the SUT node.
+        """Setup DPDK on the SUT node.
+
+        Additional build target setup steps on top of those in :class:`Node`.
         """
         # we want to ensure that dpdk_version and compiler_version is reset for new
         # build targets
@@ -186,16 +238,14 @@ def _set_up_build_target(self, build_target_config: BuildTargetConfiguration) ->
         self.bind_ports_to_driver()
 
     def _tear_down_build_target(self) -> None:
-        """
-        This method exists to be optionally overwritten by derived classes and
-        is not decorated so that the derived class doesn't have to use the decorator.
+        """Bind ports to the operating system drivers.
+
+        Additional build target teardown steps on top of those in :class:`Node`.
         """
         self.bind_ports_to_driver(for_dpdk=False)
 
     def _configure_build_target(self, build_target_config: BuildTargetConfiguration) -> None:
-        """
-        Populate common environment variables and set build target config.
-        """
+        """Populate common environment variables and set build target config."""
         self._env_vars = {}
         self._build_target_config = build_target_config
         self._env_vars.update(self.main_session.get_dpdk_build_env_vars(build_target_config.arch))
@@ -207,9 +257,7 @@ def _configure_build_target(self, build_target_config: BuildTargetConfiguration)
 
     @Node.skip_setup
     def _copy_dpdk_tarball(self) -> None:
-        """
-        Copy to and extract DPDK tarball on the SUT node.
-        """
+        """Copy to and extract DPDK tarball on the SUT node."""
         self._logger.info("Copying DPDK tarball to SUT.")
         self.main_session.copy_to(SETTINGS.dpdk_tarball_path, self._remote_tmp_dir)
 
@@ -238,8 +286,9 @@ def _copy_dpdk_tarball(self) -> None:
 
     @Node.skip_setup
     def _build_dpdk(self) -> None:
-        """
-        Build DPDK. Uses the already configured target. Assumes that the tarball has
+        """Build DPDK.
+
+        Uses the already configured target. Assumes that the tarball has
         already been copied to and extracted on the SUT node.
         """
         self.main_session.build_dpdk(
@@ -250,15 +299,19 @@ def _build_dpdk(self) -> None:
         )
 
     def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePath:
-        """
-        Build one or all DPDK apps. Requires DPDK to be already built on the SUT node.
-        When app_name is 'all', build all example apps.
-        When app_name is any other string, tries to build that example app.
-        Return the directory path of the built app. If building all apps, return
-        the path to the examples directory (where all apps reside).
-        The meson_dpdk_args are keyword arguments
-        found in meson_option.txt in root DPDK directory. Do not use -D with them,
-        for example: enable_kmods=True.
+        """Build one or all DPDK apps.
+
+        Requires DPDK to be already built on the SUT node.
+
+        Args:
+            app_name: The name of the DPDK app to build.
+                When `app_name` is ``all``, build all example apps.
+            meson_dpdk_args: The arguments found in ``meson_options.txt`` in the root DPDK
+                directory. Do not use ``-D`` with them.
+
+        Returns:
+            The directory path of the built app. If building all apps, return
+            the path to the examples directory (where all apps reside).
         """
         self.main_session.build_dpdk(
             self._env_vars,
@@ -277,9 +330,7 @@ def build_dpdk_app(self, app_name: str, **meson_dpdk_args: str | bool) -> PurePa
         )
 
     def kill_cleanup_dpdk_apps(self) -> None:
-        """
-        Kill all dpdk applications on the SUT. Cleanup hugepages.
-        """
+        """Kill all dpdk applications on the SUT, then clean up hugepages."""
         if self._dpdk_kill_session and self._dpdk_kill_session.is_alive():
             # we can use the session if it exists and responds
             self._dpdk_kill_session.kill_cleanup_dpdk_apps(self._dpdk_prefix_list)
@@ -298,33 +349,34 @@ def create_eal_parameters(
         vdevs: list[VirtualDevice] | None = None,
         other_eal_param: str = "",
     ) -> "EalParameters":
-        """
-        Generate eal parameters character string;
-        :param lcore_filter_specifier: a number of lcores/cores/sockets to use
-                        or a list of lcore ids to use.
-                        The default will select one lcore for each of two cores
-                        on one socket, in ascending order of core ids.
-        :param ascending_cores: True, use cores with the lowest numerical id first
-                        and continue in ascending order. If False, start with the
-                        highest id and continue in descending order. This ordering
-                        affects which sockets to consider first as well.
-        :param prefix: set file prefix string, eg:
-                        prefix='vf'
-        :param append_prefix_timestamp: if True, will append a timestamp to
-                        DPDK file prefix.
-        :param no_pci: switch of disable PCI bus eg:
-                        no_pci=True
-        :param vdevs: virtual device list, eg:
-                        vdevs=[
-                            VirtualDevice('net_ring0'),
-                            VirtualDevice('net_ring1')
-                        ]
-        :param other_eal_param: user defined DPDK eal parameters, eg:
-                        other_eal_param='--single-file-segments'
-        :return: eal param string, eg:
-                '-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420';
-        """
+        """Compose the EAL parameters.
+
+        Process the list of cores and the DPDK prefix and pass that along with
+        the rest of the arguments.
 
+        Args:
+            lcore_filter_specifier: A number of lcores/cores/sockets to use
+                or a list of lcore ids to use.
+                The default will select one lcore for each of two cores
+                on one socket, in ascending order of core ids.
+            ascending_cores: Sort cores in ascending order (lowest to highest IDs).
+                If :data:`False`, sort in descending order.
+            prefix: Set the file prefix string with which to start DPDK, e.g.: ``prefix='vf'``.
+            append_prefix_timestamp: If :data:`True`, will append a timestamp to DPDK file prefix.
+            no_pci: Switch to disable the PCI bus, e.g.: ``no_pci=True``.
+            vdevs: Virtual devices, e.g.::
+
+                vdevs=[
+                    VirtualDevice('net_ring0'),
+                    VirtualDevice('net_ring1')
+                ]
+            other_eal_param: User-defined DPDK EAL parameters, e.g.:
+                ``other_eal_param='--single-file-segments'``.
+
+        Returns:
+            An EAL param string, such as
+            ``-c 0xf -a 0000:88:00.0 --file-prefix=dpdk_1112_20190809143420``.
+        """
         lcore_list = LogicalCoreList(self.filter_lcores(lcore_filter_specifier, ascending_cores))
 
         if append_prefix_timestamp:
@@ -348,14 +400,29 @@ def create_eal_parameters(
     def run_dpdk_app(
         self, app_path: PurePath, eal_args: "EalParameters", timeout: float = 30
     ) -> CommandResult:
-        """
-        Run DPDK application on the remote node.
+        """Run DPDK application on the remote node.
+
+        The application is not run interactively; the command that starts the application
+        is executed and the call then waits for it to finish.
+
+        Args:
+            app_path: The remote path to the DPDK application.
+            eal_args: EAL parameters to run the DPDK application with.
+            timeout: Wait at most this long in seconds for the app's execution to complete.
+
+        Returns:
+            The result of the DPDK app execution.
         """
         return self.main_session.send_command(
             f"{app_path} {eal_args}", timeout, privileged=True, verify=True
         )
 
     def configure_ipv4_forwarding(self, enable: bool) -> None:
+        """Enable/disable IPv4 forwarding on the node.
+
+        Args:
+            enable: If :data:`True`, enable the forwarding, otherwise disable it.
+        """
         self.main_session.configure_ipv4_forwarding(enable)
 
     def create_interactive_shell(
@@ -365,9 +432,13 @@ def create_interactive_shell(
         privileged: bool = False,
         eal_parameters: EalParameters | str | None = None,
     ) -> InteractiveShellType:
-        """Factory method for creating a handler for an interactive session.
+        """Extend the factory for interactive session handlers.
+
+        The extensions are SUT node specific:
 
-        Instantiate shell_cls according to the remote OS specifics.
+            * The default for `eal_parameters`,
+            * The interactive shell path `shell_cls.path` is prepended with the path to the
+              remote DPDK build directory for DPDK apps.
 
         Args:
             shell_cls: The class of the shell.
@@ -377,9 +448,10 @@ def create_interactive_shell(
             privileged: Whether to run the shell with administrative privileges.
             eal_parameters: List of EAL parameters to use to launch the app. If this
                 isn't provided or an empty string is passed, it will default to calling
-                create_eal_parameters().
+                :meth:`create_eal_parameters`.
+
         Returns:
-            Instance of the desired interactive application.
+            An instance of the desired interactive application shell.
         """
         if not eal_parameters:
             eal_parameters = self.create_eal_parameters()
@@ -396,8 +468,8 @@ def bind_ports_to_driver(self, for_dpdk: bool = True) -> None:
         """Bind all ports on the SUT to a driver.
 
         Args:
-            for_dpdk: Boolean that, when True, binds ports to os_driver_for_dpdk
-            or, when False, binds to os_driver. Defaults to True.
+            for_dpdk: If :data:`True`, binds ports to os_driver_for_dpdk.
+                If :data:`False`, binds to os_driver.
         """
         for port in self.ports:
             driver = port.os_driver_for_dpdk if for_dpdk else port.os_driver
diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
index 8a8f0019f3..f269d4c585 100644
--- a/dts/framework/testbed_model/tg_node.py
+++ b/dts/framework/testbed_model/tg_node.py
@@ -5,13 +5,8 @@
 
 """Traffic generator node.
 
-This is the node where the traffic generator resides.
-The distinction between a node and a traffic generator is as follows:
-A node is a host that DTS connects to. It could be a baremetal server,
-a VM or a container.
-A traffic generator is software running on the node.
-A traffic generator node is a node running a traffic generator.
-A node can be a traffic generator node as well as system under test node.
+A traffic generator (TG) generates traffic that's sent towards the SUT node.
+A TG node is where the TG runs.
 """
 
 from scapy.packet import Packet  # type: ignore[import]
@@ -24,13 +19,16 @@
 
 
 class TGNode(Node):
-    """Manage connections to a node with a traffic generator.
+    """The traffic generator node.
 
-    Apart from basic node management capabilities, the Traffic Generator node has
-    specialized methods for handling the traffic generator running on it.
+    The TG node extends :class:`Node` with TG specific features:
 
-    Arguments:
-        node_config: The user configuration of the traffic generator node.
+        * Traffic generator initialization,
+        * Sending traffic and receiving packets,
+        * Sending traffic without receiving packets.
+
+    Not all traffic generators are capable of capturing traffic, which is why there
+    must be a way to send traffic without capturing it.
 
     Attributes:
         traffic_generator: The traffic generator running on the node.
@@ -39,6 +37,13 @@ class TGNode(Node):
     traffic_generator: CapturingTrafficGenerator
 
     def __init__(self, node_config: TGNodeConfiguration):
+        """Extend the constructor with TG node specifics.
+
+        Initialize the traffic generator on the TG node.
+
+        Args:
+            node_config: The TG node's test run configuration.
+        """
         super(TGNode, self).__init__(node_config)
         self.traffic_generator = create_traffic_generator(self, node_config.traffic_generator)
         self._logger.info(f"Created node: {self.name}")
@@ -50,17 +55,17 @@ def send_packet_and_capture(
         receive_port: Port,
         duration: float = 1,
     ) -> list[Packet]:
-        """Send a packet, return received traffic.
+        """Send `packet`, return received traffic.
 
-        Send a packet on the send_port and then return all traffic captured
-        on the receive_port for the given duration. Also record the captured traffic
+        Send `packet` on `send_port` and then return all traffic captured
+        on `receive_port` for the given duration. Also record the captured traffic
         in a pcap file.
 
         Args:
             packet: The packet to send.
             send_port: The egress port on the TG node.
             receive_port: The ingress port in the TG node.
-            duration: Capture traffic for this amount of time after sending the packet.
+            duration: Capture traffic for this amount of time after sending `packet`.
 
         Returns:
              A list of received packets. May be empty if no packets are captured.
@@ -70,6 +75,9 @@ def send_packet_and_capture(
         )
 
     def close(self) -> None:
-        """Free all resources used by the node"""
+        """Free all resources used by the node.
+
+        This extends the superclass method with TG cleanup.
+        """
         self.traffic_generator.close()
         super(TGNode, self).close()
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
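
The methods documented above compose into a short workflow. A minimal
usage sketch follows; this is not code from the patch, and the import
path and the `sut_node` instance are assumed:

from framework.testbed_model.virtual_device import VirtualDevice  # path assumed

# Build a single example app; per the docstring, the path of the built
# app is returned.
app_path = sut_node.build_dpdk_app("helloworld", enable_kmods=True)

# Compose the EAL parameters: a custom file prefix, no PCI devices and
# two ring vdevs.
eal_params = sut_node.create_eal_parameters(
    prefix="vf",
    no_pci=True,
    vdevs=[VirtualDevice("net_ring0"), VirtualDevice("net_ring1")],
    other_eal_param="--single-file-segments",
)

# Run the app non-interactively, waiting at most 30 seconds for it to finish.
result = sut_node.run_dpdk_app(app_path, eal_params, timeout=30)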

* [PATCH v9 19/21] dts: base traffic generators docstring update
  2023-12-04 10:24                 ` [PATCH v9 " Juraj Linkeš
                                     ` (17 preceding siblings ...)
  2023-12-04 10:24                   ` [PATCH v9 18/21] dts: sut and tg nodes " Juraj Linkeš
@ 2023-12-04 10:24                   ` Juraj Linkeš
  2023-12-04 10:24                   ` [PATCH v9 20/21] dts: scapy tg " Juraj Linkeš
                                     ` (2 subsequent siblings)
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-12-04 10:24 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 .../traffic_generator/__init__.py             | 22 ++++++++-
 .../capturing_traffic_generator.py            | 45 +++++++++++--------
 .../traffic_generator/traffic_generator.py    | 33 ++++++++------
 3 files changed, 67 insertions(+), 33 deletions(-)

diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py b/dts/framework/testbed_model/traffic_generator/__init__.py
index 52888d03fa..11e2bd7d97 100644
--- a/dts/framework/testbed_model/traffic_generator/__init__.py
+++ b/dts/framework/testbed_model/traffic_generator/__init__.py
@@ -1,6 +1,19 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""DTS traffic generators.
+
+A traffic generator is capable of generating traffic and then monitoring the returning traffic.
+All traffic generators must count the number of received packets. Some may additionally capture
+individual packets.
+
+A traffic generator may be software running on generic hardware or it could be specialized hardware.
+
+The traffic generators that only count the number of received packets are suitable only for
+performance testing. In functional testing, we need to be able to dissect each packet that
+arrives, so a capturing traffic generator is required.
+"""
+
 from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType
 from framework.exception import ConfigurationError
 from framework.testbed_model.node import Node
@@ -12,8 +25,15 @@
 def create_traffic_generator(
     tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig
 ) -> CapturingTrafficGenerator:
-    """A factory function for creating traffic generator object from user config."""
+    """The factory function for creating traffic generator objects from the test run configuration.
+
+    Args:
+        tg_node: The traffic generator node where the created traffic generator will be running.
+        traffic_generator_config: The traffic generator config.
 
+    Returns:
+        A traffic generator capable of capturing received packets.
+    """
     match traffic_generator_config.traffic_generator_type:
         case TrafficGeneratorType.SCAPY:
             return ScapyTrafficGenerator(tg_node, traffic_generator_config)
diff --git a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
index 1fc7f98c05..0246590333 100644
--- a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
+++ b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
@@ -23,19 +23,21 @@
 
 
 def _get_default_capture_name() -> str:
-    """
-    This is the function used for the default implementation of capture names.
-    """
     return str(uuid.uuid4())
 
 
 class CapturingTrafficGenerator(TrafficGenerator):
     """Capture packets after sending traffic.
 
-    A mixin interface which enables a packet generator to declare that it can capture
+    The intermediary interface which enables a packet generator to declare that it can capture
     packets and return them to the user.
 
+    Similarly to :class:`~.traffic_generator.TrafficGenerator`, this class exposes
+    the public methods specific to capturing traffic generators and defines a private method
+    that subclasses must implement with the traffic generation and capturing logic.
+
     The methods of capturing traffic generators obey the following workflow:
+
         1. send packets
         2. capture packets
         3. write the capture to a .pcap file
@@ -44,6 +46,7 @@ class CapturingTrafficGenerator(TrafficGenerator):
 
     @property
     def is_capturing(self) -> bool:
+        """This traffic generator can capture traffic."""
         return True
 
     def send_packet_and_capture(
@@ -54,11 +57,12 @@ def send_packet_and_capture(
         duration: float,
         capture_name: str = _get_default_capture_name(),
     ) -> list[Packet]:
-        """Send a packet, return received traffic.
+        """Send `packet` and capture received traffic.
+
+        Send `packet` on `send_port` and then return all traffic captured
+        on `receive_port` for the given `duration`.
 
-        Send a packet on the send_port and then return all traffic captured
-        on the receive_port for the given duration. Also record the captured traffic
-        in a pcap file.
+        The captured traffic is recorded in the `capture_name`.pcap file.
 
         Args:
             packet: The packet to send.
@@ -68,7 +72,7 @@ def send_packet_and_capture(
             capture_name: The name of the .pcap file where to store the capture.
 
         Returns:
-             A list of received packets. May be empty if no packets are captured.
+             The received packets. May be empty if no packets are captured.
         """
         return self.send_packets_and_capture(
             [packet], send_port, receive_port, duration, capture_name
@@ -82,11 +86,14 @@ def send_packets_and_capture(
         duration: float,
         capture_name: str = _get_default_capture_name(),
     ) -> list[Packet]:
-        """Send packets, return received traffic.
+        """Send `packets` and capture received traffic.
 
-        Send packets on the send_port and then return all traffic captured
-        on the receive_port for the given duration. Also record the captured traffic
-        in a pcap file.
+        Send `packets` on `send_port` and then return all traffic captured
+        on `receive_port` for the given `duration`.
+
+        The captured traffic is recorded in the `capture_name`.pcap file. The target directory
+        can be configured with the :option:`--output-dir` command line argument or
+        the :envvar:`DTS_OUTPUT_DIR` environment variable.
 
         Args:
             packets: The packets to send.
@@ -96,7 +103,7 @@ def send_packets_and_capture(
             capture_name: The name of the .pcap file where to store the capture.
 
         Returns:
-             A list of received packets. May be empty if no packets are captured.
+             The received packets. May be empty if no packets are captured.
         """
         self._logger.debug(get_packet_summaries(packets))
         self._logger.debug(
@@ -121,10 +128,12 @@ def _send_packets_and_capture(
         receive_port: Port,
         duration: float,
     ) -> list[Packet]:
-        """
-        The extended classes must implement this method which
-        sends packets on send_port and receives packets on the receive_port
-        for the specified duration. It must be able to handle no received packets.
+        """The implementation of :method:`send_packets_and_capture`.
+
+        The subclasses must implement this method which sends `packets` on `send_port`
+        and receives packets on `receive_port` for the specified `duration`.
+
+        It must be able to handle receiving no packets.
         """
 
     def _write_capture_from_packets(self, capture_name: str, packets: list[Packet]) -> None:
diff --git a/dts/framework/testbed_model/traffic_generator/traffic_generator.py b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
index 0d9902ddb7..c49fbff488 100644
--- a/dts/framework/testbed_model/traffic_generator/traffic_generator.py
+++ b/dts/framework/testbed_model/traffic_generator/traffic_generator.py
@@ -22,7 +22,8 @@
 class TrafficGenerator(ABC):
     """The base traffic generator.
 
-    Defines the few basic methods that each traffic generator must implement.
+    Exposes the common public methods of all traffic generators and defines private methods
+    that subclasses must implement with the traffic generation logic.
     """
 
     _config: TrafficGeneratorConfig
@@ -30,14 +31,20 @@ class TrafficGenerator(ABC):
     _logger: DTSLOG
 
     def __init__(self, tg_node: Node, config: TrafficGeneratorConfig):
+        """Initialize the traffic generator.
+
+        Args:
+            tg_node: The traffic generator node where the created traffic generator will be running.
+            config: The traffic generator's test run configuration.
+        """
         self._config = config
         self._tg_node = tg_node
         self._logger = getLogger(f"{self._tg_node.name} {self._config.traffic_generator_type}")
 
     def send_packet(self, packet: Packet, port: Port) -> None:
-        """Send a packet and block until it is fully sent.
+        """Send `packet` and block until it is fully sent.
 
-        What fully sent means is defined by the traffic generator.
+        Send `packet` on `port`, then wait until `packet` is fully sent.
 
         Args:
             packet: The packet to send.
@@ -46,9 +53,9 @@ def send_packet(self, packet: Packet, port: Port) -> None:
         self.send_packets([packet], port)
 
     def send_packets(self, packets: list[Packet], port: Port) -> None:
-        """Send packets and block until they are fully sent.
+        """Send `packets` and block until they are fully sent.
 
-        What fully sent means is defined by the traffic generator.
+        Send `packets` on `port`, then wait until `packets` are fully sent.
 
         Args:
             packets: The packets to send.
@@ -60,19 +67,17 @@ def send_packets(self, packets: list[Packet], port: Port) -> None:
 
     @abstractmethod
     def _send_packets(self, packets: list[Packet], port: Port) -> None:
-        """
-        The extended classes must implement this method which
-        sends packets on send_port. The method should block until all packets
-        are fully sent.
+        """The implementation of :method:`send_packets`.
+
+        The subclasses must implement this method which sends `packets` on `port`.
+        The method should block until all `packets` are fully sent.
+
+        What fully sent means is defined by the traffic generator.
         """
 
     @property
     def is_capturing(self) -> bool:
-        """Whether this traffic generator can capture traffic.
-
-        Returns:
-            True if the traffic generator can capture traffic, False otherwise.
-        """
+        """This traffic generator can't capture traffic."""
         return False
 
     @abstractmethod
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
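
The public/private split described in these docstrings is a standard
abstract-base-class pattern; a simplified, self-contained sketch (names
invented, not the DTS classes):

from abc import ABC, abstractmethod


class SketchTrafficGenerator(ABC):
    # Public method: common logging/bookkeeping, then delegation.
    def send_packets(self, packets: list, port: str) -> None:
        print(f"Sending {len(packets)} packet(s) on {port}.")
        self._send_packets(packets, port)

    # Private abstract method: subclasses supply the sending logic.
    @abstractmethod
    def _send_packets(self, packets: list, port: str) -> None:
        ...

    @property
    def is_capturing(self) -> bool:
        # Overridden by capturing generators to return True.
        return False


class DummyTrafficGenerator(SketchTrafficGenerator):
    def _send_packets(self, packets: list, port: str) -> None:
        pass  # A real subclass would hand the packets to the generator here.


DummyTrafficGenerator().send_packets(["pkt0", "pkt1"], "eth0")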

* [PATCH v9 20/21] dts: scapy tg docstring update
  2023-12-04 10:24                 ` [PATCH v9 " Juraj Linkeš
                                     ` (18 preceding siblings ...)
  2023-12-04 10:24                   ` [PATCH v9 19/21] dts: base traffic generators " Juraj Linkeš
@ 2023-12-04 10:24                   ` Juraj Linkeš
  2023-12-04 10:24                   ` [PATCH v9 21/21] dts: test suites " Juraj Linkeš
  2023-12-21 11:48                   ` [PATCH v9 00/21] dts: docstrings update Thomas Monjalon
  21 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2023-12-04 10:24 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 .../testbed_model/traffic_generator/scapy.py  | 91 +++++++++++--------
 1 file changed, 54 insertions(+), 37 deletions(-)

diff --git a/dts/framework/testbed_model/traffic_generator/scapy.py b/dts/framework/testbed_model/traffic_generator/scapy.py
index c88cf28369..5b60f66237 100644
--- a/dts/framework/testbed_model/traffic_generator/scapy.py
+++ b/dts/framework/testbed_model/traffic_generator/scapy.py
@@ -2,14 +2,15 @@
 # Copyright(c) 2022 University of New Hampshire
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""Scapy traffic generator.
+"""The Scapy traffic generator.
 
-Traffic generator used for functional testing, implemented using the Scapy library.
+A traffic generator used for functional testing, implemented with
+`the Scapy library <https://scapy.readthedocs.io/en/latest/>`_.
 The traffic generator uses an XML-RPC server to run Scapy on the remote TG node.
 
-The XML-RPC server runs in an interactive remote SSH session running Python console,
-where we start the server. The communication with the server is facilitated with
-a local server proxy.
+The traffic generator uses the :mod:`xmlrpc.server` module to run an XML-RPC server
+in an interactive remote Python SSH session. The communication with the server is facilitated
+with a local server proxy from the :mod:`xmlrpc.client` module.
 """
 
 import inspect
@@ -69,20 +70,20 @@ def scapy_send_packets_and_capture(
     recv_iface: str,
     duration: float,
 ) -> list[bytes]:
-    """RPC function to send and capture packets.
+    """The RPC function to send and capture packets.
 
-    The function is meant to be executed on the remote TG node.
+    This function is meant to be executed on the remote TG node via the server proxy.
 
     Args:
         xmlrpc_packets: The packets to send. These need to be converted to
-            xmlrpc.client.Binary before sending to the remote server.
+            :class:`~xmlrpc.client.Binary` objects before sending to the remote server.
         send_iface: The logical name of the egress interface.
         recv_iface: The logical name of the ingress interface.
         duration: Capture for this amount of time, in seconds.
 
     Returns:
         A list of bytes. Each item in the list represents one packet, which needs
-            to be converted back upon transfer from the remote node.
+        to be converted back upon transfer from the remote node.
     """
     scapy_packets = [scapy.all.Packet(packet.data) for packet in xmlrpc_packets]
     sniffer = scapy.all.AsyncSniffer(
@@ -96,19 +97,15 @@ def scapy_send_packets_and_capture(
 
 
 def scapy_send_packets(xmlrpc_packets: list[xmlrpc.client.Binary], send_iface: str) -> None:
-    """RPC function to send packets.
+    """The RPC function to send packets.
 
-    The function is meant to be executed on the remote TG node.
-    It doesn't return anything, only sends packets.
+    This function is meant to be executed on the remote TG node via the server proxy.
+    It only sends `xmlrpc_packets`, without capturing them.
 
     Args:
         xmlrpc_packets: The packets to send. These need to be converted to
-            xmlrpc.client.Binary before sending to the remote server.
+            :class:`~xmlrpc.client.Binary` objects before sending to the remote server.
         send_iface: The logical name of the egress interface.
-
-    Returns:
-        A list of bytes. Each item in the list represents one packet, which needs
-            to be converted back upon transfer from the remote node.
     """
     scapy_packets = [scapy.all.Packet(packet.data) for packet in xmlrpc_packets]
     scapy.all.sendp(scapy_packets, iface=send_iface, realtime=True, verbose=True)
@@ -128,11 +125,19 @@ def scapy_send_packets(xmlrpc_packets: list[xmlrpc.client.Binary], send_iface: s
 
 
 class QuittableXMLRPCServer(SimpleXMLRPCServer):
-    """Basic XML-RPC server that may be extended
-    by functions serializable by the marshal module.
+    """Basic XML-RPC server.
+
+    The server may be augmented with functions serializable by the :mod:`marshal` module.
     """
 
     def __init__(self, *args, **kwargs):
+        """Extend the XML-RPC server initialization.
+
+        Args:
+            args: The positional arguments that will be passed to the superclass's constructor.
+            kwargs: The keyword arguments that will be passed to the superclass's constructor.
+                The `allow_none` argument will be set to :data:`True`.
+        """
         kwargs["allow_none"] = True
         super().__init__(*args, **kwargs)
         self.register_introspection_functions()
@@ -140,13 +145,12 @@ def __init__(self, *args, **kwargs):
         self.register_function(self.add_rpc_function)
 
     def quit(self) -> None:
+        """Quit the server."""
         self._BaseServer__shutdown_request = True
         return None
 
     def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> None:
-        """Add a function to the server.
-
-        This is meant to be executed remotely.
+        """Add a function to the server from the local server proxy.
 
         Args:
               name: The name of the function.
@@ -157,6 +161,11 @@ def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary) -> N
         self.register_function(function)
 
     def serve_forever(self, poll_interval: float = 0.5) -> None:
+        """Extend the superclass method with an additional print.
+
+        Once executed in the local server proxy, the print gives us a clear string to expect
+        when starting the server. The print means this function was executed on the XML-RPC server.
+        """
         print("XMLRPC OK")
         super().serve_forever(poll_interval)
 
@@ -164,19 +173,12 @@ def serve_forever(self, poll_interval: float = 0.5) -> None:
 class ScapyTrafficGenerator(CapturingTrafficGenerator):
     """Provides access to scapy functions via an RPC interface.
 
-    The traffic generator first starts an XML-RPC on the remote TG node.
-    Then it populates the server with functions which use the Scapy library
-    to send/receive traffic.
-
-    Any packets sent to the remote server are first converted to bytes.
-    They are received as xmlrpc.client.Binary objects on the server side.
-    When the server sends the packets back, they are also received as
-    xmlrpc.client.Binary object on the client side, are converted back to Scapy
-    packets and only then returned from the methods.
+    This class extends the base class with remote execution of scapy functions.
 
-    Arguments:
-        tg_node: The node where the traffic generator resides.
-        config: The user configuration of the traffic generator.
+    Any packets sent to the remote server are first converted to bytes. They are received as
+    :class:`~xmlrpc.client.Binary` objects on the server side. When the server sends the packets
+    back, they are also received as :class:`~xmlrpc.client.Binary` objects on the client side, are
+    converted back to :class:`~scapy.packet.Packet` objects and only then returned from the methods.
 
     Attributes:
         session: The exclusive interactive remote session created by the Scapy
@@ -190,6 +192,22 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator):
     _config: ScapyTrafficGeneratorConfig
 
     def __init__(self, tg_node: Node, config: ScapyTrafficGeneratorConfig):
+        """Extend the constructor with Scapy TG specifics.
+
+        The traffic generator first starts an XML-RPC server on the remote `tg_node`.
+        Then it populates the server with functions which use the Scapy library
+        to send/receive traffic:
+
+            * :func:`scapy_send_packets_and_capture`
+            * :func:`scapy_send_packets`
+
+        To enable verbose logging from the xmlrpc client, use the :option:`--verbose`
+        command line argument or the :envvar:`DTS_VERBOSE` environment variable.
+
+        Args:
+            tg_node: The node where the traffic generator resides.
+            config: The traffic generator's test run configuration.
+        """
         super().__init__(tg_node, config)
 
         assert (
@@ -231,10 +249,8 @@ def _start_xmlrpc_server_in_remote_python(self, listen_port: int) -> None:
         # or class, so strip all lines containing only whitespace
         src = "\n".join([line for line in src.splitlines() if not line.isspace() and line != ""])
 
-        spacing = "\n" * 4
-
         # execute it in the python terminal
-        self.session.send_command(spacing + src + spacing)
+        self.session.send_command(src + "\n")
         self.session.send_command(
             f"server = QuittableXMLRPCServer(('0.0.0.0', {listen_port}));server.serve_forever()",
             "XMLRPC OK",
@@ -267,6 +283,7 @@ def _send_packets_and_capture(
         return scapy_packets
 
     def close(self) -> None:
+        """Close the traffic generator."""
         try:
             self.rpc_server_proxy.quit()
         except ConnectionRefusedError:
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
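
The function registration described above works by shipping a function's
code object to the server. A minimal standalone sketch of that technique
(illustrative names; no actual XML-RPC transport is started here):

import marshal
import types
import xmlrpc.client


def greet(name: str) -> str:
    return f"hello, {name}"


# Client side: functions can't be sent over XML-RPC directly, but their
# code objects can be serialized with marshal and wrapped in Binary.
payload = xmlrpc.client.Binary(marshal.dumps(greet.__code__))

# Server side: rebuild a callable from the received bytes; a real server
# would then register it as an RPC function.
code = marshal.loads(payload.data)
rebuilt = types.FunctionType(code, globals(), "greet")
assert rebuilt("DTS") == "hello, DTS"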

* [PATCH v9 21/21] dts: test suites docstring update
  2023-12-04 10:24                 ` [PATCH v9 " Juraj Linkeš
                                     ` (19 preceding siblings ...)
  2023-12-04 10:24                   ` [PATCH v9 20/21] dts: scapy tg " Juraj Linkeš
@ 2023-12-04 10:24                   ` Juraj Linkeš
  2023-12-05 18:39                     ` Jeremy Spewock
  2023-12-21 11:48                   ` [PATCH v9 00/21] dts: docstrings update Thomas Monjalon
  21 siblings, 1 reply; 255+ messages in thread
From: Juraj Linkeš @ 2023-12-04 10:24 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/tests/TestSuite_hello_world.py | 16 +++++---
 dts/tests/TestSuite_os_udp.py      | 20 ++++++----
 dts/tests/TestSuite_smoke_tests.py | 61 ++++++++++++++++++++++++------
 3 files changed, 72 insertions(+), 25 deletions(-)

diff --git a/dts/tests/TestSuite_hello_world.py b/dts/tests/TestSuite_hello_world.py
index 768ba1cfa8..fd7ff1534d 100644
--- a/dts/tests/TestSuite_hello_world.py
+++ b/dts/tests/TestSuite_hello_world.py
@@ -1,7 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2010-2014 Intel Corporation
 
-"""
+"""The DPDK hello world app test suite.
+
 Run the helloworld example app and verify it prints a message for each used core.
 No other EAL parameters apart from cores are used.
 """
@@ -15,22 +16,25 @@
 
 
 class TestHelloWorld(TestSuite):
+    """DPDK hello world app test suite."""
+
     def set_up_suite(self) -> None:
-        """
+        """Set up the test suite.
+
         Setup:
             Build the app we're about to test - helloworld.
         """
         self.app_helloworld_path = self.sut_node.build_dpdk_app("helloworld")
 
     def test_hello_world_single_core(self) -> None:
-        """
+        """Single core test case.
+
         Steps:
             Run the helloworld app on the first usable logical core.
         Verify:
             The app prints a message from the used core:
             "hello from core <core_id>"
         """
-
         # get the first usable core
         lcore_amount = LogicalCoreCount(1, 1, 1)
         lcores = LogicalCoreCountFilter(self.sut_node.lcores, lcore_amount).filter()
@@ -42,14 +46,14 @@ def test_hello_world_single_core(self) -> None:
         )
 
     def test_hello_world_all_cores(self) -> None:
-        """
+        """All cores test case.
+
         Steps:
             Run the helloworld app on all usable logical cores.
         Verify:
             The app prints a message from all used cores:
             "hello from core <core_id>"
         """
-
         # get the maximum logical core number
         eal_para = self.sut_node.create_eal_parameters(
             lcore_filter_specifier=LogicalCoreList(self.sut_node.lcores)
diff --git a/dts/tests/TestSuite_os_udp.py b/dts/tests/TestSuite_os_udp.py
index bf6b93deb5..2cf29d37bb 100644
--- a/dts/tests/TestSuite_os_udp.py
+++ b/dts/tests/TestSuite_os_udp.py
@@ -1,7 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
+"""Basic IPv4 OS routing test suite.
+
 Configure SUT node to route traffic from if1 to if2.
 Send a packet to the SUT node, verify it comes back on the second port on the TG node.
 """
@@ -13,24 +14,26 @@
 
 
 class TestOSUdp(TestSuite):
+    """IPv4 UDP OS routing test suite."""
+
     def set_up_suite(self) -> None:
-        """
+        """Set up the test suite.
+
         Setup:
-            Configure SUT ports and SUT to route traffic from if1 to if2.
+            Bind the SUT ports to the OS driver, configure the ports and configure the SUT
+            to route traffic from if1 to if2.
         """
-
-        # This test uses kernel drivers
         self.sut_node.bind_ports_to_driver(for_dpdk=False)
         self.configure_testbed_ipv4()
 
     def test_os_udp(self) -> None:
-        """
+        """Basic UDP IPv4 traffic test case.
+
         Steps:
             Send a UDP packet.
         Verify:
             The packet with proper addresses arrives at the other TG port.
         """
-
         packet = Ether() / IP() / UDP()
 
         received_packets = self.send_packet_and_capture(packet)
@@ -40,7 +43,8 @@ def test_os_udp(self) -> None:
         self.verify_packets(expected_packet, received_packets)
 
     def tear_down_suite(self) -> None:
-        """
+        """Tear down the test suite.
+
         Teardown:
             Remove the SUT port configuration configured in setup.
         """
diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py
index 8958f58dac..5e2bac14bd 100644
--- a/dts/tests/TestSuite_smoke_tests.py
+++ b/dts/tests/TestSuite_smoke_tests.py
@@ -1,6 +1,17 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 University of New Hampshire
 
+"""Smoke test suite.
+
+Smoke tests are a class of tests which are used for validating a minimal set of important features.
+These are the most important features without which (or when they're faulty) the software wouldn't
+work properly. Thus, if any failure occurs while testing these features,
+there is little reason to continue testing, as the software is fundamentally broken.
+
+These tests don't have to include only DPDK tests, as the reason for failures could be
+in the infrastructure (a faulty link between NICs or a misconfiguration).
+"""
+
 import re
 
 from framework.config import PortConfig
@@ -11,23 +22,39 @@
 
 
 class SmokeTests(TestSuite):
+    """DPDK and infrastructure smoke test suite.
+
+    The test cases validate the most basic DPDK functionality needed for all other test suites.
+    The infrastructure also needs to be tested, as that is also used by all other test suites.
+
+    Attributes:
+        is_blocking: This test suite will block the execution of all other test suites
+            in the build target after it.
+        nics_in_node: The NICs present on the SUT node.
+    """
+
     is_blocking = True
     # dicts in this list are expected to have two keys:
     # "pci_address" and "current_driver"
     nics_in_node: list[PortConfig] = []
 
     def set_up_suite(self) -> None:
-        """
+        """Set up the test suite.
+
         Setup:
-            Set the build directory path and generate a list of NICs in the SUT node.
+            Set the build directory path and a list of NICs in the SUT node.
         """
         self.dpdk_build_dir_path = self.sut_node.remote_dpdk_build_dir
         self.nics_in_node = self.sut_node.config.ports
 
     def test_unit_tests(self) -> None:
-        """
+        """DPDK meson ``fast-tests`` unit tests.
+
+        Test that all unit tests from the ``fast-tests`` suite pass.
+        The suite is a subset with only the most basic tests.
+
         Test:
-            Run the fast-test unit-test suite through meson.
+            Run the ``fast-tests`` unit test suite through meson.
         """
         self.sut_node.main_session.send_command(
             f"meson test -C {self.dpdk_build_dir_path} --suite fast-tests -t 60",
@@ -37,9 +64,14 @@ def test_unit_tests(self) -> None:
         )
 
     def test_driver_tests(self) -> None:
-        """
+        """DPDK meson ``driver-tests`` unit tests.
+
+        Test that all unit tests from the ``driver-tests`` suite pass.
+        The suite is a subset with driver tests. This suite may be run with virtual devices
+        configured in the test run configuration.
+
         Test:
-            Run the driver-test unit-test suite through meson.
+            Run the ``driver-tests`` unit test suite through meson.
         """
         vdev_args = ""
         for dev in self.sut_node.virtual_devices:
@@ -60,9 +92,12 @@ def test_driver_tests(self) -> None:
         )
 
     def test_devices_listed_in_testpmd(self) -> None:
-        """
+        """Testpmd device discovery.
+
+        Test that the devices configured in the test run configuration are found in testpmd.
+
         Test:
-            Uses testpmd driver to verify that devices have been found by testpmd.
+            List all devices found in testpmd and verify the configured devices are among them.
         """
         testpmd_driver = self.sut_node.create_interactive_shell(TestPmdShell, privileged=True)
         dev_list = [str(x) for x in testpmd_driver.get_devices()]
@@ -74,10 +109,14 @@ def test_devices_listed_in_testpmd(self) -> None:
             )
 
     def test_device_bound_to_driver(self) -> None:
-        """
+        """Device driver in OS.
+
+        Test that the devices configured in the test run configuration are bound to
+        the proper driver.
+
         Test:
-            Ensure that all drivers listed in the config are bound to the correct
-            driver.
+            List all devices with the ``dpdk-devbind.py`` script and verify that
+            the configured devices are bound to the proper driver.
         """
         path_to_devbind = self.sut_node.path_to_devbind_script
 
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread
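
A hypothetical test suite following the conventions in this patch might
look as follows (the class, method and import names are assumed for
illustration, not taken from the patch):

"""Example test suite module docstring.

A one-line summary first, then what the suite covers, per the Google format.
"""

from framework.test_suite import TestSuite  # import path assumed


class TestExample(TestSuite):
    """Hypothetical suite demonstrating the docstring sections used above."""

    def set_up_suite(self) -> None:
        """Set up the test suite.

        Setup:
            Describe the one-time preparation here.
        """

    def test_something(self) -> None:
        """Short test case summary.

        Steps:
            Describe the actions taken.
        Verify:
            Describe the checked outcome.
        """
        self.verify(True, "Example check failed.")  # verify() signature assumed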

* Re: [PATCH v8 18/21] dts: sut and tg nodes docstring update
  2023-12-04 10:02                     ` Juraj Linkeš
@ 2023-12-04 11:02                       ` Bruce Richardson
  0 siblings, 0 replies; 255+ messages in thread
From: Bruce Richardson @ 2023-12-04 11:02 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: Jeremy Spewock, thomas, Honnappa.Nagarahalli, probb,
	paul.szczepanek, yoan.picchi, Luca.Vizzarro, dev

On Mon, Dec 04, 2023 at 11:02:21AM +0100, Juraj Linkeš wrote:
> On Fri, Dec 1, 2023 at 7:06 PM Jeremy Spewock <jspewock@iol.unh.edu> wrote:
> >
> >
> >
> > On Thu, Nov 23, 2023 at 10:14 AM Juraj Linkeš <juraj.linkes@pantheon.tech> wrote:
> >>
> >> Format according to the Google format and PEP257, with slight
> >> deviations.
> >>
> >> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> >> ---
> >>  dts/framework/testbed_model/sut_node.py | 230 ++++++++++++++++--------
> >>  dts/framework/testbed_model/tg_node.py  |  42 +++--
> >>  2 files changed, 176 insertions(+), 96 deletions(-)
> >>
> >> diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
> >> index 5ce9446dba..c4acea38d1 100644
> >> --- a/dts/framework/testbed_model/sut_node.py
> >> +++ b/dts/framework/testbed_model/sut_node.py
> >> @@ -3,6 +3,14 @@
> >>  # Copyright(c) 2023 PANTHEON.tech s.r.o.
> >>  # Copyright(c) 2023 University of New Hampshire
> >>
> >> +"""System under test (DPDK + hardware) node.
> >> +
> >> +A system under test (SUT) is the combination of DPDK
> >> +and the hardware we're testing with DPDK (NICs, crypto and other devices).
> >> +An SUT node is where this SUT runs.
> >> +"""
> >
> >
> > I think this should just be "A SUT node"
> >
> 
> I always spell it out which is why I used "an" (an es, ju:, ti: node).
> From what I understand, the article is based on how the word is
> pronounced. If it's an initialism (it's spelled), we should use "an"
> and if it's an abbreviation (pronounced as the whole word), we should
> use "a". It always made sense to me as an initialism - I think that's
> the common usage.

+1 for using "an" instead of "a" in front of "SUT".

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v9 21/21] dts: test suites docstring update
  2023-12-04 10:24                   ` [PATCH v9 21/21] dts: test suites " Juraj Linkeš
@ 2023-12-05 18:39                     ` Jeremy Spewock
  0 siblings, 0 replies; 255+ messages in thread
From: Jeremy Spewock @ 2023-12-05 18:39 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: thomas, Honnappa.Nagarahalli, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro, dev

Reviewed-by: Jeremy Spewock <jspewock@iol.unh.edu>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v9 00/21] dts: docstrings update
  2023-12-04 10:24                 ` [PATCH v9 " Juraj Linkeš
                                     ` (20 preceding siblings ...)
  2023-12-04 10:24                   ` [PATCH v9 21/21] dts: test suites " Juraj Linkeš
@ 2023-12-21 11:48                   ` Thomas Monjalon
  21 siblings, 0 replies; 255+ messages in thread
From: Thomas Monjalon @ 2023-12-21 11:48 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: Honnappa.Nagarahalli, jspewock, probb, paul.szczepanek,
	yoan.picchi, Luca.Vizzarro, dev

04/12/2023 11:24, Juraj Linkeš:
> The first commit makes changes to the code. These code changes mainly
> change the structure of the code so that the actual API docs generation
> works. There are also some code changes which get reflected in the
> documentation, such as making functions/methods/attributes private or
> public.
> 
> The rest of the commits deal with the actual docstring documentation
> (from which the API docs are generated). The format of the docstrings
> is the Google format [0] with PEP257 [1] and some guidelines captured
> in the last commit of this group covering what the Google format
> doesn't.
> The docstring updates are split into many commits to make review
> possible. When accepted, they may be squashed.
> The docstrings have been composed in anticipation of [2], adhering to
> maximum line length of 100. We don't have a tool for automatic docstring
> formatting, hence the usage of 100 right away to save time.

Applied and squashed with few minor improvements.
As discussed with Juraj privately, the copyright date bumps are removed,
because unnecessary.




^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v2 0/3] dts: API docs generation
  2023-11-15 13:36             ` [PATCH v1 0/2] dts: api docs generation Juraj Linkeš
  2023-11-15 13:36               ` [PATCH v1 1/2] dts: add doc generation dependencies Juraj Linkeš
  2023-11-15 13:36               ` [PATCH v1 2/2] dts: add doc generation Juraj Linkeš
@ 2024-01-22 12:00               ` Juraj Linkeš
  2024-01-22 12:00                 ` [PATCH v2 1/3] dts: add doc generation dependencies Juraj Linkeš
                                   ` (2 more replies)
  2024-01-22 16:35               ` [PATCH v3 0/3] dts: API docs generation Juraj Linkeš
  2024-04-12 10:14               ` [PATCH v4 0/3] dts: API docs generation Juraj Linkeš
  4 siblings, 3 replies; 255+ messages in thread
From: Juraj Linkeš @ 2024-01-22 12:00 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

The generation is done with Sphinx, which DPDK already uses, with a
slightly modified sidebar configuration guarded by an if block.

Dependencies are installed using Poetry from the dts directory:

poetry install --with docs

After installing, enter the Poetry shell:

poetry shell

And then run the build:
ninja -C <meson_build_dir> dts-doc

The patchset contains the .rst sources which Sphinx uses to generate the
html pages. These were first generated with the sphinx-apidoc utility
and modified to provide a better look. The documentation just doesn't
look that good without the modifications and there aren't enough
configuration options to achieve that without manual changes to the .rst
files. This introduces extra maintenance which involves adding new .rst
files when a new Python module is added or changing the .rst structure
if the Python directory/file structure is changed (moved, renamed
files). This small maintenance burden is outweighed by the flexibility
afforded by the ability to make manual changes to the .rst files.

v2:
Removed the use of sphinx-apidoc from meson in favor of adding the files
generated by it directly to the repository (and modifying them).

Juraj Linkeš (3):
  dts: add doc generation dependencies
  dts: add API doc sources
  dts: add API doc generation

 buildtools/call-sphinx-build.py               |  33 +-
 doc/api/doxy-api-index.md                     |   3 +
 doc/api/doxy-api.conf.in                      |   2 +
 doc/api/meson.build                           |  11 +-
 doc/guides/conf.py                            |  39 +-
 doc/guides/meson.build                        |   1 +
 doc/guides/tools/dts.rst                      |  34 +-
 dts/doc/conf_yaml_schema.json                 |   1 +
 dts/doc/framework.config.rst                  |  12 +
 dts/doc/framework.config.types.rst            |   6 +
 dts/doc/framework.dts.rst                     |   6 +
 dts/doc/framework.exception.rst               |   6 +
 dts/doc/framework.logger.rst                  |   6 +
 ...ote_session.interactive_remote_session.rst |   6 +
 ...ework.remote_session.interactive_shell.rst |   6 +
 .../framework.remote_session.python_shell.rst |   6 +
 ...ramework.remote_session.remote_session.rst |   6 +
 dts/doc/framework.remote_session.rst          |  17 +
 .../framework.remote_session.ssh_session.rst  |   6 +
 ...framework.remote_session.testpmd_shell.rst |   6 +
 dts/doc/framework.rst                         |  30 ++
 dts/doc/framework.settings.rst                |   6 +
 dts/doc/framework.test_result.rst             |   6 +
 dts/doc/framework.test_suite.rst              |   6 +
 dts/doc/framework.testbed_model.cpu.rst       |   6 +
 .../framework.testbed_model.linux_session.rst |   6 +
 dts/doc/framework.testbed_model.node.rst      |   6 +
 .../framework.testbed_model.os_session.rst    |   6 +
 dts/doc/framework.testbed_model.port.rst      |   6 +
 .../framework.testbed_model.posix_session.rst |   6 +
 dts/doc/framework.testbed_model.rst           |  26 +
 dts/doc/framework.testbed_model.sut_node.rst  |   6 +
 dts/doc/framework.testbed_model.tg_node.rst   |   6 +
 ..._generator.capturing_traffic_generator.rst |   6 +
 ...mework.testbed_model.traffic_generator.rst |  14 +
 ....testbed_model.traffic_generator.scapy.rst |   6 +
 ...el.traffic_generator.traffic_generator.rst |   6 +
 ...framework.testbed_model.virtual_device.rst |   6 +
 dts/doc/framework.utils.rst                   |   6 +
 dts/doc/index.rst                             |  41 ++
 dts/doc/meson.build                           |  27 +
 dts/meson.build                               |  16 +
 dts/poetry.lock                               | 499 +++++++++++++++++-
 dts/pyproject.toml                            |   7 +
 meson.build                                   |   1 +
 45 files changed, 950 insertions(+), 20 deletions(-)
 create mode 120000 dts/doc/conf_yaml_schema.json
 create mode 100644 dts/doc/framework.config.rst
 create mode 100644 dts/doc/framework.config.types.rst
 create mode 100644 dts/doc/framework.dts.rst
 create mode 100644 dts/doc/framework.exception.rst
 create mode 100644 dts/doc/framework.logger.rst
 create mode 100644 dts/doc/framework.remote_session.interactive_remote_session.rst
 create mode 100644 dts/doc/framework.remote_session.interactive_shell.rst
 create mode 100644 dts/doc/framework.remote_session.python_shell.rst
 create mode 100644 dts/doc/framework.remote_session.remote_session.rst
 create mode 100644 dts/doc/framework.remote_session.rst
 create mode 100644 dts/doc/framework.remote_session.ssh_session.rst
 create mode 100644 dts/doc/framework.remote_session.testpmd_shell.rst
 create mode 100644 dts/doc/framework.rst
 create mode 100644 dts/doc/framework.settings.rst
 create mode 100644 dts/doc/framework.test_result.rst
 create mode 100644 dts/doc/framework.test_suite.rst
 create mode 100644 dts/doc/framework.testbed_model.cpu.rst
 create mode 100644 dts/doc/framework.testbed_model.linux_session.rst
 create mode 100644 dts/doc/framework.testbed_model.node.rst
 create mode 100644 dts/doc/framework.testbed_model.os_session.rst
 create mode 100644 dts/doc/framework.testbed_model.port.rst
 create mode 100644 dts/doc/framework.testbed_model.posix_session.rst
 create mode 100644 dts/doc/framework.testbed_model.rst
 create mode 100644 dts/doc/framework.testbed_model.sut_node.rst
 create mode 100644 dts/doc/framework.testbed_model.tg_node.rst
 create mode 100644 dts/doc/framework.testbed_model.traffic_generator.capturing_traffic_generator.rst
 create mode 100644 dts/doc/framework.testbed_model.traffic_generator.rst
 create mode 100644 dts/doc/framework.testbed_model.traffic_generator.scapy.rst
 create mode 100644 dts/doc/framework.testbed_model.traffic_generator.traffic_generator.rst
 create mode 100644 dts/doc/framework.testbed_model.virtual_device.rst
 create mode 100644 dts/doc/framework.utils.rst
 create mode 100644 dts/doc/index.rst
 create mode 100644 dts/doc/meson.build
 create mode 100644 dts/meson.build

-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v2 1/3] dts: add doc generation dependencies
  2024-01-22 12:00               ` [PATCH v2 0/3] dts: API docs generation Juraj Linkeš
@ 2024-01-22 12:00                 ` Juraj Linkeš
  2024-01-22 12:00                 ` [PATCH v2 2/3] dts: add API doc sources Juraj Linkeš
  2024-01-22 12:00                 ` [PATCH v2 3/3] dts: add API doc generation Juraj Linkeš
  2 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2024-01-22 12:00 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Sphinx imports every Python module when generating documentation from
docstrings, meaning all DTS dependencies, including the Python version,
must be satisfied.
By adding Sphinx to the DTS dependencies, we make sure that the proper
Python version and dependencies are used when Sphinx is executed.
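
A minimal sketch of the failure mode this avoids (the module and import
names are examples only):

    # autodoc imports each documented module, executing its top-level code:
    import importlib

    importlib.import_module("framework.remote_session.ssh_session")
    # If the module does e.g. `from fabric import Connection` at the top
    # level, the docs build fails unless fabric (a DTS dependency) is
    # installed in the environment running Sphinx.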

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/poetry.lock    | 499 ++++++++++++++++++++++++++++++++++++++++++++-
 dts/pyproject.toml |   7 +
 2 files changed, 505 insertions(+), 1 deletion(-)

diff --git a/dts/poetry.lock b/dts/poetry.lock
index a734fa71f0..8b27b0d751 100644
--- a/dts/poetry.lock
+++ b/dts/poetry.lock
@@ -1,5 +1,16 @@
 # This file is automatically @generated by Poetry 1.5.1 and should not be changed by hand.
 
+[[package]]
+name = "alabaster"
+version = "0.7.13"
+description = "A configurable sidebar-enabled Sphinx theme"
+optional = false
+python-versions = ">=3.6"
+files = [
+    {file = "alabaster-0.7.13-py3-none-any.whl", hash = "sha256:1ee19aca801bbabb5ba3f5f258e4422dfa86f82f3e9cefb0859b283cdd7f62a3"},
+    {file = "alabaster-0.7.13.tar.gz", hash = "sha256:a27a4a084d5e690e16e01e03ad2b2e552c61a65469419b907243193de1a84ae2"},
+]
+
 [[package]]
 name = "attrs"
 version = "23.1.0"
@@ -18,6 +29,23 @@ docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-
 tests = ["attrs[tests-no-zope]", "zope-interface"]
 tests-no-zope = ["cloudpickle", "hypothesis", "mypy (>=1.1.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
 
+[[package]]
+name = "babel"
+version = "2.13.1"
+description = "Internationalization utilities"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "Babel-2.13.1-py3-none-any.whl", hash = "sha256:7077a4984b02b6727ac10f1f7294484f737443d7e2e66c5e4380e41a3ae0b4ed"},
+    {file = "Babel-2.13.1.tar.gz", hash = "sha256:33e0952d7dd6374af8dbf6768cc4ddf3ccfefc244f9986d4074704f2fbd18900"},
+]
+
+[package.dependencies]
+setuptools = {version = "*", markers = "python_version >= \"3.12\""}
+
+[package.extras]
+dev = ["freezegun (>=1.0,<2.0)", "pytest (>=6.0)", "pytest-cov"]
+
 [[package]]
 name = "bcrypt"
 version = "4.0.1"
@@ -86,6 +114,17 @@ d = ["aiohttp (>=3.7.4)"]
 jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
 uvloop = ["uvloop (>=0.15.2)"]
 
+[[package]]
+name = "certifi"
+version = "2023.7.22"
+description = "Python package for providing Mozilla's CA Bundle."
+optional = false
+python-versions = ">=3.6"
+files = [
+    {file = "certifi-2023.7.22-py3-none-any.whl", hash = "sha256:92d6037539857d8206b8f6ae472e8b77db8058fec5937a1ef3f54304089edbb9"},
+    {file = "certifi-2023.7.22.tar.gz", hash = "sha256:539cc1d13202e33ca466e88b2807e29f4c13049d6d87031a3c110744495cb082"},
+]
+
 [[package]]
 name = "cffi"
 version = "1.15.1"
@@ -162,6 +201,105 @@ files = [
 [package.dependencies]
 pycparser = "*"
 
+[[package]]
+name = "charset-normalizer"
+version = "3.3.2"
+description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
+optional = false
+python-versions = ">=3.7.0"
+files = [
+    {file = "charset-normalizer-3.3.2.tar.gz", hash = "sha256:f30c3cb33b24454a82faecaf01b19c18562b1e89558fb6c56de4d9118a032fd5"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:25baf083bf6f6b341f4121c2f3c548875ee6f5339300e08be3f2b2ba1721cdd3"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:06435b539f889b1f6f4ac1758871aae42dc3a8c0e24ac9e60c2384973ad73027"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9063e24fdb1e498ab71cb7419e24622516c4a04476b17a2dab57e8baa30d6e03"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6897af51655e3691ff853668779c7bad41579facacf5fd7253b0133308cf000d"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1d3193f4a680c64b4b6a9115943538edb896edc190f0b222e73761716519268e"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cd70574b12bb8a4d2aaa0094515df2463cb429d8536cfb6c7ce983246983e5a6"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8465322196c8b4d7ab6d1e049e4c5cb460d0394da4a27d23cc242fbf0034b6b5"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a9a8e9031d613fd2009c182b69c7b2c1ef8239a0efb1df3f7c8da66d5dd3d537"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:beb58fe5cdb101e3a055192ac291b7a21e3b7ef4f67fa1d74e331a7f2124341c"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:e06ed3eb3218bc64786f7db41917d4e686cc4856944f53d5bdf83a6884432e12"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:2e81c7b9c8979ce92ed306c249d46894776a909505d8f5a4ba55b14206e3222f"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:572c3763a264ba47b3cf708a44ce965d98555f618ca42c926a9c1616d8f34269"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:fd1abc0d89e30cc4e02e4064dc67fcc51bd941eb395c502aac3ec19fab46b519"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-win32.whl", hash = "sha256:3d47fa203a7bd9c5b6cee4736ee84ca03b8ef23193c0d1ca99b5089f72645c73"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-win_amd64.whl", hash = "sha256:10955842570876604d404661fbccbc9c7e684caf432c09c715ec38fbae45ae09"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:802fe99cca7457642125a8a88a084cef28ff0cf9407060f7b93dca5aa25480db"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:573f6eac48f4769d667c4442081b1794f52919e7edada77495aaed9236d13a96"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:549a3a73da901d5bc3ce8d24e0600d1fa85524c10287f6004fbab87672bf3e1e"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f27273b60488abe721a075bcca6d7f3964f9f6f067c8c4c605743023d7d3944f"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1ceae2f17a9c33cb48e3263960dc5fc8005351ee19db217e9b1bb15d28c02574"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:65f6f63034100ead094b8744b3b97965785388f308a64cf8d7c34f2f2e5be0c4"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:753f10e867343b4511128c6ed8c82f7bec3bd026875576dfd88483c5c73b2fd8"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4a78b2b446bd7c934f5dcedc588903fb2f5eec172f3d29e52a9096a43722adfc"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e537484df0d8f426ce2afb2d0f8e1c3d0b114b83f8850e5f2fbea0e797bd82ae"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:eb6904c354526e758fda7167b33005998fb68c46fbc10e013ca97f21ca5c8887"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:deb6be0ac38ece9ba87dea880e438f25ca3eddfac8b002a2ec3d9183a454e8ae"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:4ab2fe47fae9e0f9dee8c04187ce5d09f48eabe611be8259444906793ab7cbce"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:80402cd6ee291dcb72644d6eac93785fe2c8b9cb30893c1af5b8fdd753b9d40f"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-win32.whl", hash = "sha256:7cd13a2e3ddeed6913a65e66e94b51d80a041145a026c27e6bb76c31a853c6ab"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-win_amd64.whl", hash = "sha256:663946639d296df6a2bb2aa51b60a2454ca1cb29835324c640dafb5ff2131a77"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:0b2b64d2bb6d3fb9112bafa732def486049e63de9618b5843bcdd081d8144cd8"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:ddbb2551d7e0102e7252db79ba445cdab71b26640817ab1e3e3648dad515003b"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:55086ee1064215781fff39a1af09518bc9255b50d6333f2e4c74ca09fac6a8f6"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8f4a014bc36d3c57402e2977dada34f9c12300af536839dc38c0beab8878f38a"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a10af20b82360ab00827f916a6058451b723b4e65030c5a18577c8b2de5b3389"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8d756e44e94489e49571086ef83b2bb8ce311e730092d2c34ca8f7d925cb20aa"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:90d558489962fd4918143277a773316e56c72da56ec7aa3dc3dbbe20fdfed15b"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6ac7ffc7ad6d040517be39eb591cac5ff87416c2537df6ba3cba3bae290c0fed"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:7ed9e526742851e8d5cc9e6cf41427dfc6068d4f5a3bb03659444b4cabf6bc26"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:8bdb58ff7ba23002a4c5808d608e4e6c687175724f54a5dade5fa8c67b604e4d"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_ppc64le.whl", hash = "sha256:6b3251890fff30ee142c44144871185dbe13b11bab478a88887a639655be1068"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_s390x.whl", hash = "sha256:b4a23f61ce87adf89be746c8a8974fe1c823c891d8f86eb218bb957c924bb143"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:efcb3f6676480691518c177e3b465bcddf57cea040302f9f4e6e191af91174d4"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-win32.whl", hash = "sha256:d965bba47ddeec8cd560687584e88cf699fd28f192ceb452d1d7ee807c5597b7"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-win_amd64.whl", hash = "sha256:96b02a3dc4381e5494fad39be677abcb5e6634bf7b4fa83a6dd3112607547001"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:95f2a5796329323b8f0512e09dbb7a1860c46a39da62ecb2324f116fa8fdc85c"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c002b4ffc0be611f0d9da932eb0f704fe2602a9a949d1f738e4c34c75b0863d5"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a981a536974bbc7a512cf44ed14938cf01030a99e9b3a06dd59578882f06f985"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3287761bc4ee9e33561a7e058c72ac0938c4f57fe49a09eae428fd88aafe7bb6"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:42cb296636fcc8b0644486d15c12376cb9fa75443e00fb25de0b8602e64c1714"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0a55554a2fa0d408816b3b5cedf0045f4b8e1a6065aec45849de2d6f3f8e9786"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:c083af607d2515612056a31f0a8d9e0fcb5876b7bfc0abad3ecd275bc4ebc2d5"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:87d1351268731db79e0f8e745d92493ee2841c974128ef629dc518b937d9194c"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:bd8f7df7d12c2db9fab40bdd87a7c09b1530128315d047a086fa3ae3435cb3a8"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:c180f51afb394e165eafe4ac2936a14bee3eb10debc9d9e4db8958fe36afe711"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:8c622a5fe39a48f78944a87d4fb8a53ee07344641b0562c540d840748571b811"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-win32.whl", hash = "sha256:db364eca23f876da6f9e16c9da0df51aa4f104a972735574842618b8c6d999d4"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-win_amd64.whl", hash = "sha256:86216b5cee4b06df986d214f664305142d9c76df9b6512be2738aa72a2048f99"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:6463effa3186ea09411d50efc7d85360b38d5f09b870c48e4600f63af490e56a"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:6c4caeef8fa63d06bd437cd4bdcf3ffefe6738fb1b25951440d80dc7df8c03ac"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:37e55c8e51c236f95b033f6fb391d7d7970ba5fe7ff453dad675e88cf303377a"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fb69256e180cb6c8a894fee62b3afebae785babc1ee98b81cdf68bbca1987f33"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ae5f4161f18c61806f411a13b0310bea87f987c7d2ecdbdaad0e94eb2e404238"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b2b0a0c0517616b6869869f8c581d4eb2dd83a4d79e0ebcb7d373ef9956aeb0a"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:45485e01ff4d3630ec0d9617310448a8702f70e9c01906b0d0118bdf9d124cf2"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:eb00ed941194665c332bf8e078baf037d6c35d7c4f3102ea2d4f16ca94a26dc8"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:2127566c664442652f024c837091890cb1942c30937add288223dc895793f898"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:a50aebfa173e157099939b17f18600f72f84eed3049e743b68ad15bd69b6bf99"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:4d0d1650369165a14e14e1e47b372cfcb31d6ab44e6e33cb2d4e57265290044d"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:923c0c831b7cfcb071580d3f46c4baf50f174be571576556269530f4bbd79d04"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:06a81e93cd441c56a9b65d8e1d043daeb97a3d0856d177d5c90ba85acb3db087"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-win32.whl", hash = "sha256:6ef1d82a3af9d3eecdba2321dc1b3c238245d890843e040e41e470ffa64c3e25"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-win_amd64.whl", hash = "sha256:eb8821e09e916165e160797a6c17edda0679379a4be5c716c260e836e122f54b"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:c235ebd9baae02f1b77bcea61bce332cb4331dc3617d254df3323aa01ab47bd4"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:5b4c145409bef602a690e7cfad0a15a55c13320ff7a3ad7ca59c13bb8ba4d45d"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:68d1f8a9e9e37c1223b656399be5d6b448dea850bed7d0f87a8311f1ff3dabb0"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:22afcb9f253dac0696b5a4be4a1c0f8762f8239e21b99680099abd9b2b1b2269"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e27ad930a842b4c5eb8ac0016b0a54f5aebbe679340c26101df33424142c143c"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1f79682fbe303db92bc2b1136016a38a42e835d932bab5b3b1bfcfbf0640e519"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b261ccdec7821281dade748d088bb6e9b69e6d15b30652b74cbbac25e280b796"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:122c7fa62b130ed55f8f285bfd56d5f4b4a5b503609d181f9ad85e55c89f4185"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:d0eccceffcb53201b5bfebb52600a5fb483a20b61da9dbc885f8b103cbe7598c"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:9f96df6923e21816da7e0ad3fd47dd8f94b2a5ce594e00677c0013018b813458"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:7f04c839ed0b6b98b1a7501a002144b76c18fb1c1850c8b98d458ac269e26ed2"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:34d1c8da1e78d2e001f363791c98a272bb734000fcef47a491c1e3b0505657a8"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:ff8fa367d09b717b2a17a052544193ad76cd49979c805768879cb63d9ca50561"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-win32.whl", hash = "sha256:aed38f6e4fb3f5d6bf81bfa990a07806be9d83cf7bacef998ab1a9bd660a581f"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-win_amd64.whl", hash = "sha256:b01b88d45a6fcb69667cd6d2f7a9aeb4bf53760d7fc536bf679ec94fe9f3ff3d"},
+    {file = "charset_normalizer-3.3.2-py3-none-any.whl", hash = "sha256:3e4d1f6587322d2788836a99c69062fbb091331ec940e02d12d179c1d53e25fc"},
+]
+
 [[package]]
 name = "click"
 version = "8.1.6"
@@ -232,6 +370,17 @@ ssh = ["bcrypt (>=3.1.5)"]
 test = ["pretend", "pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-xdist"]
 test-randomorder = ["pytest-randomly"]
 
+[[package]]
+name = "docutils"
+version = "0.18.1"
+description = "Docutils -- Python Documentation Utilities"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
+files = [
+    {file = "docutils-0.18.1-py2.py3-none-any.whl", hash = "sha256:23010f129180089fbcd3bc08cfefccb3b890b0050e1ca00c867036e9d161b98c"},
+    {file = "docutils-0.18.1.tar.gz", hash = "sha256:679987caf361a7539d76e584cbeddc311e3aee937877c87346f31debc63e9d06"},
+]
+
 [[package]]
 name = "fabric"
 version = "2.7.1"
@@ -252,6 +401,28 @@ pathlib2 = "*"
 pytest = ["mock (>=2.0.0,<3.0)", "pytest (>=3.2.5,<4.0)"]
 testing = ["mock (>=2.0.0,<3.0)"]
 
+[[package]]
+name = "idna"
+version = "3.4"
+description = "Internationalized Domain Names in Applications (IDNA)"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "idna-3.4-py3-none-any.whl", hash = "sha256:90b77e79eaa3eba6de819a0c442c0b4ceefc341a7a2ab77d7562bf49f425c5c2"},
+    {file = "idna-3.4.tar.gz", hash = "sha256:814f528e8dead7d329833b91c5faa87d60bf71824cd12a7530b5526063d02cb4"},
+]
+
+[[package]]
+name = "imagesize"
+version = "1.4.1"
+description = "Getting image size from png/jpeg/jpeg2000/gif file"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+files = [
+    {file = "imagesize-1.4.1-py2.py3-none-any.whl", hash = "sha256:0d8d18d08f840c19d0ee7ca1fd82490fdc3729b7ac93f49870406ddde8ef8d8b"},
+    {file = "imagesize-1.4.1.tar.gz", hash = "sha256:69150444affb9cb0d5cc5a92b3676f0b2fb7cd9ae39e947a5e11a36b4497cd4a"},
+]
+
 [[package]]
 name = "invoke"
 version = "1.7.3"
@@ -280,6 +451,23 @@ pipfile-deprecated-finder = ["pip-shims (>=0.5.2)", "pipreqs", "requirementslib"
 plugins = ["setuptools"]
 requirements-deprecated-finder = ["pip-api", "pipreqs"]
 
+[[package]]
+name = "jinja2"
+version = "3.1.2"
+description = "A very fast and expressive template engine."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"},
+    {file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"},
+]
+
+[package.dependencies]
+MarkupSafe = ">=2.0"
+
+[package.extras]
+i18n = ["Babel (>=2.7)"]
+
 [[package]]
 name = "jsonpatch"
 version = "1.33"
@@ -340,6 +528,65 @@ files = [
 [package.dependencies]
 referencing = ">=0.28.0"
 
+[[package]]
+name = "markupsafe"
+version = "2.1.3"
+description = "Safely add untrusted strings to HTML/XML markup."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "MarkupSafe-2.1.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:cd0f502fe016460680cd20aaa5a76d241d6f35a1c3350c474bac1273803893fa"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e09031c87a1e51556fdcb46e5bd4f59dfb743061cf93c4d6831bf894f125eb57"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:68e78619a61ecf91e76aa3e6e8e33fc4894a2bebe93410754bd28fce0a8a4f9f"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:65c1a9bcdadc6c28eecee2c119465aebff8f7a584dd719facdd9e825ec61ab52"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:525808b8019e36eb524b8c68acdd63a37e75714eac50e988180b169d64480a00"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:962f82a3086483f5e5f64dbad880d31038b698494799b097bc59c2edf392fce6"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:aa7bd130efab1c280bed0f45501b7c8795f9fdbeb02e965371bbef3523627779"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:c9c804664ebe8f83a211cace637506669e7890fec1b4195b505c214e50dd4eb7"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-win32.whl", hash = "sha256:10bbfe99883db80bdbaff2dcf681dfc6533a614f700da1287707e8a5d78a8431"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-win_amd64.whl", hash = "sha256:1577735524cdad32f9f694208aa75e422adba74f1baee7551620e43a3141f559"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:ad9e82fb8f09ade1c3e1b996a6337afac2b8b9e365f926f5a61aacc71adc5b3c"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3c0fae6c3be832a0a0473ac912810b2877c8cb9d76ca48de1ed31e1c68386575"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b076b6226fb84157e3f7c971a47ff3a679d837cf338547532ab866c57930dbee"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bfce63a9e7834b12b87c64d6b155fdd9b3b96191b6bd334bf37db7ff1fe457f2"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:338ae27d6b8745585f87218a3f23f1512dbf52c26c28e322dbe54bcede54ccb9"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e4dd52d80b8c83fdce44e12478ad2e85c64ea965e75d66dbeafb0a3e77308fcc"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:df0be2b576a7abbf737b1575f048c23fb1d769f267ec4358296f31c2479db8f9"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:5bbe06f8eeafd38e5d0a4894ffec89378b6c6a625ff57e3028921f8ff59318ac"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-win32.whl", hash = "sha256:dd15ff04ffd7e05ffcb7fe79f1b98041b8ea30ae9234aed2a9168b5797c3effb"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl", hash = "sha256:134da1eca9ec0ae528110ccc9e48041e0828d79f24121a1a146161103c76e686"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:8e254ae696c88d98da6555f5ace2279cf7cd5b3f52be2b5cf97feafe883b58d2"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cb0932dc158471523c9637e807d9bfb93e06a95cbf010f1a38b98623b929ef2b"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9402b03f1a1b4dc4c19845e5c749e3ab82d5078d16a2a4c2cd2df62d57bb0707"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ca379055a47383d02a5400cb0d110cef0a776fc644cda797db0c5696cfd7e18e"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:b7ff0f54cb4ff66dd38bebd335a38e2c22c41a8ee45aa608efc890ac3e3931bc"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:c011a4149cfbcf9f03994ec2edffcb8b1dc2d2aede7ca243746df97a5d41ce48"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:56d9f2ecac662ca1611d183feb03a3fa4406469dafe241673d521dd5ae92a155"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-win32.whl", hash = "sha256:8758846a7e80910096950b67071243da3e5a20ed2546e6392603c096778d48e0"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-win_amd64.whl", hash = "sha256:787003c0ddb00500e49a10f2844fac87aa6ce977b90b0feaaf9de23c22508b24"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:2ef12179d3a291be237280175b542c07a36e7f60718296278d8593d21ca937d4"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:2c1b19b3aaacc6e57b7e25710ff571c24d6c3613a45e905b1fde04d691b98ee0"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8afafd99945ead6e075b973fefa56379c5b5c53fd8937dad92c662da5d8fd5ee"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8c41976a29d078bb235fea9b2ecd3da465df42a562910f9022f1a03107bd02be"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d080e0a5eb2529460b30190fcfcc4199bd7f827663f858a226a81bc27beaa97e"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:69c0f17e9f5a7afdf2cc9fb2d1ce6aabdb3bafb7f38017c0b77862bcec2bbad8"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:504b320cd4b7eff6f968eddf81127112db685e81f7e36e75f9f84f0df46041c3"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:42de32b22b6b804f42c5d98be4f7e5e977ecdd9ee9b660fda1a3edf03b11792d"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-win32.whl", hash = "sha256:ceb01949af7121f9fc39f7d27f91be8546f3fb112c608bc4029aef0bab86a2a5"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-win_amd64.whl", hash = "sha256:1b40069d487e7edb2676d3fbdb2b0829ffa2cd63a2ec26c4938b2d34391b4ecc"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8023faf4e01efadfa183e863fefde0046de576c6f14659e8782065bcece22198"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6b2b56950d93e41f33b4223ead100ea0fe11f8e6ee5f641eb753ce4b77a7042b"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9dcdfd0eaf283af041973bff14a2e143b8bd64e069f4c383416ecd79a81aab58"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:05fb21170423db021895e1ea1e1f3ab3adb85d1c2333cbc2310f2a26bc77272e"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:282c2cb35b5b673bbcadb33a585408104df04f14b2d9b01d4c345a3b92861c2c"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:ab4a0df41e7c16a1392727727e7998a467472d0ad65f3ad5e6e765015df08636"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:7ef3cb2ebbf91e330e3bb937efada0edd9003683db6b57bb108c4001f37a02ea"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:0a4e4a1aff6c7ac4cd55792abf96c915634c2b97e3cc1c7129578aa68ebd754e"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-win32.whl", hash = "sha256:fec21693218efe39aa7f8599346e90c705afa52c5b31ae019b2e57e8f6542bb2"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-win_amd64.whl", hash = "sha256:3fd4abcb888d15a94f32b75d8fd18ee162ca0c064f35b11134be77050296d6ba"},
+    {file = "MarkupSafe-2.1.3.tar.gz", hash = "sha256:af598ed32d6ae86f1b747b82783958b1a4ab8f617b06fe68795c7f026abbdcad"},
+]
+
 [[package]]
 name = "mccabe"
 version = "0.7.0"
@@ -404,6 +651,17 @@ files = [
     {file = "mypy_extensions-1.0.0.tar.gz", hash = "sha256:75dbf8955dc00442a438fc4d0666508a9a97b6bd41aa2f0ffe9d2f2725af0782"},
 ]
 
+[[package]]
+name = "packaging"
+version = "23.2"
+description = "Core utilities for Python packages"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "packaging-23.2-py3-none-any.whl", hash = "sha256:8c491190033a9af7e1d931d0b5dacc2ef47509b34dd0de67ed209b5203fc88c7"},
+    {file = "packaging-23.2.tar.gz", hash = "sha256:048fb0e9405036518eaaf48a55953c750c11e1a1b68e0dd1a9d62ed0c092cfc5"},
+]
+
 [[package]]
 name = "paramiko"
 version = "3.2.0"
@@ -515,6 +773,20 @@ files = [
     {file = "pyflakes-2.5.0.tar.gz", hash = "sha256:491feb020dca48ccc562a8c0cbe8df07ee13078df59813b83959cbdada312ea3"},
 ]
 
+[[package]]
+name = "pygments"
+version = "2.16.1"
+description = "Pygments is a syntax highlighting package written in Python."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "Pygments-2.16.1-py3-none-any.whl", hash = "sha256:13fc09fa63bc8d8671a6d247e1eb303c4b343eaee81d861f3404db2935653692"},
+    {file = "Pygments-2.16.1.tar.gz", hash = "sha256:1daff0494820c69bc8941e407aa20f577374ee88364ee10a98fdbe0aece96e29"},
+]
+
+[package.extras]
+plugins = ["importlib-metadata"]
+
 [[package]]
 name = "pylama"
 version = "8.4.1"
@@ -632,6 +904,27 @@ files = [
 attrs = ">=22.2.0"
 rpds-py = ">=0.7.0"
 
+[[package]]
+name = "requests"
+version = "2.31.0"
+description = "Python HTTP for Humans."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "requests-2.31.0-py3-none-any.whl", hash = "sha256:58cd2187c01e70e6e26505bca751777aa9f2ee0b7f4300988b709f44e013003f"},
+    {file = "requests-2.31.0.tar.gz", hash = "sha256:942c5a758f98d790eaed1a29cb6eefc7ffb0d1cf7af05c3d2791656dbd6ad1e1"},
+]
+
+[package.dependencies]
+certifi = ">=2017.4.17"
+charset-normalizer = ">=2,<4"
+idna = ">=2.5,<4"
+urllib3 = ">=1.21.1,<3"
+
+[package.extras]
+socks = ["PySocks (>=1.5.6,!=1.5.7)"]
+use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"]
+
 [[package]]
 name = "rpds-py"
 version = "0.9.2"
@@ -753,6 +1046,22 @@ basic = ["ipython"]
 complete = ["cryptography (>=2.0)", "ipython", "matplotlib", "pyx"]
 docs = ["sphinx (>=3.0.0)", "sphinx_rtd_theme (>=0.4.3)", "tox (>=3.0.0)"]
 
+[[package]]
+name = "setuptools"
+version = "68.2.2"
+description = "Easily download, build, install, upgrade, and uninstall Python packages"
+optional = false
+python-versions = ">=3.8"
+files = [
+    {file = "setuptools-68.2.2-py3-none-any.whl", hash = "sha256:b454a35605876da60632df1a60f736524eb73cc47bbc9f3f1ef1b644de74fd2a"},
+    {file = "setuptools-68.2.2.tar.gz", hash = "sha256:4ac1475276d2f1c48684874089fefcd83bd7162ddaafb81fac866ba0db282a87"},
+]
+
+[package.extras]
+docs = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "pygments-github-lexers (==0.0.5)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-favicon", "sphinx-hoverxref (<2)", "sphinx-inline-tabs", "sphinx-lint", "sphinx-notfound-page (>=1,<2)", "sphinx-reredirects", "sphinxcontrib-towncrier"]
+testing = ["build[virtualenv]", "filelock (>=3.4.0)", "flake8-2020", "ini2toml[lite] (>=0.9)", "jaraco.develop (>=7.21)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "pip (>=19.1)", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-mypy (>=0.9.1)", "pytest-perf", "pytest-ruff", "pytest-timeout", "pytest-xdist", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"]
+testing-integration = ["build[virtualenv] (>=1.0.3)", "filelock (>=3.4.0)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "packaging (>=23.1)", "pytest", "pytest-enabler", "pytest-xdist", "tomli", "virtualenv (>=13.0.0)", "wheel"]
+
 [[package]]
 name = "six"
 version = "1.16.0"
@@ -775,6 +1084,177 @@ files = [
     {file = "snowballstemmer-2.2.0.tar.gz", hash = "sha256:09b16deb8547d3412ad7b590689584cd0fe25ec8db3be37788be3810cbf19cb1"},
 ]
 
+[[package]]
+name = "sphinx"
+version = "6.2.1"
+description = "Python documentation generator"
+optional = false
+python-versions = ">=3.8"
+files = [
+    {file = "Sphinx-6.2.1.tar.gz", hash = "sha256:6d56a34697bb749ffa0152feafc4b19836c755d90a7c59b72bc7dfd371b9cc6b"},
+    {file = "sphinx-6.2.1-py3-none-any.whl", hash = "sha256:97787ff1fa3256a3eef9eda523a63dbf299f7b47e053cfcf684a1c2a8380c912"},
+]
+
+[package.dependencies]
+alabaster = ">=0.7,<0.8"
+babel = ">=2.9"
+colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""}
+docutils = ">=0.18.1,<0.20"
+imagesize = ">=1.3"
+Jinja2 = ">=3.0"
+packaging = ">=21.0"
+Pygments = ">=2.13"
+requests = ">=2.25.0"
+snowballstemmer = ">=2.0"
+sphinxcontrib-applehelp = "*"
+sphinxcontrib-devhelp = "*"
+sphinxcontrib-htmlhelp = ">=2.0.0"
+sphinxcontrib-jsmath = "*"
+sphinxcontrib-qthelp = "*"
+sphinxcontrib-serializinghtml = ">=1.1.5"
+
+[package.extras]
+docs = ["sphinxcontrib-websupport"]
+lint = ["docutils-stubs", "flake8 (>=3.5.0)", "flake8-simplify", "isort", "mypy (>=0.990)", "ruff", "sphinx-lint", "types-requests"]
+test = ["cython", "filelock", "html5lib", "pytest (>=4.6)"]
+
+[[package]]
+name = "sphinx-rtd-theme"
+version = "1.2.2"
+description = "Read the Docs theme for Sphinx"
+optional = false
+python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7"
+files = [
+    {file = "sphinx_rtd_theme-1.2.2-py2.py3-none-any.whl", hash = "sha256:6a7e7d8af34eb8fc57d52a09c6b6b9c46ff44aea5951bc831eeb9245378f3689"},
+    {file = "sphinx_rtd_theme-1.2.2.tar.gz", hash = "sha256:01c5c5a72e2d025bd23d1f06c59a4831b06e6ce6c01fdd5ebfe9986c0a880fc7"},
+]
+
+[package.dependencies]
+docutils = "<0.19"
+sphinx = ">=1.6,<7"
+sphinxcontrib-jquery = ">=4,<5"
+
+[package.extras]
+dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client", "wheel"]
+
+[[package]]
+name = "sphinxcontrib-applehelp"
+version = "1.0.7"
+description = "sphinxcontrib-applehelp is a Sphinx extension which outputs Apple help books"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_applehelp-1.0.7-py3-none-any.whl", hash = "sha256:094c4d56209d1734e7d252f6e0b3ccc090bd52ee56807a5d9315b19c122ab15d"},
+    {file = "sphinxcontrib_applehelp-1.0.7.tar.gz", hash = "sha256:39fdc8d762d33b01a7d8f026a3b7d71563ea3b72787d5f00ad8465bd9d6dfbfa"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-devhelp"
+version = "1.0.5"
+description = "sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp documents"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_devhelp-1.0.5-py3-none-any.whl", hash = "sha256:fe8009aed765188f08fcaadbb3ea0d90ce8ae2d76710b7e29ea7d047177dae2f"},
+    {file = "sphinxcontrib_devhelp-1.0.5.tar.gz", hash = "sha256:63b41e0d38207ca40ebbeabcf4d8e51f76c03e78cd61abe118cf4435c73d4212"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-htmlhelp"
+version = "2.0.4"
+description = "sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_htmlhelp-2.0.4-py3-none-any.whl", hash = "sha256:8001661c077a73c29beaf4a79968d0726103c5605e27db92b9ebed8bab1359e9"},
+    {file = "sphinxcontrib_htmlhelp-2.0.4.tar.gz", hash = "sha256:6c26a118a05b76000738429b724a0568dbde5b72391a688577da08f11891092a"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["html5lib", "pytest"]
+
+[[package]]
+name = "sphinxcontrib-jquery"
+version = "4.1"
+description = "Extension to include jQuery on newer Sphinx releases"
+optional = false
+python-versions = ">=2.7"
+files = [
+    {file = "sphinxcontrib-jquery-4.1.tar.gz", hash = "sha256:1620739f04e36a2c779f1a131a2dfd49b2fd07351bf1968ced074365933abc7a"},
+    {file = "sphinxcontrib_jquery-4.1-py2.py3-none-any.whl", hash = "sha256:f936030d7d0147dd026a4f2b5a57343d233f1fc7b363f68b3d4f1cb0993878ae"},
+]
+
+[package.dependencies]
+Sphinx = ">=1.8"
+
+[[package]]
+name = "sphinxcontrib-jsmath"
+version = "1.0.1"
+description = "A sphinx extension which renders display math in HTML via JavaScript"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "sphinxcontrib-jsmath-1.0.1.tar.gz", hash = "sha256:a9925e4a4587247ed2191a22df5f6970656cb8ca2bd6284309578f2153e0c4b8"},
+    {file = "sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl", hash = "sha256:2ec2eaebfb78f3f2078e73666b1415417a116cc848b72e5172e596c871103178"},
+]
+
+[package.extras]
+test = ["flake8", "mypy", "pytest"]
+
+[[package]]
+name = "sphinxcontrib-qthelp"
+version = "1.0.6"
+description = "sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp documents"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_qthelp-1.0.6-py3-none-any.whl", hash = "sha256:bf76886ee7470b934e363da7a954ea2825650013d367728588732c7350f49ea4"},
+    {file = "sphinxcontrib_qthelp-1.0.6.tar.gz", hash = "sha256:62b9d1a186ab7f5ee3356d906f648cacb7a6bdb94d201ee7adf26db55092982d"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-serializinghtml"
+version = "1.1.9"
+description = "sphinxcontrib-serializinghtml is a sphinx extension which outputs \"serialized\" HTML files (json and pickle)"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_serializinghtml-1.1.9-py3-none-any.whl", hash = "sha256:9b36e503703ff04f20e9675771df105e58aa029cfcbc23b8ed716019b7416ae1"},
+    {file = "sphinxcontrib_serializinghtml-1.1.9.tar.gz", hash = "sha256:0c64ff898339e1fac29abd2bf5f11078f3ec413cfe9c046d3120d7ca65530b54"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
 [[package]]
 name = "toml"
 version = "0.10.2"
@@ -819,6 +1299,23 @@ files = [
     {file = "typing_extensions-4.7.1.tar.gz", hash = "sha256:b75ddc264f0ba5615db7ba217daeb99701ad295353c45f9e95963337ceeeffb2"},
 ]
 
+[[package]]
+name = "urllib3"
+version = "2.0.7"
+description = "HTTP library with thread-safe connection pooling, file post, and more."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "urllib3-2.0.7-py3-none-any.whl", hash = "sha256:fdb6d215c776278489906c2f8916e6e7d4f5a9b602ccbcfdf7f016fc8da0596e"},
+    {file = "urllib3-2.0.7.tar.gz", hash = "sha256:c97dfde1f7bd43a71c8d2a58e369e9b2bf692d1334ea9f9cae55add7d0dd0f84"},
+]
+
+[package.extras]
+brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)"]
+secure = ["certifi", "cryptography (>=1.9)", "idna (>=2.0.0)", "pyopenssl (>=17.1.0)", "urllib3-secure-extra"]
+socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"]
+zstd = ["zstandard (>=0.18.0)"]
+
 [[package]]
 name = "warlock"
 version = "2.0.1"
@@ -837,4 +1334,4 @@ jsonschema = ">=4,<5"
 [metadata]
 lock-version = "2.0"
 python-versions = "^3.10"
-content-hash = "3501e97b3dadc19fe8ae179fe21b1edd2488001da9a8e86ff2bca0b86b99b89b"
+content-hash = "44c0fd4ebd7a1630ad104d55a25ef85d361abc968157254e757d44e340bca06f"
diff --git a/dts/pyproject.toml b/dts/pyproject.toml
index 37a692d655..28bd970ae4 100644
--- a/dts/pyproject.toml
+++ b/dts/pyproject.toml
@@ -35,6 +35,13 @@ pylama = "^8.4.1"
 pyflakes = "^2.5.0"
 toml = "^0.10.2"
 
+[tool.poetry.group.docs]
+optional = true
+
+[tool.poetry.group.docs.dependencies]
+sphinx = "<7"
+sphinx-rtd-theme = "^1.2.2"
+
 [build-system]
 requires = ["poetry-core>=1.0.0"]
 build-backend = "poetry.core.masonry.api"
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v2 2/3] dts: add API doc sources
  2024-01-22 12:00               ` [PATCH v2 0/3] dts: API docs generation Juraj Linkeš
  2024-01-22 12:00                 ` [PATCH v2 1/3] dts: add doc generation dependencies Juraj Linkeš
@ 2024-01-22 12:00                 ` Juraj Linkeš
  2024-01-22 12:00                 ` [PATCH v2 3/3] dts: add API doc generation Juraj Linkeš
  2 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2024-01-22 12:00 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

These sources could be generated with the sphinx-apidoc utility, but
that doesn't give us enough flexibility.

The sources included in this patch were in fact generated by that
utility, but then modified to improve the look of the documentation. The
improvements are mainly in the toctree definitions and in the titles of
the modules/packages, and were made with specific config options in mind.
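
For reference, the initial generation would have been along the lines of
the following (the exact paths and options are an assumption):

sphinx-apidoc -o dts/doc dts/framework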

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/doc/conf_yaml_schema.json                 |  1 +
 dts/doc/framework.config.rst                  | 12 ++++++
 dts/doc/framework.config.types.rst            |  6 +++
 dts/doc/framework.dts.rst                     |  6 +++
 dts/doc/framework.exception.rst               |  6 +++
 dts/doc/framework.logger.rst                  |  6 +++
 ...ote_session.interactive_remote_session.rst |  6 +++
 ...ework.remote_session.interactive_shell.rst |  6 +++
 .../framework.remote_session.python_shell.rst |  6 +++
 ...ramework.remote_session.remote_session.rst |  6 +++
 dts/doc/framework.remote_session.rst          | 17 ++++++++
 .../framework.remote_session.ssh_session.rst  |  6 +++
 ...framework.remote_session.testpmd_shell.rst |  6 +++
 dts/doc/framework.rst                         | 30 ++++++++++++++
 dts/doc/framework.settings.rst                |  6 +++
 dts/doc/framework.test_result.rst             |  6 +++
 dts/doc/framework.test_suite.rst              |  6 +++
 dts/doc/framework.testbed_model.cpu.rst       |  6 +++
 .../framework.testbed_model.linux_session.rst |  6 +++
 dts/doc/framework.testbed_model.node.rst      |  6 +++
 .../framework.testbed_model.os_session.rst    |  6 +++
 dts/doc/framework.testbed_model.port.rst      |  6 +++
 .../framework.testbed_model.posix_session.rst |  6 +++
 dts/doc/framework.testbed_model.rst           | 26 ++++++++++++
 dts/doc/framework.testbed_model.sut_node.rst  |  6 +++
 dts/doc/framework.testbed_model.tg_node.rst   |  6 +++
 ..._generator.capturing_traffic_generator.rst |  6 +++
 ...mework.testbed_model.traffic_generator.rst | 14 +++++++
 ....testbed_model.traffic_generator.scapy.rst |  6 +++
 ...el.traffic_generator.traffic_generator.rst |  6 +++
 ...framework.testbed_model.virtual_device.rst |  6 +++
 dts/doc/framework.utils.rst                   |  6 +++
 dts/doc/index.rst                             | 41 +++++++++++++++++++
 33 files changed, 297 insertions(+)
 create mode 120000 dts/doc/conf_yaml_schema.json
 create mode 100644 dts/doc/framework.config.rst
 create mode 100644 dts/doc/framework.config.types.rst
 create mode 100644 dts/doc/framework.dts.rst
 create mode 100644 dts/doc/framework.exception.rst
 create mode 100644 dts/doc/framework.logger.rst
 create mode 100644 dts/doc/framework.remote_session.interactive_remote_session.rst
 create mode 100644 dts/doc/framework.remote_session.interactive_shell.rst
 create mode 100644 dts/doc/framework.remote_session.python_shell.rst
 create mode 100644 dts/doc/framework.remote_session.remote_session.rst
 create mode 100644 dts/doc/framework.remote_session.rst
 create mode 100644 dts/doc/framework.remote_session.ssh_session.rst
 create mode 100644 dts/doc/framework.remote_session.testpmd_shell.rst
 create mode 100644 dts/doc/framework.rst
 create mode 100644 dts/doc/framework.settings.rst
 create mode 100644 dts/doc/framework.test_result.rst
 create mode 100644 dts/doc/framework.test_suite.rst
 create mode 100644 dts/doc/framework.testbed_model.cpu.rst
 create mode 100644 dts/doc/framework.testbed_model.linux_session.rst
 create mode 100644 dts/doc/framework.testbed_model.node.rst
 create mode 100644 dts/doc/framework.testbed_model.os_session.rst
 create mode 100644 dts/doc/framework.testbed_model.port.rst
 create mode 100644 dts/doc/framework.testbed_model.posix_session.rst
 create mode 100644 dts/doc/framework.testbed_model.rst
 create mode 100644 dts/doc/framework.testbed_model.sut_node.rst
 create mode 100644 dts/doc/framework.testbed_model.tg_node.rst
 create mode 100644 dts/doc/framework.testbed_model.traffic_generator.capturing_traffic_generator.rst
 create mode 100644 dts/doc/framework.testbed_model.traffic_generator.rst
 create mode 100644 dts/doc/framework.testbed_model.traffic_generator.scapy.rst
 create mode 100644 dts/doc/framework.testbed_model.traffic_generator.traffic_generator.rst
 create mode 100644 dts/doc/framework.testbed_model.virtual_device.rst
 create mode 100644 dts/doc/framework.utils.rst
 create mode 100644 dts/doc/index.rst

diff --git a/dts/doc/conf_yaml_schema.json b/dts/doc/conf_yaml_schema.json
new file mode 120000
index 0000000000..d89eb81b72
--- /dev/null
+++ b/dts/doc/conf_yaml_schema.json
@@ -0,0 +1 @@
+../framework/config/conf_yaml_schema.json
\ No newline at end of file
diff --git a/dts/doc/framework.config.rst b/dts/doc/framework.config.rst
new file mode 100644
index 0000000000..f765ef0e32
--- /dev/null
+++ b/dts/doc/framework.config.rst
@@ -0,0 +1,12 @@
+config - Configuration Package
+==============================
+
+.. automodule:: framework.config
+   :members:
+   :show-inheritance:
+
+.. toctree::
+   :hidden:
+   :maxdepth: 1
+
+   framework.config.types
diff --git a/dts/doc/framework.config.types.rst b/dts/doc/framework.config.types.rst
new file mode 100644
index 0000000000..5af915b681
--- /dev/null
+++ b/dts/doc/framework.config.types.rst
@@ -0,0 +1,6 @@
+types - Configuration Types
+===========================
+
+.. automodule:: framework.config.types
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.dts.rst b/dts/doc/framework.dts.rst
new file mode 100644
index 0000000000..b1de438887
--- /dev/null
+++ b/dts/doc/framework.dts.rst
@@ -0,0 +1,6 @@
+dts - Testbed Setup and Test Suite Runner
+=========================================
+
+.. automodule:: framework.dts
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.exception.rst b/dts/doc/framework.exception.rst
new file mode 100644
index 0000000000..ad58bd15de
--- /dev/null
+++ b/dts/doc/framework.exception.rst
@@ -0,0 +1,6 @@
+exception - Exceptions
+======================
+
+.. automodule:: framework.exception
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.logger.rst b/dts/doc/framework.logger.rst
new file mode 100644
index 0000000000..3c25b34819
--- /dev/null
+++ b/dts/doc/framework.logger.rst
@@ -0,0 +1,6 @@
+logger - Logging Facility
+=========================
+
+.. automodule:: framework.logger
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.remote_session.interactive_remote_session.rst b/dts/doc/framework.remote_session.interactive_remote_session.rst
new file mode 100644
index 0000000000..35dc5c4b03
--- /dev/null
+++ b/dts/doc/framework.remote_session.interactive_remote_session.rst
@@ -0,0 +1,6 @@
+interactive\_remote\_session - SSH Interactive Remote Session
+=============================================================
+
+.. automodule:: framework.remote_session.interactive_remote_session
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.remote_session.interactive_shell.rst b/dts/doc/framework.remote_session.interactive_shell.rst
new file mode 100644
index 0000000000..8a59db7b6e
--- /dev/null
+++ b/dts/doc/framework.remote_session.interactive_shell.rst
@@ -0,0 +1,6 @@
+interactive\_shell - Base Interactive Remote Shell
+==================================================
+
+.. automodule:: framework.remote_session.interactive_shell
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.remote_session.python_shell.rst b/dts/doc/framework.remote_session.python_shell.rst
new file mode 100644
index 0000000000..a8ec06f281
--- /dev/null
+++ b/dts/doc/framework.remote_session.python_shell.rst
@@ -0,0 +1,6 @@
+python\_shell - Python Interactive Remote Shell
+===============================================
+
+.. automodule:: framework.remote_session.python_shell
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.remote_session.remote_session.rst b/dts/doc/framework.remote_session.remote_session.rst
new file mode 100644
index 0000000000..58b0960d07
--- /dev/null
+++ b/dts/doc/framework.remote_session.remote_session.rst
@@ -0,0 +1,6 @@
+remote\_session - Remote Session ABC
+====================================
+
+.. automodule:: framework.remote_session.remote_session
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.remote_session.rst b/dts/doc/framework.remote_session.rst
new file mode 100644
index 0000000000..74f83f0307
--- /dev/null
+++ b/dts/doc/framework.remote_session.rst
@@ -0,0 +1,17 @@
+remote\_session - Node Connections Package
+==========================================
+
+.. automodule:: framework.remote_session
+   :members:
+   :show-inheritance:
+
+.. toctree::
+   :hidden:
+   :maxdepth: 1
+
+   framework.remote_session.remote_session
+   framework.remote_session.ssh_session
+   framework.remote_session.interactive_remote_session
+   framework.remote_session.interactive_shell
+   framework.remote_session.testpmd_shell
+   framework.remote_session.python_shell
diff --git a/dts/doc/framework.remote_session.ssh_session.rst b/dts/doc/framework.remote_session.ssh_session.rst
new file mode 100644
index 0000000000..05b019bc7c
--- /dev/null
+++ b/dts/doc/framework.remote_session.ssh_session.rst
@@ -0,0 +1,6 @@
+ssh\_session - SSH Remote Session
+=================================
+
+.. automodule:: framework.remote_session.ssh_session
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.remote_session.testpmd_shell.rst b/dts/doc/framework.remote_session.testpmd_shell.rst
new file mode 100644
index 0000000000..14510afb2b
--- /dev/null
+++ b/dts/doc/framework.remote_session.testpmd_shell.rst
@@ -0,0 +1,6 @@
+testpmd\_shell - Testpmd Interactive Remote Shell
+=================================================
+
+.. automodule:: framework.remote_session.testpmd_shell
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.rst b/dts/doc/framework.rst
new file mode 100644
index 0000000000..978d5b5e38
--- /dev/null
+++ b/dts/doc/framework.rst
@@ -0,0 +1,30 @@
+framework - DTS Libraries
+=========================
+
+.. automodule:: framework
+   :members:
+   :show-inheritance:
+
+Packages
+--------
+
+.. toctree::
+   :maxdepth: 3
+
+   framework.config
+   framework.remote_session
+   framework.testbed_model
+
+Modules
+-------
+
+.. toctree::
+   :maxdepth: 1
+
+   framework.dts
+   framework.exception
+   framework.logger
+   framework.settings
+   framework.test_result
+   framework.test_suite
+   framework.utils
diff --git a/dts/doc/framework.settings.rst b/dts/doc/framework.settings.rst
new file mode 100644
index 0000000000..96bf194923
--- /dev/null
+++ b/dts/doc/framework.settings.rst
@@ -0,0 +1,6 @@
+settings - Command Line Arguments and Environment Variables
+===========================================================
+
+.. automodule:: framework.settings
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.test_result.rst b/dts/doc/framework.test_result.rst
new file mode 100644
index 0000000000..527357a04a
--- /dev/null
+++ b/dts/doc/framework.test_result.rst
@@ -0,0 +1,6 @@
+test\_result - Test Results Records
+===================================
+
+.. automodule:: framework.test_result
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.test_suite.rst b/dts/doc/framework.test_suite.rst
new file mode 100644
index 0000000000..96f893e465
--- /dev/null
+++ b/dts/doc/framework.test_suite.rst
@@ -0,0 +1,6 @@
+test\_suite - Common Test Suite Features
+========================================
+
+.. automodule:: framework.test_suite
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.cpu.rst b/dts/doc/framework.testbed_model.cpu.rst
new file mode 100644
index 0000000000..dd2baf09fb
--- /dev/null
+++ b/dts/doc/framework.testbed_model.cpu.rst
@@ -0,0 +1,6 @@
+cpu - CPU Representation and Utilities
+======================================
+
+.. automodule:: framework.testbed_model.cpu
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.linux_session.rst b/dts/doc/framework.testbed_model.linux_session.rst
new file mode 100644
index 0000000000..141f3f49e3
--- /dev/null
+++ b/dts/doc/framework.testbed_model.linux_session.rst
@@ -0,0 +1,6 @@
+linux\_session - Linux Remote Session
+=====================================
+
+.. automodule:: framework.testbed_model.linux_session
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.node.rst b/dts/doc/framework.testbed_model.node.rst
new file mode 100644
index 0000000000..2133dd604b
--- /dev/null
+++ b/dts/doc/framework.testbed_model.node.rst
@@ -0,0 +1,6 @@
+node - Base Node
+================
+
+.. automodule:: framework.testbed_model.node
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.os_session.rst b/dts/doc/framework.testbed_model.os_session.rst
new file mode 100644
index 0000000000..f3574e939a
--- /dev/null
+++ b/dts/doc/framework.testbed_model.os_session.rst
@@ -0,0 +1,6 @@
+os\_session - OS-aware Remote Session ABC
+=========================================
+
+.. automodule:: framework.testbed_model.os_session
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.port.rst b/dts/doc/framework.testbed_model.port.rst
new file mode 100644
index 0000000000..17bd391e63
--- /dev/null
+++ b/dts/doc/framework.testbed_model.port.rst
@@ -0,0 +1,6 @@
+port - NIC Port Representation
+==============================
+
+.. automodule:: framework.testbed_model.port
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.posix_session.rst b/dts/doc/framework.testbed_model.posix_session.rst
new file mode 100644
index 0000000000..308c051ae5
--- /dev/null
+++ b/dts/doc/framework.testbed_model.posix_session.rst
@@ -0,0 +1,6 @@
+posix\_session - POSIX Remote Session
+=====================================
+
+.. automodule:: framework.testbed_model.posix_session
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.rst b/dts/doc/framework.testbed_model.rst
new file mode 100644
index 0000000000..4b024e47e6
--- /dev/null
+++ b/dts/doc/framework.testbed_model.rst
@@ -0,0 +1,26 @@
+testbed\_model - Testbed Modelling Package
+==========================================
+
+.. automodule:: framework.testbed_model
+   :members:
+   :show-inheritance:
+
+.. toctree::
+   :hidden:
+   :maxdepth: 2
+
+   framework.testbed_model.traffic_generator
+
+.. toctree::
+   :hidden:
+   :maxdepth: 1
+
+   framework.testbed_model.os_session
+   framework.testbed_model.linux_session
+   framework.testbed_model.posix_session
+   framework.testbed_model.node
+   framework.testbed_model.sut_node
+   framework.testbed_model.tg_node
+   framework.testbed_model.cpu
+   framework.testbed_model.port
+   framework.testbed_model.virtual_device
diff --git a/dts/doc/framework.testbed_model.sut_node.rst b/dts/doc/framework.testbed_model.sut_node.rst
new file mode 100644
index 0000000000..7e12b6c87e
--- /dev/null
+++ b/dts/doc/framework.testbed_model.sut_node.rst
@@ -0,0 +1,6 @@
+sut\_node - System Under Test Node
+==================================
+
+.. automodule:: framework.testbed_model.sut_node
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.tg_node.rst b/dts/doc/framework.testbed_model.tg_node.rst
new file mode 100644
index 0000000000..41206c000b
--- /dev/null
+++ b/dts/doc/framework.testbed_model.tg_node.rst
@@ -0,0 +1,6 @@
+tg\_node - Traffic Generator Node
+=================================
+
+.. automodule:: framework.testbed_model.tg_node
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.traffic_generator.capturing_traffic_generator.rst b/dts/doc/framework.testbed_model.traffic_generator.capturing_traffic_generator.rst
new file mode 100644
index 0000000000..06c087155a
--- /dev/null
+++ b/dts/doc/framework.testbed_model.traffic_generator.capturing_traffic_generator.rst
@@ -0,0 +1,6 @@
+capturing\_traffic\_generator - Base Capturing TG ABC
+=====================================================
+
+.. automodule:: framework.testbed_model.traffic_generator.capturing_traffic_generator
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.traffic_generator.rst b/dts/doc/framework.testbed_model.traffic_generator.rst
new file mode 100644
index 0000000000..18b6f1b98b
--- /dev/null
+++ b/dts/doc/framework.testbed_model.traffic_generator.rst
@@ -0,0 +1,14 @@
+traffic\_generator Subpackage
+=============================
+
+.. automodule:: framework.testbed_model.traffic_generator
+   :members:
+   :show-inheritance:
+
+.. toctree::
+   :hidden:
+   :maxdepth: 1
+
+   framework.testbed_model.traffic_generator.traffic_generator
+   framework.testbed_model.traffic_generator.capturing_traffic_generator
+   framework.testbed_model.traffic_generator.scapy
diff --git a/dts/doc/framework.testbed_model.traffic_generator.scapy.rst b/dts/doc/framework.testbed_model.traffic_generator.scapy.rst
new file mode 100644
index 0000000000..7062914ec3
--- /dev/null
+++ b/dts/doc/framework.testbed_model.traffic_generator.scapy.rst
@@ -0,0 +1,6 @@
+scapy - Capturing Traffic Generator
+===================================
+
+.. automodule:: framework.testbed_model.traffic_generator.scapy
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.traffic_generator.traffic_generator.rst b/dts/doc/framework.testbed_model.traffic_generator.traffic_generator.rst
new file mode 100644
index 0000000000..e366d7f222
--- /dev/null
+++ b/dts/doc/framework.testbed_model.traffic_generator.traffic_generator.rst
@@ -0,0 +1,6 @@
+traffic\_generator - Base TG ABC
+================================
+
+.. automodule:: framework.testbed_model.traffic_generator.traffic_generator
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.virtual_device.rst b/dts/doc/framework.testbed_model.virtual_device.rst
new file mode 100644
index 0000000000..38e6c1d0bc
--- /dev/null
+++ b/dts/doc/framework.testbed_model.virtual_device.rst
@@ -0,0 +1,6 @@
+virtual\_device - Virtual Devices
+=================================
+
+.. automodule:: framework.testbed_model.virtual_device
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.utils.rst b/dts/doc/framework.utils.rst
new file mode 100644
index 0000000000..0e7bb80666
--- /dev/null
+++ b/dts/doc/framework.utils.rst
@@ -0,0 +1,6 @@
+utils - Various Utilities
+=========================
+
+.. automodule:: framework.utils
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/index.rst b/dts/doc/index.rst
new file mode 100644
index 0000000000..fc3b6d78b9
--- /dev/null
+++ b/dts/doc/index.rst
@@ -0,0 +1,41 @@
+.. DPDK Test Suite documentation.
+
+Welcome to DPDK Test Suite's API documentation!
+===============================================
+
+.. automodule:: framework
+   :members:
+   :show-inheritance:
+
+Packages
+--------
+
+.. toctree::
+   :includehidden:
+   :maxdepth: 1
+
+   framework.config
+   framework.remote_session
+   framework.testbed_model
+
+Modules
+-------
+
+.. toctree::
+   :maxdepth: 1
+
+   framework.dts
+   framework.exception
+   framework.logger
+   framework.settings
+   framework.test_result
+   framework.test_suite
+   framework.utils
+
+
+Indices and tables
+==================
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v2 3/3] dts: add API doc generation
  2024-01-22 12:00               ` [PATCH v2 0/3] dts: API docs generation Juraj Linkeš
  2024-01-22 12:00                 ` [PATCH v2 1/3] dts: add doc generation dependencies Juraj Linkeš
  2024-01-22 12:00                 ` [PATCH v2 2/3] dts: add API doc sources Juraj Linkeš
@ 2024-01-22 12:00                 ` Juraj Linkeš
  2 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2024-01-22 12:00 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

The tool used to generate the developer docs is Sphinx, which is already
in use in DPDK. The same configuration is used to preserve style, but
it's been augmented with doc-generating configuration. A change that
modifies how the sidebar displays the content hierarchy has been put
into an if block so that it doesn't interfere with the regular docs.

Sphinx generates the documentation from Python docstrings. The docstring
format is the Google format [0] which requires the sphinx.ext.napoleon
extension. The other extension, sphinx.ext.intersphinx, enables linking
to objects in external documentation, such as the Python documentation.
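
As an illustration, a minimal Google-format docstring that napoleon can
parse, with an intersphinx cross-reference into the Python docs (the
function and its contents are made up for this example):

    def wait_for_output(timeout: float) -> str:
        """Wait for command output to arrive.

        Args:
            timeout: Seconds to wait before giving up and raising
                :class:`TimeoutError`, which intersphinx resolves
                against the Python documentation.

        Returns:
            The captured output.
        """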

There are two requirements for building DTS docs:
* The same Python version as DTS or higher, because Sphinx imports the
  code.
* The same Python packages as DTS, for the same reason.

[0] https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 buildtools/call-sphinx-build.py | 33 +++++++++++++++++++---------
 doc/api/doxy-api-index.md       |  3 +++
 doc/api/doxy-api.conf.in        |  2 ++
 doc/api/meson.build             | 11 +++++++---
 doc/guides/conf.py              | 39 ++++++++++++++++++++++++++++-----
 doc/guides/meson.build          |  1 +
 doc/guides/tools/dts.rst        | 34 +++++++++++++++++++++++++++-
 dts/doc/meson.build             | 27 +++++++++++++++++++++++
 dts/meson.build                 | 16 ++++++++++++++
 meson.build                     |  1 +
 10 files changed, 148 insertions(+), 19 deletions(-)
 create mode 100644 dts/doc/meson.build
 create mode 100644 dts/meson.build

diff --git a/buildtools/call-sphinx-build.py b/buildtools/call-sphinx-build.py
index 39a60d09fa..aea771a64e 100755
--- a/buildtools/call-sphinx-build.py
+++ b/buildtools/call-sphinx-build.py
@@ -3,37 +3,50 @@
 # Copyright(c) 2019 Intel Corporation
 #
 
+import argparse
 import sys
 import os
 from os.path import join
 from subprocess import run, PIPE, STDOUT
 from packaging.version import Version
 
-# assign parameters to variables
-(sphinx, version, src, dst, *extra_args) = sys.argv[1:]
+parser = argparse.ArgumentParser()
+parser.add_argument('sphinx')
+parser.add_argument('version')
+parser.add_argument('src')
+parser.add_argument('dst')
+parser.add_argument('--dts-root', default=None)
+args, extra_args = parser.parse_known_args()
 
 # set the version in environment for sphinx to pick up
-os.environ['DPDK_VERSION'] = version
+os.environ['DPDK_VERSION'] = args.version
+if args.dts_root:
+    os.environ['DTS_ROOT'] = args.dts_root
 
 # for sphinx version >= 1.7 add parallelism using "-j auto"
-ver = run([sphinx, '--version'], stdout=PIPE,
+ver = run([args.sphinx, '--version'], stdout=PIPE,
           stderr=STDOUT).stdout.decode().split()[-1]
-sphinx_cmd = [sphinx] + extra_args
+sphinx_cmd = [args.sphinx] + extra_args
 if Version(ver) >= Version('1.7'):
     sphinx_cmd += ['-j', 'auto']
 
 # find all the files sphinx will process so we can write them as dependencies
 srcfiles = []
-for root, dirs, files in os.walk(src):
+for root, dirs, files in os.walk(args.src):
     srcfiles.extend([join(root, f) for f in files])
 
+if not os.path.exists(args.dst):
+    os.makedirs(args.dst)
+
 # run sphinx, putting the html output in a "html" directory
-with open(join(dst, 'sphinx_html.out'), 'w') as out:
-    process = run(sphinx_cmd + ['-b', 'html', src, join(dst, 'html')],
-                  stdout=out)
+with open(join(args.dst, 'sphinx_html.out'), 'w') as out:
+    process = run(
+        sphinx_cmd + ['-b', 'html', args.src, join(args.dst, 'html')],
+        stdout=out
+    )
 
 # create a gcc format .d file giving all the dependencies of this doc build
-with open(join(dst, '.html.d'), 'w') as d:
+with open(join(args.dst, '.html.d'), 'w') as d:
     d.write('html: ' + ' '.join(srcfiles) + '\n')
 
 sys.exit(process.returncode)
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index a6a768bd7c..b49b24acce 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -241,3 +241,6 @@ The public API headers are grouped by topics:
   [experimental APIs](@ref rte_compat.h),
   [ABI versioning](@ref rte_function_versioning.h),
   [version](@ref rte_version.h)
+
+- **tests**:
+  [**DTS**](@dts_api_main_page)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index e94c9e4e46..d53edeba57 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -121,6 +121,8 @@ SEARCHENGINE            = YES
 SORT_MEMBER_DOCS        = NO
 SOURCE_BROWSER          = YES
 
+ALIASES                 = "dts_api_main_page=@DTS_API_MAIN_PAGE@"
+
 EXAMPLE_PATH            = @TOPDIR@/examples
 EXAMPLE_PATTERNS        = *.c
 EXAMPLE_RECURSIVE       = YES
diff --git a/doc/api/meson.build b/doc/api/meson.build
index 5b50692df9..ffc75d7b5a 100644
--- a/doc/api/meson.build
+++ b/doc/api/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Luca Boccassi <bluca@debian.org>
 
+doc_api_build_dir = meson.current_build_dir()
 doxygen = find_program('doxygen', required: get_option('enable_docs'))
 
 if not doxygen.found()
@@ -32,14 +33,18 @@ example = custom_target('examples.dox',
 # set up common Doxygen configuration
 cdata = configuration_data()
 cdata.set('VERSION', meson.project_version())
-cdata.set('API_EXAMPLES', join_paths(dpdk_build_root, 'doc', 'api', 'examples.dox'))
-cdata.set('OUTPUT', join_paths(dpdk_build_root, 'doc', 'api'))
+cdata.set('API_EXAMPLES', join_paths(doc_api_build_dir, 'examples.dox'))
+cdata.set('OUTPUT', doc_api_build_dir)
 cdata.set('TOPDIR', dpdk_source_root)
-cdata.set('STRIP_FROM_PATH', ' '.join([dpdk_source_root, join_paths(dpdk_build_root, 'doc', 'api')]))
+cdata.set('STRIP_FROM_PATH', ' '.join([dpdk_source_root, doc_api_build_dir]))
 cdata.set('WARN_AS_ERROR', 'NO')
 if get_option('werror')
     cdata.set('WARN_AS_ERROR', 'YES')
 endif
+# A local reference must be relative to the main index.html page
+# The path below can't be taken from the DTS meson file as that would
+# require recursive subdir traversal (doc, dts, then doc again)
+cdata.set('DTS_API_MAIN_PAGE', join_paths('..', 'dts', 'html', 'index.html'))
 
 # configure HTML Doxygen run
 html_cdata = configuration_data()
diff --git a/doc/guides/conf.py b/doc/guides/conf.py
index 0f7ff5282d..b442a1f76c 100644
--- a/doc/guides/conf.py
+++ b/doc/guides/conf.py
@@ -7,10 +7,9 @@
 from sphinx import __version__ as sphinx_version
 from os import listdir
 from os import environ
-from os.path import basename
-from os.path import dirname
+from os.path import basename, dirname
 from os.path import join as path_join
-from sys import argv, stderr
+from sys import argv, stderr, path
 
 import configparser
 
@@ -24,6 +23,37 @@
           file=stderr)
     pass
 
+# Napoleon enables the Google format of Python docstrings, used in DTS
+# Intersphinx allows linking to external projects, such as Python docs, also used in DTS
+extensions = ['sphinx.ext.napoleon', 'sphinx.ext.intersphinx']
+
+# DTS Python docstring options
+autodoc_default_options = {
+    'members': True,
+    'member-order': 'bysource',
+    'show-inheritance': True,
+}
+autodoc_class_signature = 'separated'
+autodoc_typehints = 'both'
+autodoc_typehints_format = 'short'
+autodoc_typehints_description_target = 'documented'
+napoleon_numpy_docstring = False
+napoleon_attr_annotations = True
+napoleon_preprocess_types = True
+add_module_names = False
+toc_object_entries = True
+toc_object_entries_show_parents = 'hide'
+intersphinx_mapping = {'python': ('https://docs.python.org/3', None)}
+
+dts_root = environ.get('DTS_ROOT')
+if dts_root:
+    path.append(dts_root)
+    # DTS Sidebar config
+    html_theme_options = {
+        'collapse_navigation': False,
+        'navigation_depth': -1,
+    }
+
 stop_on_error = ('-W' in argv)
 
 project = 'Data Plane Development Kit'
@@ -35,8 +65,7 @@
 html_show_copyright = False
 highlight_language = 'none'
 
-release = environ.setdefault('DPDK_VERSION', "None")
-version = release
+version = environ.setdefault('DPDK_VERSION', "None")
 
 master_doc = 'index'
 
diff --git a/doc/guides/meson.build b/doc/guides/meson.build
index 51f81da2e3..8933d75f6b 100644
--- a/doc/guides/meson.build
+++ b/doc/guides/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Intel Corporation
 
+doc_guides_source_dir = meson.current_source_dir()
 sphinx = find_program('sphinx-build', required: get_option('enable_docs'))
 
 if not sphinx.found()
diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
index b6a3e1e791..49be2dbfa1 100644
--- a/doc/guides/tools/dts.rst
+++ b/doc/guides/tools/dts.rst
@@ -284,7 +284,12 @@ and try not to divert much from it.
 The :ref:`DTS developer tools <dts_dev_tools>` will issue warnings
 when some of the basics are not met.
 
-The code must be properly documented with docstrings.
+The API documentation, which is a helpful reference when developing, may be accessed
+in the code directly or generated with the :ref:`API docs build steps <building_api_docs>`.
+When adding new files or modifying the directory structure, the corresponding changes must
+be made to the DTS API doc sources in ``dts/doc``.
+
+Accordingly, the code must be properly documented with docstrings.
 The style must conform to the `Google style
 <https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings>`_.
 See an example of the style `here
@@ -413,3 +418,30 @@ There are three tools used in DTS to help with code checking, style and formatti
 These three tools are all used in ``devtools/dts-check-format.sh``,
 the DTS code check and format script.
 Refer to the script for usage: ``devtools/dts-check-format.sh -h``.
+
+
+.. _building_api_docs:
+
+Building DTS API docs
+---------------------
+
+To build DTS API docs, install the dependencies with Poetry, then enter its shell:
+
+.. code-block:: console
+
+   poetry install --with docs
+   poetry shell
+
+The documentation is built using the standard DPDK build system. After executing the meson command
+and entering Poetry's shell, build the documentation with:
+
+.. code-block:: console
+
+   ninja -C build dts-doc
+
+The output is generated in ``build/doc/api/dts/html``.
+
+.. Note::
+
+   Make sure to fix any Sphinx warnings when adding or updating docstrings. Also make sure to run
+   the ``devtools/dts-check-format.sh`` script and address any issues it finds.
diff --git a/dts/doc/meson.build b/dts/doc/meson.build
new file mode 100644
index 0000000000..01b7b51034
--- /dev/null
+++ b/dts/doc/meson.build
@@ -0,0 +1,27 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+sphinx = find_program('sphinx-build', required: false)
+sphinx_apidoc = find_program('sphinx-apidoc', required: false)
+
+if not sphinx.found() or not sphinx_apidoc.found()
+    subdir_done()
+endif
+
+dts_doc_api_build_dir = join_paths(doc_api_build_dir, 'dts')
+
+extra_sphinx_args = ['-E', '-c', doc_guides_source_dir, '--dts-root', dts_dir]
+if get_option('werror')
+    extra_sphinx_args += '-W'
+endif
+
+htmldir = join_paths(get_option('datadir'), 'doc', 'dpdk', 'dts')
+dts_api_html = custom_target('dts_api_html',
+        output: 'html',
+        command: [sphinx_wrapper, sphinx, meson.project_version(),
+            meson.current_source_dir(), dts_doc_api_build_dir, extra_sphinx_args],
+        build_by_default: false,
+        install: get_option('enable_docs'),
+        install_dir: htmldir)
+doc_targets += dts_api_html
+doc_target_names += 'DTS_API_HTML'
diff --git a/dts/meson.build b/dts/meson.build
new file mode 100644
index 0000000000..e8ce0f06ac
--- /dev/null
+++ b/dts/meson.build
@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+doc_targets = []
+doc_target_names = []
+dts_dir = meson.current_source_dir()
+
+subdir('doc')
+
+if doc_targets.length() == 0
+    message = 'No docs targets found'
+else
+    message = 'Built docs:'
+endif
+run_target('dts-doc', command: [echo, message, doc_target_names],
+    depends: doc_targets)
diff --git a/meson.build b/meson.build
index 5e161f43e5..001fdcbbbf 100644
--- a/meson.build
+++ b/meson.build
@@ -87,6 +87,7 @@ subdir('app')
 
 # build docs
 subdir('doc')
+subdir('dts')
 
 # build any examples explicitly requested - useful for developers - and
 # install any example code into the appropriate install path
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v3 0/3] dts: API docs generation
  2023-11-15 13:36             ` [PATCH v1 0/2] dts: api docs generation Juraj Linkeš
                                 ` (2 preceding siblings ...)
  2024-01-22 12:00               ` [PATCH v2 0/3] dts: API docs generation Juraj Linkeš
@ 2024-01-22 16:35               ` Juraj Linkeš
  2024-01-22 16:35                 ` [PATCH v3 1/3] dts: add doc generation dependencies Juraj Linkeš
                                   ` (2 more replies)
  2024-04-12 10:14               ` [PATCH v4 0/3] dts: API docs generation Juraj Linkeš
  4 siblings, 3 replies; 255+ messages in thread
From: Juraj Linkeš @ 2024-01-22 16:35 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

The generation is done with Sphinx, which DPDK already uses, with a
slightly modified sidebar configuration placed in an if block.

Dependencies are installed using Poetry from the dts directory:

poetry install --with docs

After installing, enter the Poetry shell:

poetry shell

And then run the build:
ninja -C <meson_build_dir> dts-doc

The patchset contains the .rst sources which Sphinx uses to generate the
html pages. These were first generated with the sphinx-apidoc utility
and then modified to provide a better look. The documentation just
doesn't look that good without the modifications and there aren't enough
configuration options to achieve that without manual changes to the .rst
files. This introduces extra maintenance: new .rst files must be added
when a new Python module is added, and the .rst structure must be
changed if the Python directory/file structure changes (moved, renamed
files). This small maintenance burden is outweighed by the flexibility
afforded by the ability to make manual changes to the .rst files.
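
For reference, the initial set of stubs could have been produced with a
sphinx-apidoc invocation along these lines (the exact arguments are an
assumption, not taken from this series):

sphinx-apidoc -o dts/doc dts/framework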

v2:
Removed the use of sphinx-apidoc from meson in favor of adding the files
generated by it directly to the repository (and modifying them).

v3:
Rebase.

Juraj Linkeš (3):
  dts: add doc generation dependencies
  dts: add API doc sources
  dts: add API doc generation

 buildtools/call-sphinx-build.py               |  33 +-
 doc/api/doxy-api-index.md                     |   3 +
 doc/api/doxy-api.conf.in                      |   2 +
 doc/api/meson.build                           |  11 +-
 doc/guides/conf.py                            |  39 +-
 doc/guides/meson.build                        |   1 +
 doc/guides/tools/dts.rst                      |  34 +-
 dts/doc/conf_yaml_schema.json                 |   1 +
 dts/doc/framework.config.rst                  |  12 +
 dts/doc/framework.config.types.rst            |   6 +
 dts/doc/framework.dts.rst                     |   6 +
 dts/doc/framework.exception.rst               |   6 +
 dts/doc/framework.logger.rst                  |   6 +
 ...ote_session.interactive_remote_session.rst |   6 +
 ...ework.remote_session.interactive_shell.rst |   6 +
 .../framework.remote_session.python_shell.rst |   6 +
 ...ramework.remote_session.remote_session.rst |   6 +
 dts/doc/framework.remote_session.rst          |  17 +
 .../framework.remote_session.ssh_session.rst  |   6 +
 ...framework.remote_session.testpmd_shell.rst |   6 +
 dts/doc/framework.rst                         |  30 ++
 dts/doc/framework.settings.rst                |   6 +
 dts/doc/framework.test_result.rst             |   6 +
 dts/doc/framework.test_suite.rst              |   6 +
 dts/doc/framework.testbed_model.cpu.rst       |   6 +
 .../framework.testbed_model.linux_session.rst |   6 +
 dts/doc/framework.testbed_model.node.rst      |   6 +
 .../framework.testbed_model.os_session.rst    |   6 +
 dts/doc/framework.testbed_model.port.rst      |   6 +
 .../framework.testbed_model.posix_session.rst |   6 +
 dts/doc/framework.testbed_model.rst           |  26 +
 dts/doc/framework.testbed_model.sut_node.rst  |   6 +
 dts/doc/framework.testbed_model.tg_node.rst   |   6 +
 ..._generator.capturing_traffic_generator.rst |   6 +
 ...mework.testbed_model.traffic_generator.rst |  14 +
 ....testbed_model.traffic_generator.scapy.rst |   6 +
 ...el.traffic_generator.traffic_generator.rst |   6 +
 ...framework.testbed_model.virtual_device.rst |   6 +
 dts/doc/framework.utils.rst                   |   6 +
 dts/doc/index.rst                             |  41 ++
 dts/doc/meson.build                           |  27 +
 dts/meson.build                               |  16 +
 dts/poetry.lock                               | 499 +++++++++++++++++-
 dts/pyproject.toml                            |   7 +
 meson.build                                   |   1 +
 45 files changed, 950 insertions(+), 20 deletions(-)
 create mode 120000 dts/doc/conf_yaml_schema.json
 create mode 100644 dts/doc/framework.config.rst
 create mode 100644 dts/doc/framework.config.types.rst
 create mode 100644 dts/doc/framework.dts.rst
 create mode 100644 dts/doc/framework.exception.rst
 create mode 100644 dts/doc/framework.logger.rst
 create mode 100644 dts/doc/framework.remote_session.interactive_remote_session.rst
 create mode 100644 dts/doc/framework.remote_session.interactive_shell.rst
 create mode 100644 dts/doc/framework.remote_session.python_shell.rst
 create mode 100644 dts/doc/framework.remote_session.remote_session.rst
 create mode 100644 dts/doc/framework.remote_session.rst
 create mode 100644 dts/doc/framework.remote_session.ssh_session.rst
 create mode 100644 dts/doc/framework.remote_session.testpmd_shell.rst
 create mode 100644 dts/doc/framework.rst
 create mode 100644 dts/doc/framework.settings.rst
 create mode 100644 dts/doc/framework.test_result.rst
 create mode 100644 dts/doc/framework.test_suite.rst
 create mode 100644 dts/doc/framework.testbed_model.cpu.rst
 create mode 100644 dts/doc/framework.testbed_model.linux_session.rst
 create mode 100644 dts/doc/framework.testbed_model.node.rst
 create mode 100644 dts/doc/framework.testbed_model.os_session.rst
 create mode 100644 dts/doc/framework.testbed_model.port.rst
 create mode 100644 dts/doc/framework.testbed_model.posix_session.rst
 create mode 100644 dts/doc/framework.testbed_model.rst
 create mode 100644 dts/doc/framework.testbed_model.sut_node.rst
 create mode 100644 dts/doc/framework.testbed_model.tg_node.rst
 create mode 100644 dts/doc/framework.testbed_model.traffic_generator.capturing_traffic_generator.rst
 create mode 100644 dts/doc/framework.testbed_model.traffic_generator.rst
 create mode 100644 dts/doc/framework.testbed_model.traffic_generator.scapy.rst
 create mode 100644 dts/doc/framework.testbed_model.traffic_generator.traffic_generator.rst
 create mode 100644 dts/doc/framework.testbed_model.virtual_device.rst
 create mode 100644 dts/doc/framework.utils.rst
 create mode 100644 dts/doc/index.rst
 create mode 100644 dts/doc/meson.build
 create mode 100644 dts/meson.build

-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v3 1/3] dts: add doc generation dependencies
  2024-01-22 16:35               ` [PATCH v3 0/3] dts: API docs generation Juraj Linkeš
@ 2024-01-22 16:35                 ` Juraj Linkeš
  2024-01-22 16:35                 ` [PATCH v3 2/3] dts: add API doc sources Juraj Linkeš
  2024-01-22 16:35                 ` [PATCH v3 3/3] dts: add API doc generation Juraj Linkeš
  2 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2024-01-22 16:35 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

Sphinx imports every Python module when generating documentation from
docstrings, meaning all DTS dependencies, including the Python version,
must be satisfied.
By adding Sphinx to the DTS dependencies we make sure that the proper
Python version and dependencies are used when Sphinx is executed.
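
As a sketch, the Poetry side of this might look like the following in
pyproject.toml (the group name matches the documented "poetry install
--with docs" usage; the exact packages and version constraints are an
assumption):

[tool.poetry.group.docs]
optional = true

[tool.poetry.group.docs.dependencies]
sphinx = "*"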

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/poetry.lock    | 499 ++++++++++++++++++++++++++++++++++++++++++++-
 dts/pyproject.toml |   7 +
 2 files changed, 505 insertions(+), 1 deletion(-)

diff --git a/dts/poetry.lock b/dts/poetry.lock
index a734fa71f0..8b27b0d751 100644
--- a/dts/poetry.lock
+++ b/dts/poetry.lock
@@ -1,5 +1,16 @@
 # This file is automatically @generated by Poetry 1.5.1 and should not be changed by hand.
 
+[[package]]
+name = "alabaster"
+version = "0.7.13"
+description = "A configurable sidebar-enabled Sphinx theme"
+optional = false
+python-versions = ">=3.6"
+files = [
+    {file = "alabaster-0.7.13-py3-none-any.whl", hash = "sha256:1ee19aca801bbabb5ba3f5f258e4422dfa86f82f3e9cefb0859b283cdd7f62a3"},
+    {file = "alabaster-0.7.13.tar.gz", hash = "sha256:a27a4a084d5e690e16e01e03ad2b2e552c61a65469419b907243193de1a84ae2"},
+]
+
 [[package]]
 name = "attrs"
 version = "23.1.0"
@@ -18,6 +29,23 @@ docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-
 tests = ["attrs[tests-no-zope]", "zope-interface"]
 tests-no-zope = ["cloudpickle", "hypothesis", "mypy (>=1.1.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
 
+[[package]]
+name = "babel"
+version = "2.13.1"
+description = "Internationalization utilities"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "Babel-2.13.1-py3-none-any.whl", hash = "sha256:7077a4984b02b6727ac10f1f7294484f737443d7e2e66c5e4380e41a3ae0b4ed"},
+    {file = "Babel-2.13.1.tar.gz", hash = "sha256:33e0952d7dd6374af8dbf6768cc4ddf3ccfefc244f9986d4074704f2fbd18900"},
+]
+
+[package.dependencies]
+setuptools = {version = "*", markers = "python_version >= \"3.12\""}
+
+[package.extras]
+dev = ["freezegun (>=1.0,<2.0)", "pytest (>=6.0)", "pytest-cov"]
+
 [[package]]
 name = "bcrypt"
 version = "4.0.1"
@@ -86,6 +114,17 @@ d = ["aiohttp (>=3.7.4)"]
 jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
 uvloop = ["uvloop (>=0.15.2)"]
 
+[[package]]
+name = "certifi"
+version = "2023.7.22"
+description = "Python package for providing Mozilla's CA Bundle."
+optional = false
+python-versions = ">=3.6"
+files = [
+    {file = "certifi-2023.7.22-py3-none-any.whl", hash = "sha256:92d6037539857d8206b8f6ae472e8b77db8058fec5937a1ef3f54304089edbb9"},
+    {file = "certifi-2023.7.22.tar.gz", hash = "sha256:539cc1d13202e33ca466e88b2807e29f4c13049d6d87031a3c110744495cb082"},
+]
+
 [[package]]
 name = "cffi"
 version = "1.15.1"
@@ -162,6 +201,105 @@ files = [
 [package.dependencies]
 pycparser = "*"
 
+[[package]]
+name = "charset-normalizer"
+version = "3.3.2"
+description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
+optional = false
+python-versions = ">=3.7.0"
+files = [
+    {file = "charset-normalizer-3.3.2.tar.gz", hash = "sha256:f30c3cb33b24454a82faecaf01b19c18562b1e89558fb6c56de4d9118a032fd5"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:25baf083bf6f6b341f4121c2f3c548875ee6f5339300e08be3f2b2ba1721cdd3"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:06435b539f889b1f6f4ac1758871aae42dc3a8c0e24ac9e60c2384973ad73027"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9063e24fdb1e498ab71cb7419e24622516c4a04476b17a2dab57e8baa30d6e03"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6897af51655e3691ff853668779c7bad41579facacf5fd7253b0133308cf000d"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1d3193f4a680c64b4b6a9115943538edb896edc190f0b222e73761716519268e"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cd70574b12bb8a4d2aaa0094515df2463cb429d8536cfb6c7ce983246983e5a6"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8465322196c8b4d7ab6d1e049e4c5cb460d0394da4a27d23cc242fbf0034b6b5"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a9a8e9031d613fd2009c182b69c7b2c1ef8239a0efb1df3f7c8da66d5dd3d537"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:beb58fe5cdb101e3a055192ac291b7a21e3b7ef4f67fa1d74e331a7f2124341c"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:e06ed3eb3218bc64786f7db41917d4e686cc4856944f53d5bdf83a6884432e12"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:2e81c7b9c8979ce92ed306c249d46894776a909505d8f5a4ba55b14206e3222f"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:572c3763a264ba47b3cf708a44ce965d98555f618ca42c926a9c1616d8f34269"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:fd1abc0d89e30cc4e02e4064dc67fcc51bd941eb395c502aac3ec19fab46b519"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-win32.whl", hash = "sha256:3d47fa203a7bd9c5b6cee4736ee84ca03b8ef23193c0d1ca99b5089f72645c73"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-win_amd64.whl", hash = "sha256:10955842570876604d404661fbccbc9c7e684caf432c09c715ec38fbae45ae09"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:802fe99cca7457642125a8a88a084cef28ff0cf9407060f7b93dca5aa25480db"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:573f6eac48f4769d667c4442081b1794f52919e7edada77495aaed9236d13a96"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:549a3a73da901d5bc3ce8d24e0600d1fa85524c10287f6004fbab87672bf3e1e"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f27273b60488abe721a075bcca6d7f3964f9f6f067c8c4c605743023d7d3944f"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1ceae2f17a9c33cb48e3263960dc5fc8005351ee19db217e9b1bb15d28c02574"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:65f6f63034100ead094b8744b3b97965785388f308a64cf8d7c34f2f2e5be0c4"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:753f10e867343b4511128c6ed8c82f7bec3bd026875576dfd88483c5c73b2fd8"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4a78b2b446bd7c934f5dcedc588903fb2f5eec172f3d29e52a9096a43722adfc"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e537484df0d8f426ce2afb2d0f8e1c3d0b114b83f8850e5f2fbea0e797bd82ae"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:eb6904c354526e758fda7167b33005998fb68c46fbc10e013ca97f21ca5c8887"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:deb6be0ac38ece9ba87dea880e438f25ca3eddfac8b002a2ec3d9183a454e8ae"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:4ab2fe47fae9e0f9dee8c04187ce5d09f48eabe611be8259444906793ab7cbce"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:80402cd6ee291dcb72644d6eac93785fe2c8b9cb30893c1af5b8fdd753b9d40f"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-win32.whl", hash = "sha256:7cd13a2e3ddeed6913a65e66e94b51d80a041145a026c27e6bb76c31a853c6ab"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-win_amd64.whl", hash = "sha256:663946639d296df6a2bb2aa51b60a2454ca1cb29835324c640dafb5ff2131a77"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:0b2b64d2bb6d3fb9112bafa732def486049e63de9618b5843bcdd081d8144cd8"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:ddbb2551d7e0102e7252db79ba445cdab71b26640817ab1e3e3648dad515003b"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:55086ee1064215781fff39a1af09518bc9255b50d6333f2e4c74ca09fac6a8f6"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8f4a014bc36d3c57402e2977dada34f9c12300af536839dc38c0beab8878f38a"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a10af20b82360ab00827f916a6058451b723b4e65030c5a18577c8b2de5b3389"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8d756e44e94489e49571086ef83b2bb8ce311e730092d2c34ca8f7d925cb20aa"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:90d558489962fd4918143277a773316e56c72da56ec7aa3dc3dbbe20fdfed15b"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6ac7ffc7ad6d040517be39eb591cac5ff87416c2537df6ba3cba3bae290c0fed"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:7ed9e526742851e8d5cc9e6cf41427dfc6068d4f5a3bb03659444b4cabf6bc26"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:8bdb58ff7ba23002a4c5808d608e4e6c687175724f54a5dade5fa8c67b604e4d"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_ppc64le.whl", hash = "sha256:6b3251890fff30ee142c44144871185dbe13b11bab478a88887a639655be1068"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_s390x.whl", hash = "sha256:b4a23f61ce87adf89be746c8a8974fe1c823c891d8f86eb218bb957c924bb143"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:efcb3f6676480691518c177e3b465bcddf57cea040302f9f4e6e191af91174d4"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-win32.whl", hash = "sha256:d965bba47ddeec8cd560687584e88cf699fd28f192ceb452d1d7ee807c5597b7"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-win_amd64.whl", hash = "sha256:96b02a3dc4381e5494fad39be677abcb5e6634bf7b4fa83a6dd3112607547001"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:95f2a5796329323b8f0512e09dbb7a1860c46a39da62ecb2324f116fa8fdc85c"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c002b4ffc0be611f0d9da932eb0f704fe2602a9a949d1f738e4c34c75b0863d5"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a981a536974bbc7a512cf44ed14938cf01030a99e9b3a06dd59578882f06f985"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3287761bc4ee9e33561a7e058c72ac0938c4f57fe49a09eae428fd88aafe7bb6"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:42cb296636fcc8b0644486d15c12376cb9fa75443e00fb25de0b8602e64c1714"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0a55554a2fa0d408816b3b5cedf0045f4b8e1a6065aec45849de2d6f3f8e9786"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:c083af607d2515612056a31f0a8d9e0fcb5876b7bfc0abad3ecd275bc4ebc2d5"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:87d1351268731db79e0f8e745d92493ee2841c974128ef629dc518b937d9194c"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:bd8f7df7d12c2db9fab40bdd87a7c09b1530128315d047a086fa3ae3435cb3a8"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:c180f51afb394e165eafe4ac2936a14bee3eb10debc9d9e4db8958fe36afe711"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:8c622a5fe39a48f78944a87d4fb8a53ee07344641b0562c540d840748571b811"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-win32.whl", hash = "sha256:db364eca23f876da6f9e16c9da0df51aa4f104a972735574842618b8c6d999d4"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-win_amd64.whl", hash = "sha256:86216b5cee4b06df986d214f664305142d9c76df9b6512be2738aa72a2048f99"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:6463effa3186ea09411d50efc7d85360b38d5f09b870c48e4600f63af490e56a"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:6c4caeef8fa63d06bd437cd4bdcf3ffefe6738fb1b25951440d80dc7df8c03ac"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:37e55c8e51c236f95b033f6fb391d7d7970ba5fe7ff453dad675e88cf303377a"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fb69256e180cb6c8a894fee62b3afebae785babc1ee98b81cdf68bbca1987f33"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ae5f4161f18c61806f411a13b0310bea87f987c7d2ecdbdaad0e94eb2e404238"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b2b0a0c0517616b6869869f8c581d4eb2dd83a4d79e0ebcb7d373ef9956aeb0a"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:45485e01ff4d3630ec0d9617310448a8702f70e9c01906b0d0118bdf9d124cf2"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:eb00ed941194665c332bf8e078baf037d6c35d7c4f3102ea2d4f16ca94a26dc8"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:2127566c664442652f024c837091890cb1942c30937add288223dc895793f898"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:a50aebfa173e157099939b17f18600f72f84eed3049e743b68ad15bd69b6bf99"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:4d0d1650369165a14e14e1e47b372cfcb31d6ab44e6e33cb2d4e57265290044d"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:923c0c831b7cfcb071580d3f46c4baf50f174be571576556269530f4bbd79d04"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:06a81e93cd441c56a9b65d8e1d043daeb97a3d0856d177d5c90ba85acb3db087"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-win32.whl", hash = "sha256:6ef1d82a3af9d3eecdba2321dc1b3c238245d890843e040e41e470ffa64c3e25"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-win_amd64.whl", hash = "sha256:eb8821e09e916165e160797a6c17edda0679379a4be5c716c260e836e122f54b"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:c235ebd9baae02f1b77bcea61bce332cb4331dc3617d254df3323aa01ab47bd4"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:5b4c145409bef602a690e7cfad0a15a55c13320ff7a3ad7ca59c13bb8ba4d45d"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:68d1f8a9e9e37c1223b656399be5d6b448dea850bed7d0f87a8311f1ff3dabb0"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:22afcb9f253dac0696b5a4be4a1c0f8762f8239e21b99680099abd9b2b1b2269"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e27ad930a842b4c5eb8ac0016b0a54f5aebbe679340c26101df33424142c143c"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1f79682fbe303db92bc2b1136016a38a42e835d932bab5b3b1bfcfbf0640e519"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b261ccdec7821281dade748d088bb6e9b69e6d15b30652b74cbbac25e280b796"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:122c7fa62b130ed55f8f285bfd56d5f4b4a5b503609d181f9ad85e55c89f4185"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:d0eccceffcb53201b5bfebb52600a5fb483a20b61da9dbc885f8b103cbe7598c"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:9f96df6923e21816da7e0ad3fd47dd8f94b2a5ce594e00677c0013018b813458"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:7f04c839ed0b6b98b1a7501a002144b76c18fb1c1850c8b98d458ac269e26ed2"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:34d1c8da1e78d2e001f363791c98a272bb734000fcef47a491c1e3b0505657a8"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:ff8fa367d09b717b2a17a052544193ad76cd49979c805768879cb63d9ca50561"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-win32.whl", hash = "sha256:aed38f6e4fb3f5d6bf81bfa990a07806be9d83cf7bacef998ab1a9bd660a581f"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-win_amd64.whl", hash = "sha256:b01b88d45a6fcb69667cd6d2f7a9aeb4bf53760d7fc536bf679ec94fe9f3ff3d"},
+    {file = "charset_normalizer-3.3.2-py3-none-any.whl", hash = "sha256:3e4d1f6587322d2788836a99c69062fbb091331ec940e02d12d179c1d53e25fc"},
+]
+
 [[package]]
 name = "click"
 version = "8.1.6"
@@ -232,6 +370,17 @@ ssh = ["bcrypt (>=3.1.5)"]
 test = ["pretend", "pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-xdist"]
 test-randomorder = ["pytest-randomly"]
 
+[[package]]
+name = "docutils"
+version = "0.18.1"
+description = "Docutils -- Python Documentation Utilities"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
+files = [
+    {file = "docutils-0.18.1-py2.py3-none-any.whl", hash = "sha256:23010f129180089fbcd3bc08cfefccb3b890b0050e1ca00c867036e9d161b98c"},
+    {file = "docutils-0.18.1.tar.gz", hash = "sha256:679987caf361a7539d76e584cbeddc311e3aee937877c87346f31debc63e9d06"},
+]
+
 [[package]]
 name = "fabric"
 version = "2.7.1"
@@ -252,6 +401,28 @@ pathlib2 = "*"
 pytest = ["mock (>=2.0.0,<3.0)", "pytest (>=3.2.5,<4.0)"]
 testing = ["mock (>=2.0.0,<3.0)"]
 
+[[package]]
+name = "idna"
+version = "3.4"
+description = "Internationalized Domain Names in Applications (IDNA)"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "idna-3.4-py3-none-any.whl", hash = "sha256:90b77e79eaa3eba6de819a0c442c0b4ceefc341a7a2ab77d7562bf49f425c5c2"},
+    {file = "idna-3.4.tar.gz", hash = "sha256:814f528e8dead7d329833b91c5faa87d60bf71824cd12a7530b5526063d02cb4"},
+]
+
+[[package]]
+name = "imagesize"
+version = "1.4.1"
+description = "Getting image size from png/jpeg/jpeg2000/gif file"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+files = [
+    {file = "imagesize-1.4.1-py2.py3-none-any.whl", hash = "sha256:0d8d18d08f840c19d0ee7ca1fd82490fdc3729b7ac93f49870406ddde8ef8d8b"},
+    {file = "imagesize-1.4.1.tar.gz", hash = "sha256:69150444affb9cb0d5cc5a92b3676f0b2fb7cd9ae39e947a5e11a36b4497cd4a"},
+]
+
 [[package]]
 name = "invoke"
 version = "1.7.3"
@@ -280,6 +451,23 @@ pipfile-deprecated-finder = ["pip-shims (>=0.5.2)", "pipreqs", "requirementslib"
 plugins = ["setuptools"]
 requirements-deprecated-finder = ["pip-api", "pipreqs"]
 
+[[package]]
+name = "jinja2"
+version = "3.1.2"
+description = "A very fast and expressive template engine."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"},
+    {file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"},
+]
+
+[package.dependencies]
+MarkupSafe = ">=2.0"
+
+[package.extras]
+i18n = ["Babel (>=2.7)"]
+
 [[package]]
 name = "jsonpatch"
 version = "1.33"
@@ -340,6 +528,65 @@ files = [
 [package.dependencies]
 referencing = ">=0.28.0"
 
+[[package]]
+name = "markupsafe"
+version = "2.1.3"
+description = "Safely add untrusted strings to HTML/XML markup."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "MarkupSafe-2.1.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:cd0f502fe016460680cd20aaa5a76d241d6f35a1c3350c474bac1273803893fa"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e09031c87a1e51556fdcb46e5bd4f59dfb743061cf93c4d6831bf894f125eb57"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:68e78619a61ecf91e76aa3e6e8e33fc4894a2bebe93410754bd28fce0a8a4f9f"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:65c1a9bcdadc6c28eecee2c119465aebff8f7a584dd719facdd9e825ec61ab52"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:525808b8019e36eb524b8c68acdd63a37e75714eac50e988180b169d64480a00"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:962f82a3086483f5e5f64dbad880d31038b698494799b097bc59c2edf392fce6"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:aa7bd130efab1c280bed0f45501b7c8795f9fdbeb02e965371bbef3523627779"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:c9c804664ebe8f83a211cace637506669e7890fec1b4195b505c214e50dd4eb7"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-win32.whl", hash = "sha256:10bbfe99883db80bdbaff2dcf681dfc6533a614f700da1287707e8a5d78a8431"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-win_amd64.whl", hash = "sha256:1577735524cdad32f9f694208aa75e422adba74f1baee7551620e43a3141f559"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:ad9e82fb8f09ade1c3e1b996a6337afac2b8b9e365f926f5a61aacc71adc5b3c"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3c0fae6c3be832a0a0473ac912810b2877c8cb9d76ca48de1ed31e1c68386575"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b076b6226fb84157e3f7c971a47ff3a679d837cf338547532ab866c57930dbee"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bfce63a9e7834b12b87c64d6b155fdd9b3b96191b6bd334bf37db7ff1fe457f2"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:338ae27d6b8745585f87218a3f23f1512dbf52c26c28e322dbe54bcede54ccb9"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e4dd52d80b8c83fdce44e12478ad2e85c64ea965e75d66dbeafb0a3e77308fcc"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:df0be2b576a7abbf737b1575f048c23fb1d769f267ec4358296f31c2479db8f9"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:5bbe06f8eeafd38e5d0a4894ffec89378b6c6a625ff57e3028921f8ff59318ac"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-win32.whl", hash = "sha256:dd15ff04ffd7e05ffcb7fe79f1b98041b8ea30ae9234aed2a9168b5797c3effb"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl", hash = "sha256:134da1eca9ec0ae528110ccc9e48041e0828d79f24121a1a146161103c76e686"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:8e254ae696c88d98da6555f5ace2279cf7cd5b3f52be2b5cf97feafe883b58d2"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cb0932dc158471523c9637e807d9bfb93e06a95cbf010f1a38b98623b929ef2b"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9402b03f1a1b4dc4c19845e5c749e3ab82d5078d16a2a4c2cd2df62d57bb0707"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ca379055a47383d02a5400cb0d110cef0a776fc644cda797db0c5696cfd7e18e"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:b7ff0f54cb4ff66dd38bebd335a38e2c22c41a8ee45aa608efc890ac3e3931bc"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:c011a4149cfbcf9f03994ec2edffcb8b1dc2d2aede7ca243746df97a5d41ce48"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:56d9f2ecac662ca1611d183feb03a3fa4406469dafe241673d521dd5ae92a155"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-win32.whl", hash = "sha256:8758846a7e80910096950b67071243da3e5a20ed2546e6392603c096778d48e0"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-win_amd64.whl", hash = "sha256:787003c0ddb00500e49a10f2844fac87aa6ce977b90b0feaaf9de23c22508b24"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:2ef12179d3a291be237280175b542c07a36e7f60718296278d8593d21ca937d4"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:2c1b19b3aaacc6e57b7e25710ff571c24d6c3613a45e905b1fde04d691b98ee0"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8afafd99945ead6e075b973fefa56379c5b5c53fd8937dad92c662da5d8fd5ee"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8c41976a29d078bb235fea9b2ecd3da465df42a562910f9022f1a03107bd02be"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d080e0a5eb2529460b30190fcfcc4199bd7f827663f858a226a81bc27beaa97e"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:69c0f17e9f5a7afdf2cc9fb2d1ce6aabdb3bafb7f38017c0b77862bcec2bbad8"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:504b320cd4b7eff6f968eddf81127112db685e81f7e36e75f9f84f0df46041c3"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:42de32b22b6b804f42c5d98be4f7e5e977ecdd9ee9b660fda1a3edf03b11792d"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-win32.whl", hash = "sha256:ceb01949af7121f9fc39f7d27f91be8546f3fb112c608bc4029aef0bab86a2a5"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-win_amd64.whl", hash = "sha256:1b40069d487e7edb2676d3fbdb2b0829ffa2cd63a2ec26c4938b2d34391b4ecc"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8023faf4e01efadfa183e863fefde0046de576c6f14659e8782065bcece22198"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6b2b56950d93e41f33b4223ead100ea0fe11f8e6ee5f641eb753ce4b77a7042b"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9dcdfd0eaf283af041973bff14a2e143b8bd64e069f4c383416ecd79a81aab58"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:05fb21170423db021895e1ea1e1f3ab3adb85d1c2333cbc2310f2a26bc77272e"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:282c2cb35b5b673bbcadb33a585408104df04f14b2d9b01d4c345a3b92861c2c"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:ab4a0df41e7c16a1392727727e7998a467472d0ad65f3ad5e6e765015df08636"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:7ef3cb2ebbf91e330e3bb937efada0edd9003683db6b57bb108c4001f37a02ea"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:0a4e4a1aff6c7ac4cd55792abf96c915634c2b97e3cc1c7129578aa68ebd754e"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-win32.whl", hash = "sha256:fec21693218efe39aa7f8599346e90c705afa52c5b31ae019b2e57e8f6542bb2"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-win_amd64.whl", hash = "sha256:3fd4abcb888d15a94f32b75d8fd18ee162ca0c064f35b11134be77050296d6ba"},
+    {file = "MarkupSafe-2.1.3.tar.gz", hash = "sha256:af598ed32d6ae86f1b747b82783958b1a4ab8f617b06fe68795c7f026abbdcad"},
+]
+
 [[package]]
 name = "mccabe"
 version = "0.7.0"
@@ -404,6 +651,17 @@ files = [
     {file = "mypy_extensions-1.0.0.tar.gz", hash = "sha256:75dbf8955dc00442a438fc4d0666508a9a97b6bd41aa2f0ffe9d2f2725af0782"},
 ]
 
+[[package]]
+name = "packaging"
+version = "23.2"
+description = "Core utilities for Python packages"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "packaging-23.2-py3-none-any.whl", hash = "sha256:8c491190033a9af7e1d931d0b5dacc2ef47509b34dd0de67ed209b5203fc88c7"},
+    {file = "packaging-23.2.tar.gz", hash = "sha256:048fb0e9405036518eaaf48a55953c750c11e1a1b68e0dd1a9d62ed0c092cfc5"},
+]
+
 [[package]]
 name = "paramiko"
 version = "3.2.0"
@@ -515,6 +773,20 @@ files = [
     {file = "pyflakes-2.5.0.tar.gz", hash = "sha256:491feb020dca48ccc562a8c0cbe8df07ee13078df59813b83959cbdada312ea3"},
 ]
 
+[[package]]
+name = "pygments"
+version = "2.16.1"
+description = "Pygments is a syntax highlighting package written in Python."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "Pygments-2.16.1-py3-none-any.whl", hash = "sha256:13fc09fa63bc8d8671a6d247e1eb303c4b343eaee81d861f3404db2935653692"},
+    {file = "Pygments-2.16.1.tar.gz", hash = "sha256:1daff0494820c69bc8941e407aa20f577374ee88364ee10a98fdbe0aece96e29"},
+]
+
+[package.extras]
+plugins = ["importlib-metadata"]
+
 [[package]]
 name = "pylama"
 version = "8.4.1"
@@ -632,6 +904,27 @@ files = [
 attrs = ">=22.2.0"
 rpds-py = ">=0.7.0"
 
+[[package]]
+name = "requests"
+version = "2.31.0"
+description = "Python HTTP for Humans."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "requests-2.31.0-py3-none-any.whl", hash = "sha256:58cd2187c01e70e6e26505bca751777aa9f2ee0b7f4300988b709f44e013003f"},
+    {file = "requests-2.31.0.tar.gz", hash = "sha256:942c5a758f98d790eaed1a29cb6eefc7ffb0d1cf7af05c3d2791656dbd6ad1e1"},
+]
+
+[package.dependencies]
+certifi = ">=2017.4.17"
+charset-normalizer = ">=2,<4"
+idna = ">=2.5,<4"
+urllib3 = ">=1.21.1,<3"
+
+[package.extras]
+socks = ["PySocks (>=1.5.6,!=1.5.7)"]
+use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"]
+
 [[package]]
 name = "rpds-py"
 version = "0.9.2"
@@ -753,6 +1046,22 @@ basic = ["ipython"]
 complete = ["cryptography (>=2.0)", "ipython", "matplotlib", "pyx"]
 docs = ["sphinx (>=3.0.0)", "sphinx_rtd_theme (>=0.4.3)", "tox (>=3.0.0)"]
 
+[[package]]
+name = "setuptools"
+version = "68.2.2"
+description = "Easily download, build, install, upgrade, and uninstall Python packages"
+optional = false
+python-versions = ">=3.8"
+files = [
+    {file = "setuptools-68.2.2-py3-none-any.whl", hash = "sha256:b454a35605876da60632df1a60f736524eb73cc47bbc9f3f1ef1b644de74fd2a"},
+    {file = "setuptools-68.2.2.tar.gz", hash = "sha256:4ac1475276d2f1c48684874089fefcd83bd7162ddaafb81fac866ba0db282a87"},
+]
+
+[package.extras]
+docs = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "pygments-github-lexers (==0.0.5)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-favicon", "sphinx-hoverxref (<2)", "sphinx-inline-tabs", "sphinx-lint", "sphinx-notfound-page (>=1,<2)", "sphinx-reredirects", "sphinxcontrib-towncrier"]
+testing = ["build[virtualenv]", "filelock (>=3.4.0)", "flake8-2020", "ini2toml[lite] (>=0.9)", "jaraco.develop (>=7.21)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "pip (>=19.1)", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-mypy (>=0.9.1)", "pytest-perf", "pytest-ruff", "pytest-timeout", "pytest-xdist", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"]
+testing-integration = ["build[virtualenv] (>=1.0.3)", "filelock (>=3.4.0)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "packaging (>=23.1)", "pytest", "pytest-enabler", "pytest-xdist", "tomli", "virtualenv (>=13.0.0)", "wheel"]
+
 [[package]]
 name = "six"
 version = "1.16.0"
@@ -775,6 +1084,177 @@ files = [
     {file = "snowballstemmer-2.2.0.tar.gz", hash = "sha256:09b16deb8547d3412ad7b590689584cd0fe25ec8db3be37788be3810cbf19cb1"},
 ]
 
+[[package]]
+name = "sphinx"
+version = "6.2.1"
+description = "Python documentation generator"
+optional = false
+python-versions = ">=3.8"
+files = [
+    {file = "Sphinx-6.2.1.tar.gz", hash = "sha256:6d56a34697bb749ffa0152feafc4b19836c755d90a7c59b72bc7dfd371b9cc6b"},
+    {file = "sphinx-6.2.1-py3-none-any.whl", hash = "sha256:97787ff1fa3256a3eef9eda523a63dbf299f7b47e053cfcf684a1c2a8380c912"},
+]
+
+[package.dependencies]
+alabaster = ">=0.7,<0.8"
+babel = ">=2.9"
+colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""}
+docutils = ">=0.18.1,<0.20"
+imagesize = ">=1.3"
+Jinja2 = ">=3.0"
+packaging = ">=21.0"
+Pygments = ">=2.13"
+requests = ">=2.25.0"
+snowballstemmer = ">=2.0"
+sphinxcontrib-applehelp = "*"
+sphinxcontrib-devhelp = "*"
+sphinxcontrib-htmlhelp = ">=2.0.0"
+sphinxcontrib-jsmath = "*"
+sphinxcontrib-qthelp = "*"
+sphinxcontrib-serializinghtml = ">=1.1.5"
+
+[package.extras]
+docs = ["sphinxcontrib-websupport"]
+lint = ["docutils-stubs", "flake8 (>=3.5.0)", "flake8-simplify", "isort", "mypy (>=0.990)", "ruff", "sphinx-lint", "types-requests"]
+test = ["cython", "filelock", "html5lib", "pytest (>=4.6)"]
+
+[[package]]
+name = "sphinx-rtd-theme"
+version = "1.2.2"
+description = "Read the Docs theme for Sphinx"
+optional = false
+python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7"
+files = [
+    {file = "sphinx_rtd_theme-1.2.2-py2.py3-none-any.whl", hash = "sha256:6a7e7d8af34eb8fc57d52a09c6b6b9c46ff44aea5951bc831eeb9245378f3689"},
+    {file = "sphinx_rtd_theme-1.2.2.tar.gz", hash = "sha256:01c5c5a72e2d025bd23d1f06c59a4831b06e6ce6c01fdd5ebfe9986c0a880fc7"},
+]
+
+[package.dependencies]
+docutils = "<0.19"
+sphinx = ">=1.6,<7"
+sphinxcontrib-jquery = ">=4,<5"
+
+[package.extras]
+dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client", "wheel"]
+
+[[package]]
+name = "sphinxcontrib-applehelp"
+version = "1.0.7"
+description = "sphinxcontrib-applehelp is a Sphinx extension which outputs Apple help books"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_applehelp-1.0.7-py3-none-any.whl", hash = "sha256:094c4d56209d1734e7d252f6e0b3ccc090bd52ee56807a5d9315b19c122ab15d"},
+    {file = "sphinxcontrib_applehelp-1.0.7.tar.gz", hash = "sha256:39fdc8d762d33b01a7d8f026a3b7d71563ea3b72787d5f00ad8465bd9d6dfbfa"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-devhelp"
+version = "1.0.5"
+description = "sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp documents"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_devhelp-1.0.5-py3-none-any.whl", hash = "sha256:fe8009aed765188f08fcaadbb3ea0d90ce8ae2d76710b7e29ea7d047177dae2f"},
+    {file = "sphinxcontrib_devhelp-1.0.5.tar.gz", hash = "sha256:63b41e0d38207ca40ebbeabcf4d8e51f76c03e78cd61abe118cf4435c73d4212"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-htmlhelp"
+version = "2.0.4"
+description = "sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_htmlhelp-2.0.4-py3-none-any.whl", hash = "sha256:8001661c077a73c29beaf4a79968d0726103c5605e27db92b9ebed8bab1359e9"},
+    {file = "sphinxcontrib_htmlhelp-2.0.4.tar.gz", hash = "sha256:6c26a118a05b76000738429b724a0568dbde5b72391a688577da08f11891092a"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["html5lib", "pytest"]
+
+[[package]]
+name = "sphinxcontrib-jquery"
+version = "4.1"
+description = "Extension to include jQuery on newer Sphinx releases"
+optional = false
+python-versions = ">=2.7"
+files = [
+    {file = "sphinxcontrib-jquery-4.1.tar.gz", hash = "sha256:1620739f04e36a2c779f1a131a2dfd49b2fd07351bf1968ced074365933abc7a"},
+    {file = "sphinxcontrib_jquery-4.1-py2.py3-none-any.whl", hash = "sha256:f936030d7d0147dd026a4f2b5a57343d233f1fc7b363f68b3d4f1cb0993878ae"},
+]
+
+[package.dependencies]
+Sphinx = ">=1.8"
+
+[[package]]
+name = "sphinxcontrib-jsmath"
+version = "1.0.1"
+description = "A sphinx extension which renders display math in HTML via JavaScript"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "sphinxcontrib-jsmath-1.0.1.tar.gz", hash = "sha256:a9925e4a4587247ed2191a22df5f6970656cb8ca2bd6284309578f2153e0c4b8"},
+    {file = "sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl", hash = "sha256:2ec2eaebfb78f3f2078e73666b1415417a116cc848b72e5172e596c871103178"},
+]
+
+[package.extras]
+test = ["flake8", "mypy", "pytest"]
+
+[[package]]
+name = "sphinxcontrib-qthelp"
+version = "1.0.6"
+description = "sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp documents"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_qthelp-1.0.6-py3-none-any.whl", hash = "sha256:bf76886ee7470b934e363da7a954ea2825650013d367728588732c7350f49ea4"},
+    {file = "sphinxcontrib_qthelp-1.0.6.tar.gz", hash = "sha256:62b9d1a186ab7f5ee3356d906f648cacb7a6bdb94d201ee7adf26db55092982d"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-serializinghtml"
+version = "1.1.9"
+description = "sphinxcontrib-serializinghtml is a sphinx extension which outputs \"serialized\" HTML files (json and pickle)"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_serializinghtml-1.1.9-py3-none-any.whl", hash = "sha256:9b36e503703ff04f20e9675771df105e58aa029cfcbc23b8ed716019b7416ae1"},
+    {file = "sphinxcontrib_serializinghtml-1.1.9.tar.gz", hash = "sha256:0c64ff898339e1fac29abd2bf5f11078f3ec413cfe9c046d3120d7ca65530b54"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
 [[package]]
 name = "toml"
 version = "0.10.2"
@@ -819,6 +1299,23 @@ files = [
     {file = "typing_extensions-4.7.1.tar.gz", hash = "sha256:b75ddc264f0ba5615db7ba217daeb99701ad295353c45f9e95963337ceeeffb2"},
 ]
 
+[[package]]
+name = "urllib3"
+version = "2.0.7"
+description = "HTTP library with thread-safe connection pooling, file post, and more."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "urllib3-2.0.7-py3-none-any.whl", hash = "sha256:fdb6d215c776278489906c2f8916e6e7d4f5a9b602ccbcfdf7f016fc8da0596e"},
+    {file = "urllib3-2.0.7.tar.gz", hash = "sha256:c97dfde1f7bd43a71c8d2a58e369e9b2bf692d1334ea9f9cae55add7d0dd0f84"},
+]
+
+[package.extras]
+brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)"]
+secure = ["certifi", "cryptography (>=1.9)", "idna (>=2.0.0)", "pyopenssl (>=17.1.0)", "urllib3-secure-extra"]
+socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"]
+zstd = ["zstandard (>=0.18.0)"]
+
 [[package]]
 name = "warlock"
 version = "2.0.1"
@@ -837,4 +1334,4 @@ jsonschema = ">=4,<5"
 [metadata]
 lock-version = "2.0"
 python-versions = "^3.10"
-content-hash = "3501e97b3dadc19fe8ae179fe21b1edd2488001da9a8e86ff2bca0b86b99b89b"
+content-hash = "44c0fd4ebd7a1630ad104d55a25ef85d361abc968157254e757d44e340bca06f"
diff --git a/dts/pyproject.toml b/dts/pyproject.toml
index 37a692d655..28bd970ae4 100644
--- a/dts/pyproject.toml
+++ b/dts/pyproject.toml
@@ -35,6 +35,13 @@ pylama = "^8.4.1"
 pyflakes = "^2.5.0"
 toml = "^0.10.2"
 
+[tool.poetry.group.docs]
+optional = true
+
+[tool.poetry.group.docs.dependencies]
+sphinx = "<7"
+sphinx-rtd-theme = "^1.2.2"
+
 [build-system]
 requires = ["poetry-core>=1.0.0"]
 build-backend = "poetry.core.masonry.api"
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v3 2/3] dts: add API doc sources
  2024-01-22 16:35               ` [PATCH v3 0/3] dts: API docs generation Juraj Linkeš
  2024-01-22 16:35                 ` [PATCH v3 1/3] dts: add doc generation dependencies Juraj Linkeš
@ 2024-01-22 16:35                 ` Juraj Linkeš
  2024-01-22 16:35                 ` [PATCH v3 3/3] dts: add API doc generation Juraj Linkeš
  2 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2024-01-22 16:35 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

These sources could be generated with the sphinx-apidoc utility, but
that doesn't give us enough flexibility.

The sources included in this patch were in fact generated by said
utility, but modified to improve the look of the documentation. The
improvements are mainly in the toctree definitions and the titles of
the modules/packages; both were made with the specific Sphinx config
options used for the DTS docs in mind.
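
For reference, an invocation along these lines could have produced the
unmodified starting point (the exact flags are illustrative, not
recorded in this patch):

sphinx-apidoc --separate --module-first -o dts/doc dts/framework

The generated .rst files were then edited by hand as described above.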

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/doc/conf_yaml_schema.json                 |  1 +
 dts/doc/framework.config.rst                  | 12 ++++++
 dts/doc/framework.config.types.rst            |  6 +++
 dts/doc/framework.dts.rst                     |  6 +++
 dts/doc/framework.exception.rst               |  6 +++
 dts/doc/framework.logger.rst                  |  6 +++
 ...ote_session.interactive_remote_session.rst |  6 +++
 ...ework.remote_session.interactive_shell.rst |  6 +++
 .../framework.remote_session.python_shell.rst |  6 +++
 ...ramework.remote_session.remote_session.rst |  6 +++
 dts/doc/framework.remote_session.rst          | 17 ++++++++
 .../framework.remote_session.ssh_session.rst  |  6 +++
 ...framework.remote_session.testpmd_shell.rst |  6 +++
 dts/doc/framework.rst                         | 30 ++++++++++++++
 dts/doc/framework.settings.rst                |  6 +++
 dts/doc/framework.test_result.rst             |  6 +++
 dts/doc/framework.test_suite.rst              |  6 +++
 dts/doc/framework.testbed_model.cpu.rst       |  6 +++
 .../framework.testbed_model.linux_session.rst |  6 +++
 dts/doc/framework.testbed_model.node.rst      |  6 +++
 .../framework.testbed_model.os_session.rst    |  6 +++
 dts/doc/framework.testbed_model.port.rst      |  6 +++
 .../framework.testbed_model.posix_session.rst |  6 +++
 dts/doc/framework.testbed_model.rst           | 26 ++++++++++++
 dts/doc/framework.testbed_model.sut_node.rst  |  6 +++
 dts/doc/framework.testbed_model.tg_node.rst   |  6 +++
 ..._generator.capturing_traffic_generator.rst |  6 +++
 ...mework.testbed_model.traffic_generator.rst | 14 +++++++
 ....testbed_model.traffic_generator.scapy.rst |  6 +++
 ...el.traffic_generator.traffic_generator.rst |  6 +++
 ...framework.testbed_model.virtual_device.rst |  6 +++
 dts/doc/framework.utils.rst                   |  6 +++
 dts/doc/index.rst                             | 41 +++++++++++++++++++
 33 files changed, 297 insertions(+)
 create mode 120000 dts/doc/conf_yaml_schema.json
 create mode 100644 dts/doc/framework.config.rst
 create mode 100644 dts/doc/framework.config.types.rst
 create mode 100644 dts/doc/framework.dts.rst
 create mode 100644 dts/doc/framework.exception.rst
 create mode 100644 dts/doc/framework.logger.rst
 create mode 100644 dts/doc/framework.remote_session.interactive_remote_session.rst
 create mode 100644 dts/doc/framework.remote_session.interactive_shell.rst
 create mode 100644 dts/doc/framework.remote_session.python_shell.rst
 create mode 100644 dts/doc/framework.remote_session.remote_session.rst
 create mode 100644 dts/doc/framework.remote_session.rst
 create mode 100644 dts/doc/framework.remote_session.ssh_session.rst
 create mode 100644 dts/doc/framework.remote_session.testpmd_shell.rst
 create mode 100644 dts/doc/framework.rst
 create mode 100644 dts/doc/framework.settings.rst
 create mode 100644 dts/doc/framework.test_result.rst
 create mode 100644 dts/doc/framework.test_suite.rst
 create mode 100644 dts/doc/framework.testbed_model.cpu.rst
 create mode 100644 dts/doc/framework.testbed_model.linux_session.rst
 create mode 100644 dts/doc/framework.testbed_model.node.rst
 create mode 100644 dts/doc/framework.testbed_model.os_session.rst
 create mode 100644 dts/doc/framework.testbed_model.port.rst
 create mode 100644 dts/doc/framework.testbed_model.posix_session.rst
 create mode 100644 dts/doc/framework.testbed_model.rst
 create mode 100644 dts/doc/framework.testbed_model.sut_node.rst
 create mode 100644 dts/doc/framework.testbed_model.tg_node.rst
 create mode 100644 dts/doc/framework.testbed_model.traffic_generator.capturing_traffic_generator.rst
 create mode 100644 dts/doc/framework.testbed_model.traffic_generator.rst
 create mode 100644 dts/doc/framework.testbed_model.traffic_generator.scapy.rst
 create mode 100644 dts/doc/framework.testbed_model.traffic_generator.traffic_generator.rst
 create mode 100644 dts/doc/framework.testbed_model.virtual_device.rst
 create mode 100644 dts/doc/framework.utils.rst
 create mode 100644 dts/doc/index.rst

diff --git a/dts/doc/conf_yaml_schema.json b/dts/doc/conf_yaml_schema.json
new file mode 120000
index 0000000000..d89eb81b72
--- /dev/null
+++ b/dts/doc/conf_yaml_schema.json
@@ -0,0 +1 @@
+../framework/config/conf_yaml_schema.json
\ No newline at end of file
diff --git a/dts/doc/framework.config.rst b/dts/doc/framework.config.rst
new file mode 100644
index 0000000000..f765ef0e32
--- /dev/null
+++ b/dts/doc/framework.config.rst
@@ -0,0 +1,12 @@
+config - Configuration Package
+==============================
+
+.. automodule:: framework.config
+   :members:
+   :show-inheritance:
+
+.. toctree::
+   :hidden:
+   :maxdepth: 1
+
+   framework.config.types
diff --git a/dts/doc/framework.config.types.rst b/dts/doc/framework.config.types.rst
new file mode 100644
index 0000000000..5af915b681
--- /dev/null
+++ b/dts/doc/framework.config.types.rst
@@ -0,0 +1,6 @@
+types - Configuration Types
+===========================
+
+.. automodule:: framework.config.types
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.dts.rst b/dts/doc/framework.dts.rst
new file mode 100644
index 0000000000..b1de438887
--- /dev/null
+++ b/dts/doc/framework.dts.rst
@@ -0,0 +1,6 @@
+dts - Testbed Setup and Test Suite Runner
+=========================================
+
+.. automodule:: framework.dts
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.exception.rst b/dts/doc/framework.exception.rst
new file mode 100644
index 0000000000..ad58bd15de
--- /dev/null
+++ b/dts/doc/framework.exception.rst
@@ -0,0 +1,6 @@
+exception - Exceptions
+======================
+
+.. automodule:: framework.exception
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.logger.rst b/dts/doc/framework.logger.rst
new file mode 100644
index 0000000000..3c25b34819
--- /dev/null
+++ b/dts/doc/framework.logger.rst
@@ -0,0 +1,6 @@
+logger - Logging Facility
+=========================
+
+.. automodule:: framework.logger
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.remote_session.interactive_remote_session.rst b/dts/doc/framework.remote_session.interactive_remote_session.rst
new file mode 100644
index 0000000000..35dc5c4b03
--- /dev/null
+++ b/dts/doc/framework.remote_session.interactive_remote_session.rst
@@ -0,0 +1,6 @@
+interactive\_remote\_session - SSH Interactive Remote Session
+=============================================================
+
+.. automodule:: framework.remote_session.interactive_remote_session
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.remote_session.interactive_shell.rst b/dts/doc/framework.remote_session.interactive_shell.rst
new file mode 100644
index 0000000000..8a59db7b6e
--- /dev/null
+++ b/dts/doc/framework.remote_session.interactive_shell.rst
@@ -0,0 +1,6 @@
+interactive\_shell - Base Interactive Remote Shell
+==================================================
+
+.. automodule:: framework.remote_session.interactive_shell
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.remote_session.python_shell.rst b/dts/doc/framework.remote_session.python_shell.rst
new file mode 100644
index 0000000000..a8ec06f281
--- /dev/null
+++ b/dts/doc/framework.remote_session.python_shell.rst
@@ -0,0 +1,6 @@
+python\_shell - Python Interactive Remote Shell
+===============================================
+
+.. automodule:: framework.remote_session.python_shell
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.remote_session.remote_session.rst b/dts/doc/framework.remote_session.remote_session.rst
new file mode 100644
index 0000000000..58b0960d07
--- /dev/null
+++ b/dts/doc/framework.remote_session.remote_session.rst
@@ -0,0 +1,6 @@
+remote\_session - Remote Session ABC
+====================================
+
+.. automodule:: framework.remote_session.remote_session
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.remote_session.rst b/dts/doc/framework.remote_session.rst
new file mode 100644
index 0000000000..74f83f0307
--- /dev/null
+++ b/dts/doc/framework.remote_session.rst
@@ -0,0 +1,17 @@
+remote\_session - Node Connections Package
+==========================================
+
+.. automodule:: framework.remote_session
+   :members:
+   :show-inheritance:
+
+.. toctree::
+   :hidden:
+   :maxdepth: 1
+
+   framework.remote_session.remote_session
+   framework.remote_session.ssh_session
+   framework.remote_session.interactive_remote_session
+   framework.remote_session.interactive_shell
+   framework.remote_session.testpmd_shell
+   framework.remote_session.python_shell
diff --git a/dts/doc/framework.remote_session.ssh_session.rst b/dts/doc/framework.remote_session.ssh_session.rst
new file mode 100644
index 0000000000..05b019bc7c
--- /dev/null
+++ b/dts/doc/framework.remote_session.ssh_session.rst
@@ -0,0 +1,6 @@
+ssh\_session - SSH Remote Session
+=================================
+
+.. automodule:: framework.remote_session.ssh_session
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.remote_session.testpmd_shell.rst b/dts/doc/framework.remote_session.testpmd_shell.rst
new file mode 100644
index 0000000000..14510afb2b
--- /dev/null
+++ b/dts/doc/framework.remote_session.testpmd_shell.rst
@@ -0,0 +1,6 @@
+testpmd\_shell - Testpmd Interactive Remote Shell
+=================================================
+
+.. automodule:: framework.remote_session.testpmd_shell
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.rst b/dts/doc/framework.rst
new file mode 100644
index 0000000000..978d5b5e38
--- /dev/null
+++ b/dts/doc/framework.rst
@@ -0,0 +1,30 @@
+framework - DTS Libraries
+=========================
+
+.. automodule:: framework
+   :members:
+   :show-inheritance:
+
+Packages
+--------
+
+.. toctree::
+   :maxdepth: 3
+
+   framework.config
+   framework.remote_session
+   framework.testbed_model
+
+Modules
+-------
+
+.. toctree::
+   :maxdepth: 1
+
+   framework.dts
+   framework.exception
+   framework.logger
+   framework.settings
+   framework.test_result
+   framework.test_suite
+   framework.utils
diff --git a/dts/doc/framework.settings.rst b/dts/doc/framework.settings.rst
new file mode 100644
index 0000000000..96bf194923
--- /dev/null
+++ b/dts/doc/framework.settings.rst
@@ -0,0 +1,6 @@
+settings - Command Line Arguments and Environment Variables
+===========================================================
+
+.. automodule:: framework.settings
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.test_result.rst b/dts/doc/framework.test_result.rst
new file mode 100644
index 0000000000..527357a04a
--- /dev/null
+++ b/dts/doc/framework.test_result.rst
@@ -0,0 +1,6 @@
+test\_result - Test Results Records
+===================================
+
+.. automodule:: framework.test_result
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.test_suite.rst b/dts/doc/framework.test_suite.rst
new file mode 100644
index 0000000000..96f893e465
--- /dev/null
+++ b/dts/doc/framework.test_suite.rst
@@ -0,0 +1,6 @@
+test\_suite - Common Test Suite Features
+========================================
+
+.. automodule:: framework.test_suite
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.cpu.rst b/dts/doc/framework.testbed_model.cpu.rst
new file mode 100644
index 0000000000..dd2baf09fb
--- /dev/null
+++ b/dts/doc/framework.testbed_model.cpu.rst
@@ -0,0 +1,6 @@
+cpu - CPU Representation and Utilities
+======================================
+
+.. automodule:: framework.testbed_model.cpu
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.linux_session.rst b/dts/doc/framework.testbed_model.linux_session.rst
new file mode 100644
index 0000000000..141f3f49e3
--- /dev/null
+++ b/dts/doc/framework.testbed_model.linux_session.rst
@@ -0,0 +1,6 @@
+linux\_session - Linux Remote Session
+=====================================
+
+.. automodule:: framework.testbed_model.linux_session
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.node.rst b/dts/doc/framework.testbed_model.node.rst
new file mode 100644
index 0000000000..2133dd604b
--- /dev/null
+++ b/dts/doc/framework.testbed_model.node.rst
@@ -0,0 +1,6 @@
+node - Base Node
+================
+
+.. automodule:: framework.testbed_model.node
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.os_session.rst b/dts/doc/framework.testbed_model.os_session.rst
new file mode 100644
index 0000000000..f3574e939a
--- /dev/null
+++ b/dts/doc/framework.testbed_model.os_session.rst
@@ -0,0 +1,6 @@
+os\_session - OS-aware Remote Session ABC
+=========================================
+
+.. automodule:: framework.testbed_model.os_session
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.port.rst b/dts/doc/framework.testbed_model.port.rst
new file mode 100644
index 0000000000..17bd391e63
--- /dev/null
+++ b/dts/doc/framework.testbed_model.port.rst
@@ -0,0 +1,6 @@
+port - NIC Port Representation
+==============================
+
+.. automodule:: framework.testbed_model.port
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.posix_session.rst b/dts/doc/framework.testbed_model.posix_session.rst
new file mode 100644
index 0000000000..308c051ae5
--- /dev/null
+++ b/dts/doc/framework.testbed_model.posix_session.rst
@@ -0,0 +1,6 @@
+posix\_session - POSIX Remote Session
+=====================================
+
+.. automodule:: framework.testbed_model.posix_session
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.rst b/dts/doc/framework.testbed_model.rst
new file mode 100644
index 0000000000..4b024e47e6
--- /dev/null
+++ b/dts/doc/framework.testbed_model.rst
@@ -0,0 +1,26 @@
+testbed\_model - Testbed Modelling Package
+==========================================
+
+.. automodule:: framework.testbed_model
+   :members:
+   :show-inheritance:
+
+.. toctree::
+   :hidden:
+   :maxdepth: 2
+
+   framework.testbed_model.traffic_generator
+
+.. toctree::
+   :hidden:
+   :maxdepth: 1
+
+   framework.testbed_model.os_session
+   framework.testbed_model.linux_session
+   framework.testbed_model.posix_session
+   framework.testbed_model.node
+   framework.testbed_model.sut_node
+   framework.testbed_model.tg_node
+   framework.testbed_model.cpu
+   framework.testbed_model.port
+   framework.testbed_model.virtual_device
diff --git a/dts/doc/framework.testbed_model.sut_node.rst b/dts/doc/framework.testbed_model.sut_node.rst
new file mode 100644
index 0000000000..7e12b6c87e
--- /dev/null
+++ b/dts/doc/framework.testbed_model.sut_node.rst
@@ -0,0 +1,6 @@
+sut\_node - System Under Test Node
+==================================
+
+.. automodule:: framework.testbed_model.sut_node
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.tg_node.rst b/dts/doc/framework.testbed_model.tg_node.rst
new file mode 100644
index 0000000000..41206c000b
--- /dev/null
+++ b/dts/doc/framework.testbed_model.tg_node.rst
@@ -0,0 +1,6 @@
+tg\_node - Traffic Generator Node
+=================================
+
+.. automodule:: framework.testbed_model.tg_node
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.traffic_generator.capturing_traffic_generator.rst b/dts/doc/framework.testbed_model.traffic_generator.capturing_traffic_generator.rst
new file mode 100644
index 0000000000..06c087155a
--- /dev/null
+++ b/dts/doc/framework.testbed_model.traffic_generator.capturing_traffic_generator.rst
@@ -0,0 +1,6 @@
+capturing\_traffic\_generator - Base Capturing TG ABC
+=====================================================
+
+.. automodule:: framework.testbed_model.traffic_generator.capturing_traffic_generator
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.traffic_generator.rst b/dts/doc/framework.testbed_model.traffic_generator.rst
new file mode 100644
index 0000000000..18b6f1b98b
--- /dev/null
+++ b/dts/doc/framework.testbed_model.traffic_generator.rst
@@ -0,0 +1,14 @@
+traffic\_generator Subpackage
+=============================
+
+.. automodule:: framework.testbed_model.traffic_generator
+   :members:
+   :show-inheritance:
+
+.. toctree::
+   :hidden:
+   :maxdepth: 1
+
+   framework.testbed_model.traffic_generator.traffic_generator
+   framework.testbed_model.traffic_generator.capturing_traffic_generator
+   framework.testbed_model.traffic_generator.scapy
diff --git a/dts/doc/framework.testbed_model.traffic_generator.scapy.rst b/dts/doc/framework.testbed_model.traffic_generator.scapy.rst
new file mode 100644
index 0000000000..7062914ec3
--- /dev/null
+++ b/dts/doc/framework.testbed_model.traffic_generator.scapy.rst
@@ -0,0 +1,6 @@
+scapy - Capturing Traffic Generator
+===================================
+
+.. automodule:: framework.testbed_model.traffic_generator.scapy
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.traffic_generator.traffic_generator.rst b/dts/doc/framework.testbed_model.traffic_generator.traffic_generator.rst
new file mode 100644
index 0000000000..e366d7f222
--- /dev/null
+++ b/dts/doc/framework.testbed_model.traffic_generator.traffic_generator.rst
@@ -0,0 +1,6 @@
+traffic\_generator - Base TG ABC
+================================
+
+.. automodule:: framework.testbed_model.traffic_generator.traffic_generator
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.virtual_device.rst b/dts/doc/framework.testbed_model.virtual_device.rst
new file mode 100644
index 0000000000..38e6c1d0bc
--- /dev/null
+++ b/dts/doc/framework.testbed_model.virtual_device.rst
@@ -0,0 +1,6 @@
+virtual\_device - Virtual Devices
+=================================
+
+.. automodule:: framework.testbed_model.virtual_device
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.utils.rst b/dts/doc/framework.utils.rst
new file mode 100644
index 0000000000..0e7bb80666
--- /dev/null
+++ b/dts/doc/framework.utils.rst
@@ -0,0 +1,6 @@
+utils - Various Utilities
+=========================
+
+.. automodule:: framework.utils
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/index.rst b/dts/doc/index.rst
new file mode 100644
index 0000000000..fc3b6d78b9
--- /dev/null
+++ b/dts/doc/index.rst
@@ -0,0 +1,41 @@
+.. DPDK Test Suite documentation.
+
+Welcome to DPDK Test Suite's API documentation!
+===============================================
+
+.. automodule:: framework
+   :members:
+   :show-inheritance:
+
+Packages
+--------
+
+.. toctree::
+   :includehidden:
+   :maxdepth: 1
+
+   framework.config
+   framework.remote_session
+   framework.testbed_model
+
+Modules
+-------
+
+.. toctree::
+   :maxdepth: 1
+
+   framework.dts
+   framework.exception
+   framework.logger
+   framework.settings
+   framework.test_result
+   framework.test_suite
+   framework.utils
+
+
+Indices and tables
+==================
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v3 3/3] dts: add API doc generation
  2024-01-22 16:35               ` [PATCH v3 0/3] dts: API docs generation Juraj Linkeš
  2024-01-22 16:35                 ` [PATCH v3 1/3] dts: add doc generation dependencies Juraj Linkeš
  2024-01-22 16:35                 ` [PATCH v3 2/3] dts: add API doc sources Juraj Linkeš
@ 2024-01-22 16:35                 ` Juraj Linkeš
  2024-01-29 17:09                   ` Jeremy Spewock
       [not found]                   ` <CAJvnSUCNjo0p-yhROF1MNLKhjiAw2QTyTHO2hpOaVVUn0xnJ0A@mail.gmail.com>
  2 siblings, 2 replies; 255+ messages in thread
From: Juraj Linkeš @ 2024-01-22 16:35 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, yoan.picchi, Luca.Vizzarro
  Cc: dev, Juraj Linkeš

The tool used to generate developer docs is Sphinx, which is already in
use in DPDK. The same configuration is used to preserve the style, but
it's been augmented with doc-generating configuration. There's also a
change that modifies how the sidebar displays the content hierarchy;
it's been put into an if block so that it doesn't interfere with the
regular docs.

Sphinx generates the documentation from Python docstrings. The docstring
format is the Google format [0], which requires the sphinx.ext.napoleon
extension. The other extension, sphinx.ext.intersphinx, enables linking
to objects in external documentation, such as the Python docs.

There are two requirements for building DTS docs:
* The same Python version as DTS or higher, because Sphinx imports the
  code.
* Also the same Python packages as DTS, for the same reason.

[0] https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings
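
As a quick illustration, napoleon parses Google-format docstrings such
as the following (a made-up function, not taken from the DTS code):

  # Hypothetical example for illustration only.
  def get_lcores(node: "Node") -> list[int]:
      """Get the logical cores of a node.

      Args:
          node: The node to query.

      Returns:
          The IDs of the node's logical cores.
      """
      return node.lcores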

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 buildtools/call-sphinx-build.py | 33 +++++++++++++++++++---------
 doc/api/doxy-api-index.md       |  3 +++
 doc/api/doxy-api.conf.in        |  2 ++
 doc/api/meson.build             | 11 +++++++---
 doc/guides/conf.py              | 39 ++++++++++++++++++++++++++++-----
 doc/guides/meson.build          |  1 +
 doc/guides/tools/dts.rst        | 34 +++++++++++++++++++++++++++-
 dts/doc/meson.build             | 27 +++++++++++++++++++++++
 dts/meson.build                 | 16 ++++++++++++++
 meson.build                     |  1 +
 10 files changed, 148 insertions(+), 19 deletions(-)
 create mode 100644 dts/doc/meson.build
 create mode 100644 dts/meson.build

diff --git a/buildtools/call-sphinx-build.py b/buildtools/call-sphinx-build.py
index 39a60d09fa..aea771a64e 100755
--- a/buildtools/call-sphinx-build.py
+++ b/buildtools/call-sphinx-build.py
@@ -3,37 +3,50 @@
 # Copyright(c) 2019 Intel Corporation
 #
 
+import argparse
 import sys
 import os
 from os.path import join
 from subprocess import run, PIPE, STDOUT
 from packaging.version import Version
 
-# assign parameters to variables
-(sphinx, version, src, dst, *extra_args) = sys.argv[1:]
+parser = argparse.ArgumentParser()
+parser.add_argument('sphinx')
+parser.add_argument('version')
+parser.add_argument('src')
+parser.add_argument('dst')
+parser.add_argument('--dts-root', default=None)
+args, extra_args = parser.parse_known_args()
 
 # set the version in environment for sphinx to pick up
-os.environ['DPDK_VERSION'] = version
+os.environ['DPDK_VERSION'] = args.version
+if args.dts_root:
+    os.environ['DTS_ROOT'] = args.dts_root
 
 # for sphinx version >= 1.7 add parallelism using "-j auto"
-ver = run([sphinx, '--version'], stdout=PIPE,
+ver = run([args.sphinx, '--version'], stdout=PIPE,
           stderr=STDOUT).stdout.decode().split()[-1]
-sphinx_cmd = [sphinx] + extra_args
+sphinx_cmd = [args.sphinx] + extra_args
 if Version(ver) >= Version('1.7'):
     sphinx_cmd += ['-j', 'auto']
 
 # find all the files sphinx will process so we can write them as dependencies
 srcfiles = []
-for root, dirs, files in os.walk(src):
+for root, dirs, files in os.walk(args.src):
     srcfiles.extend([join(root, f) for f in files])
 
+if not os.path.exists(args.dst):
+    os.makedirs(args.dst)
+
 # run sphinx, putting the html output in a "html" directory
-with open(join(dst, 'sphinx_html.out'), 'w') as out:
-    process = run(sphinx_cmd + ['-b', 'html', src, join(dst, 'html')],
-                  stdout=out)
+with open(join(args.dst, 'sphinx_html.out'), 'w') as out:
+    process = run(
+        sphinx_cmd + ['-b', 'html', args.src, join(args.dst, 'html')],
+        stdout=out
+    )
 
 # create a gcc format .d file giving all the dependencies of this doc build
-with open(join(dst, '.html.d'), 'w') as d:
+with open(join(args.dst, '.html.d'), 'w') as d:
     d.write('html: ' + ' '.join(srcfiles) + '\n')
 
 sys.exit(process.returncode)
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index a6a768bd7c..b49b24acce 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -241,3 +241,6 @@ The public API headers are grouped by topics:
   [experimental APIs](@ref rte_compat.h),
   [ABI versioning](@ref rte_function_versioning.h),
   [version](@ref rte_version.h)
+
+- **tests**:
+  [**DTS**](@dts_api_main_page)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index e94c9e4e46..d53edeba57 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -121,6 +121,8 @@ SEARCHENGINE            = YES
 SORT_MEMBER_DOCS        = NO
 SOURCE_BROWSER          = YES
 
+ALIASES                 = "dts_api_main_page=@DTS_API_MAIN_PAGE@"
+
 EXAMPLE_PATH            = @TOPDIR@/examples
 EXAMPLE_PATTERNS        = *.c
 EXAMPLE_RECURSIVE       = YES
diff --git a/doc/api/meson.build b/doc/api/meson.build
index 5b50692df9..ffc75d7b5a 100644
--- a/doc/api/meson.build
+++ b/doc/api/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Luca Boccassi <bluca@debian.org>
 
+doc_api_build_dir = meson.current_build_dir()
 doxygen = find_program('doxygen', required: get_option('enable_docs'))
 
 if not doxygen.found()
@@ -32,14 +33,18 @@ example = custom_target('examples.dox',
 # set up common Doxygen configuration
 cdata = configuration_data()
 cdata.set('VERSION', meson.project_version())
-cdata.set('API_EXAMPLES', join_paths(dpdk_build_root, 'doc', 'api', 'examples.dox'))
-cdata.set('OUTPUT', join_paths(dpdk_build_root, 'doc', 'api'))
+cdata.set('API_EXAMPLES', join_paths(doc_api_build_dir, 'examples.dox'))
+cdata.set('OUTPUT', doc_api_build_dir)
 cdata.set('TOPDIR', dpdk_source_root)
-cdata.set('STRIP_FROM_PATH', ' '.join([dpdk_source_root, join_paths(dpdk_build_root, 'doc', 'api')]))
+cdata.set('STRIP_FROM_PATH', ' '.join([dpdk_source_root, doc_api_build_dir]))
 cdata.set('WARN_AS_ERROR', 'NO')
 if get_option('werror')
     cdata.set('WARN_AS_ERROR', 'YES')
 endif
+# A local reference must be relative to the main index.html page
+# The path below can't be taken from the DTS meson file as that would
+# require recursive subdir traversal (doc, dts, then doc again)
+cdata.set('DTS_API_MAIN_PAGE', join_paths('..', 'dts', 'html', 'index.html'))
 
 # configure HTML Doxygen run
 html_cdata = configuration_data()
diff --git a/doc/guides/conf.py b/doc/guides/conf.py
index 0f7ff5282d..b442a1f76c 100644
--- a/doc/guides/conf.py
+++ b/doc/guides/conf.py
@@ -7,10 +7,9 @@
 from sphinx import __version__ as sphinx_version
 from os import listdir
 from os import environ
-from os.path import basename
-from os.path import dirname
+from os.path import basename, dirname
 from os.path import join as path_join
-from sys import argv, stderr
+from sys import argv, stderr, path
 
 import configparser
 
@@ -24,6 +23,37 @@
           file=stderr)
     pass
 
+# Napoleon enables the Google format of Python docstrings, used in DTS
+# Intersphinx allows linking to external projects, such as Python docs, also used in DTS
+extensions = ['sphinx.ext.napoleon', 'sphinx.ext.intersphinx']
+
+# DTS Python docstring options
+autodoc_default_options = {
+    'members': True,
+    'member-order': 'bysource',
+    'show-inheritance': True,
+}
+autodoc_class_signature = 'separated'
+autodoc_typehints = 'both'
+autodoc_typehints_format = 'short'
+autodoc_typehints_description_target = 'documented'
+napoleon_numpy_docstring = False
+napoleon_attr_annotations = True
+napoleon_preprocess_types = True
+add_module_names = False
+toc_object_entries = True
+toc_object_entries_show_parents = 'hide'
+intersphinx_mapping = {'python': ('https://docs.python.org/3', None)}
+
+dts_root = environ.get('DTS_ROOT')
+if dts_root:
+    path.append(dts_root)
+    # DTS Sidebar config
+    html_theme_options = {
+        'collapse_navigation': False,
+        'navigation_depth': -1,
+    }
+
 stop_on_error = ('-W' in argv)
 
 project = 'Data Plane Development Kit'
@@ -35,8 +65,7 @@
 html_show_copyright = False
 highlight_language = 'none'
 
-release = environ.setdefault('DPDK_VERSION', "None")
-version = release
+version = environ.setdefault('DPDK_VERSION', "None")
 
 master_doc = 'index'
 
diff --git a/doc/guides/meson.build b/doc/guides/meson.build
index 51f81da2e3..8933d75f6b 100644
--- a/doc/guides/meson.build
+++ b/doc/guides/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Intel Corporation
 
+doc_guides_source_dir = meson.current_source_dir()
 sphinx = find_program('sphinx-build', required: get_option('enable_docs'))
 
 if not sphinx.found()
diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
index 846696e14e..21d3d89fc2 100644
--- a/doc/guides/tools/dts.rst
+++ b/doc/guides/tools/dts.rst
@@ -278,7 +278,12 @@ and try not to divert much from it.
 The :ref:`DTS developer tools <dts_dev_tools>` will issue warnings
 when some of the basics are not met.
 
-The code must be properly documented with docstrings.
+The API documentation, which is a helpful reference when developing, may be accessed
+in the code directly or generated with the :ref:`API docs build steps <building_api_docs>`.
+When adding new files or modifying the directory structure, the corresponding changes must
+be made to the DTS API doc sources in ``dts/doc``.
+
+To that end, the code must be properly documented with docstrings.
 The style must conform to the `Google style
 <https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings>`_.
 See an example of the style `here
@@ -413,6 +418,33 @@ the DTS code check and format script.
 Refer to the script for usage: ``devtools/dts-check-format.sh -h``.
 
 
+.. _building_api_docs:
+
+Building DTS API docs
+---------------------
+
+To build DTS API docs, install the dependencies with Poetry, then enter its shell:
+
+.. code-block:: console
+
+   poetry install --with docs
+   poetry shell
+
+The documentation is built using the standard DPDK build system. After executing the meson command
+and entering Poetry's shell, build the documentation with:
+
+.. code-block:: console
+
+   ninja -C build dts-doc
+
+The output is generated in ``build/doc/api/dts/html``.
+
+.. Note::
+
+   Make sure to fix any Sphinx warnings when adding or updating docstrings. Also make sure to run
+   the ``devtools/dts-check-format.sh`` script and address any issues it finds.
+
+
 Configuration Schema
 --------------------
 
diff --git a/dts/doc/meson.build b/dts/doc/meson.build
new file mode 100644
index 0000000000..01b7b51034
--- /dev/null
+++ b/dts/doc/meson.build
@@ -0,0 +1,27 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+sphinx = find_program('sphinx-build', required: false)
+sphinx_apidoc = find_program('sphinx-apidoc', required: false)
+
+if not sphinx.found() or not sphinx_apidoc.found()
+    subdir_done()
+endif
+
+dts_doc_api_build_dir = join_paths(doc_api_build_dir, 'dts')
+
+extra_sphinx_args = ['-E', '-c', doc_guides_source_dir, '--dts-root', dts_dir]
+if get_option('werror')
+    extra_sphinx_args += '-W'
+endif
+
+htmldir = join_paths(get_option('datadir'), 'doc', 'dpdk', 'dts')
+dts_api_html = custom_target('dts_api_html',
+        output: 'html',
+        command: [sphinx_wrapper, sphinx, meson.project_version(),
+            meson.current_source_dir(), dts_doc_api_build_dir, extra_sphinx_args],
+        build_by_default: false,
+        install: get_option('enable_docs'),
+        install_dir: htmldir)
+doc_targets += dts_api_html
+doc_target_names += 'DTS_API_HTML'
diff --git a/dts/meson.build b/dts/meson.build
new file mode 100644
index 0000000000..e8ce0f06ac
--- /dev/null
+++ b/dts/meson.build
@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+doc_targets = []
+doc_target_names = []
+dts_dir = meson.current_source_dir()
+
+subdir('doc')
+
+if doc_targets.length() == 0
+    message = 'No docs targets found'
+else
+    message = 'Built docs:'
+endif
+run_target('dts-doc', command: [echo, message, doc_target_names],
+    depends: doc_targets)
diff --git a/meson.build b/meson.build
index 5e161f43e5..001fdcbbbf 100644
--- a/meson.build
+++ b/meson.build
@@ -87,6 +87,7 @@ subdir('app')
 
 # build docs
 subdir('doc')
+subdir('dts')
 
 # build any examples explicitly requested - useful for developers - and
 # install any example code into the appropriate install path
-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v3 3/3] dts: add API doc generation
  2024-01-22 16:35                 ` [PATCH v3 3/3] dts: add API doc generation Juraj Linkeš
@ 2024-01-29 17:09                   ` Jeremy Spewock
       [not found]                   ` <CAJvnSUCNjo0p-yhROF1MNLKhjiAw2QTyTHO2hpOaVVUn0xnJ0A@mail.gmail.com>
  1 sibling, 0 replies; 255+ messages in thread
From: Jeremy Spewock @ 2024-01-29 17:09 UTC (permalink / raw)
  To: Juraj Linkeš
  Cc: thomas, Honnappa.Nagarahalli, bruce.richardson, probb,
	paul.szczepanek, yoan.picchi, Luca.Vizzarro, dev

On Mon, Jan 22, 2024 at 11:35 AM Juraj Linkeš
<juraj.linkes@pantheon.tech> wrote:
>
> The tool used to generate developer docs is Sphinx, which is already in
> use in DPDK. The same configuration is used to preserve the style, but
> it's been augmented with doc-generating configuration. There's also a
> change that modifies how the sidebar displays the content hierarchy;
> it's been put into an if block so that it doesn't interfere with the
> regular docs.
>
> Sphinx generates the documentation from Python docstrings. The docstring
> format is the Google format [0], which requires the sphinx.ext.napoleon
> extension. The other extension, sphinx.ext.intersphinx, enables linking
> to objects in external documentation, such as the Python docs.
>
> There are two requirements for building DTS docs:
> * The same Python version as DTS or higher, because Sphinx imports the
>   code.
> * Also the same Python packages as DTS, for the same reason.
>
> [0] https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings
>
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
>  buildtools/call-sphinx-build.py | 33 +++++++++++++++++++---------
>  doc/api/doxy-api-index.md       |  3 +++
>  doc/api/doxy-api.conf.in        |  2 ++
>  doc/api/meson.build             | 11 +++++++---
>  doc/guides/conf.py              | 39 ++++++++++++++++++++++++++++-----
>  doc/guides/meson.build          |  1 +
>  doc/guides/tools/dts.rst        | 34 +++++++++++++++++++++++++++-
>  dts/doc/meson.build             | 27 +++++++++++++++++++++++
>  dts/meson.build                 | 16 ++++++++++++++
>  meson.build                     |  1 +
>  10 files changed, 148 insertions(+), 19 deletions(-)
>  create mode 100644 dts/doc/meson.build
>  create mode 100644 dts/meson.build
>
> diff --git a/buildtools/call-sphinx-build.py b/buildtools/call-sphinx-build.py
> index 39a60d09fa..aea771a64e 100755
> --- a/buildtools/call-sphinx-build.py
> +++ b/buildtools/call-sphinx-build.py
> @@ -3,37 +3,50 @@
>  # Copyright(c) 2019 Intel Corporation
>  #
>
> +import argparse
>  import sys
>  import os
>  from os.path import join
>  from subprocess import run, PIPE, STDOUT
>  from packaging.version import Version
>
> -# assign parameters to variables
> -(sphinx, version, src, dst, *extra_args) = sys.argv[1:]
> +parser = argparse.ArgumentParser()
> +parser.add_argument('sphinx')
> +parser.add_argument('version')
> +parser.add_argument('src')
> +parser.add_argument('dst')
> +parser.add_argument('--dts-root', default=None)
> +args, extra_args = parser.parse_known_args()
>
>  # set the version in environment for sphinx to pick up
> -os.environ['DPDK_VERSION'] = version
> +os.environ['DPDK_VERSION'] = args.version
> +if args.dts_root:
> +    os.environ['DTS_ROOT'] = args.dts_root
>
>  # for sphinx version >= 1.7 add parallelism using "-j auto"
> -ver = run([sphinx, '--version'], stdout=PIPE,
> +ver = run([args.sphinx, '--version'], stdout=PIPE,
>            stderr=STDOUT).stdout.decode().split()[-1]
> -sphinx_cmd = [sphinx] + extra_args
> +sphinx_cmd = [args.sphinx] + extra_args
>  if Version(ver) >= Version('1.7'):
>      sphinx_cmd += ['-j', 'auto']
>
>  # find all the files sphinx will process so we can write them as dependencies
>  srcfiles = []
> -for root, dirs, files in os.walk(src):
> +for root, dirs, files in os.walk(args.src):
>      srcfiles.extend([join(root, f) for f in files])
>
> +if not os.path.exists(args.dst):
> +    os.makedirs(args.dst)
> +
>  # run sphinx, putting the html output in a "html" directory
> -with open(join(dst, 'sphinx_html.out'), 'w') as out:
> -    process = run(sphinx_cmd + ['-b', 'html', src, join(dst, 'html')],
> -                  stdout=out)
> +with open(join(args.dst, 'sphinx_html.out'), 'w') as out:
> +    process = run(
> +        sphinx_cmd + ['-b', 'html', args.src, join(args.dst, 'html')],
> +        stdout=out
> +    )
>
>  # create a gcc format .d file giving all the dependencies of this doc build
> -with open(join(dst, '.html.d'), 'w') as d:
> +with open(join(args.dst, '.html.d'), 'w') as d:
>      d.write('html: ' + ' '.join(srcfiles) + '\n')
>
>  sys.exit(process.returncode)
> diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
> index a6a768bd7c..b49b24acce 100644
> --- a/doc/api/doxy-api-index.md
> +++ b/doc/api/doxy-api-index.md
> @@ -241,3 +241,6 @@ The public API headers are grouped by topics:
>    [experimental APIs](@ref rte_compat.h),
>    [ABI versioning](@ref rte_function_versioning.h),
>    [version](@ref rte_version.h)
> +
> +- **tests**:
> +  [**DTS**](@dts_api_main_page)
> diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
> index e94c9e4e46..d53edeba57 100644
> --- a/doc/api/doxy-api.conf.in
> +++ b/doc/api/doxy-api.conf.in
> @@ -121,6 +121,8 @@ SEARCHENGINE            = YES
>  SORT_MEMBER_DOCS        = NO
>  SOURCE_BROWSER          = YES
>
> +ALIASES                 = "dts_api_main_page=@DTS_API_MAIN_PAGE@"
> +
>  EXAMPLE_PATH            = @TOPDIR@/examples
>  EXAMPLE_PATTERNS        = *.c
>  EXAMPLE_RECURSIVE       = YES
> diff --git a/doc/api/meson.build b/doc/api/meson.build
> index 5b50692df9..ffc75d7b5a 100644
> --- a/doc/api/meson.build
> +++ b/doc/api/meson.build
> @@ -1,6 +1,7 @@
>  # SPDX-License-Identifier: BSD-3-Clause
>  # Copyright(c) 2018 Luca Boccassi <bluca@debian.org>
>
> +doc_api_build_dir = meson.current_build_dir()
>  doxygen = find_program('doxygen', required: get_option('enable_docs'))
>
>  if not doxygen.found()
> @@ -32,14 +33,18 @@ example = custom_target('examples.dox',
>  # set up common Doxygen configuration
>  cdata = configuration_data()
>  cdata.set('VERSION', meson.project_version())
> -cdata.set('API_EXAMPLES', join_paths(dpdk_build_root, 'doc', 'api', 'examples.dox'))
> -cdata.set('OUTPUT', join_paths(dpdk_build_root, 'doc', 'api'))
> +cdata.set('API_EXAMPLES', join_paths(doc_api_build_dir, 'examples.dox'))
> +cdata.set('OUTPUT', doc_api_build_dir)
>  cdata.set('TOPDIR', dpdk_source_root)
> -cdata.set('STRIP_FROM_PATH', ' '.join([dpdk_source_root, join_paths(dpdk_build_root, 'doc', 'api')]))
> +cdata.set('STRIP_FROM_PATH', ' '.join([dpdk_source_root, doc_api_build_dir]))
>  cdata.set('WARN_AS_ERROR', 'NO')
>  if get_option('werror')
>      cdata.set('WARN_AS_ERROR', 'YES')
>  endif
> +# A local reference must be relative to the main index.html page
> +# The path below can't be taken from the DTS meson file as that would
> +# require recursive subdir traversal (doc, dts, then doc again)
> +cdata.set('DTS_API_MAIN_PAGE', join_paths('..', 'dts', 'html', 'index.html'))
>
>  # configure HTML Doxygen run
>  html_cdata = configuration_data()
> diff --git a/doc/guides/conf.py b/doc/guides/conf.py
> index 0f7ff5282d..b442a1f76c 100644
> --- a/doc/guides/conf.py
> +++ b/doc/guides/conf.py
> @@ -7,10 +7,9 @@
>  from sphinx import __version__ as sphinx_version
>  from os import listdir
>  from os import environ
> -from os.path import basename
> -from os.path import dirname
> +from os.path import basename, dirname
>  from os.path import join as path_join
> -from sys import argv, stderr
> +from sys import argv, stderr, path
>
>  import configparser
>
> @@ -24,6 +23,37 @@
>            file=stderr)
>      pass
>
> +# Napoleon enables the Google format of Python docstrings, used in DTS
> +# Intersphinx allows linking to external projects, such as Python docs, also used in DTS
> +extensions = ['sphinx.ext.napoleon', 'sphinx.ext.intersphinx']
> +
> +# DTS Python docstring options
> +autodoc_default_options = {
> +    'members': True,
> +    'member-order': 'bysource',
> +    'show-inheritance': True,
> +}
> +autodoc_class_signature = 'separated'
> +autodoc_typehints = 'both'
> +autodoc_typehints_format = 'short'
> +autodoc_typehints_description_target = 'documented'
> +napoleon_numpy_docstring = False
> +napoleon_attr_annotations = True
> +napoleon_preprocess_types = True
> +add_module_names = False
> +toc_object_entries = True
> +toc_object_entries_show_parents = 'hide'
> +intersphinx_mapping = {'python': ('https://docs.python.org/3', None)}
> +
> +dts_root = environ.get('DTS_ROOT')
> +if dts_root:
> +    path.append(dts_root)
> +    # DTS Sidebar config
> +    html_theme_options = {
> +        'collapse_navigation': False,
> +        'navigation_depth': -1,
> +    }
> +
>  stop_on_error = ('-W' in argv)
>
>  project = 'Data Plane Development Kit'
> @@ -35,8 +65,7 @@
>  html_show_copyright = False
>  highlight_language = 'none'
>
> -release = environ.setdefault('DPDK_VERSION', "None")
> -version = release
> +version = environ.setdefault('DPDK_VERSION', "None")
>
>  master_doc = 'index'
>
> diff --git a/doc/guides/meson.build b/doc/guides/meson.build
> index 51f81da2e3..8933d75f6b 100644
> --- a/doc/guides/meson.build
> +++ b/doc/guides/meson.build
> @@ -1,6 +1,7 @@
>  # SPDX-License-Identifier: BSD-3-Clause
>  # Copyright(c) 2018 Intel Corporation
>
> +doc_guides_source_dir = meson.current_source_dir()
>  sphinx = find_program('sphinx-build', required: get_option('enable_docs'))
>
>  if not sphinx.found()
> diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
> index 846696e14e..21d3d89fc2 100644
> --- a/doc/guides/tools/dts.rst
> +++ b/doc/guides/tools/dts.rst
> @@ -278,7 +278,12 @@ and try not to divert much from it.
>  The :ref:`DTS developer tools <dts_dev_tools>` will issue warnings
>  when some of the basics are not met.
>
> -The code must be properly documented with docstrings.
> +The API documentation, which is a helpful reference when developing, may be accessed
> +in the code directly or generated with the :ref:`API docs build steps <building_api_docs>`.
> +When adding new files or modifying the directory structure, the corresponding changes must
> +be made to the DTS API doc sources in ``dts/doc``.
> +
> +To that end, the code must be properly documented with docstrings.
>  The style must conform to the `Google style
>  <https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings>`_.
>  See an example of the style `here
> @@ -413,6 +418,33 @@ the DTS code check and format script.
>  Refer to the script for usage: ``devtools/dts-check-format.sh -h``.
>
>
> +.. _building_api_docs:
> +
> +Building DTS API docs
> +---------------------
> +
> +To build DTS API docs, install the dependencies with Poetry, then enter its shell:
> +
> +.. code-block:: console
> +
> +   poetry install --with docs
> +   poetry shell
> +

The only thing to note here is that with newer versions of Poetry this
will start to throw warnings, because of the way we use Poetry without
a root package. It is just a warning message, so it shouldn't cause any
real problems, but I believe the way we should be handling it is to
pass --no-root into poetry install so that it knows not to install the
root package.
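
For example, the install step would then look like this (assuming a
Poetry version that supports the --no-root option):

poetry install --no-root --with docs
poetry shell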

>
> +The documentation is built using the standard DPDK build system. After executing the meson command
> +and entering Poetry's shell, build the documentation with:
> +
> +.. code-block:: console
> +
> +   ninja -C build dts-doc
> +
> +The output is generated in ``build/doc/api/dts/html``.
> +
> +.. Note::
> +
> +   Make sure to fix any Sphinx warnings when adding or updating docstrings. Also make sure to run
> +   the ``devtools/dts-check-format.sh`` script and address any issues it finds.
> +
> +
>  Configuration Schema
>  --------------------
>
> diff --git a/dts/doc/meson.build b/dts/doc/meson.build
> new file mode 100644
> index 0000000000..01b7b51034
> --- /dev/null
> +++ b/dts/doc/meson.build
> @@ -0,0 +1,27 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2023 PANTHEON.tech s.r.o.
> +
> +sphinx = find_program('sphinx-build', required: false)
> +sphinx_apidoc = find_program('sphinx-apidoc', required: false)
> +
> +if not sphinx.found() or not sphinx_apidoc.found()
> +    subdir_done()
> +endif
> +
> +dts_doc_api_build_dir = join_paths(doc_api_build_dir, 'dts')
> +
> +extra_sphinx_args = ['-E', '-c', doc_guides_source_dir, '--dts-root', dts_dir]
> +if get_option('werror')
> +    extra_sphinx_args += '-W'
> +endif
> +
> +htmldir = join_paths(get_option('datadir'), 'doc', 'dpdk', 'dts')
> +dts_api_html = custom_target('dts_api_html',
> +        output: 'html',
> +        command: [sphinx_wrapper, sphinx, meson.project_version(),
> +            meson.current_source_dir(), dts_doc_api_build_dir, extra_sphinx_args],
> +        build_by_default: false,
> +        install: get_option('enable_docs'),
> +        install_dir: htmldir)
> +doc_targets += dts_api_html
> +doc_target_names += 'DTS_API_HTML'
> diff --git a/dts/meson.build b/dts/meson.build
> new file mode 100644
> index 0000000000..e8ce0f06ac
> --- /dev/null
> +++ b/dts/meson.build
> @@ -0,0 +1,16 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2023 PANTHEON.tech s.r.o.
> +
> +doc_targets = []
> +doc_target_names = []
> +dts_dir = meson.current_source_dir()
> +
> +subdir('doc')
> +
> +if doc_targets.length() == 0
> +    message = 'No docs targets found'
> +else
> +    message = 'Built docs:'
> +endif
> +run_target('dts-doc', command: [echo, message, doc_target_names],
> +    depends: doc_targets)
> diff --git a/meson.build b/meson.build
> index 5e161f43e5..001fdcbbbf 100644
> --- a/meson.build
> +++ b/meson.build
> @@ -87,6 +87,7 @@ subdir('app')
>
>  # build docs
>  subdir('doc')
> +subdir('dts')
>
>  # build any examples explicitly requested - useful for developers - and
>  # install any example code into the appropriate install path
> --
> 2.34.1
>
Reviewed-by: Jeremy Spewock <jspewock@iol.unh.edu>

^ permalink raw reply	[flat|nested] 255+ messages in thread

* Re: [PATCH v3 3/3] dts: add API doc generation
       [not found]                   ` <CAJvnSUCNjo0p-yhROF1MNLKhjiAw2QTyTHO2hpOaVVUn0xnJ0A@mail.gmail.com>
@ 2024-02-29 18:12                     ` Nicholas Pratte
  0 siblings, 0 replies; 255+ messages in thread
From: Nicholas Pratte @ 2024-02-29 18:12 UTC (permalink / raw)
  To: dev, Jeremy Spewock

Tested-by: Nicholas Pratte <npratte@iol.unh.edu>

----

The tool used to generate developer docs is Sphinx, which is already in
use in DPDK. The same configuration is used to preserve style, but it's
been augmented with doc-generating configuration. There's a change that
modifies how the sidebar displays the content hierarchy; it's been put
into an if block so that it doesn't interfere with the regular docs.

Sphinx generates the documentation from Python docstrings. The docstring
format is the Google format [0], which requires the sphinx.ext.napoleon
extension. The other extension, sphinx.ext.intersphinx, enables linking
to objects in external documentation, such as the Python documentation.
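
To illustrate, this is the kind of docstring the two extensions enable
(a made-up method in the Google format, not taken from DTS; with
intersphinx, a cross-reference such as :class:`TimeoutError` in a
docstring resolves to the Python documentation):

    class Node:
        """A node in the testbed (hypothetical example)."""

        def connect(self, timeout: float = 5.0) -> None:
            """Connect to the node.

            Args:
                timeout: How long to wait for the connection, in seconds.

            Raises:
                TimeoutError: If no connection is established within
                    the given timeout.
            """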

There are two requirements for building DTS docs:
* The same Python version as DTS or higher, because Sphinx imports the
  code.
* The same Python packages as DTS, for the same reason.

[0] https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 buildtools/call-sphinx-build.py | 33 +++++++++++++++++++---------
 doc/api/doxy-api-index.md       |  3 +++
 doc/api/doxy-api.conf.in        |  2 ++
 doc/api/meson.build             | 11 +++++++---
 doc/guides/conf.py              | 39 ++++++++++++++++++++++++++++-----
 doc/guides/meson.build          |  1 +
 doc/guides/tools/dts.rst        | 34 +++++++++++++++++++++++++++-
 dts/doc/meson.build             | 27 +++++++++++++++++++++++
 dts/meson.build                 | 16 ++++++++++++++
 meson.build                     |  1 +
 10 files changed, 148 insertions(+), 19 deletions(-)
 create mode 100644 dts/doc/meson.build
 create mode 100644 dts/meson.build

diff --git a/buildtools/call-sphinx-build.py b/buildtools/call-sphinx-build.py
index 39a60d09fa..aea771a64e 100755
--- a/buildtools/call-sphinx-build.py
+++ b/buildtools/call-sphinx-build.py
@@ -3,37 +3,50 @@
 # Copyright(c) 2019 Intel Corporation
 #

+import argparse
 import sys
 import os
 from os.path import join
 from subprocess import run, PIPE, STDOUT
 from packaging.version import Version

-# assign parameters to variables
-(sphinx, version, src, dst, *extra_args) = sys.argv[1:]
+parser = argparse.ArgumentParser()
+parser.add_argument('sphinx')
+parser.add_argument('version')
+parser.add_argument('src')
+parser.add_argument('dst')
+parser.add_argument('--dts-root', default=None)
+args, extra_args = parser.parse_known_args()

 # set the version in environment for sphinx to pick up
-os.environ['DPDK_VERSION'] = version
+os.environ['DPDK_VERSION'] = args.version
+if args.dts_root:
+    os.environ['DTS_ROOT'] = args.dts_root

 # for sphinx version >= 1.7 add parallelism using "-j auto"
-ver = run([sphinx, '--version'], stdout=PIPE,
+ver = run([args.sphinx, '--version'], stdout=PIPE,
           stderr=STDOUT).stdout.decode().split()[-1]
-sphinx_cmd = [sphinx] + extra_args
+sphinx_cmd = [args.sphinx] + extra_args
 if Version(ver) >= Version('1.7'):
     sphinx_cmd += ['-j', 'auto']

 # find all the files sphinx will process so we can write them as dependencies
 srcfiles = []
-for root, dirs, files in os.walk(src):
+for root, dirs, files in os.walk(args.src):
     srcfiles.extend([join(root, f) for f in files])

+if not os.path.exists(args.dst):
+    os.makedirs(args.dst)
+
 # run sphinx, putting the html output in a "html" directory
-with open(join(dst, 'sphinx_html.out'), 'w') as out:
-    process = run(sphinx_cmd + ['-b', 'html', src, join(dst, 'html')],
-                  stdout=out)
+with open(join(args.dst, 'sphinx_html.out'), 'w') as out:
+    process = run(
+        sphinx_cmd + ['-b', 'html', args.src, join(args.dst, 'html')],
+        stdout=out
+    )

 # create a gcc format .d file giving all the dependencies of this doc build
-with open(join(dst, '.html.d'), 'w') as d:
+with open(join(args.dst, '.html.d'), 'w') as d:
     d.write('html: ' + ' '.join(srcfiles) + '\n')

 sys.exit(process.returncode)
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index a6a768bd7c..b49b24acce 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -241,3 +241,6 @@ The public API headers are grouped by topics:
   [experimental APIs](@ref rte_compat.h),
   [ABI versioning](@ref rte_function_versioning.h),
   [version](@ref rte_version.h)
+
+- **tests**:
+  [**DTS**](@dts_api_main_page)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index e94c9e4e46..d53edeba57 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -121,6 +121,8 @@ SEARCHENGINE            = YES
 SORT_MEMBER_DOCS        = NO
 SOURCE_BROWSER          = YES

+ALIASES                 = "dts_api_main_page=@DTS_API_MAIN_PAGE@"
+
 EXAMPLE_PATH            = @TOPDIR@/examples
 EXAMPLE_PATTERNS        = *.c
 EXAMPLE_RECURSIVE       = YES
diff --git a/doc/api/meson.build b/doc/api/meson.build
index 5b50692df9..ffc75d7b5a 100644
--- a/doc/api/meson.build
+++ b/doc/api/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Luca Boccassi <bluca@debian.org>

+doc_api_build_dir = meson.current_build_dir()
 doxygen = find_program('doxygen', required: get_option('enable_docs'))

 if not doxygen.found()
@@ -32,14 +33,18 @@ example = custom_target('examples.dox',
 # set up common Doxygen configuration
 cdata = configuration_data()
 cdata.set('VERSION', meson.project_version())
-cdata.set('API_EXAMPLES', join_paths(dpdk_build_root, 'doc', 'api', 'examples.dox'))
-cdata.set('OUTPUT', join_paths(dpdk_build_root, 'doc', 'api'))
+cdata.set('API_EXAMPLES', join_paths(doc_api_build_dir, 'examples.dox'))
+cdata.set('OUTPUT', doc_api_build_dir)
 cdata.set('TOPDIR', dpdk_source_root)
-cdata.set('STRIP_FROM_PATH', ' '.join([dpdk_source_root, join_paths(dpdk_build_root, 'doc', 'api')]))
+cdata.set('STRIP_FROM_PATH', ' '.join([dpdk_source_root, doc_api_build_dir]))
 cdata.set('WARN_AS_ERROR', 'NO')
 if get_option('werror')
     cdata.set('WARN_AS_ERROR', 'YES')
 endif
+# A local reference must be relative to the main index.html page
+# The path below can't be taken from the DTS meson file as that would
+# require recursive subdir traversal (doc, dts, then doc again)
+cdata.set('DTS_API_MAIN_PAGE', join_paths('..', 'dts', 'html', 'index.html'))

 # configure HTML Doxygen run
 html_cdata = configuration_data()
diff --git a/doc/guides/conf.py b/doc/guides/conf.py
index 0f7ff5282d..b442a1f76c 100644
--- a/doc/guides/conf.py
+++ b/doc/guides/conf.py
@@ -7,10 +7,9 @@
 from sphinx import __version__ as sphinx_version
 from os import listdir
 from os import environ
-from os.path import basename
-from os.path import dirname
+from os.path import basename, dirname
 from os.path import join as path_join
-from sys import argv, stderr
+from sys import argv, stderr, path

 import configparser

@@ -24,6 +23,37 @@
           file=stderr)
     pass

+# Napoleon enables the Google format of Python docstrings, used in DTS
+# Intersphinx allows linking to external projects, such as Python docs, also used in DTS
+extensions = ['sphinx.ext.napoleon', 'sphinx.ext.intersphinx']
+
+# DTS Python docstring options
+autodoc_default_options = {
+    'members': True,
+    'member-order': 'bysource',
+    'show-inheritance': True,
+}
+autodoc_class_signature = 'separated'
+autodoc_typehints = 'both'
+autodoc_typehints_format = 'short'
+autodoc_typehints_description_target = 'documented'
+napoleon_numpy_docstring = False
+napoleon_attr_annotations = True
+napoleon_preprocess_types = True
+add_module_names = False
+toc_object_entries = True
+toc_object_entries_show_parents = 'hide'
+intersphinx_mapping = {'python': ('https://docs.python.org/3', None)}
+
+dts_root = environ.get('DTS_ROOT')
+if dts_root:
+    path.append(dts_root)
+    # DTS Sidebar config
+    html_theme_options = {
+        'collapse_navigation': False,
+        'navigation_depth': -1,
+    }
+
 stop_on_error = ('-W' in argv)

 project = 'Data Plane Development Kit'
@@ -35,8 +65,7 @@
 html_show_copyright = False
 highlight_language = 'none'

-release = environ.setdefault('DPDK_VERSION', "None")
-version = release
+version = environ.setdefault('DPDK_VERSION', "None")

 master_doc = 'index'

diff --git a/doc/guides/meson.build b/doc/guides/meson.build
index 51f81da2e3..8933d75f6b 100644
--- a/doc/guides/meson.build
+++ b/doc/guides/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Intel Corporation

+doc_guides_source_dir = meson.current_source_dir()
 sphinx = find_program('sphinx-build', required: get_option('enable_docs'))

 if not sphinx.found()
diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
index 846696e14e..21d3d89fc2 100644
--- a/doc/guides/tools/dts.rst
+++ b/doc/guides/tools/dts.rst
@@ -278,7 +278,12 @@ and try not to divert much from it.
 The :ref:`DTS developer tools <dts_dev_tools>` will issue warnings
 when some of the basics are not met.

-The code must be properly documented with docstrings.
+The API documentation, which is a helpful reference when developing, may be accessed
+in the code directly or generated with the :ref:`API docs build steps <building_api_docs>`.
+When adding new files or modifying the directory structure, the corresponding changes must
+be made to DTS api doc sources in ``dts/doc``.
+
+Speaking of which, the code must be properly documented with docstrings.
 The style must conform to the `Google style
 <https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings>`_.
 See an example of the style `here
@@ -413,6 +418,33 @@ the DTS code check and format script.
 Refer to the script for usage: ``devtools/dts-check-format.sh -h``.


+.. _building_api_docs:
+
+Building DTS API docs
+---------------------
+
+To build DTS API docs, install the dependencies with Poetry, then enter its shell:
+
+.. code-block:: console
+
+   poetry install --with docs
+   poetry shell
+
+The documentation is built using the standard DPDK build system. After executing the meson command
+and entering Poetry's shell, build the documentation with:
+
+.. code-block:: console
+
+   ninja -C build dts-doc
+
+The output is generated in ``build/doc/api/dts/html``.
+
+.. Note::
+
+   Make sure to fix any Sphinx warnings when adding or updating docstrings. Also make sure to run
+   the ``devtools/dts-check-format.sh`` script and address any issues it finds.
+
+
 Configuration Schema
 --------------------

diff --git a/dts/doc/meson.build b/dts/doc/meson.build
new file mode 100644
index 0000000000..01b7b51034
--- /dev/null
+++ b/dts/doc/meson.build
@@ -0,0 +1,27 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+sphinx = find_program('sphinx-build', required: false)
+sphinx_apidoc = find_program('sphinx-apidoc', required: false)
+
+if not sphinx.found() or not sphinx_apidoc.found()
+    subdir_done()
+endif
+
+dts_doc_api_build_dir = join_paths(doc_api_build_dir, 'dts')
+
+extra_sphinx_args = ['-E', '-c', doc_guides_source_dir, '--dts-root', dts_dir]
+if get_option('werror')
+    extra_sphinx_args += '-W'
+endif
+
+htmldir = join_paths(get_option('datadir'), 'doc', 'dpdk', 'dts')
+dts_api_html = custom_target('dts_api_html',
+        output: 'html',
+        command: [sphinx_wrapper, sphinx, meson.project_version(),
+            meson.current_source_dir(), dts_doc_api_build_dir, extra_sphinx_args],
+        build_by_default: false,
+        install: get_option('enable_docs'),
+        install_dir: htmldir)
+doc_targets += dts_api_html
+doc_target_names += 'DTS_API_HTML'
diff --git a/dts/meson.build b/dts/meson.build
new file mode 100644
index 0000000000..e8ce0f06ac
--- /dev/null
+++ b/dts/meson.build
@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+doc_targets = []
+doc_target_names = []
+dts_dir = meson.current_source_dir()
+
+subdir('doc')
+
+if doc_targets.length() == 0
+    message = 'No docs targets found'
+else
+    message = 'Built docs:'
+endif
+run_target('dts-doc', command: [echo, message, doc_target_names],
+    depends: doc_targets)
diff --git a/meson.build b/meson.build
index 5e161f43e5..001fdcbbbf 100644
--- a/meson.build
+++ b/meson.build
@@ -87,6 +87,7 @@ subdir('app')

 # build docs
 subdir('doc')
+subdir('dts')

 # build any examples explicitly requested - useful for developers - and
 # install any example code into the appropriate install path
--
2.34.1

^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v4 0/3] dts: API docs generation
  2023-11-15 13:36             ` [PATCH v1 0/2] dts: api docs generation Juraj Linkeš
                                 ` (3 preceding siblings ...)
  2024-01-22 16:35               ` [PATCH v3 0/3] dts: API docs generation Juraj Linkeš
@ 2024-04-12 10:14               ` Juraj Linkeš
  2024-04-12 10:14                 ` [PATCH v4 1/3] dts: add doc generation dependencies Juraj Linkeš
                                   ` (2 more replies)
  4 siblings, 3 replies; 255+ messages in thread
From: Juraj Linkeš @ 2024-04-12 10:14 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, Luca.Vizzarro, npratte
  Cc: dev, Juraj Linkeš

The generation is done with Sphinx, which DPDK already uses, with
slightly modified configuration of the sidebar present in an if block.

Dependencies are installed using Poetry from the dts directory:

poetry install --with docs

After installing, enter the Poetry shell:

poetry shell

And then run the build:
ninja -C <meson_build_dir> dts-doc

Python 3.10 is required to build the DTS API docs.

The patchset contains the .rst sources which Sphinx uses to generate the
html pages. These were first generated with the sphinx-apidoc utility
and modified to provide a better look. The documentation just doesn't
look that good without the modifications, and there aren't enough
configuration options to achieve that without manual changes to the .rst
files. This introduces extra maintenance, which involves adding new .rst
files when a new Python module is added or changing the .rst structure
if the Python directory/file structure is changed (moved, renamed
files). This small maintenance burden is outweighed by the flexibility
afforded by the ability to make manual changes to the .rst files.
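
For reference, most of the checked-in files are small stubs along these
lines (a sketch of typical sphinx-apidoc output; the actual files in the
patch differ slightly after the manual modifications):

framework.logger module
=======================

.. automodule:: framework.logger
   :members:
   :show-inheritance: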

We can merge this patch when:
1. We agree on the approach of manually modifying the files.
This approach is, in my opinion, much better than just regenerating the
.rst files every time.
2. Bruce sends his ack on the meson modifications. I believe we had a
positive reaction on a previous version, but not on this one.
3. The link to the DTS API docs that was added to doxy-api-index.md is
satisfactory. I think Thomas could check this?

v2:
Removed the use of sphinx-apidoc from meson in favor of adding the files
generated by it directly to the repository (and modifying them).

v3:
Rebase.

v4:
Rebase.

Juraj Linkeš (3):
  dts: add doc generation dependencies
  dts: add API doc sources
  dts: add API doc generation

 buildtools/call-sphinx-build.py               |  33 +-
 doc/api/doxy-api-index.md                     |   3 +
 doc/api/doxy-api.conf.in                      |   2 +
 doc/api/meson.build                           |  11 +-
 doc/guides/conf.py                            |  39 +-
 doc/guides/meson.build                        |   1 +
 doc/guides/tools/dts.rst                      |  34 +-
 dts/doc/conf_yaml_schema.json                 |   1 +
 dts/doc/framework.config.rst                  |  12 +
 dts/doc/framework.config.types.rst            |   6 +
 dts/doc/framework.exception.rst               |   6 +
 dts/doc/framework.logger.rst                  |   6 +
 ...ote_session.interactive_remote_session.rst |   6 +
 ...ework.remote_session.interactive_shell.rst |   6 +
 .../framework.remote_session.python_shell.rst |   6 +
 ...ramework.remote_session.remote_session.rst |   6 +
 dts/doc/framework.remote_session.rst          |  17 +
 .../framework.remote_session.ssh_session.rst  |   6 +
 ...framework.remote_session.testpmd_shell.rst |   6 +
 dts/doc/framework.runner.rst                  |   6 +
 dts/doc/framework.settings.rst                |   6 +
 dts/doc/framework.test_result.rst             |   6 +
 dts/doc/framework.test_suite.rst              |   6 +
 dts/doc/framework.testbed_model.cpu.rst       |   6 +
 .../framework.testbed_model.linux_session.rst |   6 +
 dts/doc/framework.testbed_model.node.rst      |   6 +
 .../framework.testbed_model.os_session.rst    |   6 +
 dts/doc/framework.testbed_model.port.rst      |   6 +
 .../framework.testbed_model.posix_session.rst |   6 +
 dts/doc/framework.testbed_model.rst           |  26 +
 dts/doc/framework.testbed_model.sut_node.rst  |   6 +
 dts/doc/framework.testbed_model.tg_node.rst   |   6 +
 ..._generator.capturing_traffic_generator.rst |   6 +
 ...mework.testbed_model.traffic_generator.rst |  14 +
 ....testbed_model.traffic_generator.scapy.rst |   6 +
 ...el.traffic_generator.traffic_generator.rst |   6 +
 ...framework.testbed_model.virtual_device.rst |   6 +
 dts/doc/framework.utils.rst                   |   6 +
 dts/doc/index.rst                             |  41 ++
 dts/doc/meson.build                           |  27 +
 dts/meson.build                               |  16 +
 dts/poetry.lock                               | 499 +++++++++++++++++-
 dts/pyproject.toml                            |   7 +
 meson.build                                   |   1 +
 44 files changed, 920 insertions(+), 20 deletions(-)
 create mode 120000 dts/doc/conf_yaml_schema.json
 create mode 100644 dts/doc/framework.config.rst
 create mode 100644 dts/doc/framework.config.types.rst
 create mode 100644 dts/doc/framework.exception.rst
 create mode 100644 dts/doc/framework.logger.rst
 create mode 100644 dts/doc/framework.remote_session.interactive_remote_session.rst
 create mode 100644 dts/doc/framework.remote_session.interactive_shell.rst
 create mode 100644 dts/doc/framework.remote_session.python_shell.rst
 create mode 100644 dts/doc/framework.remote_session.remote_session.rst
 create mode 100644 dts/doc/framework.remote_session.rst
 create mode 100644 dts/doc/framework.remote_session.ssh_session.rst
 create mode 100644 dts/doc/framework.remote_session.testpmd_shell.rst
 create mode 100644 dts/doc/framework.runner.rst
 create mode 100644 dts/doc/framework.settings.rst
 create mode 100644 dts/doc/framework.test_result.rst
 create mode 100644 dts/doc/framework.test_suite.rst
 create mode 100644 dts/doc/framework.testbed_model.cpu.rst
 create mode 100644 dts/doc/framework.testbed_model.linux_session.rst
 create mode 100644 dts/doc/framework.testbed_model.node.rst
 create mode 100644 dts/doc/framework.testbed_model.os_session.rst
 create mode 100644 dts/doc/framework.testbed_model.port.rst
 create mode 100644 dts/doc/framework.testbed_model.posix_session.rst
 create mode 100644 dts/doc/framework.testbed_model.rst
 create mode 100644 dts/doc/framework.testbed_model.sut_node.rst
 create mode 100644 dts/doc/framework.testbed_model.tg_node.rst
 create mode 100644 dts/doc/framework.testbed_model.traffic_generator.capturing_traffic_generator.rst
 create mode 100644 dts/doc/framework.testbed_model.traffic_generator.rst
 create mode 100644 dts/doc/framework.testbed_model.traffic_generator.scapy.rst
 create mode 100644 dts/doc/framework.testbed_model.traffic_generator.traffic_generator.rst
 create mode 100644 dts/doc/framework.testbed_model.virtual_device.rst
 create mode 100644 dts/doc/framework.utils.rst
 create mode 100644 dts/doc/index.rst
 create mode 100644 dts/doc/meson.build
 create mode 100644 dts/meson.build

-- 
2.34.1


^ permalink raw reply	[flat|nested] 255+ messages in thread

* [PATCH v4 1/3] dts: add doc generation dependencies
  2024-04-12 10:14               ` [PATCH v4 0/3] dts: API docs generation Juraj Linkeš
@ 2024-04-12 10:14                 ` Juraj Linkeš
  2024-04-12 10:14                 ` [PATCH v4 2/3] dts: add API doc sources Juraj Linkeš
  2024-04-12 10:14                 ` [PATCH v4 3/3] dts: add API doc generation Juraj Linkeš
  2 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2024-04-12 10:14 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, Luca.Vizzarro, npratte
  Cc: dev, Juraj Linkeš

Sphinx imports every Python module when generating documentation from
docstrings, meaning all DTS dependencies, including the Python version,
must be satisfied.
By adding Sphinx to the DTS dependencies, we provide a convenient way to
generate the DTS API docs that satisfies all dependencies.
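
A quick way to verify that the environment Sphinx imports the code into
is complete (a sanity check, not part of the patch; run from the dts
directory inside the Poetry shell):

python3 -c 'import framework'

If this import fails, the autodoc run would fail in the same way.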

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/poetry.lock    | 499 ++++++++++++++++++++++++++++++++++++++++++++-
 dts/pyproject.toml |   7 +
 2 files changed, 505 insertions(+), 1 deletion(-)

diff --git a/dts/poetry.lock b/dts/poetry.lock
index a734fa71f0..8b27b0d751 100644
--- a/dts/poetry.lock
+++ b/dts/poetry.lock
@@ -1,5 +1,16 @@
 # This file is automatically @generated by Poetry 1.5.1 and should not be changed by hand.
 
+[[package]]
+name = "alabaster"
+version = "0.7.13"
+description = "A configurable sidebar-enabled Sphinx theme"
+optional = false
+python-versions = ">=3.6"
+files = [
+    {file = "alabaster-0.7.13-py3-none-any.whl", hash = "sha256:1ee19aca801bbabb5ba3f5f258e4422dfa86f82f3e9cefb0859b283cdd7f62a3"},
+    {file = "alabaster-0.7.13.tar.gz", hash = "sha256:a27a4a084d5e690e16e01e03ad2b2e552c61a65469419b907243193de1a84ae2"},
+]
+
 [[package]]
 name = "attrs"
 version = "23.1.0"
@@ -18,6 +29,23 @@ docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-
 tests = ["attrs[tests-no-zope]", "zope-interface"]
 tests-no-zope = ["cloudpickle", "hypothesis", "mypy (>=1.1.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
 
+[[package]]
+name = "babel"
+version = "2.13.1"
+description = "Internationalization utilities"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "Babel-2.13.1-py3-none-any.whl", hash = "sha256:7077a4984b02b6727ac10f1f7294484f737443d7e2e66c5e4380e41a3ae0b4ed"},
+    {file = "Babel-2.13.1.tar.gz", hash = "sha256:33e0952d7dd6374af8dbf6768cc4ddf3ccfefc244f9986d4074704f2fbd18900"},
+]
+
+[package.dependencies]
+setuptools = {version = "*", markers = "python_version >= \"3.12\""}
+
+[package.extras]
+dev = ["freezegun (>=1.0,<2.0)", "pytest (>=6.0)", "pytest-cov"]
+
 [[package]]
 name = "bcrypt"
 version = "4.0.1"
@@ -86,6 +114,17 @@ d = ["aiohttp (>=3.7.4)"]
 jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
 uvloop = ["uvloop (>=0.15.2)"]
 
+[[package]]
+name = "certifi"
+version = "2023.7.22"
+description = "Python package for providing Mozilla's CA Bundle."
+optional = false
+python-versions = ">=3.6"
+files = [
+    {file = "certifi-2023.7.22-py3-none-any.whl", hash = "sha256:92d6037539857d8206b8f6ae472e8b77db8058fec5937a1ef3f54304089edbb9"},
+    {file = "certifi-2023.7.22.tar.gz", hash = "sha256:539cc1d13202e33ca466e88b2807e29f4c13049d6d87031a3c110744495cb082"},
+]
+
 [[package]]
 name = "cffi"
 version = "1.15.1"
@@ -162,6 +201,105 @@ files = [
 [package.dependencies]
 pycparser = "*"
 
+[[package]]
+name = "charset-normalizer"
+version = "3.3.2"
+description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
+optional = false
+python-versions = ">=3.7.0"
+files = [
+    {file = "charset-normalizer-3.3.2.tar.gz", hash = "sha256:f30c3cb33b24454a82faecaf01b19c18562b1e89558fb6c56de4d9118a032fd5"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:25baf083bf6f6b341f4121c2f3c548875ee6f5339300e08be3f2b2ba1721cdd3"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:06435b539f889b1f6f4ac1758871aae42dc3a8c0e24ac9e60c2384973ad73027"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9063e24fdb1e498ab71cb7419e24622516c4a04476b17a2dab57e8baa30d6e03"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6897af51655e3691ff853668779c7bad41579facacf5fd7253b0133308cf000d"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1d3193f4a680c64b4b6a9115943538edb896edc190f0b222e73761716519268e"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cd70574b12bb8a4d2aaa0094515df2463cb429d8536cfb6c7ce983246983e5a6"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8465322196c8b4d7ab6d1e049e4c5cb460d0394da4a27d23cc242fbf0034b6b5"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a9a8e9031d613fd2009c182b69c7b2c1ef8239a0efb1df3f7c8da66d5dd3d537"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:beb58fe5cdb101e3a055192ac291b7a21e3b7ef4f67fa1d74e331a7f2124341c"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:e06ed3eb3218bc64786f7db41917d4e686cc4856944f53d5bdf83a6884432e12"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:2e81c7b9c8979ce92ed306c249d46894776a909505d8f5a4ba55b14206e3222f"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:572c3763a264ba47b3cf708a44ce965d98555f618ca42c926a9c1616d8f34269"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:fd1abc0d89e30cc4e02e4064dc67fcc51bd941eb395c502aac3ec19fab46b519"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-win32.whl", hash = "sha256:3d47fa203a7bd9c5b6cee4736ee84ca03b8ef23193c0d1ca99b5089f72645c73"},
+    {file = "charset_normalizer-3.3.2-cp310-cp310-win_amd64.whl", hash = "sha256:10955842570876604d404661fbccbc9c7e684caf432c09c715ec38fbae45ae09"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:802fe99cca7457642125a8a88a084cef28ff0cf9407060f7b93dca5aa25480db"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:573f6eac48f4769d667c4442081b1794f52919e7edada77495aaed9236d13a96"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:549a3a73da901d5bc3ce8d24e0600d1fa85524c10287f6004fbab87672bf3e1e"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f27273b60488abe721a075bcca6d7f3964f9f6f067c8c4c605743023d7d3944f"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1ceae2f17a9c33cb48e3263960dc5fc8005351ee19db217e9b1bb15d28c02574"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:65f6f63034100ead094b8744b3b97965785388f308a64cf8d7c34f2f2e5be0c4"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:753f10e867343b4511128c6ed8c82f7bec3bd026875576dfd88483c5c73b2fd8"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4a78b2b446bd7c934f5dcedc588903fb2f5eec172f3d29e52a9096a43722adfc"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e537484df0d8f426ce2afb2d0f8e1c3d0b114b83f8850e5f2fbea0e797bd82ae"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:eb6904c354526e758fda7167b33005998fb68c46fbc10e013ca97f21ca5c8887"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:deb6be0ac38ece9ba87dea880e438f25ca3eddfac8b002a2ec3d9183a454e8ae"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:4ab2fe47fae9e0f9dee8c04187ce5d09f48eabe611be8259444906793ab7cbce"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:80402cd6ee291dcb72644d6eac93785fe2c8b9cb30893c1af5b8fdd753b9d40f"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-win32.whl", hash = "sha256:7cd13a2e3ddeed6913a65e66e94b51d80a041145a026c27e6bb76c31a853c6ab"},
+    {file = "charset_normalizer-3.3.2-cp311-cp311-win_amd64.whl", hash = "sha256:663946639d296df6a2bb2aa51b60a2454ca1cb29835324c640dafb5ff2131a77"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:0b2b64d2bb6d3fb9112bafa732def486049e63de9618b5843bcdd081d8144cd8"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:ddbb2551d7e0102e7252db79ba445cdab71b26640817ab1e3e3648dad515003b"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:55086ee1064215781fff39a1af09518bc9255b50d6333f2e4c74ca09fac6a8f6"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8f4a014bc36d3c57402e2977dada34f9c12300af536839dc38c0beab8878f38a"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a10af20b82360ab00827f916a6058451b723b4e65030c5a18577c8b2de5b3389"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8d756e44e94489e49571086ef83b2bb8ce311e730092d2c34ca8f7d925cb20aa"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:90d558489962fd4918143277a773316e56c72da56ec7aa3dc3dbbe20fdfed15b"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6ac7ffc7ad6d040517be39eb591cac5ff87416c2537df6ba3cba3bae290c0fed"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:7ed9e526742851e8d5cc9e6cf41427dfc6068d4f5a3bb03659444b4cabf6bc26"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:8bdb58ff7ba23002a4c5808d608e4e6c687175724f54a5dade5fa8c67b604e4d"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_ppc64le.whl", hash = "sha256:6b3251890fff30ee142c44144871185dbe13b11bab478a88887a639655be1068"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_s390x.whl", hash = "sha256:b4a23f61ce87adf89be746c8a8974fe1c823c891d8f86eb218bb957c924bb143"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:efcb3f6676480691518c177e3b465bcddf57cea040302f9f4e6e191af91174d4"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-win32.whl", hash = "sha256:d965bba47ddeec8cd560687584e88cf699fd28f192ceb452d1d7ee807c5597b7"},
+    {file = "charset_normalizer-3.3.2-cp312-cp312-win_amd64.whl", hash = "sha256:96b02a3dc4381e5494fad39be677abcb5e6634bf7b4fa83a6dd3112607547001"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:95f2a5796329323b8f0512e09dbb7a1860c46a39da62ecb2324f116fa8fdc85c"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c002b4ffc0be611f0d9da932eb0f704fe2602a9a949d1f738e4c34c75b0863d5"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a981a536974bbc7a512cf44ed14938cf01030a99e9b3a06dd59578882f06f985"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3287761bc4ee9e33561a7e058c72ac0938c4f57fe49a09eae428fd88aafe7bb6"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:42cb296636fcc8b0644486d15c12376cb9fa75443e00fb25de0b8602e64c1714"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0a55554a2fa0d408816b3b5cedf0045f4b8e1a6065aec45849de2d6f3f8e9786"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:c083af607d2515612056a31f0a8d9e0fcb5876b7bfc0abad3ecd275bc4ebc2d5"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:87d1351268731db79e0f8e745d92493ee2841c974128ef629dc518b937d9194c"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:bd8f7df7d12c2db9fab40bdd87a7c09b1530128315d047a086fa3ae3435cb3a8"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:c180f51afb394e165eafe4ac2936a14bee3eb10debc9d9e4db8958fe36afe711"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:8c622a5fe39a48f78944a87d4fb8a53ee07344641b0562c540d840748571b811"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-win32.whl", hash = "sha256:db364eca23f876da6f9e16c9da0df51aa4f104a972735574842618b8c6d999d4"},
+    {file = "charset_normalizer-3.3.2-cp37-cp37m-win_amd64.whl", hash = "sha256:86216b5cee4b06df986d214f664305142d9c76df9b6512be2738aa72a2048f99"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:6463effa3186ea09411d50efc7d85360b38d5f09b870c48e4600f63af490e56a"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:6c4caeef8fa63d06bd437cd4bdcf3ffefe6738fb1b25951440d80dc7df8c03ac"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:37e55c8e51c236f95b033f6fb391d7d7970ba5fe7ff453dad675e88cf303377a"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fb69256e180cb6c8a894fee62b3afebae785babc1ee98b81cdf68bbca1987f33"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ae5f4161f18c61806f411a13b0310bea87f987c7d2ecdbdaad0e94eb2e404238"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b2b0a0c0517616b6869869f8c581d4eb2dd83a4d79e0ebcb7d373ef9956aeb0a"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:45485e01ff4d3630ec0d9617310448a8702f70e9c01906b0d0118bdf9d124cf2"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:eb00ed941194665c332bf8e078baf037d6c35d7c4f3102ea2d4f16ca94a26dc8"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:2127566c664442652f024c837091890cb1942c30937add288223dc895793f898"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:a50aebfa173e157099939b17f18600f72f84eed3049e743b68ad15bd69b6bf99"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:4d0d1650369165a14e14e1e47b372cfcb31d6ab44e6e33cb2d4e57265290044d"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:923c0c831b7cfcb071580d3f46c4baf50f174be571576556269530f4bbd79d04"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:06a81e93cd441c56a9b65d8e1d043daeb97a3d0856d177d5c90ba85acb3db087"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-win32.whl", hash = "sha256:6ef1d82a3af9d3eecdba2321dc1b3c238245d890843e040e41e470ffa64c3e25"},
+    {file = "charset_normalizer-3.3.2-cp38-cp38-win_amd64.whl", hash = "sha256:eb8821e09e916165e160797a6c17edda0679379a4be5c716c260e836e122f54b"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:c235ebd9baae02f1b77bcea61bce332cb4331dc3617d254df3323aa01ab47bd4"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:5b4c145409bef602a690e7cfad0a15a55c13320ff7a3ad7ca59c13bb8ba4d45d"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:68d1f8a9e9e37c1223b656399be5d6b448dea850bed7d0f87a8311f1ff3dabb0"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:22afcb9f253dac0696b5a4be4a1c0f8762f8239e21b99680099abd9b2b1b2269"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e27ad930a842b4c5eb8ac0016b0a54f5aebbe679340c26101df33424142c143c"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1f79682fbe303db92bc2b1136016a38a42e835d932bab5b3b1bfcfbf0640e519"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b261ccdec7821281dade748d088bb6e9b69e6d15b30652b74cbbac25e280b796"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:122c7fa62b130ed55f8f285bfd56d5f4b4a5b503609d181f9ad85e55c89f4185"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:d0eccceffcb53201b5bfebb52600a5fb483a20b61da9dbc885f8b103cbe7598c"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:9f96df6923e21816da7e0ad3fd47dd8f94b2a5ce594e00677c0013018b813458"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:7f04c839ed0b6b98b1a7501a002144b76c18fb1c1850c8b98d458ac269e26ed2"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:34d1c8da1e78d2e001f363791c98a272bb734000fcef47a491c1e3b0505657a8"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:ff8fa367d09b717b2a17a052544193ad76cd49979c805768879cb63d9ca50561"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-win32.whl", hash = "sha256:aed38f6e4fb3f5d6bf81bfa990a07806be9d83cf7bacef998ab1a9bd660a581f"},
+    {file = "charset_normalizer-3.3.2-cp39-cp39-win_amd64.whl", hash = "sha256:b01b88d45a6fcb69667cd6d2f7a9aeb4bf53760d7fc536bf679ec94fe9f3ff3d"},
+    {file = "charset_normalizer-3.3.2-py3-none-any.whl", hash = "sha256:3e4d1f6587322d2788836a99c69062fbb091331ec940e02d12d179c1d53e25fc"},
+]
+
 [[package]]
 name = "click"
 version = "8.1.6"
@@ -232,6 +370,17 @@ ssh = ["bcrypt (>=3.1.5)"]
 test = ["pretend", "pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-xdist"]
 test-randomorder = ["pytest-randomly"]
 
+[[package]]
+name = "docutils"
+version = "0.18.1"
+description = "Docutils -- Python Documentation Utilities"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
+files = [
+    {file = "docutils-0.18.1-py2.py3-none-any.whl", hash = "sha256:23010f129180089fbcd3bc08cfefccb3b890b0050e1ca00c867036e9d161b98c"},
+    {file = "docutils-0.18.1.tar.gz", hash = "sha256:679987caf361a7539d76e584cbeddc311e3aee937877c87346f31debc63e9d06"},
+]
+
 [[package]]
 name = "fabric"
 version = "2.7.1"
@@ -252,6 +401,28 @@ pathlib2 = "*"
 pytest = ["mock (>=2.0.0,<3.0)", "pytest (>=3.2.5,<4.0)"]
 testing = ["mock (>=2.0.0,<3.0)"]
 
+[[package]]
+name = "idna"
+version = "3.4"
+description = "Internationalized Domain Names in Applications (IDNA)"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "idna-3.4-py3-none-any.whl", hash = "sha256:90b77e79eaa3eba6de819a0c442c0b4ceefc341a7a2ab77d7562bf49f425c5c2"},
+    {file = "idna-3.4.tar.gz", hash = "sha256:814f528e8dead7d329833b91c5faa87d60bf71824cd12a7530b5526063d02cb4"},
+]
+
+[[package]]
+name = "imagesize"
+version = "1.4.1"
+description = "Getting image size from png/jpeg/jpeg2000/gif file"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+files = [
+    {file = "imagesize-1.4.1-py2.py3-none-any.whl", hash = "sha256:0d8d18d08f840c19d0ee7ca1fd82490fdc3729b7ac93f49870406ddde8ef8d8b"},
+    {file = "imagesize-1.4.1.tar.gz", hash = "sha256:69150444affb9cb0d5cc5a92b3676f0b2fb7cd9ae39e947a5e11a36b4497cd4a"},
+]
+
 [[package]]
 name = "invoke"
 version = "1.7.3"
@@ -280,6 +451,23 @@ pipfile-deprecated-finder = ["pip-shims (>=0.5.2)", "pipreqs", "requirementslib"
 plugins = ["setuptools"]
 requirements-deprecated-finder = ["pip-api", "pipreqs"]
 
+[[package]]
+name = "jinja2"
+version = "3.1.2"
+description = "A very fast and expressive template engine."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"},
+    {file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"},
+]
+
+[package.dependencies]
+MarkupSafe = ">=2.0"
+
+[package.extras]
+i18n = ["Babel (>=2.7)"]
+
 [[package]]
 name = "jsonpatch"
 version = "1.33"
@@ -340,6 +528,65 @@ files = [
 [package.dependencies]
 referencing = ">=0.28.0"
 
+[[package]]
+name = "markupsafe"
+version = "2.1.3"
+description = "Safely add untrusted strings to HTML/XML markup."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "MarkupSafe-2.1.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:cd0f502fe016460680cd20aaa5a76d241d6f35a1c3350c474bac1273803893fa"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e09031c87a1e51556fdcb46e5bd4f59dfb743061cf93c4d6831bf894f125eb57"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:68e78619a61ecf91e76aa3e6e8e33fc4894a2bebe93410754bd28fce0a8a4f9f"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:65c1a9bcdadc6c28eecee2c119465aebff8f7a584dd719facdd9e825ec61ab52"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:525808b8019e36eb524b8c68acdd63a37e75714eac50e988180b169d64480a00"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:962f82a3086483f5e5f64dbad880d31038b698494799b097bc59c2edf392fce6"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:aa7bd130efab1c280bed0f45501b7c8795f9fdbeb02e965371bbef3523627779"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:c9c804664ebe8f83a211cace637506669e7890fec1b4195b505c214e50dd4eb7"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-win32.whl", hash = "sha256:10bbfe99883db80bdbaff2dcf681dfc6533a614f700da1287707e8a5d78a8431"},
+    {file = "MarkupSafe-2.1.3-cp310-cp310-win_amd64.whl", hash = "sha256:1577735524cdad32f9f694208aa75e422adba74f1baee7551620e43a3141f559"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:ad9e82fb8f09ade1c3e1b996a6337afac2b8b9e365f926f5a61aacc71adc5b3c"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3c0fae6c3be832a0a0473ac912810b2877c8cb9d76ca48de1ed31e1c68386575"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b076b6226fb84157e3f7c971a47ff3a679d837cf338547532ab866c57930dbee"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bfce63a9e7834b12b87c64d6b155fdd9b3b96191b6bd334bf37db7ff1fe457f2"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:338ae27d6b8745585f87218a3f23f1512dbf52c26c28e322dbe54bcede54ccb9"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e4dd52d80b8c83fdce44e12478ad2e85c64ea965e75d66dbeafb0a3e77308fcc"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:df0be2b576a7abbf737b1575f048c23fb1d769f267ec4358296f31c2479db8f9"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:5bbe06f8eeafd38e5d0a4894ffec89378b6c6a625ff57e3028921f8ff59318ac"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-win32.whl", hash = "sha256:dd15ff04ffd7e05ffcb7fe79f1b98041b8ea30ae9234aed2a9168b5797c3effb"},
+    {file = "MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl", hash = "sha256:134da1eca9ec0ae528110ccc9e48041e0828d79f24121a1a146161103c76e686"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:8e254ae696c88d98da6555f5ace2279cf7cd5b3f52be2b5cf97feafe883b58d2"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cb0932dc158471523c9637e807d9bfb93e06a95cbf010f1a38b98623b929ef2b"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9402b03f1a1b4dc4c19845e5c749e3ab82d5078d16a2a4c2cd2df62d57bb0707"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ca379055a47383d02a5400cb0d110cef0a776fc644cda797db0c5696cfd7e18e"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:b7ff0f54cb4ff66dd38bebd335a38e2c22c41a8ee45aa608efc890ac3e3931bc"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:c011a4149cfbcf9f03994ec2edffcb8b1dc2d2aede7ca243746df97a5d41ce48"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:56d9f2ecac662ca1611d183feb03a3fa4406469dafe241673d521dd5ae92a155"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-win32.whl", hash = "sha256:8758846a7e80910096950b67071243da3e5a20ed2546e6392603c096778d48e0"},
+    {file = "MarkupSafe-2.1.3-cp37-cp37m-win_amd64.whl", hash = "sha256:787003c0ddb00500e49a10f2844fac87aa6ce977b90b0feaaf9de23c22508b24"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:2ef12179d3a291be237280175b542c07a36e7f60718296278d8593d21ca937d4"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:2c1b19b3aaacc6e57b7e25710ff571c24d6c3613a45e905b1fde04d691b98ee0"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8afafd99945ead6e075b973fefa56379c5b5c53fd8937dad92c662da5d8fd5ee"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8c41976a29d078bb235fea9b2ecd3da465df42a562910f9022f1a03107bd02be"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d080e0a5eb2529460b30190fcfcc4199bd7f827663f858a226a81bc27beaa97e"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:69c0f17e9f5a7afdf2cc9fb2d1ce6aabdb3bafb7f38017c0b77862bcec2bbad8"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:504b320cd4b7eff6f968eddf81127112db685e81f7e36e75f9f84f0df46041c3"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:42de32b22b6b804f42c5d98be4f7e5e977ecdd9ee9b660fda1a3edf03b11792d"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-win32.whl", hash = "sha256:ceb01949af7121f9fc39f7d27f91be8546f3fb112c608bc4029aef0bab86a2a5"},
+    {file = "MarkupSafe-2.1.3-cp38-cp38-win_amd64.whl", hash = "sha256:1b40069d487e7edb2676d3fbdb2b0829ffa2cd63a2ec26c4938b2d34391b4ecc"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8023faf4e01efadfa183e863fefde0046de576c6f14659e8782065bcece22198"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6b2b56950d93e41f33b4223ead100ea0fe11f8e6ee5f641eb753ce4b77a7042b"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9dcdfd0eaf283af041973bff14a2e143b8bd64e069f4c383416ecd79a81aab58"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:05fb21170423db021895e1ea1e1f3ab3adb85d1c2333cbc2310f2a26bc77272e"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:282c2cb35b5b673bbcadb33a585408104df04f14b2d9b01d4c345a3b92861c2c"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:ab4a0df41e7c16a1392727727e7998a467472d0ad65f3ad5e6e765015df08636"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:7ef3cb2ebbf91e330e3bb937efada0edd9003683db6b57bb108c4001f37a02ea"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:0a4e4a1aff6c7ac4cd55792abf96c915634c2b97e3cc1c7129578aa68ebd754e"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-win32.whl", hash = "sha256:fec21693218efe39aa7f8599346e90c705afa52c5b31ae019b2e57e8f6542bb2"},
+    {file = "MarkupSafe-2.1.3-cp39-cp39-win_amd64.whl", hash = "sha256:3fd4abcb888d15a94f32b75d8fd18ee162ca0c064f35b11134be77050296d6ba"},
+    {file = "MarkupSafe-2.1.3.tar.gz", hash = "sha256:af598ed32d6ae86f1b747b82783958b1a4ab8f617b06fe68795c7f026abbdcad"},
+]
+
 [[package]]
 name = "mccabe"
 version = "0.7.0"
@@ -404,6 +651,17 @@ files = [
     {file = "mypy_extensions-1.0.0.tar.gz", hash = "sha256:75dbf8955dc00442a438fc4d0666508a9a97b6bd41aa2f0ffe9d2f2725af0782"},
 ]
 
+[[package]]
+name = "packaging"
+version = "23.2"
+description = "Core utilities for Python packages"
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "packaging-23.2-py3-none-any.whl", hash = "sha256:8c491190033a9af7e1d931d0b5dacc2ef47509b34dd0de67ed209b5203fc88c7"},
+    {file = "packaging-23.2.tar.gz", hash = "sha256:048fb0e9405036518eaaf48a55953c750c11e1a1b68e0dd1a9d62ed0c092cfc5"},
+]
+
 [[package]]
 name = "paramiko"
 version = "3.2.0"
@@ -515,6 +773,20 @@ files = [
     {file = "pyflakes-2.5.0.tar.gz", hash = "sha256:491feb020dca48ccc562a8c0cbe8df07ee13078df59813b83959cbdada312ea3"},
 ]
 
+[[package]]
+name = "pygments"
+version = "2.16.1"
+description = "Pygments is a syntax highlighting package written in Python."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "Pygments-2.16.1-py3-none-any.whl", hash = "sha256:13fc09fa63bc8d8671a6d247e1eb303c4b343eaee81d861f3404db2935653692"},
+    {file = "Pygments-2.16.1.tar.gz", hash = "sha256:1daff0494820c69bc8941e407aa20f577374ee88364ee10a98fdbe0aece96e29"},
+]
+
+[package.extras]
+plugins = ["importlib-metadata"]
+
 [[package]]
 name = "pylama"
 version = "8.4.1"
@@ -632,6 +904,27 @@ files = [
 attrs = ">=22.2.0"
 rpds-py = ">=0.7.0"
 
+[[package]]
+name = "requests"
+version = "2.31.0"
+description = "Python HTTP for Humans."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "requests-2.31.0-py3-none-any.whl", hash = "sha256:58cd2187c01e70e6e26505bca751777aa9f2ee0b7f4300988b709f44e013003f"},
+    {file = "requests-2.31.0.tar.gz", hash = "sha256:942c5a758f98d790eaed1a29cb6eefc7ffb0d1cf7af05c3d2791656dbd6ad1e1"},
+]
+
+[package.dependencies]
+certifi = ">=2017.4.17"
+charset-normalizer = ">=2,<4"
+idna = ">=2.5,<4"
+urllib3 = ">=1.21.1,<3"
+
+[package.extras]
+socks = ["PySocks (>=1.5.6,!=1.5.7)"]
+use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"]
+
 [[package]]
 name = "rpds-py"
 version = "0.9.2"
@@ -753,6 +1046,22 @@ basic = ["ipython"]
 complete = ["cryptography (>=2.0)", "ipython", "matplotlib", "pyx"]
 docs = ["sphinx (>=3.0.0)", "sphinx_rtd_theme (>=0.4.3)", "tox (>=3.0.0)"]
 
+[[package]]
+name = "setuptools"
+version = "68.2.2"
+description = "Easily download, build, install, upgrade, and uninstall Python packages"
+optional = false
+python-versions = ">=3.8"
+files = [
+    {file = "setuptools-68.2.2-py3-none-any.whl", hash = "sha256:b454a35605876da60632df1a60f736524eb73cc47bbc9f3f1ef1b644de74fd2a"},
+    {file = "setuptools-68.2.2.tar.gz", hash = "sha256:4ac1475276d2f1c48684874089fefcd83bd7162ddaafb81fac866ba0db282a87"},
+]
+
+[package.extras]
+docs = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "pygments-github-lexers (==0.0.5)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-favicon", "sphinx-hoverxref (<2)", "sphinx-inline-tabs", "sphinx-lint", "sphinx-notfound-page (>=1,<2)", "sphinx-reredirects", "sphinxcontrib-towncrier"]
+testing = ["build[virtualenv]", "filelock (>=3.4.0)", "flake8-2020", "ini2toml[lite] (>=0.9)", "jaraco.develop (>=7.21)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "pip (>=19.1)", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-mypy (>=0.9.1)", "pytest-perf", "pytest-ruff", "pytest-timeout", "pytest-xdist", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"]
+testing-integration = ["build[virtualenv] (>=1.0.3)", "filelock (>=3.4.0)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "packaging (>=23.1)", "pytest", "pytest-enabler", "pytest-xdist", "tomli", "virtualenv (>=13.0.0)", "wheel"]
+
 [[package]]
 name = "six"
 version = "1.16.0"
@@ -775,6 +1084,177 @@ files = [
     {file = "snowballstemmer-2.2.0.tar.gz", hash = "sha256:09b16deb8547d3412ad7b590689584cd0fe25ec8db3be37788be3810cbf19cb1"},
 ]
 
+[[package]]
+name = "sphinx"
+version = "6.2.1"
+description = "Python documentation generator"
+optional = false
+python-versions = ">=3.8"
+files = [
+    {file = "Sphinx-6.2.1.tar.gz", hash = "sha256:6d56a34697bb749ffa0152feafc4b19836c755d90a7c59b72bc7dfd371b9cc6b"},
+    {file = "sphinx-6.2.1-py3-none-any.whl", hash = "sha256:97787ff1fa3256a3eef9eda523a63dbf299f7b47e053cfcf684a1c2a8380c912"},
+]
+
+[package.dependencies]
+alabaster = ">=0.7,<0.8"
+babel = ">=2.9"
+colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""}
+docutils = ">=0.18.1,<0.20"
+imagesize = ">=1.3"
+Jinja2 = ">=3.0"
+packaging = ">=21.0"
+Pygments = ">=2.13"
+requests = ">=2.25.0"
+snowballstemmer = ">=2.0"
+sphinxcontrib-applehelp = "*"
+sphinxcontrib-devhelp = "*"
+sphinxcontrib-htmlhelp = ">=2.0.0"
+sphinxcontrib-jsmath = "*"
+sphinxcontrib-qthelp = "*"
+sphinxcontrib-serializinghtml = ">=1.1.5"
+
+[package.extras]
+docs = ["sphinxcontrib-websupport"]
+lint = ["docutils-stubs", "flake8 (>=3.5.0)", "flake8-simplify", "isort", "mypy (>=0.990)", "ruff", "sphinx-lint", "types-requests"]
+test = ["cython", "filelock", "html5lib", "pytest (>=4.6)"]
+
+[[package]]
+name = "sphinx-rtd-theme"
+version = "1.2.2"
+description = "Read the Docs theme for Sphinx"
+optional = false
+python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7"
+files = [
+    {file = "sphinx_rtd_theme-1.2.2-py2.py3-none-any.whl", hash = "sha256:6a7e7d8af34eb8fc57d52a09c6b6b9c46ff44aea5951bc831eeb9245378f3689"},
+    {file = "sphinx_rtd_theme-1.2.2.tar.gz", hash = "sha256:01c5c5a72e2d025bd23d1f06c59a4831b06e6ce6c01fdd5ebfe9986c0a880fc7"},
+]
+
+[package.dependencies]
+docutils = "<0.19"
+sphinx = ">=1.6,<7"
+sphinxcontrib-jquery = ">=4,<5"
+
+[package.extras]
+dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client", "wheel"]
+
+[[package]]
+name = "sphinxcontrib-applehelp"
+version = "1.0.7"
+description = "sphinxcontrib-applehelp is a Sphinx extension which outputs Apple help books"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_applehelp-1.0.7-py3-none-any.whl", hash = "sha256:094c4d56209d1734e7d252f6e0b3ccc090bd52ee56807a5d9315b19c122ab15d"},
+    {file = "sphinxcontrib_applehelp-1.0.7.tar.gz", hash = "sha256:39fdc8d762d33b01a7d8f026a3b7d71563ea3b72787d5f00ad8465bd9d6dfbfa"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-devhelp"
+version = "1.0.5"
+description = "sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp documents"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_devhelp-1.0.5-py3-none-any.whl", hash = "sha256:fe8009aed765188f08fcaadbb3ea0d90ce8ae2d76710b7e29ea7d047177dae2f"},
+    {file = "sphinxcontrib_devhelp-1.0.5.tar.gz", hash = "sha256:63b41e0d38207ca40ebbeabcf4d8e51f76c03e78cd61abe118cf4435c73d4212"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-htmlhelp"
+version = "2.0.4"
+description = "sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_htmlhelp-2.0.4-py3-none-any.whl", hash = "sha256:8001661c077a73c29beaf4a79968d0726103c5605e27db92b9ebed8bab1359e9"},
+    {file = "sphinxcontrib_htmlhelp-2.0.4.tar.gz", hash = "sha256:6c26a118a05b76000738429b724a0568dbde5b72391a688577da08f11891092a"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["html5lib", "pytest"]
+
+[[package]]
+name = "sphinxcontrib-jquery"
+version = "4.1"
+description = "Extension to include jQuery on newer Sphinx releases"
+optional = false
+python-versions = ">=2.7"
+files = [
+    {file = "sphinxcontrib-jquery-4.1.tar.gz", hash = "sha256:1620739f04e36a2c779f1a131a2dfd49b2fd07351bf1968ced074365933abc7a"},
+    {file = "sphinxcontrib_jquery-4.1-py2.py3-none-any.whl", hash = "sha256:f936030d7d0147dd026a4f2b5a57343d233f1fc7b363f68b3d4f1cb0993878ae"},
+]
+
+[package.dependencies]
+Sphinx = ">=1.8"
+
+[[package]]
+name = "sphinxcontrib-jsmath"
+version = "1.0.1"
+description = "A sphinx extension which renders display math in HTML via JavaScript"
+optional = false
+python-versions = ">=3.5"
+files = [
+    {file = "sphinxcontrib-jsmath-1.0.1.tar.gz", hash = "sha256:a9925e4a4587247ed2191a22df5f6970656cb8ca2bd6284309578f2153e0c4b8"},
+    {file = "sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl", hash = "sha256:2ec2eaebfb78f3f2078e73666b1415417a116cc848b72e5172e596c871103178"},
+]
+
+[package.extras]
+test = ["flake8", "mypy", "pytest"]
+
+[[package]]
+name = "sphinxcontrib-qthelp"
+version = "1.0.6"
+description = "sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp documents"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_qthelp-1.0.6-py3-none-any.whl", hash = "sha256:bf76886ee7470b934e363da7a954ea2825650013d367728588732c7350f49ea4"},
+    {file = "sphinxcontrib_qthelp-1.0.6.tar.gz", hash = "sha256:62b9d1a186ab7f5ee3356d906f648cacb7a6bdb94d201ee7adf26db55092982d"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
+[[package]]
+name = "sphinxcontrib-serializinghtml"
+version = "1.1.9"
+description = "sphinxcontrib-serializinghtml is a sphinx extension which outputs \"serialized\" HTML files (json and pickle)"
+optional = false
+python-versions = ">=3.9"
+files = [
+    {file = "sphinxcontrib_serializinghtml-1.1.9-py3-none-any.whl", hash = "sha256:9b36e503703ff04f20e9675771df105e58aa029cfcbc23b8ed716019b7416ae1"},
+    {file = "sphinxcontrib_serializinghtml-1.1.9.tar.gz", hash = "sha256:0c64ff898339e1fac29abd2bf5f11078f3ec413cfe9c046d3120d7ca65530b54"},
+]
+
+[package.dependencies]
+Sphinx = ">=5"
+
+[package.extras]
+lint = ["docutils-stubs", "flake8", "mypy"]
+test = ["pytest"]
+
 [[package]]
 name = "toml"
 version = "0.10.2"
@@ -819,6 +1299,23 @@ files = [
     {file = "typing_extensions-4.7.1.tar.gz", hash = "sha256:b75ddc264f0ba5615db7ba217daeb99701ad295353c45f9e95963337ceeeffb2"},
 ]
 
+[[package]]
+name = "urllib3"
+version = "2.0.7"
+description = "HTTP library with thread-safe connection pooling, file post, and more."
+optional = false
+python-versions = ">=3.7"
+files = [
+    {file = "urllib3-2.0.7-py3-none-any.whl", hash = "sha256:fdb6d215c776278489906c2f8916e6e7d4f5a9b602ccbcfdf7f016fc8da0596e"},
+    {file = "urllib3-2.0.7.tar.gz", hash = "sha256:c97dfde1f7bd43a71c8d2a58e369e9b2bf692d1334ea9f9cae55add7d0dd0f84"},
+]
+
+[package.extras]
+brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)"]
+secure = ["certifi", "cryptography (>=1.9)", "idna (>=2.0.0)", "pyopenssl (>=17.1.0)", "urllib3-secure-extra"]
+socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"]
+zstd = ["zstandard (>=0.18.0)"]
+
 [[package]]
 name = "warlock"
 version = "2.0.1"
@@ -837,4 +1334,4 @@ jsonschema = ">=4,<5"
 [metadata]
 lock-version = "2.0"
 python-versions = "^3.10"
-content-hash = "3501e97b3dadc19fe8ae179fe21b1edd2488001da9a8e86ff2bca0b86b99b89b"
+content-hash = "44c0fd4ebd7a1630ad104d55a25ef85d361abc968157254e757d44e340bca06f"
diff --git a/dts/pyproject.toml b/dts/pyproject.toml
index a81e46fc07..8eb92b4f11 100644
--- a/dts/pyproject.toml
+++ b/dts/pyproject.toml
@@ -35,6 +35,13 @@ pylama = "^8.4.1"
 pyflakes = "^2.5.0"
 toml = "^0.10.2"
 
+[tool.poetry.group.docs]
+optional = true
+
+[tool.poetry.group.docs.dependencies]
+sphinx = "<7"
+sphinx-rtd-theme = "^1.2.2"
+
 [build-system]
 requires = ["poetry-core>=1.0.0"]
 build-backend = "poetry.core.masonry.api"
-- 
2.34.1
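A usage note on the optional group added above: Poetry skips optional groups by
default, so the docs dependencies must be requested explicitly. A minimal
sketch, assuming Poetry 1.x is run from the dts directory:

    # install DTS dependencies plus the optional docs group
    poetry install --no-root --with docs
    # after changing the docs dependencies in pyproject.toml, refresh the lock;
    # this recomputes the content-hash recorded in poetry.lock
    poetry lock --no-update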



* [PATCH v4 2/3] dts: add API doc sources
  2024-04-12 10:14               ` [PATCH v4 0/3] dts: API docs generation Juraj Linkeš
  2024-04-12 10:14                 ` [PATCH v4 1/3] dts: add doc generation dependencies Juraj Linkeš
@ 2024-04-12 10:14                 ` Juraj Linkeš
  2024-04-12 10:14                 ` [PATCH v4 3/3] dts: add API doc generation Juraj Linkeš
  2 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2024-04-12 10:14 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, Luca.Vizzarro, npratte
  Cc: dev, Juraj Linkeš

These sources could be generated with the sphinx-apidoc utility, but
that doesn't give us enough flexibility, such as controlling the order
of modules or changing the headers of the modules.

The sources included in this patch were in fact generated by said
utility, but modified to improve the look of the documentation. The
improvements are mainly in toctree definitions and the titles of the
modules/packages. These were made with specific Sphinx config options in
mind.
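
For reference, stubs of this shape can be regenerated and then hand-edited.
A minimal sketch, assuming the command is run from the dts directory (the
flags put each module on its own page and package docs before submodules):

    sphinx-apidoc -e --module-first -o doc framework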

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 dts/doc/conf_yaml_schema.json                 |  1 +
 dts/doc/framework.config.rst                  | 12 ++++++
 dts/doc/framework.config.types.rst            |  6 +++
 dts/doc/framework.exception.rst               |  6 +++
 dts/doc/framework.logger.rst                  |  6 +++
 ...ote_session.interactive_remote_session.rst |  6 +++
 ...ework.remote_session.interactive_shell.rst |  6 +++
 .../framework.remote_session.python_shell.rst |  6 +++
 ...ramework.remote_session.remote_session.rst |  6 +++
 dts/doc/framework.remote_session.rst          | 17 ++++++++
 .../framework.remote_session.ssh_session.rst  |  6 +++
 ...framework.remote_session.testpmd_shell.rst |  6 +++
 dts/doc/framework.runner.rst                  |  6 +++
 dts/doc/framework.settings.rst                |  6 +++
 dts/doc/framework.test_result.rst             |  6 +++
 dts/doc/framework.test_suite.rst              |  6 +++
 dts/doc/framework.testbed_model.cpu.rst       |  6 +++
 .../framework.testbed_model.linux_session.rst |  6 +++
 dts/doc/framework.testbed_model.node.rst      |  6 +++
 .../framework.testbed_model.os_session.rst    |  6 +++
 dts/doc/framework.testbed_model.port.rst      |  6 +++
 .../framework.testbed_model.posix_session.rst |  6 +++
 dts/doc/framework.testbed_model.rst           | 26 ++++++++++++
 dts/doc/framework.testbed_model.sut_node.rst  |  6 +++
 dts/doc/framework.testbed_model.tg_node.rst   |  6 +++
 ..._generator.capturing_traffic_generator.rst |  6 +++
 ...mework.testbed_model.traffic_generator.rst | 14 +++++++
 ....testbed_model.traffic_generator.scapy.rst |  6 +++
 ...el.traffic_generator.traffic_generator.rst |  6 +++
 ...framework.testbed_model.virtual_device.rst |  6 +++
 dts/doc/framework.utils.rst                   |  6 +++
 dts/doc/index.rst                             | 41 +++++++++++++++++++
 32 files changed, 267 insertions(+)
 create mode 120000 dts/doc/conf_yaml_schema.json
 create mode 100644 dts/doc/framework.config.rst
 create mode 100644 dts/doc/framework.config.types.rst
 create mode 100644 dts/doc/framework.exception.rst
 create mode 100644 dts/doc/framework.logger.rst
 create mode 100644 dts/doc/framework.remote_session.interactive_remote_session.rst
 create mode 100644 dts/doc/framework.remote_session.interactive_shell.rst
 create mode 100644 dts/doc/framework.remote_session.python_shell.rst
 create mode 100644 dts/doc/framework.remote_session.remote_session.rst
 create mode 100644 dts/doc/framework.remote_session.rst
 create mode 100644 dts/doc/framework.remote_session.ssh_session.rst
 create mode 100644 dts/doc/framework.remote_session.testpmd_shell.rst
 create mode 100644 dts/doc/framework.runner.rst
 create mode 100644 dts/doc/framework.settings.rst
 create mode 100644 dts/doc/framework.test_result.rst
 create mode 100644 dts/doc/framework.test_suite.rst
 create mode 100644 dts/doc/framework.testbed_model.cpu.rst
 create mode 100644 dts/doc/framework.testbed_model.linux_session.rst
 create mode 100644 dts/doc/framework.testbed_model.node.rst
 create mode 100644 dts/doc/framework.testbed_model.os_session.rst
 create mode 100644 dts/doc/framework.testbed_model.port.rst
 create mode 100644 dts/doc/framework.testbed_model.posix_session.rst
 create mode 100644 dts/doc/framework.testbed_model.rst
 create mode 100644 dts/doc/framework.testbed_model.sut_node.rst
 create mode 100644 dts/doc/framework.testbed_model.tg_node.rst
 create mode 100644 dts/doc/framework.testbed_model.traffic_generator.capturing_traffic_generator.rst
 create mode 100644 dts/doc/framework.testbed_model.traffic_generator.rst
 create mode 100644 dts/doc/framework.testbed_model.traffic_generator.scapy.rst
 create mode 100644 dts/doc/framework.testbed_model.traffic_generator.traffic_generator.rst
 create mode 100644 dts/doc/framework.testbed_model.virtual_device.rst
 create mode 100644 dts/doc/framework.utils.rst
 create mode 100644 dts/doc/index.rst

diff --git a/dts/doc/conf_yaml_schema.json b/dts/doc/conf_yaml_schema.json
new file mode 120000
index 0000000000..d89eb81b72
--- /dev/null
+++ b/dts/doc/conf_yaml_schema.json
@@ -0,0 +1 @@
+../framework/config/conf_yaml_schema.json
\ No newline at end of file
diff --git a/dts/doc/framework.config.rst b/dts/doc/framework.config.rst
new file mode 100644
index 0000000000..f765ef0e32
--- /dev/null
+++ b/dts/doc/framework.config.rst
@@ -0,0 +1,12 @@
+config - Configuration Package
+==============================
+
+.. automodule:: framework.config
+   :members:
+   :show-inheritance:
+
+.. toctree::
+   :hidden:
+   :maxdepth: 1
+
+   framework.config.types
diff --git a/dts/doc/framework.config.types.rst b/dts/doc/framework.config.types.rst
new file mode 100644
index 0000000000..5af915b681
--- /dev/null
+++ b/dts/doc/framework.config.types.rst
@@ -0,0 +1,6 @@
+types - Configuration Types
+===========================
+
+.. automodule:: framework.config.types
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.exception.rst b/dts/doc/framework.exception.rst
new file mode 100644
index 0000000000..ad58bd15de
--- /dev/null
+++ b/dts/doc/framework.exception.rst
@@ -0,0 +1,6 @@
+exception - Exceptions
+======================
+
+.. automodule:: framework.exception
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.logger.rst b/dts/doc/framework.logger.rst
new file mode 100644
index 0000000000..3c25b34819
--- /dev/null
+++ b/dts/doc/framework.logger.rst
@@ -0,0 +1,6 @@
+logger - Logging Facility
+=========================
+
+.. automodule:: framework.logger
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.remote_session.interactive_remote_session.rst b/dts/doc/framework.remote_session.interactive_remote_session.rst
new file mode 100644
index 0000000000..35dc5c4b03
--- /dev/null
+++ b/dts/doc/framework.remote_session.interactive_remote_session.rst
@@ -0,0 +1,6 @@
+interactive\_remote\_session - SSH Interactive Remote Session
+=============================================================
+
+.. automodule:: framework.remote_session.interactive_remote_session
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.remote_session.interactive_shell.rst b/dts/doc/framework.remote_session.interactive_shell.rst
new file mode 100644
index 0000000000..8a59db7b6e
--- /dev/null
+++ b/dts/doc/framework.remote_session.interactive_shell.rst
@@ -0,0 +1,6 @@
+interactive\_shell - Base Interactive Remote Shell
+==================================================
+
+.. automodule:: framework.remote_session.interactive_shell
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.remote_session.python_shell.rst b/dts/doc/framework.remote_session.python_shell.rst
new file mode 100644
index 0000000000..a8ec06f281
--- /dev/null
+++ b/dts/doc/framework.remote_session.python_shell.rst
@@ -0,0 +1,6 @@
+python\_shell - Python Interactive Remote Shell
+===============================================
+
+.. automodule:: framework.remote_session.python_shell
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.remote_session.remote_session.rst b/dts/doc/framework.remote_session.remote_session.rst
new file mode 100644
index 0000000000..58b0960d07
--- /dev/null
+++ b/dts/doc/framework.remote_session.remote_session.rst
@@ -0,0 +1,6 @@
+remote\_session - Remote Session ABC
+====================================
+
+.. automodule:: framework.remote_session.remote_session
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.remote_session.rst b/dts/doc/framework.remote_session.rst
new file mode 100644
index 0000000000..74f83f0307
--- /dev/null
+++ b/dts/doc/framework.remote_session.rst
@@ -0,0 +1,17 @@
+remote\_session - Node Connections Package
+==========================================
+
+.. automodule:: framework.remote_session
+   :members:
+   :show-inheritance:
+
+.. toctree::
+   :hidden:
+   :maxdepth: 1
+
+   framework.remote_session.remote_session
+   framework.remote_session.ssh_session
+   framework.remote_session.interactive_remote_session
+   framework.remote_session.interactive_shell
+   framework.remote_session.testpmd_shell
+   framework.remote_session.python_shell
diff --git a/dts/doc/framework.remote_session.ssh_session.rst b/dts/doc/framework.remote_session.ssh_session.rst
new file mode 100644
index 0000000000..05b019bc7c
--- /dev/null
+++ b/dts/doc/framework.remote_session.ssh_session.rst
@@ -0,0 +1,6 @@
+ssh\_session - SSH Remote Session
+=================================
+
+.. automodule:: framework.remote_session.ssh_session
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.remote_session.testpmd_shell.rst b/dts/doc/framework.remote_session.testpmd_shell.rst
new file mode 100644
index 0000000000..14510afb2b
--- /dev/null
+++ b/dts/doc/framework.remote_session.testpmd_shell.rst
@@ -0,0 +1,6 @@
+testpmd\_shell - Testpmd Interactive Remote Shell
+=================================================
+
+.. automodule:: framework.remote_session.testpmd_shell
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.runner.rst b/dts/doc/framework.runner.rst
new file mode 100644
index 0000000000..a1708f0002
--- /dev/null
+++ b/dts/doc/framework.runner.rst
@@ -0,0 +1,6 @@
+runner - Testbed Setup and Test Suite Runner
+============================================
+
+.. automodule:: framework.runner
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.settings.rst b/dts/doc/framework.settings.rst
new file mode 100644
index 0000000000..96bf194923
--- /dev/null
+++ b/dts/doc/framework.settings.rst
@@ -0,0 +1,6 @@
+settings - Command Line Arguments and Environment Variables
+===========================================================
+
+.. automodule:: framework.settings
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.test_result.rst b/dts/doc/framework.test_result.rst
new file mode 100644
index 0000000000..527357a04a
--- /dev/null
+++ b/dts/doc/framework.test_result.rst
@@ -0,0 +1,6 @@
+test\_result - Test Results Records
+===================================
+
+.. automodule:: framework.test_result
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.test_suite.rst b/dts/doc/framework.test_suite.rst
new file mode 100644
index 0000000000..96f893e465
--- /dev/null
+++ b/dts/doc/framework.test_suite.rst
@@ -0,0 +1,6 @@
+test\_suite - Common Test Suite Features
+========================================
+
+.. automodule:: framework.test_suite
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.cpu.rst b/dts/doc/framework.testbed_model.cpu.rst
new file mode 100644
index 0000000000..dd2baf09fb
--- /dev/null
+++ b/dts/doc/framework.testbed_model.cpu.rst
@@ -0,0 +1,6 @@
+cpu - CPU Representation and Utilities
+======================================
+
+.. automodule:: framework.testbed_model.cpu
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.linux_session.rst b/dts/doc/framework.testbed_model.linux_session.rst
new file mode 100644
index 0000000000..141f3f49e3
--- /dev/null
+++ b/dts/doc/framework.testbed_model.linux_session.rst
@@ -0,0 +1,6 @@
+linux\_session - Linux Remote Session
+=====================================
+
+.. automodule:: framework.testbed_model.linux_session
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.node.rst b/dts/doc/framework.testbed_model.node.rst
new file mode 100644
index 0000000000..2133dd604b
--- /dev/null
+++ b/dts/doc/framework.testbed_model.node.rst
@@ -0,0 +1,6 @@
+node - Base Node
+================
+
+.. automodule:: framework.testbed_model.node
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.os_session.rst b/dts/doc/framework.testbed_model.os_session.rst
new file mode 100644
index 0000000000..f3574e939a
--- /dev/null
+++ b/dts/doc/framework.testbed_model.os_session.rst
@@ -0,0 +1,6 @@
+os\_session - OS-aware Remote Session ABC
+=========================================
+
+.. automodule:: framework.testbed_model.os_session
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.port.rst b/dts/doc/framework.testbed_model.port.rst
new file mode 100644
index 0000000000..17bd391e63
--- /dev/null
+++ b/dts/doc/framework.testbed_model.port.rst
@@ -0,0 +1,6 @@
+port - NIC Port Representation
+==============================
+
+.. automodule:: framework.testbed_model.port
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.posix_session.rst b/dts/doc/framework.testbed_model.posix_session.rst
new file mode 100644
index 0000000000..308c051ae5
--- /dev/null
+++ b/dts/doc/framework.testbed_model.posix_session.rst
@@ -0,0 +1,6 @@
+posix\_session - POSIX Remote Session
+=====================================
+
+.. automodule:: framework.testbed_model.posix_session
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.rst b/dts/doc/framework.testbed_model.rst
new file mode 100644
index 0000000000..4b024e47e6
--- /dev/null
+++ b/dts/doc/framework.testbed_model.rst
@@ -0,0 +1,26 @@
+testbed\_model - Testbed Modelling Package
+==========================================
+
+.. automodule:: framework.testbed_model
+   :members:
+   :show-inheritance:
+
+.. toctree::
+   :hidden:
+   :maxdepth: 2
+
+   framework.testbed_model.traffic_generator
+
+.. toctree::
+   :hidden:
+   :maxdepth: 1
+
+   framework.testbed_model.os_session
+   framework.testbed_model.linux_session
+   framework.testbed_model.posix_session
+   framework.testbed_model.node
+   framework.testbed_model.sut_node
+   framework.testbed_model.tg_node
+   framework.testbed_model.cpu
+   framework.testbed_model.port
+   framework.testbed_model.virtual_device
diff --git a/dts/doc/framework.testbed_model.sut_node.rst b/dts/doc/framework.testbed_model.sut_node.rst
new file mode 100644
index 0000000000..7e12b6c87e
--- /dev/null
+++ b/dts/doc/framework.testbed_model.sut_node.rst
@@ -0,0 +1,6 @@
+sut\_node - System Under Test Node
+==================================
+
+.. automodule:: framework.testbed_model.sut_node
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.tg_node.rst b/dts/doc/framework.testbed_model.tg_node.rst
new file mode 100644
index 0000000000..41206c000b
--- /dev/null
+++ b/dts/doc/framework.testbed_model.tg_node.rst
@@ -0,0 +1,6 @@
+tg\_node - Traffic Generator Node
+=================================
+
+.. automodule:: framework.testbed_model.tg_node
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.traffic_generator.capturing_traffic_generator.rst b/dts/doc/framework.testbed_model.traffic_generator.capturing_traffic_generator.rst
new file mode 100644
index 0000000000..06c087155a
--- /dev/null
+++ b/dts/doc/framework.testbed_model.traffic_generator.capturing_traffic_generator.rst
@@ -0,0 +1,6 @@
+capturing\_traffic\_generator - Base Capturing TG ABC
+=====================================================
+
+.. automodule:: framework.testbed_model.traffic_generator.capturing_traffic_generator
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.traffic_generator.rst b/dts/doc/framework.testbed_model.traffic_generator.rst
new file mode 100644
index 0000000000..18b6f1b98b
--- /dev/null
+++ b/dts/doc/framework.testbed_model.traffic_generator.rst
@@ -0,0 +1,14 @@
+traffic\_generator Subpackage
+=============================
+
+.. automodule:: framework.testbed_model.traffic_generator
+   :members:
+   :show-inheritance:
+
+.. toctree::
+   :hidden:
+   :maxdepth: 1
+
+   framework.testbed_model.traffic_generator.traffic_generator
+   framework.testbed_model.traffic_generator.capturing_traffic_generator
+   framework.testbed_model.traffic_generator.scapy
diff --git a/dts/doc/framework.testbed_model.traffic_generator.scapy.rst b/dts/doc/framework.testbed_model.traffic_generator.scapy.rst
new file mode 100644
index 0000000000..7062914ec3
--- /dev/null
+++ b/dts/doc/framework.testbed_model.traffic_generator.scapy.rst
@@ -0,0 +1,6 @@
+scapy - Capturing Traffic Generator
+===================================
+
+.. automodule:: framework.testbed_model.traffic_generator.scapy
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.traffic_generator.traffic_generator.rst b/dts/doc/framework.testbed_model.traffic_generator.traffic_generator.rst
new file mode 100644
index 0000000000..e366d7f222
--- /dev/null
+++ b/dts/doc/framework.testbed_model.traffic_generator.traffic_generator.rst
@@ -0,0 +1,6 @@
+traffic\_generator - Base TG ABC
+================================
+
+.. automodule:: framework.testbed_model.traffic_generator.traffic_generator
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.testbed_model.virtual_device.rst b/dts/doc/framework.testbed_model.virtual_device.rst
new file mode 100644
index 0000000000..38e6c1d0bc
--- /dev/null
+++ b/dts/doc/framework.testbed_model.virtual_device.rst
@@ -0,0 +1,6 @@
+virtual\_device - Virtual Devices
+=================================
+
+.. automodule:: framework.testbed_model.virtual_device
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/framework.utils.rst b/dts/doc/framework.utils.rst
new file mode 100644
index 0000000000..0e7bb80666
--- /dev/null
+++ b/dts/doc/framework.utils.rst
@@ -0,0 +1,6 @@
+utils - Various Utilities
+=========================
+
+.. automodule:: framework.utils
+   :members:
+   :show-inheritance:
diff --git a/dts/doc/index.rst b/dts/doc/index.rst
new file mode 100644
index 0000000000..501e7204a7
--- /dev/null
+++ b/dts/doc/index.rst
@@ -0,0 +1,41 @@
+.. DPDK Test Suite documentation.
+
+Welcome to DPDK Test Suite's API documentation!
+===============================================
+
+.. automodule:: framework
+   :members:
+   :show-inheritance:
+
+Packages
+--------
+
+.. toctree::
+   :includehidden:
+   :maxdepth: 1
+
+   framework.testbed_model
+   framework.remote_session
+   framework.config
+
+Modules
+-------
+
+.. toctree::
+   :maxdepth: 1
+
+   framework.runner
+   framework.test_suite
+   framework.test_result
+   framework.settings
+   framework.logger
+   framework.utils
+   framework.exception
+
+
+Indices and tables
+==================
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`
-- 
2.34.1
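A practical note on extending the sources above: when a new module is added
under dts/framework, it needs a matching stub and a toctree entry. A
hypothetical sketch for a new framework.foo module (the name and title are
illustrative), saved as dts/doc/framework.foo.rst:

    foo - What Foo Does
    ===================

    .. automodule:: framework.foo
       :members:
       :show-inheritance:

The new stub would then be listed in the appropriate toctree, e.g. the
Modules toctree in dts/doc/index.rst.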



* [PATCH v4 3/3] dts: add API doc generation
  2024-04-12 10:14               ` [PATCH v4 0/3] dts: API docs generation Juraj Linkeš
  2024-04-12 10:14                 ` [PATCH v4 1/3] dts: add doc generation dependencies Juraj Linkeš
  2024-04-12 10:14                 ` [PATCH v4 2/3] dts: add API doc sources Juraj Linkeš
@ 2024-04-12 10:14                 ` Juraj Linkeš
  2 siblings, 0 replies; 255+ messages in thread
From: Juraj Linkeš @ 2024-04-12 10:14 UTC (permalink / raw)
  To: thomas, Honnappa.Nagarahalli, bruce.richardson, jspewock, probb,
	paul.szczepanek, Luca.Vizzarro, npratte
  Cc: dev, Juraj Linkeš

The tool used to generate DTS API docs is Sphinx, which is already in
use in DPDK. The same configuration is used to preserve style, with one
DTS-specific option that modifies how the sidebar displays the content
(applied only to the DTS docs, so the DPDK docs are unchanged).

Sphinx generates the documentation from Python docstrings. The docstring
format is the Google format [0], which requires the sphinx.ext.napoleon
extension. The other extension, sphinx.ext.intersphinx, enables linking
to objects in external documentation, such as the Python documentation.

There are two requirements for building DTS docs:
* The same Python version as DTS or higher, because Sphinx imports the
  code.
* The same Python packages as DTS, for the same reason.

[0] https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings
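
For illustration, a contrived docstring in the Google format [0] that the
Sphinx options in this patch are meant to render (the function and its
parameters are made up, not taken from the framework):

    def reserve_hugepages(amount: int, force: bool = False) -> None:
        """Reserve hugepages on the node.

        Args:
            amount: The number of hugepages to reserve.
            force: If :data:`True`, override the existing configuration.

        Raises:
            ValueError: If amount is negative.
        """
        if amount < 0:
            raise ValueError("amount must be non-negative")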

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
Reviewed-by: Jeremy Spewock <jspewock@iol.unh.edu>
Tested-by: Nicholas Pratte <npratte@iol.unh.edu>
---
 buildtools/call-sphinx-build.py | 33 +++++++++++++++++++---------
 doc/api/doxy-api-index.md       |  3 +++
 doc/api/doxy-api.conf.in        |  2 ++
 doc/api/meson.build             | 11 +++++++---
 doc/guides/conf.py              | 39 ++++++++++++++++++++++++++++-----
 doc/guides/meson.build          |  1 +
 doc/guides/tools/dts.rst        | 34 +++++++++++++++++++++++++++-
 dts/doc/meson.build             | 27 +++++++++++++++++++++++
 dts/meson.build                 | 16 ++++++++++++++
 meson.build                     |  1 +
 10 files changed, 148 insertions(+), 19 deletions(-)
 create mode 100644 dts/doc/meson.build
 create mode 100644 dts/meson.build

diff --git a/buildtools/call-sphinx-build.py b/buildtools/call-sphinx-build.py
index 39a60d09fa..aea771a64e 100755
--- a/buildtools/call-sphinx-build.py
+++ b/buildtools/call-sphinx-build.py
@@ -3,37 +3,50 @@
 # Copyright(c) 2019 Intel Corporation
 #
 
+import argparse
 import sys
 import os
 from os.path import join
 from subprocess import run, PIPE, STDOUT
 from packaging.version import Version
 
-# assign parameters to variables
-(sphinx, version, src, dst, *extra_args) = sys.argv[1:]
+parser = argparse.ArgumentParser()
+parser.add_argument('sphinx')
+parser.add_argument('version')
+parser.add_argument('src')
+parser.add_argument('dst')
+parser.add_argument('--dts-root', default=None)
+args, extra_args = parser.parse_known_args()
 
 # set the version in environment for sphinx to pick up
-os.environ['DPDK_VERSION'] = version
+os.environ['DPDK_VERSION'] = args.version
+if args.dts_root:
+    os.environ['DTS_ROOT'] = args.dts_root
 
 # for sphinx version >= 1.7 add parallelism using "-j auto"
-ver = run([sphinx, '--version'], stdout=PIPE,
+ver = run([args.sphinx, '--version'], stdout=PIPE,
           stderr=STDOUT).stdout.decode().split()[-1]
-sphinx_cmd = [sphinx] + extra_args
+sphinx_cmd = [args.sphinx] + extra_args
 if Version(ver) >= Version('1.7'):
     sphinx_cmd += ['-j', 'auto']
 
 # find all the files sphinx will process so we can write them as dependencies
 srcfiles = []
-for root, dirs, files in os.walk(src):
+for root, dirs, files in os.walk(args.src):
     srcfiles.extend([join(root, f) for f in files])
 
+if not os.path.exists(args.dst):
+    os.makedirs(args.dst)
+
 # run sphinx, putting the html output in a "html" directory
-with open(join(dst, 'sphinx_html.out'), 'w') as out:
-    process = run(sphinx_cmd + ['-b', 'html', src, join(dst, 'html')],
-                  stdout=out)
+with open(join(args.dst, 'sphinx_html.out'), 'w') as out:
+    process = run(
+        sphinx_cmd + ['-b', 'html', args.src, join(args.dst, 'html')],
+        stdout=out
+    )
 
 # create a gcc format .d file giving all the dependencies of this doc build
-with open(join(dst, '.html.d'), 'w') as d:
+with open(join(args.dst, '.html.d'), 'w') as d:
     d.write('html: ' + ' '.join(srcfiles) + '\n')
 
 sys.exit(process.returncode)
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 8c1eb8fafa..d5f823b7f0 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -243,3 +243,6 @@ The public API headers are grouped by topics:
   [experimental APIs](@ref rte_compat.h),
   [ABI versioning](@ref rte_function_versioning.h),
   [version](@ref rte_version.h)
+
+- **tests**:
+  [**DTS**](@dts_api_main_page)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 27afec8b3b..2e08c6a452 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -123,6 +123,8 @@ SEARCHENGINE            = YES
 SORT_MEMBER_DOCS        = NO
 SOURCE_BROWSER          = YES
 
+ALIASES                 = "dts_api_main_page=@DTS_API_MAIN_PAGE@"
+
 EXAMPLE_PATH            = @TOPDIR@/examples
 EXAMPLE_PATTERNS        = *.c
 EXAMPLE_RECURSIVE       = YES
diff --git a/doc/api/meson.build b/doc/api/meson.build
index 5b50692df9..ffc75d7b5a 100644
--- a/doc/api/meson.build
+++ b/doc/api/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Luca Boccassi <bluca@debian.org>
 
+doc_api_build_dir = meson.current_build_dir()
 doxygen = find_program('doxygen', required: get_option('enable_docs'))
 
 if not doxygen.found()
@@ -32,14 +33,18 @@ example = custom_target('examples.dox',
 # set up common Doxygen configuration
 cdata = configuration_data()
 cdata.set('VERSION', meson.project_version())
-cdata.set('API_EXAMPLES', join_paths(dpdk_build_root, 'doc', 'api', 'examples.dox'))
-cdata.set('OUTPUT', join_paths(dpdk_build_root, 'doc', 'api'))
+cdata.set('API_EXAMPLES', join_paths(doc_api_build_dir, 'examples.dox'))
+cdata.set('OUTPUT', doc_api_build_dir)
 cdata.set('TOPDIR', dpdk_source_root)
-cdata.set('STRIP_FROM_PATH', ' '.join([dpdk_source_root, join_paths(dpdk_build_root, 'doc', 'api')]))
+cdata.set('STRIP_FROM_PATH', ' '.join([dpdk_source_root, doc_api_build_dir]))
 cdata.set('WARN_AS_ERROR', 'NO')
 if get_option('werror')
     cdata.set('WARN_AS_ERROR', 'YES')
 endif
+# A local reference must be relative to the main index.html page
+# The path below can't be taken from the DTS meson file as that would
+# require recursive subdir traversal (doc, dts, then doc again)
+cdata.set('DTS_API_MAIN_PAGE', join_paths('..', 'dts', 'html', 'index.html'))
 
 # configure HTML Doxygen run
 html_cdata = configuration_data()
diff --git a/doc/guides/conf.py b/doc/guides/conf.py
index 0f7ff5282d..b442a1f76c 100644
--- a/doc/guides/conf.py
+++ b/doc/guides/conf.py
@@ -7,10 +7,9 @@
 from sphinx import __version__ as sphinx_version
 from os import listdir
 from os import environ
-from os.path import basename
-from os.path import dirname
+from os.path import basename, dirname
 from os.path import join as path_join
-from sys import argv, stderr
+from sys import argv, stderr, path
 
 import configparser
 
@@ -24,6 +23,37 @@
           file=stderr)
     pass
 
+# Napoleon enables the Google format of Python docstrings, used in DTS
+# Intersphinx allows linking to external projects, such as the Python docs, also used in DTS
+extensions = ['sphinx.ext.napoleon', 'sphinx.ext.intersphinx']
+
+# DTS Python docstring options
+autodoc_default_options = {
+    'members': True,
+    'member-order': 'bysource',
+    'show-inheritance': True,
+}
+autodoc_class_signature = 'separated'
+autodoc_typehints = 'both'
+autodoc_typehints_format = 'short'
+autodoc_typehints_description_target = 'documented'
+napoleon_numpy_docstring = False
+napoleon_attr_annotations = True
+napoleon_preprocess_types = True
+add_module_names = False
+toc_object_entries = True
+toc_object_entries_show_parents = 'hide'
+intersphinx_mapping = {'python': ('https://docs.python.org/3', None)}
+
+dts_root = environ.get('DTS_ROOT')
+if dts_root:
+    path.append(dts_root)
+    # DTS Sidebar config
+    html_theme_options = {
+        'collapse_navigation': False,
+        'navigation_depth': -1,
+    }
+
 stop_on_error = ('-W' in argv)
 
 project = 'Data Plane Development Kit'
@@ -35,8 +65,7 @@
 html_show_copyright = False
 highlight_language = 'none'
 
-release = environ.setdefault('DPDK_VERSION', "None")
-version = release
+version = environ.setdefault('DPDK_VERSION', "None")
 
 master_doc = 'index'
 
diff --git a/doc/guides/meson.build b/doc/guides/meson.build
index 51f81da2e3..8933d75f6b 100644
--- a/doc/guides/meson.build
+++ b/doc/guides/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Intel Corporation
 
+doc_guides_source_dir = meson.current_source_dir()
 sphinx = find_program('sphinx-build', required: get_option('enable_docs'))
 
 if not sphinx.found()
diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
index 47b218b2c6..d1c3c2af7a 100644
--- a/doc/guides/tools/dts.rst
+++ b/doc/guides/tools/dts.rst
@@ -280,7 +280,12 @@ and try not to divert much from it.
 The :ref:`DTS developer tools <dts_dev_tools>` will issue warnings
 when some of the basics are not met.
 
-The code must be properly documented with docstrings.
+The API documentation, which is a helpful reference when developing, may be accessed
+in the code directly or generated with the :ref:`API docs build steps <building_api_docs>`.
+When adding new files or modifying the directory structure, the corresponding changes must
+be made to the DTS API doc sources in ``dts/doc``.
+
+To that end, the code must be properly documented with docstrings.
 The style must conform to the `Google style
 <https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings>`_.
 See an example of the style `here
@@ -415,6 +420,33 @@ the DTS code check and format script.
 Refer to the script for usage: ``devtools/dts-check-format.sh -h``.
 
 
+.. _building_api_docs:
+
+Building DTS API docs
+---------------------
+
+To build DTS API docs, install the dependencies with Poetry, then enter its shell:
+
+.. code-block:: console
+
+   poetry install --no-root --with docs
+   poetry shell
+
+The documentation is built using the standard DPDK build system. After executing the meson command
+and entering Poetry's shell, build the documentation with:
+
+.. code-block:: console
+
+   ninja -C build dts-doc
+
+The output is generated in ``build/doc/api/dts/html``.
+
+.. Note::
+
+   Make sure to fix any Sphinx warnings when adding or updating docstrings. Also make sure to run
+   the ``devtools/dts-check-format.sh`` script and address any issues it finds.
+
+
 Configuration Schema
 --------------------
 
diff --git a/dts/doc/meson.build b/dts/doc/meson.build
new file mode 100644
index 0000000000..01b7b51034
--- /dev/null
+++ b/dts/doc/meson.build
@@ -0,0 +1,27 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+sphinx = find_program('sphinx-build', required: false)
+sphinx_apidoc = find_program('sphinx-apidoc', required: false)
+
+if not sphinx.found() or not sphinx_apidoc.found()
+    subdir_done()
+endif
+
+dts_doc_api_build_dir = join_paths(doc_api_build_dir, 'dts')
+
+extra_sphinx_args = ['-E', '-c', doc_guides_source_dir, '--dts-root', dts_dir]
+if get_option('werror')
+    extra_sphinx_args += '-W'
+endif
+
+htmldir = join_paths(get_option('datadir'), 'doc', 'dpdk', 'dts')
+dts_api_html = custom_target('dts_api_html',
+        output: 'html',
+        command: [sphinx_wrapper, sphinx, meson.project_version(),
+            meson.current_source_dir(), dts_doc_api_build_dir, extra_sphinx_args],
+        build_by_default: false,
+        install: get_option('enable_docs'),
+        install_dir: htmldir)
+doc_targets += dts_api_html
+doc_target_names += 'DTS_API_HTML'
diff --git a/dts/meson.build b/dts/meson.build
new file mode 100644
index 0000000000..e8ce0f06ac
--- /dev/null
+++ b/dts/meson.build
@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+doc_targets = []
+doc_target_names = []
+dts_dir = meson.current_source_dir()
+
+subdir('doc')
+
+if doc_targets.length() == 0
+    message = 'No docs targets found'
+else
+    message = 'Built docs:'
+endif
+run_target('dts-doc', command: [echo, message, doc_target_names],
+    depends: doc_targets)
diff --git a/meson.build b/meson.build
index 8b248d4505..835973a0ce 100644
--- a/meson.build
+++ b/meson.build
@@ -87,6 +87,7 @@ subdir('app')
 
 # build docs
 subdir('doc')
+subdir('dts')
 
 # build any examples explicitly requested - useful for developers - and
 # install any example code into the appropriate install path
-- 
2.34.1
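Putting the pieces together, a hypothetical end-to-end run of the new target
(the build directory name is illustrative):

    poetry install --no-root --with docs
    poetry shell
    meson setup build
    ninja -C build dts-doc

Under the hood, meson invokes the wrapper roughly as

    call-sphinx-build.py <sphinx-build> <version> dts/doc <builddir>/doc/api/dts \
        -E -c doc/guides --dts-root dts

and the HTML output lands in <builddir>/doc/api/dts/html.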



end of thread

Thread overview: 255+ messages
2023-03-23 10:40 [RFC PATCH v1 0/4] dts: add dts api docs Juraj Linkeš
2023-03-23 10:40 ` [RFC PATCH v1 1/4] dts: code adjustments for sphinx Juraj Linkeš
2023-03-23 10:40 ` [RFC PATCH v1 2/4] dts: add doc generation dependencies Juraj Linkeš
2023-03-23 10:40 ` [RFC PATCH v1 3/4] dts: add doc generation Juraj Linkeš
2023-03-23 10:40 ` [RFC PATCH v1 4/4] dts: format docstrigs to google format Juraj Linkeš
2023-04-28 19:33   ` Jeremy Spewock
2023-04-03  9:17 ` [RFC PATCH v1 0/4] dts: add dts api docs Juraj Linkeš
2023-04-03  9:42   ` Bruce Richardson
2023-04-25  8:20     ` Juraj Linkeš
2023-04-25  8:44       ` Bruce Richardson
2023-04-25  8:57         ` Juraj Linkeš
2023-04-25  9:43           ` Bruce Richardson
2023-05-03 11:33             ` Juraj Linkeš
2023-05-04 12:37 ` [RFC PATCH v2 " Juraj Linkeš
2023-05-04 12:37   ` [RFC PATCH v2 1/4] dts: code adjustments for sphinx Juraj Linkeš
2023-05-04 12:37   ` [RFC PATCH v2 2/4] dts: add doc generation dependencies Juraj Linkeš
2023-05-04 12:37   ` [RFC PATCH v2 3/4] dts: add doc generation Juraj Linkeš
2023-05-04 12:45     ` Bruce Richardson
2023-05-05  7:53       ` Juraj Linkeš
2023-05-05 10:24         ` Bruce Richardson
2023-05-05 10:41           ` Juraj Linkeš
2023-05-05 10:56     ` Bruce Richardson
2023-05-05 11:13       ` Juraj Linkeš
2023-05-05 13:28         ` Bruce Richardson
2023-05-09  9:23           ` Juraj Linkeš
2023-05-09  9:40             ` Bruce Richardson
2023-05-10 12:19               ` Juraj Linkeš
2023-05-04 12:37   ` [RFC PATCH v2 4/4] dts: format docstrigs to google format Juraj Linkeš
2023-05-05 14:06   ` [RFC PATCH v2 0/4] dts: add dts api docs Bruce Richardson
2023-05-09 15:28     ` Juraj Linkeš
2023-05-11  8:55     ` Juraj Linkeš
2023-05-11  9:14   ` [RFC PATCH v3 " Juraj Linkeš
2023-05-11  9:14     ` [RFC PATCH v3 1/4] dts: code adjustments for sphinx Juraj Linkeš
2023-05-11  9:14     ` [RFC PATCH v3 2/4] dts: add doc generation dependencies Juraj Linkeš
2023-05-11  9:14     ` [RFC PATCH v3 3/4] dts: add doc generation Juraj Linkeš
2023-05-11  9:14     ` [RFC PATCH v3 4/4] dts: format docstrigs to google format Juraj Linkeš
2023-06-21 18:27       ` Jeremy Spewock
2023-05-17 16:56     ` [RFC PATCH v3 0/4] dts: add dts api docs Bruce Richardson
2023-05-22  9:17       ` Juraj Linkeš
2023-08-31 10:04     ` [RFC PATCH v4 " Juraj Linkeš
2023-08-31 10:04       ` [RFC PATCH v4 1/4] dts: code adjustments for sphinx Juraj Linkeš
2023-10-22 14:30         ` Yoan Picchi
2023-10-23  6:44           ` Juraj Linkeš
2023-10-23 11:52             ` Yoan Picchi
2023-10-24  6:39               ` Juraj Linkeš
2023-10-24 12:21                 ` Yoan Picchi
2023-08-31 10:04       ` [RFC PATCH v4 2/4] dts: add doc generation dependencies Juraj Linkeš
2023-10-27 15:27         ` Yoan Picchi
2023-08-31 10:04       ` [RFC PATCH v4 3/4] dts: add doc generation Juraj Linkeš
2023-09-20  7:08         ` Juraj Linkeš
2023-10-26 16:43         ` Yoan Picchi
2023-10-27  9:52           ` Juraj Linkeš
2023-08-31 10:04       ` [RFC PATCH v4 4/4] dts: format docstrigs to google format Juraj Linkeš
2023-09-01 17:02         ` Jeremy Spewock
2023-10-31 12:10         ` Yoan Picchi
2023-11-02 10:17           ` Juraj Linkeš
2023-11-06 17:15       ` [PATCH v5 00/23] dts: add dts api docs Juraj Linkeš
2023-11-06 17:15         ` [PATCH v5 01/23] dts: code adjustments for doc generation Juraj Linkeš
2023-11-08 13:35           ` Yoan Picchi
2023-11-15  7:46             ` Juraj Linkeš
2023-11-06 17:15         ` [PATCH v5 02/23] dts: add docstring checker Juraj Linkeš
2023-11-07 17:38           ` Yoan Picchi
2023-11-06 17:15         ` [PATCH v5 03/23] dts: add basic developer docs Juraj Linkeš
2023-11-07 14:39           ` Yoan Picchi
2023-11-08  9:01             ` Juraj Linkeš
2023-11-06 17:15         ` [PATCH v5 04/23] dts: exceptions docstring update Juraj Linkeš
2023-11-06 17:15         ` [PATCH v5 05/23] dts: settings " Juraj Linkeš
2023-11-06 17:15         ` [PATCH v5 06/23] dts: logger and " Juraj Linkeš
2023-11-06 17:15         ` [PATCH v5 07/23] dts: dts runner and main " Juraj Linkeš
2023-11-06 17:15         ` [PATCH v5 08/23] dts: test suite " Juraj Linkeš
2023-11-06 17:15         ` [PATCH v5 09/23] dts: test result " Juraj Linkeš
2023-11-06 17:15         ` [PATCH v5 10/23] dts: config " Juraj Linkeš
2023-11-06 17:15         ` [PATCH v5 11/23] dts: remote session " Juraj Linkeš
2023-11-06 17:15         ` [PATCH v5 12/23] dts: interactive " Juraj Linkeš
2023-11-06 17:15         ` [PATCH v5 13/23] dts: port and virtual device " Juraj Linkeš
2023-11-06 17:15         ` [PATCH v5 14/23] dts: cpu " Juraj Linkeš
2023-11-06 17:15         ` [PATCH v5 15/23] dts: os session " Juraj Linkeš
2023-11-06 17:15         ` [PATCH v5 16/23] dts: posix and linux sessions " Juraj Linkeš
2023-11-06 17:15         ` [PATCH v5 17/23] dts: node " Juraj Linkeš
2023-11-06 17:15         ` [PATCH v5 18/23] dts: sut and tg nodes " Juraj Linkeš
2023-11-06 17:15         ` [PATCH v5 19/23] dts: base traffic generators " Juraj Linkeš
2023-11-06 17:15         ` [PATCH v5 20/23] dts: scapy tg " Juraj Linkeš
2023-11-06 17:15         ` [PATCH v5 21/23] dts: test suites " Juraj Linkeš
2023-11-06 17:16         ` [PATCH v5 22/23] dts: add doc generation dependencies Juraj Linkeš
2023-11-06 17:16         ` [PATCH v5 23/23] dts: add doc generation Juraj Linkeš
2023-11-08 12:53         ` [PATCH v6 01/23] dts: code adjustments for " Juraj Linkeš
2023-11-08 12:53           ` [PATCH v6 02/23] dts: add docstring checker Juraj Linkeš
2023-11-08 12:53           ` [PATCH v6 03/23] dts: add basic developer docs Juraj Linkeš
2023-11-08 12:53           ` [PATCH v6 04/23] dts: exceptions docstring update Juraj Linkeš
2023-11-08 12:53           ` [PATCH v6 05/23] dts: settings " Juraj Linkeš
2023-11-08 16:17             ` Yoan Picchi
2023-11-15 10:09               ` Juraj Linkeš
2023-11-08 12:53           ` [PATCH v6 06/23] dts: logger and " Juraj Linkeš
2023-11-08 17:14             ` Yoan Picchi
2023-11-15 10:11               ` Juraj Linkeš
2023-11-08 12:53           ` [PATCH v6 07/23] dts: dts runner and main " Juraj Linkeš
2023-11-08 12:53           ` [PATCH v6 08/23] dts: test suite " Juraj Linkeš
2023-11-08 12:53           ` [PATCH v6 09/23] dts: test result " Juraj Linkeš
2023-11-08 12:53           ` [PATCH v6 10/23] dts: config " Juraj Linkeš
2023-11-08 12:53           ` [PATCH v6 11/23] dts: remote session " Juraj Linkeš
2023-11-08 12:53           ` [PATCH v6 12/23] dts: interactive " Juraj Linkeš
2023-11-08 12:53           ` [PATCH v6 13/23] dts: port and virtual device " Juraj Linkeš
2023-11-08 12:53           ` [PATCH v6 14/23] dts: cpu " Juraj Linkeš
2023-11-08 12:53           ` [PATCH v6 15/23] dts: os session " Juraj Linkeš
2023-11-08 12:53           ` [PATCH v6 16/23] dts: posix and linux sessions " Juraj Linkeš
2023-11-08 12:53           ` [PATCH v6 17/23] dts: node " Juraj Linkeš
2023-11-08 12:53           ` [PATCH v6 18/23] dts: sut and tg nodes " Juraj Linkeš
2023-11-08 12:53           ` [PATCH v6 19/23] dts: base traffic generators " Juraj Linkeš
2023-11-08 12:53           ` [PATCH v6 20/23] dts: scapy tg " Juraj Linkeš
2023-11-08 12:53           ` [PATCH v6 21/23] dts: test suites " Juraj Linkeš
2023-11-08 12:53           ` [PATCH v6 22/23] dts: add doc generation dependencies Juraj Linkeš
2023-11-08 16:00             ` Yoan Picchi
2023-11-15 10:00               ` Juraj Linkeš
2023-11-08 12:53           ` [PATCH v6 23/23] dts: add doc generation Juraj Linkeš
2023-11-15 13:09             ` [PATCH v7 00/21] dts: docstrings update Juraj Linkeš
2023-11-15 13:09               ` [PATCH v7 01/21] dts: code adjustments for doc generation Juraj Linkeš
2023-11-16 21:04                 ` Jeremy Spewock
2023-11-20 16:10                   ` Juraj Linkeš
2023-11-20 16:02                 ` Yoan Picchi
2023-11-15 13:09               ` [PATCH v7 02/21] dts: add docstring checker Juraj Linkeš
2023-11-20 16:03                 ` Yoan Picchi
2023-11-15 13:09               ` [PATCH v7 03/21] dts: add basic developer docs Juraj Linkeš
2023-11-20 16:03                 ` Yoan Picchi
2023-11-15 13:09               ` [PATCH v7 04/21] dts: exceptions docstring update Juraj Linkeš
2023-11-20 16:22                 ` Yoan Picchi
2023-11-20 16:35                   ` Juraj Linkeš
2023-11-15 13:09               ` [PATCH v7 05/21] dts: settings " Juraj Linkeš
2023-11-15 13:09               ` [PATCH v7 06/21] dts: logger and utils " Juraj Linkeš
2023-11-20 16:23                 ` Yoan Picchi
2023-11-20 16:36                   ` Juraj Linkeš
2023-11-15 13:09               ` [PATCH v7 07/21] dts: dts runner and main " Juraj Linkeš
2023-11-16 21:51                 ` Jeremy Spewock
2023-11-20 16:13                   ` Juraj Linkeš
2023-11-20 17:43                 ` Yoan Picchi
2023-11-21  9:10                   ` Juraj Linkeš
2023-11-15 13:09               ` [PATCH v7 08/21] dts: test suite " Juraj Linkeš
2023-11-16 22:16                 ` Jeremy Spewock
2023-11-20 16:25                   ` Juraj Linkeš
2023-11-15 13:09               ` [PATCH v7 09/21] dts: test result " Juraj Linkeš
2023-11-16 22:47                 ` Jeremy Spewock
2023-11-20 16:33                   ` Juraj Linkeš
2023-11-30 21:20                     ` Jeremy Spewock
2023-11-15 13:09               ` [PATCH v7 10/21] dts: config " Juraj Linkeš
2023-11-21 15:08                 ` Yoan Picchi
2023-11-22 10:42                   ` Juraj Linkeš
2023-11-15 13:09               ` [PATCH v7 11/21] dts: remote session " Juraj Linkeš
2023-11-21 15:36                 ` Yoan Picchi
2023-11-22 11:13                   ` Juraj Linkeš
2023-11-22 11:25                     ` Yoan Picchi
2023-11-15 13:09               ` [PATCH v7 12/21] dts: interactive " Juraj Linkeš
2023-11-15 13:09               ` [PATCH v7 13/21] dts: port and virtual device " Juraj Linkeš
2023-11-15 13:09               ` [PATCH v7 14/21] dts: cpu " Juraj Linkeš
2023-11-21 17:45                 ` Yoan Picchi
2023-11-22 11:18                   ` Juraj Linkeš
2023-11-15 13:09               ` [PATCH v7 15/21] dts: os session " Juraj Linkeš
2023-11-22 11:50                 ` Yoan Picchi
2023-11-22 13:27                   ` Juraj Linkeš
2023-11-15 13:09               ` [PATCH v7 16/21] dts: posix and linux sessions " Juraj Linkeš
2023-11-22 13:24                 ` Yoan Picchi
2023-11-22 13:35                   ` Juraj Linkeš
2023-11-15 13:09               ` [PATCH v7 17/21] dts: node " Juraj Linkeš
2023-11-22 12:18                 ` Yoan Picchi
2023-11-22 13:28                   ` Juraj Linkeš
2023-11-15 13:09               ` [PATCH v7 18/21] dts: sut and tg nodes " Juraj Linkeš
2023-11-22 13:12                 ` Yoan Picchi
2023-11-22 13:34                   ` Juraj Linkeš
2023-11-15 13:09               ` [PATCH v7 19/21] dts: base traffic generators " Juraj Linkeš
2023-11-21 16:20                 ` Yoan Picchi
2023-11-22 11:38                   ` Juraj Linkeš
2023-11-22 11:56                     ` Yoan Picchi
2023-11-22 13:11                       ` Juraj Linkeš
2023-11-15 13:09               ` [PATCH v7 20/21] dts: scapy tg " Juraj Linkeš
2023-11-21 16:33                 ` Yoan Picchi
2023-11-22 13:18                   ` Juraj Linkeš
2023-11-15 13:09               ` [PATCH v7 21/21] dts: test suites " Juraj Linkeš
2023-11-16 17:36                 ` Yoan Picchi
2023-11-20 10:17                   ` Juraj Linkeš
2023-11-20 12:50                     ` Yoan Picchi
2023-11-22 13:40                       ` Juraj Linkeš
2023-11-23 15:13               ` [PATCH v8 00/21] dts: docstrings update Juraj Linkeš
2023-11-23 15:13                 ` [PATCH v8 01/21] dts: code adjustments for doc generation Juraj Linkeš
2023-11-23 15:13                 ` [PATCH v8 02/21] dts: add docstring checker Juraj Linkeš
2023-11-23 15:13                 ` [PATCH v8 03/21] dts: add basic developer docs Juraj Linkeš
2023-11-23 15:13                 ` [PATCH v8 04/21] dts: exceptions docstring update Juraj Linkeš
2023-11-23 15:13                 ` [PATCH v8 05/21] dts: settings " Juraj Linkeš
2023-11-23 15:13                 ` [PATCH v8 06/21] dts: logger and utils " Juraj Linkeš
2023-11-23 15:13                 ` [PATCH v8 07/21] dts: dts runner and main " Juraj Linkeš
2023-11-23 15:13                 ` [PATCH v8 08/21] dts: test suite " Juraj Linkeš
2023-11-23 15:13                 ` [PATCH v8 09/21] dts: test result " Juraj Linkeš
2023-11-23 15:13                 ` [PATCH v8 10/21] dts: config " Juraj Linkeš
2023-11-23 15:13                 ` [PATCH v8 11/21] dts: remote session " Juraj Linkeš
2023-11-23 15:13                 ` [PATCH v8 12/21] dts: interactive " Juraj Linkeš
2023-11-30 21:49                   ` Jeremy Spewock
2023-12-04  9:50                     ` Juraj Linkeš
2023-11-23 15:13                 ` [PATCH v8 13/21] dts: port and virtual device " Juraj Linkeš
2023-11-23 15:13                 ` [PATCH v8 14/21] dts: cpu " Juraj Linkeš
2023-11-23 15:13                 ` [PATCH v8 15/21] dts: os session " Juraj Linkeš
2023-12-01 17:33                   ` Jeremy Spewock
2023-12-04  9:53                     ` Juraj Linkeš
2023-11-23 15:13                 ` [PATCH v8 16/21] dts: posix and linux sessions " Juraj Linkeš
2023-11-23 15:13                 ` [PATCH v8 17/21] dts: node " Juraj Linkeš
2023-11-23 15:13                 ` [PATCH v8 18/21] dts: sut and tg nodes " Juraj Linkeš
2023-12-01 18:06                   ` Jeremy Spewock
2023-12-04 10:02                     ` Juraj Linkeš
2023-12-04 11:02                       ` Bruce Richardson
2023-11-23 15:13                 ` [PATCH v8 19/21] dts: base traffic generators " Juraj Linkeš
2023-12-01 18:05                   ` Jeremy Spewock
2023-12-04 10:03                     ` Juraj Linkeš
2023-11-23 15:13                 ` [PATCH v8 20/21] dts: scapy tg " Juraj Linkeš
2023-12-01 18:17                   ` Jeremy Spewock
2023-12-04 10:07                     ` Juraj Linkeš
2023-11-23 15:13                 ` [PATCH v8 21/21] dts: test suites " Juraj Linkeš
2023-12-01 16:00                 ` [PATCH v8 00/21] dts: docstrings update Yoan Picchi
2023-12-01 18:23                   ` Jeremy Spewock
2023-12-04 10:24                 ` [PATCH v9 " Juraj Linkeš
2023-12-04 10:24                   ` [PATCH v9 01/21] dts: code adjustments for doc generation Juraj Linkeš
2023-12-04 10:24                   ` [PATCH v9 02/21] dts: add docstring checker Juraj Linkeš
2023-12-04 10:24                   ` [PATCH v9 03/21] dts: add basic developer docs Juraj Linkeš
2023-12-04 10:24                   ` [PATCH v9 04/21] dts: exceptions docstring update Juraj Linkeš
2023-12-04 10:24                   ` [PATCH v9 05/21] dts: settings " Juraj Linkeš
2023-12-04 10:24                   ` [PATCH v9 06/21] dts: logger and utils " Juraj Linkeš
2023-12-04 10:24                   ` [PATCH v9 07/21] dts: dts runner and main " Juraj Linkeš
2023-12-04 10:24                   ` [PATCH v9 08/21] dts: test suite " Juraj Linkeš
2023-12-04 10:24                   ` [PATCH v9 09/21] dts: test result " Juraj Linkeš
2023-12-04 10:24                   ` [PATCH v9 10/21] dts: config " Juraj Linkeš
2023-12-04 10:24                   ` [PATCH v9 11/21] dts: remote session " Juraj Linkeš
2023-12-04 10:24                   ` [PATCH v9 12/21] dts: interactive " Juraj Linkeš
2023-12-04 10:24                   ` [PATCH v9 13/21] dts: port and virtual device " Juraj Linkeš
2023-12-04 10:24                   ` [PATCH v9 14/21] dts: cpu " Juraj Linkeš
2023-12-04 10:24                   ` [PATCH v9 15/21] dts: os session " Juraj Linkeš
2023-12-04 10:24                   ` [PATCH v9 16/21] dts: posix and linux sessions " Juraj Linkeš
2023-12-04 10:24                   ` [PATCH v9 17/21] dts: node " Juraj Linkeš
2023-12-04 10:24                   ` [PATCH v9 18/21] dts: sut and tg nodes " Juraj Linkeš
2023-12-04 10:24                   ` [PATCH v9 19/21] dts: base traffic generators " Juraj Linkeš
2023-12-04 10:24                   ` [PATCH v9 20/21] dts: scapy tg " Juraj Linkeš
2023-12-04 10:24                   ` [PATCH v9 21/21] dts: test suites " Juraj Linkeš
2023-12-05 18:39                     ` Jeremy Spewock
2023-12-21 11:48                   ` [PATCH v9 00/21] dts: docstrings update Thomas Monjalon
2023-11-15 13:36             ` [PATCH v1 0/2] dts: api docs generation Juraj Linkeš
2023-11-15 13:36               ` [PATCH v1 1/2] dts: add doc generation dependencies Juraj Linkeš
2023-11-15 13:36               ` [PATCH v1 2/2] dts: add doc generation Juraj Linkeš
2024-01-22 12:00               ` [PATCH v2 0/3] dts: API docs generation Juraj Linkeš
2024-01-22 12:00                 ` [PATCH v2 1/3] dts: add doc generation dependencies Juraj Linkeš
2024-01-22 12:00                 ` [PATCH v2 2/3] dts: add API doc sources Juraj Linkeš
2024-01-22 12:00                 ` [PATCH v2 3/3] dts: add API doc generation Juraj Linkeš
2024-01-22 16:35               ` [PATCH v3 0/3] dts: API docs generation Juraj Linkeš
2024-01-22 16:35                 ` [PATCH v3 1/3] dts: add doc generation dependencies Juraj Linkeš
2024-01-22 16:35                 ` [PATCH v3 2/3] dts: add API doc sources Juraj Linkeš
2024-01-22 16:35                 ` [PATCH v3 3/3] dts: add API doc generation Juraj Linkeš
2024-01-29 17:09                   ` Jeremy Spewock
     [not found]                   ` <CAJvnSUCNjo0p-yhROF1MNLKhjiAw2QTyTHO2hpOaVVUn0xnJ0A@mail.gmail.com>
2024-02-29 18:12                     ` Nicholas Pratte
2024-04-12 10:14               ` [PATCH v4 0/3] dts: API docs generation Juraj Linkeš
2024-04-12 10:14                 ` [PATCH v4 1/3] dts: add doc generation dependencies Juraj Linkeš
2024-04-12 10:14                 ` [PATCH v4 2/3] dts: add API doc sources Juraj Linkeš
2024-04-12 10:14                 ` [PATCH v4 3/3] dts: add API doc generation Juraj Linkeš
