DPDK patches and discussions
* [PATCH 0/6] dts: add file management
@ 2025-07-25 15:11 Luca Vizzarro
  2025-07-25 15:11 ` [PATCH 1/6] dts: merge RemoteSession class Luca Vizzarro
                   ` (5 more replies)
  0 siblings, 6 replies; 7+ messages in thread
From: Luca Vizzarro @ 2025-07-25 15:11 UTC (permalink / raw)
  To: dev; +Cc: Luca Vizzarro, Patrick Robb

Hello everyone,

I am sending in a new major feature for DTS. It introduces a unified,
abstracted way to handle files in DTS. Everything is handled directly
through the Artifact class, which provides several file-related methods,
including a standard file object interface for file read/write
operations.
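As a rough illustration of the file-object idea (the class and names below are
illustrative stand-ins, not the actual DTS API), Python's standard io layering
lets a raw stream — which in DTS would be backed by a remote channel — expose
the familiar text file interface:

```python
import io


class RemoteRaw(io.RawIOBase):
    """Minimal raw stream standing in for a remote (e.g. SFTP-backed) file."""

    def __init__(self, data: bytes):
        self._inner = io.BytesIO(data)  # stand-in for the remote channel

    def readinto(self, b) -> int:
        data = self._inner.read(len(b))
        b[: len(data)] = data
        return len(data)

    def readable(self) -> bool:
        return True


# Layering the standard buffered and text wrappers over the raw stream
# yields the usual Python file API without reimplementing it.
raw = RemoteRaw(b"Hello, World!\n")
text = io.TextIOWrapper(io.BufferedReader(raw), encoding="utf-8")
assert text.read() == "Hello, World!\n"
```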

Kind regards,
Luca

Luca Vizzarro (6):
  dts: merge RemoteSession class
  dts: add node retriever by identifier
  dts: add current test suite and cases to context
  dts: add artifact module
  dts: make log files into artifacts
  dts: use artifacts in packet capture and softnic

 doc/api/dts/framework.remote_session.rst      |   1 -
 .../framework.remote_session.ssh_session.rst  |   8 -
 .../dts/framework.testbed_model.artifact.rst  |   8 +
 doc/api/dts/framework.testbed_model.rst       |   1 +
 dts/framework/context.py                      |   7 +-
 dts/framework/logger.py                       | 113 ++--
 dts/framework/remote_session/__init__.py      |   3 +-
 .../remote_session/remote_session.py          | 110 ++-
 dts/framework/remote_session/ssh_session.py   | 116 ----
 dts/framework/runner.py                       |   5 +-
 dts/framework/test_run.py                     |  20 +-
 dts/framework/testbed_model/artifact.py       | 628 ++++++++++++++++++
 dts/framework/testbed_model/node.py           |  38 ++
 dts/framework/testbed_model/topology.py       |   8 +-
 dts/tests/TestSuite_packet_capture.py         |  73 +-
 dts/tests/TestSuite_softnic.py                |  95 ++-
 16 files changed, 908 insertions(+), 326 deletions(-)
 delete mode 100644 doc/api/dts/framework.remote_session.ssh_session.rst
 create mode 100644 doc/api/dts/framework.testbed_model.artifact.rst
 delete mode 100644 dts/framework/remote_session/ssh_session.py
 create mode 100644 dts/framework/testbed_model/artifact.py

-- 
2.43.0


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH 1/6] dts: merge RemoteSession class
  2025-07-25 15:11 [PATCH 0/6] dts: add file management Luca Vizzarro
@ 2025-07-25 15:11 ` Luca Vizzarro
  2025-07-25 15:11 ` [PATCH 2/6] dts: add node retriever by identifier Luca Vizzarro
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Luca Vizzarro @ 2025-07-25 15:11 UTC (permalink / raw)
  To: dev; +Cc: Luca Vizzarro, Paul Szczepanek, Patrick Robb

Merge the RemoteSession class with SSHSession, as SSH is currently the
only transport in use and a separate channel abstraction is not needed.
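The merged `_connect` in the diff below relies on Python's `for ... else`
idiom: the `else` branch runs only when the loop finishes without `break`,
i.e. when every attempt failed. A minimal standalone sketch of that retry
pattern (names here are illustrative, not the DTS code):

```python
def connect_with_retries(attempt_fn, retries: int = 10):
    """Return attempt_fn()'s result, retrying transient failures.

    The loop's `else` clause fires only if no attempt ever broke out,
    i.e. all retries were exhausted.
    """
    errors = []
    for attempt in range(retries):
        try:
            result = attempt_fn()
        except ConnectionError as e:
            # Record each distinct failure for the final error report.
            error = repr(e)
            if error not in errors:
                errors.append(error)
        else:
            break  # success: skip the loop's else clause
    else:
        raise ConnectionError(f"all {retries} attempts failed: {errors}")
    return result


# A flaky callable that succeeds on the third try.
calls = {"n": 0}


def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "open"


assert connect_with_retries(flaky) == "open"
assert calls["n"] == 3
```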

Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
---
 doc/api/dts/framework.remote_session.rst      |   1 -
 .../framework.remote_session.ssh_session.rst  |   8 --
 dts/framework/remote_session/__init__.py      |   3 +-
 .../remote_session/remote_session.py          | 110 ++++++++++++-----
 dts/framework/remote_session/ssh_session.py   | 116 ------------------
 5 files changed, 79 insertions(+), 159 deletions(-)
 delete mode 100644 doc/api/dts/framework.remote_session.ssh_session.rst
 delete mode 100644 dts/framework/remote_session/ssh_session.py

diff --git a/doc/api/dts/framework.remote_session.rst b/doc/api/dts/framework.remote_session.rst
index 27c9153e64..e4ebd21ab9 100644
--- a/doc/api/dts/framework.remote_session.rst
+++ b/doc/api/dts/framework.remote_session.rst
@@ -12,7 +12,6 @@ remote\_session - Node Connections Package
    :maxdepth: 1
 
    framework.remote_session.remote_session
-   framework.remote_session.ssh_session
    framework.remote_session.interactive_remote_session
    framework.remote_session.interactive_shell
    framework.remote_session.shell_pool
diff --git a/doc/api/dts/framework.remote_session.ssh_session.rst b/doc/api/dts/framework.remote_session.ssh_session.rst
deleted file mode 100644
index 4bb51d7db2..0000000000
--- a/doc/api/dts/framework.remote_session.ssh_session.rst
+++ /dev/null
@@ -1,8 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
-
-ssh\_session - SSH Remote Session
-=================================
-
-.. automodule:: framework.remote_session.ssh_session
-   :members:
-   :show-inheritance:
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index 1a5cf6abd3..2c0922acd2 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -17,7 +17,6 @@
 
 from .interactive_remote_session import InteractiveRemoteSession
 from .remote_session import RemoteSession
-from .ssh_session import SSHSession
 
 
 def create_remote_session(
@@ -36,7 +35,7 @@ def create_remote_session(
     Returns:
         The SSH remote session.
     """
-    return SSHSession(node_config, name, logger)
+    return RemoteSession(node_config, name, logger)
 
 
 def create_interactive_session(
diff --git a/dts/framework/remote_session/remote_session.py b/dts/framework/remote_session/remote_session.py
index 89d4618c41..cdb2dc1ed5 100644
--- a/dts/framework/remote_session/remote_session.py
+++ b/dts/framework/remote_session/remote_session.py
@@ -4,18 +4,33 @@
 # Copyright(c) 2022-2023 University of New Hampshire
 # Copyright(c) 2024 Arm Limited
 
-"""Base remote session.
+"""SSH remote session."""
 
-This module contains the abstract base class for remote sessions and defines
-the structure of the result of a command execution.
-"""
-
-from abc import ABC, abstractmethod
+import socket
+import traceback
 from dataclasses import InitVar, dataclass, field
 from pathlib import Path, PurePath
 
+from fabric import Connection  # type: ignore[import-untyped]
+from invoke.exceptions import (
+    CommandTimedOut,
+    ThreadException,
+    UnexpectedExit,
+)
+from paramiko.ssh_exception import (
+    AuthenticationException,
+    BadHostKeyException,
+    NoValidConnectionsError,
+    SSHException,
+)
+
 from framework.config.node import NodeConfiguration
-from framework.exception import RemoteCommandExecutionError
+from framework.exception import (
+    RemoteCommandExecutionError,
+    SSHConnectionError,
+    SSHSessionDeadError,
+    SSHTimeoutError,
+)
 from framework.logger import DTSLogger
 from framework.settings import SETTINGS
 
@@ -63,14 +78,11 @@ def __str__(self) -> str:
         )
 
 
-class RemoteSession(ABC):
+class RemoteSession:
     """Non-interactive remote session.
 
-    The abstract methods must be implemented in order to connect to a remote host (node)
-    and maintain a remote session.
-    The subclasses must use (or implement) some underlying transport protocol (e.g. SSH)
-    to implement the methods. On top of that, it provides some basic services common to all
-    subclasses, such as keeping history and logging what's being executed on the remote node.
+    The connection is implemented with
+    `the Fabric Python library <https://docs.fabfile.org/en/latest/>`_.
 
     Attributes:
         name: The name of the session.
@@ -82,6 +94,7 @@ class RemoteSession(ABC):
         password: The password used in the connection. Most frequently empty,
             as the use of passwords is discouraged.
         history: The executed commands during this session.
+        session: The underlying Fabric SSH session.
     """
 
     name: str
@@ -91,6 +104,7 @@ class RemoteSession(ABC):
     username: str
     password: str
     history: list[CommandResult]
+    session: Connection
     _logger: DTSLogger
     _node_config: NodeConfiguration
 
@@ -128,7 +142,6 @@ def __init__(
         self._connect()
         self._logger.info(f"Connection to {self.username}@{self.hostname} successful.")
 
-    @abstractmethod
     def _connect(self) -> None:
         """Create a connection to the node.
 
@@ -137,7 +150,42 @@ def _connect(self) -> None:
         The implementation must except all exceptions and convert them to an SSHConnectionError.
 
         The implementation may optionally implement retry attempts.
+
+        Raises:
+            SSHConnectionError: If the connection to the node was not successful.
         """
+        errors = []
+        retry_attempts = 10
+        login_timeout = 20 if self.port else 10
+        for retry_attempt in range(retry_attempts):
+            try:
+                self.session = Connection(
+                    self.ip,
+                    user=self.username,
+                    port=self.port,
+                    connect_kwargs={"password": self.password},
+                    connect_timeout=login_timeout,
+                )
+                self.session.open()
+
+            except (ValueError, BadHostKeyException, AuthenticationException) as e:
+                self._logger.exception(e)
+                raise SSHConnectionError(self.hostname) from e
+
+            except (NoValidConnectionsError, socket.error, SSHException) as e:
+                self._logger.debug(traceback.format_exc())
+                self._logger.warning(e)
+
+                error = repr(e)
+                if error not in errors:
+                    errors.append(error)
+
+                self._logger.info(f"Retrying connection: retry number {retry_attempt + 1}.")
+
+            else:
+                break
+        else:
+            raise SSHConnectionError(self.hostname, errors)
 
     def send_command(
         self,
@@ -166,7 +214,18 @@ def send_command(
             The output of the command along with the return code.
         """
         self._logger.info(f"Sending: '{command}'" + (f" with env vars: '{env}'" if env else ""))
-        result = self._send_command(command, timeout, env)
+
+        try:
+            output = self.session.run(command, env=env, warn=True, hide=True, timeout=timeout)
+        except (UnexpectedExit, ThreadException) as e:
+            self._logger.exception(e)
+            raise SSHSessionDeadError(self.hostname) from e
+        except CommandTimedOut as e:
+            self._logger.exception(e)
+            raise SSHTimeoutError(command) from e
+
+        result = CommandResult(self.name, command, output.stdout, output.stderr, output.return_code)
+
         if verify and result.return_code:
             self._logger.debug(
                 f"Command '{command}' failed with return code '{result.return_code}'"
@@ -178,24 +237,10 @@ def send_command(
         self.history.append(result)
         return result
 
-    @abstractmethod
-    def _send_command(self, command: str, timeout: float, env: dict | None) -> CommandResult:
-        """Send a command to the connected node.
-
-        The implementation must execute the command remotely with `env` environment variables
-        and return the result.
-
-        The implementation must except all exceptions and raise:
-
-            * SSHSessionDeadError if the session is not alive,
-            * SSHTimeoutError if the command execution times out.
-        """
-
-    @abstractmethod
     def is_alive(self) -> bool:
         """Check whether the remote session is still responding."""
+        return self.session.is_connected
 
-    @abstractmethod
     def copy_from(self, source_file: str | PurePath, destination_dir: str | Path) -> None:
         """Copy a file from the remote Node to the local filesystem.
 
@@ -207,8 +252,8 @@ def copy_from(self, source_file: str | PurePath, destination_dir: str | Path) ->
             destination_dir: The directory path on the local filesystem where the `source_file`
                 will be saved.
         """
+        self.session.get(str(source_file), str(destination_dir))
 
-    @abstractmethod
     def copy_to(self, source_file: str | Path, destination_dir: str | PurePath) -> None:
         """Copy a file from local filesystem to the remote Node.
 
@@ -220,7 +265,8 @@ def copy_to(self, source_file: str | Path, destination_dir: str | PurePath) -> N
             destination_dir: The directory path on the remote Node where the `source_file`
                 will be saved.
         """
+        self.session.put(str(source_file), str(destination_dir))
 
-    @abstractmethod
     def close(self) -> None:
         """Close the remote session and free all used resources."""
+        self.session.close()
diff --git a/dts/framework/remote_session/ssh_session.py b/dts/framework/remote_session/ssh_session.py
deleted file mode 100644
index e6e4704bc2..0000000000
--- a/dts/framework/remote_session/ssh_session.py
+++ /dev/null
@@ -1,116 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2023 PANTHEON.tech s.r.o.
-
-"""SSH remote session."""
-
-import socket
-import traceback
-from pathlib import Path, PurePath
-
-from fabric import Connection  # type: ignore[import-untyped]
-from invoke.exceptions import (
-    CommandTimedOut,
-    ThreadException,
-    UnexpectedExit,
-)
-from paramiko.ssh_exception import (
-    AuthenticationException,
-    BadHostKeyException,
-    NoValidConnectionsError,
-    SSHException,
-)
-
-from framework.exception import SSHConnectionError, SSHSessionDeadError, SSHTimeoutError
-
-from .remote_session import CommandResult, RemoteSession
-
-
-class SSHSession(RemoteSession):
-    """A persistent SSH connection to a remote Node.
-
-    The connection is implemented with
-    `the Fabric Python library <https://docs.fabfile.org/en/latest/>`_.
-
-    Attributes:
-        session: The underlying Fabric SSH connection.
-
-    Raises:
-        SSHConnectionError: The connection cannot be established.
-    """
-
-    session: Connection
-
-    def _connect(self) -> None:
-        errors = []
-        retry_attempts = 10
-        login_timeout = 20 if self.port else 10
-        for retry_attempt in range(retry_attempts):
-            try:
-                self.session = Connection(
-                    self.ip,
-                    user=self.username,
-                    port=self.port,
-                    connect_kwargs={"password": self.password},
-                    connect_timeout=login_timeout,
-                )
-                self.session.open()
-
-            except (ValueError, BadHostKeyException, AuthenticationException) as e:
-                self._logger.exception(e)
-                raise SSHConnectionError(self.hostname) from e
-
-            except (NoValidConnectionsError, socket.error, SSHException) as e:
-                self._logger.debug(traceback.format_exc())
-                self._logger.warning(e)
-
-                error = repr(e)
-                if error not in errors:
-                    errors.append(error)
-
-                self._logger.info(f"Retrying connection: retry number {retry_attempt + 1}.")
-
-            else:
-                break
-        else:
-            raise SSHConnectionError(self.hostname, errors)
-
-    def _send_command(self, command: str, timeout: float, env: dict | None) -> CommandResult:
-        """Send a command and return the result of the execution.
-
-        Args:
-            command: The command to execute.
-            timeout: Wait at most this long in seconds for the command execution to complete.
-            env: Extra environment variables that will be used in command execution.
-
-        Raises:
-            SSHSessionDeadError: The session died while executing the command.
-            SSHTimeoutError: The command execution timed out.
-        """
-        try:
-            output = self.session.run(command, env=env, warn=True, hide=True, timeout=timeout)
-
-        except (UnexpectedExit, ThreadException) as e:
-            self._logger.exception(e)
-            raise SSHSessionDeadError(self.hostname) from e
-
-        except CommandTimedOut as e:
-            self._logger.exception(e)
-            raise SSHTimeoutError(command) from e
-
-        return CommandResult(self.name, command, output.stdout, output.stderr, output.return_code)
-
-    def is_alive(self) -> bool:
-        """Overrides :meth:`~.remote_session.RemoteSession.is_alive`."""
-        return self.session.is_connected
-
-    def copy_from(self, source_file: str | PurePath, destination_dir: str | Path) -> None:
-        """Overrides :meth:`~.remote_session.RemoteSession.copy_from`."""
-        self.session.get(str(source_file), str(destination_dir))
-
-    def copy_to(self, source_file: str | Path, destination_dir: str | PurePath) -> None:
-        """Overrides :meth:`~.remote_session.RemoteSession.copy_to`."""
-        self.session.put(str(source_file), str(destination_dir))
-
-    def close(self) -> None:
-        """Overrides :meth:`~.remote_session.RemoteSession.close`."""
-        self.session.close()
-- 
2.43.0



* [PATCH 2/6] dts: add node retriever by identifier
  2025-07-25 15:11 [PATCH 0/6] dts: add file management Luca Vizzarro
  2025-07-25 15:11 ` [PATCH 1/6] dts: merge RemoteSession class Luca Vizzarro
@ 2025-07-25 15:11 ` Luca Vizzarro
  2025-07-25 15:11 ` [PATCH 3/6] dts: add current test suite and cases to context Luca Vizzarro
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Luca Vizzarro @ 2025-07-25 15:11 UTC (permalink / raw)
  To: dev; +Cc: Luca Vizzarro, Paul Szczepanek, Patrick Robb

Refactor the node identification logic. Add a facility to retrieve the
current nodes from the context.

Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
---
 dts/framework/testbed_model/node.py     | 38 +++++++++++++++++++++++++
 dts/framework/testbed_model/topology.py |  8 ++----
 2 files changed, 40 insertions(+), 6 deletions(-)

diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index e6737cd173..474157490d 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -15,6 +15,7 @@
 
 from functools import cached_property
 from pathlib import PurePath
+from typing import Literal, TypeAlias
 
 from framework.config.node import (
     OS,
@@ -202,3 +203,40 @@ def create_session(node_config: NodeConfiguration, name: str, logger: DTSLogger)
             return LinuxSession(node_config, name, logger)
         case _:
             raise ConfigurationError(f"Unsupported OS {node_config.os}")
+
+
+LocalNodeIdentifier: TypeAlias = Literal["local"]
+"""Local node identifier for testbed model."""
+
+RemoteNodeIdentifier: TypeAlias = Literal["sut", "tg"]
+"""Remote node identifiers for testbed model."""
+
+NodeIdentifier: TypeAlias = Literal["local", "sut", "tg"]
+"""Node identifiers for testbed model."""
+
+
+def get_node(node_identifier: NodeIdentifier) -> Node | None:
+    """Get the node based on the identifier.
+
+    Args:
+        node_identifier: The identifier of the node.
+
+    Returns:
+        The node object corresponding to the identifier, or :data:`None` if the identifier is
+            "local".
+
+    Raises:
+        InternalError: If the node identifier is unknown.
+    """
+    if node_identifier == "local":
+        return None
+
+    from framework.context import get_ctx
+
+    ctx = get_ctx()
+    if node_identifier == "sut":
+        return ctx.sut_node
+    elif node_identifier == "tg":
+        return ctx.tg_node
+    else:
+        raise InternalError(f"Unknown node identifier: {node_identifier}")
diff --git a/dts/framework/testbed_model/topology.py b/dts/framework/testbed_model/topology.py
index 899ea0ad3a..9fc056b330 100644
--- a/dts/framework/testbed_model/topology.py
+++ b/dts/framework/testbed_model/topology.py
@@ -12,12 +12,12 @@
 from collections.abc import Iterator
 from dataclasses import dataclass
 from enum import Enum
-from typing import Literal, NamedTuple
+from typing import NamedTuple
 
 from typing_extensions import Self
 
 from framework.exception import ConfigurationError, InternalError
-from framework.testbed_model.node import Node
+from framework.testbed_model.node import Node, NodeIdentifier
 
 from .port import DriverKind, Port, PortConfig
 
@@ -47,10 +47,6 @@ class PortLink(NamedTuple):
     tg_port: Port
 
 
-NodeIdentifier = Literal["sut", "tg"]
-"""The node identifier."""
-
-
 @dataclass(frozen=True)
 class Topology:
     """Testbed topology.
-- 
2.43.0



* [PATCH 3/6] dts: add current test suite and cases to context
  2025-07-25 15:11 [PATCH 0/6] dts: add file management Luca Vizzarro
  2025-07-25 15:11 ` [PATCH 1/6] dts: merge RemoteSession class Luca Vizzarro
  2025-07-25 15:11 ` [PATCH 2/6] dts: add node retriever by identifier Luca Vizzarro
@ 2025-07-25 15:11 ` Luca Vizzarro
  2025-07-25 15:11 ` [PATCH 4/6] dts: add artifact module Luca Vizzarro
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Luca Vizzarro @ 2025-07-25 15:11 UTC (permalink / raw)
  To: dev; +Cc: Luca Vizzarro, Paul Szczepanek, Patrick Robb

Add the current test suite and case to the context, so that they can be
accessed by helper functions, etc.

Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
---
 dts/framework/context.py  | 7 ++++++-
 dts/framework/test_run.py | 2 ++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/dts/framework/context.py b/dts/framework/context.py
index 4360bc8699..b99a5a4a69 100644
--- a/dts/framework/context.py
+++ b/dts/framework/context.py
@@ -5,7 +5,7 @@
 
 import functools
 from dataclasses import MISSING, dataclass, field, fields
-from typing import TYPE_CHECKING, ParamSpec
+from typing import TYPE_CHECKING, ParamSpec, Union
 
 from framework.exception import InternalError
 from framework.remote_session.shell_pool import ShellPool
@@ -16,6 +16,7 @@
 
 if TYPE_CHECKING:
     from framework.remote_session.dpdk import DPDKBuildEnvironment, DPDKRuntimeEnvironment
+    from framework.test_suite import TestCase, TestSuite
     from framework.testbed_model.traffic_generator.traffic_generator import TrafficGenerator
 
 P = ParamSpec("P")
@@ -26,6 +27,8 @@ class LocalContext:
     """Updatable context local to test suites and cases.
 
     Attributes:
+        current_test_suite: The currently running test suite, if any.
+        current_test_case: The currently running test case, if any.
         lcore_filter_specifier: A number of lcores/cores/sockets to use or a list of lcore ids to
             use. The default will select one lcore for each of two cores on one socket, in ascending
             order of core ids.
@@ -37,6 +40,8 @@ class LocalContext:
             and no output is gathered within the timeout, an exception is thrown.
     """
 
+    current_test_suite: Union["TestSuite", None] = None
+    current_test_case: Union[type["TestCase"], None] = None
     lcore_filter_specifier: LogicalCoreCount | LogicalCoreList = field(
         default_factory=LogicalCoreCount
     )
diff --git a/dts/framework/test_run.py b/dts/framework/test_run.py
index 4355aeeb4b..614022e2b6 100644
--- a/dts/framework/test_run.py
+++ b/dts/framework/test_run.py
@@ -406,6 +406,7 @@ def next(self) -> State | None:
                 return self
 
             test_run.ctx.local.reset()
+            test_run.ctx.local.current_test_suite = test_suite
             return TestSuiteSetup(test_run, test_suite, test_suite_result)
         except IndexError:
             # No more test suites. We are done here.
@@ -529,6 +530,7 @@ def next(self) -> State | None:
                 test_case_result.mark_result_as(Result.SKIP, e)
                 return self
 
+            self.test_run.ctx.local.current_test_case = test_case
             return TestCaseSetup(
                 self.test_run,
                 self.test_suite,
-- 
2.43.0



* [PATCH 4/6] dts: add artifact module
  2025-07-25 15:11 [PATCH 0/6] dts: add file management Luca Vizzarro
                   ` (2 preceding siblings ...)
  2025-07-25 15:11 ` [PATCH 3/6] dts: add current test suite and cases to context Luca Vizzarro
@ 2025-07-25 15:11 ` Luca Vizzarro
  2025-07-25 15:11 ` [PATCH 5/6] dts: make log files into artifacts Luca Vizzarro
  2025-07-25 15:11 ` [PATCH 6/6] dts: use artifacts in packet capture and softnic Luca Vizzarro
  5 siblings, 0 replies; 7+ messages in thread
From: Luca Vizzarro @ 2025-07-25 15:11 UTC (permalink / raw)
  To: dev; +Cc: Luca Vizzarro, Paul Szczepanek, Patrick Robb

Add a new artifact module which provides DTS with remote and local file
management capabilities. The new Artifact class acts as an abstract
representation of a file present on any node in the testbed model. It
also provides several file management helpers, including open, which
returns an ArtifactFile object that behaves like a standard Python file
object for easy interoperability.

Moreover, add a directory tree structure for both the remote temporary
and local output directories. This structure mirrors the test suites and
test cases in corresponding folders, where each artifact is stored. An
artifact's location is determined by the time at which it was first
defined: an artifact defined during a test case is placed in that test
case's own folder.

Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
---
 .../dts/framework.testbed_model.artifact.rst  |   8 +
 doc/api/dts/framework.testbed_model.rst       |   1 +
 dts/framework/test_run.py                     |   3 +-
 dts/framework/testbed_model/artifact.py       | 628 ++++++++++++++++++
 4 files changed, 639 insertions(+), 1 deletion(-)
 create mode 100644 doc/api/dts/framework.testbed_model.artifact.rst
 create mode 100644 dts/framework/testbed_model/artifact.py

diff --git a/doc/api/dts/framework.testbed_model.artifact.rst b/doc/api/dts/framework.testbed_model.artifact.rst
new file mode 100644
index 0000000000..1b941b9a95
--- /dev/null
+++ b/doc/api/dts/framework.testbed_model.artifact.rst
@@ -0,0 +1,8 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+
+artifact - File Management
+==========================
+
+.. automodule:: framework.testbed_model.artifact
+   :members:
+   :show-inheritance:
diff --git a/doc/api/dts/framework.testbed_model.rst b/doc/api/dts/framework.testbed_model.rst
index f283178f6a..59429e5cd9 100644
--- a/doc/api/dts/framework.testbed_model.rst
+++ b/doc/api/dts/framework.testbed_model.rst
@@ -17,6 +17,7 @@ testbed\_model - Testbed Modelling Package
    :hidden:
    :maxdepth: 1
 
+   framework.testbed_model.artifact
    framework.testbed_model.os_session
    framework.testbed_model.linux_session
    framework.testbed_model.posix_session
diff --git a/dts/framework/test_run.py b/dts/framework/test_run.py
index 614022e2b6..f70580f8fd 100644
--- a/dts/framework/test_run.py
+++ b/dts/framework/test_run.py
@@ -115,6 +115,7 @@
 from framework.settings import SETTINGS
 from framework.test_result import Result, ResultNode, TestRunResult
 from framework.test_suite import BaseConfig, TestCase, TestSuite
+from framework.testbed_model.artifact import Artifact
 from framework.testbed_model.capability import (
     Capability,
     get_supported_capabilities,
@@ -579,7 +580,7 @@ def on_error(self, ex: Exception) -> State | None:
         self.result.mark_step_as("teardown", Result.ERROR, ex)
         return TestRunExecution(self.test_run, self.test_run.result)
 
-    def after(self):
+    def after(self) -> None:
         """Hook after state is processed."""
         if (
             self.result.get_overall_result() in [Result.FAIL, Result.ERROR]
diff --git a/dts/framework/testbed_model/artifact.py b/dts/framework/testbed_model/artifact.py
new file mode 100644
index 0000000000..9b4caacae2
--- /dev/null
+++ b/dts/framework/testbed_model/artifact.py
@@ -0,0 +1,628 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2025 Arm Limited
+
+"""Artifact module.
+
+This module provides the :class:`Artifact` class, which represents a file artifact that can be or is
+stored on a remote node or locally.
+
+Example usage of the :class:`Artifact` class:
+
+    .. code:: python
+
+        from framework.testbed_model.artifact import Artifact
+
+        # Create an artifact on a remote node
+        artifact = Artifact(node="sut", file_name="example.txt")
+        # Open the artifact file in write mode
+        with artifact.open("w") as f:
+            f.write("Hello, World!")
+        # Pull the artifact to the local output directory
+        artifact.save_locally() # This is also done automatically on object deletion
+                                # if save_local_copy is set to True
+        # Check if the artifact exists
+        if artifact.exists():
+            print(f"Artifact exists at {artifact.path}")
+        # Delete the artifact
+        artifact.delete()
+
+        # Create an artifact from a local file
+        local_artifact = Artifact.create_from(
+            original_file="local_file.txt",
+            node="sut",
+            file_name="copied_file.txt",
+        )
+        # Copy the content of the local artifact to another artifact
+        another_artifact = Artifact("sut", "another_file.txt")
+        another_artifact.copy_contents_from(local_artifact)
+"""
+
+import shutil
+import uuid
+from collections.abc import Iterable
+from io import SEEK_SET, RawIOBase, TextIOWrapper
+from pathlib import Path, PurePath
+from typing import BinaryIO, ClassVar, Literal, TypeAlias, Union, cast, overload
+
+from paramiko import SFTPClient, SFTPFile
+from typing_extensions import Buffer
+
+from framework.exception import InternalError
+from framework.logger import DTSLogger, get_dts_logger
+from framework.settings import SETTINGS
+from framework.testbed_model.node import Node, NodeIdentifier, get_node
+
+TextMode: TypeAlias = (
+    Literal["r", "r+", "w", "w+", "a", "a+", "x", "x+"]
+    | Literal["rt", "r+t", "wt", "w+t", "at", "a+t", "xt", "x+t"]
+)
+"""Open text mode for artifacts."""
+BinaryMode: TypeAlias = Literal["rb", "r+b", "wb", "w+b", "ab", "a+b", "xb", "x+b"]
+"""Open binary mode for artifacts."""
+OpenMode: TypeAlias = TextMode | BinaryMode
+"""Open mode for artifacts, can be either text or binary mode."""
+
+
+@overload
+def make_file_path(node: Node, file_name: str, custom_path: PurePath | None = None) -> PurePath: ...
+
+
+@overload
+def make_file_path(node: None, file_name: str, custom_path: PurePath | None = None) -> Path: ...
+
+
+def make_file_path(
+    node: Node | None, file_name: str, custom_path: PurePath | None = None
+) -> Path | PurePath:
+    """Make a file path for the artifact."""
+    if node is None:
+        path: Path | PurePath = Path(SETTINGS.output_dir).resolve()
+    else:
+        path = node.tmp_dir
+
+    if custom_path is not None:
+        if custom_path.is_absolute():
+            return custom_path / file_name
+
+        path /= custom_path
+    else:
+        from framework.context import get_ctx
+
+        try:
+            ctx = get_ctx()
+            if ctx.local.current_test_suite is not None:
+                path /= ctx.local.current_test_suite.name
+            if ctx.local.current_test_case is not None:
+                path /= ctx.local.current_test_case.name
+        except InternalError:
+            # If the context is not available, use the root path.
+            pass
+
+    return path / file_name
+
+
+def make_unique_file_name() -> str:
+    """Generate a unique filename for the artifact."""
+    return f"{uuid.uuid4().hex}.dat"
+
+
+class Artifact:
+    """Artifact class.
+
+    Represents a file artifact stored either on a remote node or locally. It provides methods to
+    open, read, write, and manage the artifact file through the familiar Python file API. If the
+    artifact is remote, it can also save a local copy in the test run output directory. It can be
+    used to manage files that are part of the test run, such as logs, reports, or any other files
+    that need to be stored for later analysis.
+
+    By default, the artifact is created in the temporary directory of the node, following the tree
+    directory structure defined by :class:`DirectoryTree` and managed by the test run states.
+
+    The artifact file is not created automatically upon instantiation. The methods :meth:`open`
+    (with either `w` or `a` modes) and :meth:`touch` can be used to create it.
+
+    If `save_local_copy` is :data:`True` and a local file with the same name already exists, it
+    will be overwritten. If this is undesired, make sure to give distinct names to the artifacts.
+
+    Attributes:
+        save_local_copy: If :data:`True`, a local copy of the artifact will be saved at the end of
+            the lifetime of the object automatically.
+    """
+
+    DIRECTORY_PERMISSIONS: ClassVar[int] = 0o755
+    """Permission mode for directories created by the artifact."""
+    TEXT_MODE_ENCODING: ClassVar[str] = "utf-8"
+    """Encoding used for text mode artifacts."""
+    TEXT_MODE_NEWLINE: ClassVar[str] = "\n"
+    """Newline character used for text mode artifacts."""
+
+    _logger: DTSLogger
+    _node: Node | None = None
+    _sftp: Union[SFTPClient, None] = None
+    _fd: Union["ArtifactFile", None] = None
+    _file_path: PurePath
+    _local_path: Path
+    _file_was_saved: bool = False
+    _directories_created: bool = False
+    save_local_copy: bool
+
+    def __init__(
+        self,
+        node: NodeIdentifier,
+        file_name: str = "",
+        save_local_copy: bool = True,
+        custom_path: PurePath | None = None,
+    ):
+        """Constructor for an artifact.
+
+        Args:
+            node: The identifier of the node on which the file is stored.
+            file_name: The name of the file. If not provided, a unique filename will be generated.
+            save_local_copy: If :data:`True`, makes a local copy of the artifact in the output
+                directory. Applies only to remote artifacts.
+            custom_path: A custom path to save the artifact. If :data:`None`, the default path will
+                be used based on the node identifier. If a relative path is provided, it will be
+                relative to the remote temporary directory (for remote artifacts) and local output
+                directory (for local artifacts and copies).
+        """
+        self._logger = get_dts_logger(f"{node}_artifact")
+
+        self._node = get_node(node)
+        self.save_local_copy = save_local_copy
+
+        if not file_name:
+            file_name = make_unique_file_name()
+
+        if self._node is not None:
+            self._sftp = self._node.main_session.remote_session.session.sftp()
+
+            if custom_path is not None and not custom_path.is_absolute():
+                relative_custom_path = custom_path
+            else:
+                relative_custom_path = None
+
+            self._file_path = make_file_path(self._node, file_name, custom_path)
+            self._local_path = make_file_path(
+                node=None, file_name=file_name, custom_path=relative_custom_path
+            )
+        else:
+            self._local_path = self._file_path = make_file_path(self._node, file_name, custom_path)
+
+    @overload
+    def open(
+        self,
+        file_mode: BinaryMode = "rb",
+        buffering: int = -1,
+    ) -> "ArtifactFile": ...
+
+    @overload
+    def open(
+        self,
+        file_mode: TextMode = "r",
+        buffering: int = -1,
+    ) -> TextIOWrapper: ...
+
+    def open(
+        self, file_mode: BinaryMode | TextMode = "rb", buffering: int = -1
+    ) -> Union["ArtifactFile", TextIOWrapper]:
+        """Open the artifact file.
+
+        Args:
+            file_mode: The mode of file opening.
+            buffering: The size of the buffer to use. If -1, the default buffer size is used.
+
+        Returns:
+            An instance of :class:`ArtifactFile` or :class:`TextIOWrapper`.
+        """
+        if self._fd is not None and not self._fd.closed:
+            self._logger.warning(
+                f"Artifact {self.path} is already open. Closing the previous file descriptor."
+            )
+            self._fd.close()
+        elif not self._directories_created:
+            self.mkdir()
+
+        # SFTPFile does not support text mode, therefore everything needs to be handled as binary.
+        if "t" in file_mode:
+            actual_mode = cast(BinaryMode, cast(str, file_mode).replace("t", "") + "b")
+        elif "b" not in file_mode:
+            actual_mode = cast(BinaryMode, file_mode + "b")
+        else:
+            actual_mode = cast(BinaryMode, file_mode)
+
+        self._logger.debug(f"Opening file at {self.path} with mode {file_mode}.")
+        if self._sftp is None:
+            fd: BinaryIO | SFTPFile = open(str(self.path), mode=actual_mode, buffering=buffering)
+        else:
+            fd = self._sftp.open(str(self.path), mode=actual_mode, bufsize=buffering)
+
+        self._fd = ArtifactFile(fd, self.path, file_mode)
+
+        if "b" in file_mode:
+            return self._fd
+        else:
+            return TextIOWrapper(
+                self._fd,
+                encoding=self.TEXT_MODE_ENCODING,
+                newline=self.TEXT_MODE_NEWLINE,
+                write_through=True,
+            )
+
+    @classmethod
+    def create_from(
+        cls,
+        original_file: Union[Path, "Artifact"],
+        node: NodeIdentifier,
+        /,
+        new_file_name: str = "",
+        save_local_copy: bool = False,
+        custom_path: PurePath | None = None,
+    ) -> "Artifact":
+        """Create a new artifact from a local file or another artifact.
+
+        Args:
+            original_file: The local file or artifact to copy.
+            node: The identifier of the node on which the new file will be stored.
+            new_file_name: The name of the new file. If not provided, the name of the original file
+                will be used.
+            save_local_copy: Makes a local copy of the artifact if :data:`True`. Applies only to
+                remote files.
+            custom_path: A custom path to save the artifact. If :data:`None`, the default path will
+                be used based on the node identifier.
+
+        Returns:
+            An instance of :class:`Artifact`.
+        """
+        if not new_file_name:
+            if isinstance(original_file, Artifact):
+                new_file_name = original_file.local_path.name
+            else:
+                new_file_name = original_file.name
+
+        artifact = cls(node, new_file_name, save_local_copy, custom_path)
+        artifact.copy_contents_from(original_file)
+        return artifact
+
+    def copy_contents_from(self, original_file: Union[Path, "Artifact"]) -> None:
+        """Copy the content of another file or artifact into this artifact.
+
+        This action will close the file descriptor associated with `self` or `original_file` if
+        open.
+
+        Args:
+            original_file: The local file or artifact to copy.
+
+        Raises:
+            InternalError: If the provided `original_file` does not exist.
+        """
+        if not original_file.exists():
+            raise InternalError(f"The provided file '{original_file}' does not exist.")
+
+        self.open("wb").close()  # Close any prior fd and truncate file.
+
+        self._logger.debug(f"Copying content from {original_file} to {self}.")
+        match (original_file, self._sftp):
+            case (Path(), None):  # local file to local artifact
+                # Use syscalls to copy
+                shutil.copyfile(original_file, self.path)
+            case (Artifact(_sftp=None), None):  # local artifact to local artifact
+                # Use syscalls to copy
+                shutil.copyfile(original_file.local_path, self.path)
+            case (Artifact(), None):  # remote artifact to local artifact
+                # Use built-in chunked transfer copy
+                with original_file.open("rb") as original_fd:
+                    with self.open("wb") as copy_fd:
+                        shutil.copyfileobj(original_fd, copy_fd)
+            case (_, SFTPClient()):  # remote artifact to remote artifact
+                # Use SFTPClient's buffered file copy
+                with original_file.open("rb") as original_fd:
+                    self._sftp.putfo(original_fd, str(self.path))
+
+    @property
+    def path(self) -> PurePath:
+        """Return the actual path of the artifact."""
+        return self._file_path
+
+    @property
+    def local_path(self) -> Path:
+        """Return the local path of the artifact."""
+        return self._local_path
+
+    def save_locally(self) -> None:
+        """Copy remote artifact file and save it locally. Does nothing on local artifacts.
+
+        If there already exist a local file with the same name, it will be overwritten. If this is
+        undesired, make sure to give distinct names to the artifacts.
+        """
+        if self._sftp is not None:
+            if not self.exists():
+                self._logger.debug(f"File {self.path} was never created, skipping save.")
+                return
+
+            self._logger.debug(f"Pulling artifact {self.path} to {self.local_path}.")
+
+            if not self._file_was_saved and self.local_path.exists():
+                self._logger.warning(
+                    f"While saving a remote artifact: local file {self.local_path} already exists, "
+                    "overwriting it. Please use distinct file names."
+                )
+
+            self._sftp.get(str(self.path), str(self.local_path))
+            self._file_was_saved = True
+
+    def delete(self, remove_local_copy: bool = True) -> None:
+        """Delete the artifact file. It also prevents a local copy from being saved.
+
+        Args:
+            remove_local_copy: If :data:`True`, the local copy of the artifact will be deleted if
+                it already exists.
+        """
+        self._logger.debug(f"Deleting artifact {self.path}.")
+
+        if self._fd is not None and not self._fd.closed:
+            self._fd.close()
+            self._fd = None
+
+        if self._sftp is not None:
+            self._sftp.remove(str(self._file_path))
+
+        if self._sftp is None or remove_local_copy:
+            self.local_path.unlink(missing_ok=True)
+
+    def touch(self, mode: int = 0o644) -> None:
+        """Touch the artifact file, creating it if it does not exist.
+
+        Args:
+            mode: The permission mode to set for the artifact file, if just created.
+        """
+        if not self._directories_created:
+            self.mkdir()
+
+        self._logger.debug(f"Touching artifact {self.path} with mode {oct(mode)}.")
+        if self._sftp is not None:
+            file_path = str(self._file_path)
+            try:
+                self._sftp.stat(file_path)
+            except FileNotFoundError:
+                self._sftp.open(file_path, "w").close()
+                self._sftp.chmod(file_path, mode)
+        else:
+            Path(self._file_path).touch(mode=mode)
+
+    def chmod(self, mode: int = 0o644) -> None:
+        """Change the permissions of the artifact file.
+
+        Args:
+            mode: The permission mode to set for the artifact file.
+        """
+        self._logger.debug(f"Changing permissions of {self.path} to {oct(mode)}.")
+        if self._sftp is not None:
+            self._sftp.chmod(str(self._file_path), mode)
+        else:
+            Path(self._file_path).chmod(mode)
+
+    def exists(self) -> bool:
+        """Check if the artifact file exists.
+
+        Returns:
+            :data:`True` if the artifact file exists, :data:`False` otherwise.
+        """
+        if self._sftp is not None:
+            try:
+                self._sftp.stat(str(self._file_path))
+                return True
+            except FileNotFoundError:
+                return False
+        else:
+            return self._local_path.exists()
+
+    def mkdir(self) -> None:
+        """Create all the intermediate file path directories."""
+        if self._sftp is not None:
+            parts = self._file_path.parts[:-1]
+            paths = (PurePath(*parts[:tree_depth]) for tree_depth in range(1, len(parts) + 1))
+            for path in paths:
+                try:
+                    self._sftp.stat(str(path))
+                except FileNotFoundError:
+                    self._logger.debug(f"Creating directories {path}.")
+                    self._sftp.mkdir(str(path), mode=self.DIRECTORY_PERMISSIONS)
+
+        if self._sftp is None or self.save_local_copy:
+            self._logger.debug(f"Creating directories {self.local_path.parent} locally.")
+            self.local_path.parent.mkdir(
+                mode=self.DIRECTORY_PERMISSIONS, parents=True, exist_ok=True
+            )
+
+        self._directories_created = True
+
+    def __del__(self):
+        """Close the file descriptor if it is open and save it if requested."""
+        if self._fd is not None and not self._fd.closed:
+            self._fd.close()
+            self._fd = None
+
+        if self.save_local_copy:
+            self.save_locally()
+
+    def __str__(self):
+        """Return path of the artifact."""
+        return str(self.path)
+
+
+class ArtifactFile(RawIOBase, BinaryIO):
+    """Artifact file wrapper class.
+
+    Provides a single interface for either local or remote files.
+    This class implements the :class:`~io.RawIOBase` interface, allowing it to be used
+    interchangeably with standard file objects.
+    """
+
+    _fd: Union[BinaryIO, SFTPFile]
+    _path: PurePath
+    _mode: OpenMode
+
+    def __init__(self, fd: Union[BinaryIO, SFTPFile], path: PurePath, mode: OpenMode):
+        """Initialize the artifact file wrapper.
+
+        Args:
+            fd: The file descriptor of the artifact.
+            path: The path of the artifact file.
+            mode: The mode in which the artifact file was opened.
+        """
+        super().__init__()
+        self._fd = fd
+        self._path = path
+        self._mode = mode
+
+    def close(self) -> None:
+        """Close artifact file.
+
+        This method implements :meth:`~io.RawIOBase.close()`.
+        """
+        self._fd.close()
+
+    def read(self, size: int | None = -1) -> bytes:
+        """Read bytes from the artifact file.
+
+        This method implements :meth:`~io.RawIOBase.read()`.
+        """
+        return self._fd.read(size if size is not None else -1)
+
+    def readline(self, size: int | None = -1) -> bytes:
+        """Read line from the artifact file.
+
+        This method implements :meth:`~io.RawIOBase.readline()`.
+        """
+        if size is None:
+            size = -1  # Turning None to -1 due to abstract type mismatch.
+        return self._fd.readline(size)
+
+    def readlines(self, hint: int = -1) -> list[bytes]:
+        """Read lines from the artifact file.
+
+        This method implements :meth:`~io.RawIOBase.readlines()`.
+        """
+        return self._fd.readlines(hint)
+
+    def write(self, data: Buffer) -> int:
+        """Write bytes to the artifact file.
+
+        Returns the number of bytes written if available, otherwise -1.
+
+        This method implements :meth:`~io.RawIOBase.write()`.
+        """
+        return self._fd.write(data) or -1
+
+    def writelines(self, lines: Iterable[Buffer]):
+        """Write lines to the artifact file.
+
+        This method implements :meth:`~io.RawIOBase.writelines()`.
+        """
+        return self._fd.writelines(lines)
+
+    def flush(self) -> None:
+        """Flush the write buffers to the artifact file if applicable.
+
+        This method implements :meth:`~io.RawIOBase.flush()`.
+        """
+        self._fd.flush()
+
+    def seek(self, offset: int, whence: int = SEEK_SET) -> int:
+        """Change the file position to the given byte offset.
+
+        This method implements :meth:`~io.RawIOBase.seek()`.
+        """
+        pos = self._fd.seek(offset, whence)
+        if pos is None:
+            return self._fd.tell()
+        return pos
+
+    def tell(self) -> int:
+        """Return the current absolute file position.
+
+        This method implements :meth:`~io.RawIOBase.tell()`.
+        """
+        return self._fd.tell()
+
+    def truncate(self, size: int | None = None) -> int:
+        """Change the size of the file to `size` or to the current position.
+
+        This method implements :meth:`~io.RawIOBase.truncate()`.
+        """
+        if size is None:
+            size = self._fd.tell()
+        new_size = self._fd.truncate(size)
+        if new_size is None:
+            return size
+        return new_size
+
+    @property
+    def name(self) -> str:
+        """Return the name of the artifact file.
+
+        This property mirrors the ``name`` attribute of standard file objects.
+        """
+        return str(self._path)
+
+    @property
+    def mode(self) -> str:
+        """Return the mode in which the artifact file was opened.
+
+        This property mirrors the ``mode`` attribute of standard file objects.
+        """
+        return self._mode
+
+    @property
+    def closed(self) -> bool:
+        """:data:`True` if the file is closed."""
+        return self._fd.closed
+
+    def fileno(self) -> int:
+        """Return the underlying file descriptor.
+
+        This method implements :meth:`~io.RawIOBase.fileno()`.
+        """
+        return self._fd.fileno() if hasattr(self._fd, "fileno") else -1
+
+    def isatty(self) -> bool:
+        """Return :data:`True` if the file is connected to a terminal device.
+
+        This method implements :meth:`~io.RawIOBase.isatty()`.
+        """
+        return self._fd.isatty() if hasattr(self._fd, "isatty") else False
+
+    def readable(self) -> bool:
+        """Return :data:`True` if the file is readable.
+
+        This method implements :meth:`~io.RawIOBase.readable()`.
+        """
+        return self._fd.readable()
+
+    def writable(self) -> bool:
+        """Return :data:`True` if the file is writable.
+
+        This method implements :meth:`~io.RawIOBase.writable()`.
+        """
+        return self._fd.writable()
+
+    def seekable(self) -> bool:
+        """Return :data:`True` if the file is seekable.
+
+        This method implements :meth:`~io.RawIOBase.seekable()`.
+        """
+        return self._fd.seekable()
+
+    def __enter__(self) -> "ArtifactFile":
+        """Enter the runtime context related to this object.
+
+        This method implements :meth:`~io.RawIOBase.__enter__()`.
+        """
+        return self
+
+    def __exit__(self, *args) -> None:
+        """Exit the runtime context related to this object.
+
+        This method implements :meth:`~io.RawIOBase.__exit__()`.
+        """
+        self.close()
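As the inline comment in ``open()`` notes, SFTP file objects are binary-only, so text mode is
emulated by opening the file in binary and wrapping the descriptor in a ``TextIOWrapper``. A
minimal stand-alone sketch of that pattern, using a plain local file in place of an ``SFTPFile``
(the helper names below are illustrative, not part of this patch):

```python
import tempfile
from io import TextIOWrapper
from pathlib import Path


def open_text_over_binary(path: Path, mode: str = "r") -> TextIOWrapper:
    """Translate a text mode into its binary equivalent and wrap the result."""
    binary_mode = mode if "b" in mode else mode.replace("t", "") + "b"
    fd = open(path, binary_mode)  # binary file object, standing in for an SFTPFile
    # write_through=True pushes writes straight down to the binary layer,
    # matching the behaviour Artifact.open() relies on.
    return TextIOWrapper(fd, encoding="utf-8", newline="\n", write_through=True)


def roundtrip(text: str) -> str:
    with tempfile.TemporaryDirectory() as tmp:
        path = Path(tmp) / "artifact.log"
        with open_text_over_binary(path, "w") as f:
            f.write(text)
        with open_text_over_binary(path) as f:
            return f.read()


print(roundtrip("hello artifact\n") == "hello artifact\n")  # -> True
```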
-- 
2.43.0


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH 5/6] dts: make log files into artifacts
  2025-07-25 15:11 [PATCH 0/6] dts: add file management Luca Vizzarro
                   ` (3 preceding siblings ...)
  2025-07-25 15:11 ` [PATCH 4/6] dts: add artifact module Luca Vizzarro
@ 2025-07-25 15:11 ` Luca Vizzarro
  2025-07-25 15:11 ` [PATCH 6/6] dts: use artifacts in packet capture and softnic Luca Vizzarro
  5 siblings, 0 replies; 7+ messages in thread
From: Luca Vizzarro @ 2025-07-25 15:11 UTC (permalink / raw)
  To: dev; +Cc: Luca Vizzarro, Paul Szczepanek, Patrick Robb

Make log files behave like artifacts, as defined by the Artifact class.
As a side effect, all logs are automatically placed in a structured
directory layout.

Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
---
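The swap from ``FileHandler`` to ``StreamHandler`` below works because a ``StreamHandler``
accepts any writable text stream, so it can be backed by the file object returned by
``Artifact.open("w")``. A self-contained sketch of that pattern, with a ``StringIO`` standing in
for the artifact stream:

```python
import logging
from io import StringIO

stream = StringIO()  # stand-in for artifact.open("w")
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(levelname)s|%(name)s|%(message)s"))

logger = logging.getLogger("dts_example")
logger.setLevel(logging.DEBUG)
logger.propagate = False  # keep the example output out of the root logger
logger.addHandler(handler)

logger.info("suite started")
handler.flush()
print(stream.getvalue())  # -> INFO|dts_example|suite started
```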
 dts/framework/logger.py   | 113 +++++++++++++++++++++-----------------
 dts/framework/runner.py   |   5 +-
 dts/framework/test_run.py |  17 ++----
 3 files changed, 69 insertions(+), 66 deletions(-)

diff --git a/dts/framework/logger.py b/dts/framework/logger.py
index f43b442bc9..230513c01e 100644
--- a/dts/framework/logger.py
+++ b/dts/framework/logger.py
@@ -2,43 +2,54 @@
 # Copyright(c) 2010-2014 Intel Corporation
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
+# Copyright(c) 2025 Arm Limited
 
 """DTS logger module.
 
 The module provides several additional features:
 
     * The storage of DTS execution stages,
-    * Logging to console, a human-readable log file and a machine-readable log file,
-    * Optional log files for specific stages.
+    * Logging to console, a human-readable log artifact and a machine-readable log artifact,
+    * Optional log artifacts for specific stages.
 """
 
 import logging
-from logging import FileHandler, StreamHandler
-from pathlib import Path
-from typing import ClassVar
+from logging import StreamHandler
+from typing import TYPE_CHECKING, ClassVar, NamedTuple
+
+if TYPE_CHECKING:
+    from framework.testbed_model.artifact import Artifact
 
 date_fmt = "%Y/%m/%d %H:%M:%S"
 stream_fmt = "%(asctime)s - %(stage)s - %(name)s - %(levelname)s - %(message)s"
 dts_root_logger_name = "dts"
 
 
+class ArtifactHandler(NamedTuple):
+    """A logger handler with an associated artifact."""
+
+    artifact: "Artifact"
+    handler: StreamHandler
+
+
 class DTSLogger(logging.Logger):
     """The DTS logger class.
 
     The class extends the :class:`~logging.Logger` class to add the DTS execution stage information
     to log records. The stage is common to all loggers, so it's stored in a class variable.
 
-    Any time we switch to a new stage, we have the ability to log to an additional log file along
-    with a supplementary log file with machine-readable format. These two log files are used until
-    a new stage switch occurs. This is useful mainly for logging per test suite.
+    Any time we switch to a new stage, we have the ability to log to an additional log artifact
+    along with a supplementary log artifact with machine-readable format. These two log artifacts
+    are used until a new stage switch occurs. This is useful mainly for logging per test suite.
     """
 
     _stage: ClassVar[str] = "pre_run"
-    _extra_file_handlers: list[FileHandler] = []
+    _root_artifact_handlers: list[ArtifactHandler] = []
+    _extra_artifact_handlers: list[ArtifactHandler] = []
 
     def __init__(self, *args, **kwargs):
-        """Extend the constructor with extra file handlers."""
-        self._extra_file_handlers = []
+        """Extend the constructor with extra artifact handlers."""
+        self._extra_artifact_handlers = []
         super().__init__(*args, **kwargs)
 
     def makeRecord(self, *args, **kwargs) -> logging.LogRecord:
@@ -56,7 +67,7 @@ def makeRecord(self, *args, **kwargs) -> logging.LogRecord:
         record.stage = DTSLogger._stage
         return record
 
-    def add_dts_root_logger_handlers(self, verbose: bool, output_dir: str) -> None:
+    def add_dts_root_logger_handlers(self, verbose: bool) -> None:
         """Add logger handlers to the DTS root logger.
 
         This method should be called only on the DTS root logger.
@@ -65,18 +76,16 @@ def add_dts_root_logger_handlers(self, verbose: bool, output_dir: str) -> None:
         Three handlers are added:
 
             * A console handler,
-            * A file handler,
-            * A supplementary file handler with machine-readable logs
+            * An artifact handler,
+            * A supplementary artifact handler with machine-readable logs
               containing more debug information.
 
-        All log messages will be logged to files. The log level of the console handler
+        All log messages will be logged to artifacts. The log level of the console handler
         is configurable with `verbose`.
 
         Args:
             verbose: If :data:`True`, log all messages to the console.
                 If :data:`False`, log to console with the :data:`logging.INFO` level.
-            output_dir: The directory where the log files will be located.
-                The names of the log files correspond to the name of the logger instance.
         """
         self.setLevel(1)
 
@@ -86,70 +95,76 @@ def add_dts_root_logger_handlers(self, verbose: bool, output_dir: str) -> None:
             sh.setLevel(logging.INFO)
         self.addHandler(sh)
 
-        self._add_file_handlers(Path(output_dir, self.name))
+        self._root_artifact_handlers = self._add_artifact_handlers(self.name)
 
-    def set_stage(self, stage: str, log_file_path: Path | None = None) -> None:
-        """Set the DTS execution stage and optionally log to files.
+    def set_stage(self, stage: str, log_file_name: str | None = None) -> None:
+        """Set the DTS execution stage and optionally log to artifact files.
 
         Set the DTS execution stage of the DTSLog class and optionally add
-        file handlers to the instance if the log file name is provided.
+        artifact handlers to the instance if the log artifact file name is provided.
 
-        The file handlers log all messages. One is a regular human-readable log file and
-        the other one is a machine-readable log file with extra debug information.
+        The artifact handlers log all messages. One is a regular human-readable log artifact and
+        the other one is a machine-readable log artifact with extra debug information.
 
         Args:
             stage: The DTS stage to set.
-            log_file_path: An optional path of the log file to use. This should be a full path
-                (either relative or absolute) without suffix (which will be appended).
+            log_file_name: An optional name of the log artifact file to use. This should be without
+                suffix (which will be appended).
         """
-        self._remove_extra_file_handlers()
+        self._remove_extra_artifact_handlers()
 
         if DTSLogger._stage != stage:
             self.info(f"Moving from stage '{DTSLogger._stage}' to stage '{stage}'.")
             DTSLogger._stage = stage
 
-        if log_file_path:
-            self._extra_file_handlers.extend(self._add_file_handlers(log_file_path))
+        if log_file_name:
+            self._extra_artifact_handlers.extend(self._add_artifact_handlers(log_file_name))
 
-    def _add_file_handlers(self, log_file_path: Path) -> list[FileHandler]:
-        """Add file handlers to the DTS root logger.
+    def _add_artifact_handlers(self, log_file_name: str) -> list[ArtifactHandler]:
+        """Add artifact handlers to the DTS root logger.
 
-        Add two type of file handlers:
+        Add two types of artifact handlers:
 
-            * A regular file handler with suffix ".log",
-            * A machine-readable file handler with suffix ".verbose.log".
+            * A regular artifact handler with suffix ".log",
+            * A machine-readable artifact handler with suffix ".verbose.log".
               This format provides extensive information for debugging and detailed analysis.
 
         Args:
-            log_file_path: The full path to the log file without suffix.
+            log_file_name: The name of the artifact log file without suffix.
 
         Returns:
-            The newly created file handlers.
-
+            The newly created artifact handlers.
         """
-        fh = FileHandler(f"{log_file_path}.log")
-        fh.setFormatter(logging.Formatter(stream_fmt, date_fmt))
-        self.addHandler(fh)
+        from framework.testbed_model.artifact import Artifact
+
+        log_artifact = Artifact("local", f"{log_file_name}.log")
+        handler = StreamHandler(log_artifact.open("w"))
+        handler.setFormatter(logging.Formatter(stream_fmt, date_fmt))
+        self.addHandler(handler)
 
-        verbose_fh = FileHandler(f"{log_file_path}.verbose.log")
-        verbose_fh.setFormatter(
+        verbose_log_artifact = Artifact("local", f"{log_file_name}.verbose.log")
+        verbose_handler = StreamHandler(verbose_log_artifact.open("w"))
+        verbose_handler.setFormatter(
             logging.Formatter(
                 "%(asctime)s|%(stage)s|%(name)s|%(levelname)s|%(pathname)s|%(lineno)d|"
                 "%(funcName)s|%(process)d|%(thread)d|%(threadName)s|%(message)s",
                 datefmt=date_fmt,
             )
         )
-        self.addHandler(verbose_fh)
+        self.addHandler(verbose_handler)
 
-        return [fh, verbose_fh]
+        return [
+            ArtifactHandler(log_artifact, handler),
+            ArtifactHandler(verbose_log_artifact, verbose_handler),
+        ]
 
-    def _remove_extra_file_handlers(self) -> None:
-        """Remove any extra file handlers that have been added to the logger."""
-        if self._extra_file_handlers:
-            for extra_file_handler in self._extra_file_handlers:
-                self.removeHandler(extra_file_handler)
+    def _remove_extra_artifact_handlers(self) -> None:
+        """Remove any extra artifact handlers that have been added to the logger."""
+        if self._extra_artifact_handlers:
+            for extra_artifact_handler in self._extra_artifact_handlers:
+                self.removeHandler(extra_artifact_handler.handler)
 
-            self._extra_file_handlers = []
+            self._extra_artifact_handlers = []
 
 
 def get_dts_logger(name: str | None = None) -> DTSLogger:
diff --git a/dts/framework/runner.py b/dts/framework/runner.py
index 0a3d92b0c8..ae5ac014e2 100644
--- a/dts/framework/runner.py
+++ b/dts/framework/runner.py
@@ -9,7 +9,6 @@
 The module is responsible for preparing DTS and running the test run.
 """
 
-import os
 import sys
 import textwrap
 
@@ -45,9 +44,7 @@ def __init__(self):
             sys.exit(e.severity)
 
         self._logger = get_dts_logger()
-        if not os.path.exists(SETTINGS.output_dir):
-            os.makedirs(SETTINGS.output_dir)
-        self._logger.add_dts_root_logger_handlers(SETTINGS.verbose, SETTINGS.output_dir)
+        self._logger.add_dts_root_logger_handlers(SETTINGS.verbose)
 
         test_suites_result = ResultNode(label="test_suites")
         self._result = TestRunResult(test_suites=test_suites_result)
diff --git a/dts/framework/test_run.py b/dts/framework/test_run.py
index f70580f8fd..5609380c95 100644
--- a/dts/framework/test_run.py
+++ b/dts/framework/test_run.py
@@ -103,7 +103,6 @@
 from collections.abc import Iterable
 from dataclasses import dataclass
 from functools import cached_property
-from pathlib import Path
 from types import MethodType
 from typing import ClassVar, Protocol, Union
 
@@ -115,7 +114,6 @@
 from framework.settings import SETTINGS
 from framework.test_result import Result, ResultNode, TestRunResult
 from framework.test_suite import BaseConfig, TestCase, TestSuite
-from framework.testbed_model.artifact import Artifact
 from framework.testbed_model.capability import (
     Capability,
     get_supported_capabilities,
@@ -259,11 +257,11 @@ class State(Protocol):
     test_run: TestRun
     result: TestRunResult | ResultNode
 
-    def before(self):
+    def before(self) -> None:
         """Hook before the state is processed."""
-        self.logger.set_stage(self.logger_name, self.log_file_path)
+        self.logger.set_stage(self.logger_name, self.get_log_file_name())
 
-    def after(self):
+    def after(self) -> None:
         """Hook after the state is processed."""
         return
 
@@ -280,13 +278,6 @@ def get_log_file_name(self) -> str | None:
         """Name of the log file for this state."""
         return None
 
-    @property
-    def log_file_path(self) -> Path | None:
-        """Path to the log file for this state."""
-        if file_name := self.get_log_file_name():
-            return Path(SETTINGS.output_dir, file_name)
-        return None
-
     def next(self) -> Union["State", None]:
         """Next state."""
 
@@ -604,7 +595,7 @@ class TestCaseState(State):
 
     def get_log_file_name(self) -> str | None:
         """Get the log file name."""
-        return self.test_suite.name
+        return self.test_case.name
 
 
 @dataclass
-- 
2.43.0


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH 6/6] dts: use artifacts in packet capture and softnic
  2025-07-25 15:11 [PATCH 0/6] dts: add file management Luca Vizzarro
                   ` (4 preceding siblings ...)
  2025-07-25 15:11 ` [PATCH 5/6] dts: make log files into artifacts Luca Vizzarro
@ 2025-07-25 15:11 ` Luca Vizzarro
  5 siblings, 0 replies; 7+ messages in thread
From: Luca Vizzarro @ 2025-07-25 15:11 UTC (permalink / raw)
  To: dev; +Cc: Luca Vizzarro, Paul Szczepanek, Patrick Robb

Use the newly added Artifact class for file management in the packet
capture and softnic test suites.
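For context, the interface these suites now rely on can be sketched with a local stand-in. `LocalArtifact` below is illustrative only, not the real class from `framework.testbed_model.artifact`: the real Artifact resolves a node identifier (e.g. "sut"), places the file in the per-test-case output directory, and transfers it between the local host and the remote node transparently. The sketch only mirrors the surface used in the diffs (`open()`, `touch()`, `path`):

```python
import tempfile
from pathlib import Path


class LocalArtifact:
    """Illustrative stand-in for the DTS Artifact interface.

    The real class manages remote files on the node named by ``node_id``;
    this sketch just backs the same methods with a local temp file.
    """

    def __init__(self, node_id: str, name: str) -> None:
        self.node_id = node_id
        self.path = Path(tempfile.gettempdir()) / name

    def touch(self) -> None:
        # Pre-create the file, as the packet capture suite does before
        # handing the path to dumpcap.
        self.path.touch()

    def open(self, mode: str = "r"):
        # Standard file-object interface, as used by the softnic suite
        # to write its config files and by packet capture to read pcaps.
        return self.path.open(mode)


# Usage mirroring the softnic suite: write a config through the artifact...
cli = LocalArtifact("sut", "rx_tx.cli")
with cli.open("w") as fh:
    fh.write("thread 2 pipeline RX enable\n")

# ...and read it back, as the packet capture suite does with its pcaps.
with cli.open() as fh:
    print(fh.read(), end="")
```

The design point of the series is that test suites no longer juggle `copy_to`/`copy_from`, remote path joins, and permission fixes by hand; they hold an Artifact and use plain file I/O.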

Signed-off-by: Luca Vizzarro <luca.vizzarro@arm.com>
Reviewed-by: Paul Szczepanek <paul.szczepanek@arm.com>
---
 dts/tests/TestSuite_packet_capture.py | 73 ++++++++++----------
 dts/tests/TestSuite_softnic.py        | 95 +++++++++++----------------
 2 files changed, 74 insertions(+), 94 deletions(-)

diff --git a/dts/tests/TestSuite_packet_capture.py b/dts/tests/TestSuite_packet_capture.py
index bad243a571..f8cd1b00cf 100644
--- a/dts/tests/TestSuite_packet_capture.py
+++ b/dts/tests/TestSuite_packet_capture.py
@@ -7,7 +7,7 @@
 """
 
 from dataclasses import dataclass, field
-from pathlib import Path, PurePath
+from pathlib import PurePath
 
 from scapy.contrib.lldp import (
     LLDPDUChassisID,
@@ -29,11 +29,10 @@
 from framework.remote_session.blocking_app import BlockingApp
 from framework.remote_session.dpdk_shell import compute_eal_params
 from framework.remote_session.testpmd_shell import TestPmdShell
-from framework.settings import SETTINGS
 from framework.test_suite import TestSuite, func_test
+from framework.testbed_model.artifact import Artifact
 from framework.testbed_model.capability import requires
 from framework.testbed_model.cpu import LogicalCoreList
-from framework.testbed_model.os_session import FilePermissions
 from framework.testbed_model.topology import TopologyType
 from framework.testbed_model.traffic_generator.capturing_traffic_generator import (
     PacketFilteringConfig,
@@ -65,13 +64,13 @@ class TestPacketCapture(TestSuite):
 
     Attributes:
         packets: List of packets to send for testing dumpcap.
-        rx_pcap_path: The remote path where to create the Rx packets pcap with dumpcap.
-        tx_pcap_path: The remote path where to create the Tx packets pcap with dumpcap.
+        rx_pcap: The artifact associated with the Rx packets pcap created by dumpcap.
+        tx_pcap: The artifact associated with the Tx packets pcap created by dumpcap.
     """
 
     packets: list[Packet]
-    rx_pcap_path: PurePath
-    tx_pcap_path: PurePath
+    rx_pcap: Artifact
+    tx_pcap: Artifact
 
     def _run_dumpcap(self, params: DumpcapParams) -> BlockingApp:
         eal_params = compute_eal_params()
@@ -108,29 +107,32 @@ def set_up_suite(self) -> None:
             / LLDPDUSystemCapabilities()
             / LLDPDUEndOfLLDPDU(),
         ]
-        self.tx_pcap_path = self._ctx.sut_node.tmp_dir.joinpath("tx.pcapng")
-        self.rx_pcap_path = self._ctx.sut_node.tmp_dir.joinpath("rx.pcapng")
 
-    def _load_pcap_packets(self, remote_pcap_path: PurePath) -> list[Packet]:
-        local_pcap_path = Path(SETTINGS.output_dir).joinpath(remote_pcap_path.name)
-        self._ctx.sut_node.main_session.copy_from(remote_pcap_path, local_pcap_path)
-        return list(rdpcap(str(local_pcap_path)))
+    def set_up_test_case(self):
+        """Test case setup.
+
+        Prepare the artifacts for the Rx and Tx pcap files.
+        """
+        self.tx_pcap = Artifact("sut", "tx.pcapng")
+        self.rx_pcap = Artifact("sut", "rx.pcapng")
 
     def _send_and_dump(
         self, packet_filter: str | None = None, rx_only: bool = False
     ) -> list[Packet]:
+        self.rx_pcap.touch()
         dumpcap_rx = self._run_dumpcap(
             DumpcapParams(
                 interface=self.topology.sut_port_ingress.pci,
-                output_pcap_path=self.rx_pcap_path,
+                output_pcap_path=self.rx_pcap.path,
                 packet_filter=packet_filter,
             )
         )
         if not rx_only:
+            self.tx_pcap.touch()
             dumpcap_tx = self._run_dumpcap(
                 DumpcapParams(
                     interface=self.topology.sut_port_egress.pci,
-                    output_pcap_path=self.tx_pcap_path,
+                    output_pcap_path=self.tx_pcap.path,
                     packet_filter=packet_filter,
                 )
             )
@@ -140,14 +142,8 @@ def _send_and_dump(
         )
 
         dumpcap_rx.close()
-        self._ctx.sut_node.main_session.change_permissions(
-            self.rx_pcap_path, FilePermissions(0o644)
-        )
         if not rx_only:
             dumpcap_tx.close()
-            self._ctx.sut_node.main_session.change_permissions(
-                self.tx_pcap_path, FilePermissions(0o644)
-            )
 
         return received_packets
 
@@ -169,17 +165,19 @@ def test_dumpcap(self) -> None:
             received_packets = self._send_and_dump()
 
             expected_packets = self.get_expected_packets(self.packets, sent_from_tg=True)
-            rx_pcap_packets = self._load_pcap_packets(self.rx_pcap_path)
-            self.verify(
-                self.match_all_packets(expected_packets, rx_pcap_packets, verify=False),
-                "Rx packets from dumpcap weren't the same as the expected packets.",
-            )
+            with self.rx_pcap.open() as fd:
+                rx_pcap_packets = list(rdpcap(fd))
+                self.verify(
+                    self.match_all_packets(expected_packets, rx_pcap_packets, verify=False),
+                    "Rx packets from dumpcap weren't the same as the expected packets.",
+                )
 
-            tx_pcap_packets = self._load_pcap_packets(self.tx_pcap_path)
-            self.verify(
-                self.match_all_packets(tx_pcap_packets, received_packets, verify=False),
-                "Tx packets from dumpcap weren't the same as the packets received by Scapy.",
-            )
+            with self.tx_pcap.open() as fd:
+                tx_pcap_packets = list(rdpcap(fd))
+                self.verify(
+                    self.match_all_packets(tx_pcap_packets, received_packets, verify=False),
+                    "Tx packets from dumpcap weren't the same as the packets received by Scapy.",
+                )
 
     @func_test
     def test_dumpcap_filter(self) -> None:
@@ -202,9 +200,10 @@ def test_dumpcap_filter(self) -> None:
                 if not p.haslayer(TCP)
             ]
 
-            rx_pcap_packets = [raw(p) for p in self._load_pcap_packets(self.rx_pcap_path)]
-            for filtered_packet in filtered_packets:
-                self.verify(
-                    filtered_packet not in rx_pcap_packets,
-                    "Found a packet in the pcap that was meant to be filtered out.",
-                )
+            with self.rx_pcap.open() as fd:
+                rx_pcap_packets = [raw(p) for p in rdpcap(fd)]
+                for filtered_packet in filtered_packets:
+                    self.verify(
+                        filtered_packet not in rx_pcap_packets,
+                        "Found a packet in the pcap that was meant to be filtered out.",
+                    )
diff --git a/dts/tests/TestSuite_softnic.py b/dts/tests/TestSuite_softnic.py
index 27754c08e7..edeca55f32 100644
--- a/dts/tests/TestSuite_softnic.py
+++ b/dts/tests/TestSuite_softnic.py
@@ -6,11 +6,10 @@
 Create a softnic virtual device and verify it successfully forwards packets.
 """
 
-from pathlib import Path, PurePath
-
 from framework.params.testpmd import EthPeer
 from framework.remote_session.testpmd_shell import NicCapability, TestPmdShell
 from framework.test_suite import TestSuite, func_test
+from framework.testbed_model.artifact import Artifact
 from framework.testbed_model.capability import requires
 from framework.testbed_model.topology import TopologyType
 from framework.testbed_model.virtual_device import VirtualDevice
@@ -35,62 +34,44 @@ def set_up_suite(self) -> None:
         """
         self.sut_node = self._ctx.sut_node  # FIXME: accessing the context should be forbidden
         self.packets = generate_random_packets(self.NUMBER_OF_PACKETS_TO_SEND, self.PAYLOAD_SIZE)
-        self.cli_file = self.prepare_softnic_files()
-
-    def prepare_softnic_files(self) -> PurePath:
-        """Creates the config files that are required for the creation of the softnic.
-
-        The config files are created at runtime to accommodate paths and device addresses.
-        """
-        # paths of files needed by softnic
-        cli_file = Path("rx_tx.cli")
-        spec_file = Path("rx_tx.spec")
-        rx_tx_1_file = Path("rx_tx_1.io")
-        rx_tx_2_file = Path("rx_tx_2.io")
-        path_sut = self._ctx.sut_node.tmp_dir
-        cli_file_sut = self.sut_node.main_session.join_remote_path(path_sut, cli_file)
-        spec_file_sut = self.sut_node.main_session.join_remote_path(path_sut, spec_file)
-        rx_tx_1_file_sut = self.sut_node.main_session.join_remote_path(path_sut, rx_tx_1_file)
-        rx_tx_2_file_sut = self.sut_node.main_session.join_remote_path(path_sut, rx_tx_2_file)
-        firmware_c_file_sut = self.sut_node.main_session.join_remote_path(path_sut, "firmware.c")
-        firmware_so_file_sut = self.sut_node.main_session.join_remote_path(path_sut, "firmware.so")
-
-        # write correct remote paths to local files
-        with open(cli_file, "w+") as fh:
-            fh.write(f"pipeline codegen {spec_file_sut} {firmware_c_file_sut}\n")
-            fh.write(f"pipeline libbuild {firmware_c_file_sut} {firmware_so_file_sut}\n")
-            fh.write(f"pipeline RX build lib {firmware_so_file_sut} io {rx_tx_1_file_sut} numa 0\n")
-            fh.write(f"pipeline TX build lib {firmware_so_file_sut} io {rx_tx_2_file_sut} numa 0\n")
-            fh.write("thread 2 pipeline RX enable\n")
-            fh.write("thread 2 pipeline TX enable\n")
-        with open(spec_file, "w+") as fh:
-            fh.write("struct metadata_t {{\n")
-            fh.write("	bit<32> port\n")
-            fh.write("}}\n")
-            fh.write("metadata instanceof metadata_t\n")
-            fh.write("apply {{\n")
-            fh.write("	rx m.port\n")
-            fh.write("	tx m.port\n")
-            fh.write("}}\n")
-        with open(rx_tx_1_file, "w+") as fh:
-            fh.write(f"port in 0 ethdev {self.sut_node.config.ports[0].pci} rxq 0 bsz 32\n")
-            fh.write("port out 0 ring RXQ0 bsz 32\n")
-        with open(rx_tx_2_file, "w+") as fh:
-            fh.write("port in 0 ring TXQ0 bsz 32\n")
-            fh.write(f"port out 1 ethdev {self.sut_node.config.ports[1].pci} txq 0 bsz 32\n")
-
-        # copy files over to SUT
-        self.sut_node.main_session.copy_to(cli_file, cli_file_sut)
-        self.sut_node.main_session.copy_to(spec_file, spec_file_sut)
-        self.sut_node.main_session.copy_to(rx_tx_1_file, rx_tx_1_file_sut)
-        self.sut_node.main_session.copy_to(rx_tx_2_file, rx_tx_2_file_sut)
-        # and cleanup local files
-        cli_file.unlink()
-        spec_file.unlink()
-        rx_tx_1_file.unlink()
-        rx_tx_2_file.unlink()
 
-        return cli_file_sut
+        self.cli_file = Artifact("sut", "rx_tx.cli")
+        self.spec_file = Artifact("sut", "rx_tx.spec")
+        self.rx_tx_1_file = Artifact("sut", "rx_tx_1.io")
+        self.rx_tx_2_file = Artifact("sut", "rx_tx_2.io")
+        self.firmware_c_file = Artifact("sut", "firmware.c")
+        self.firmware_so_file = Artifact("sut", "firmware.so")
+
+        with self.cli_file.open("w+") as fh:
+            fh.write(
+                f"pipeline codegen {self.spec_file} {self.firmware_c_file}\n"
+                f"pipeline libbuild {self.firmware_c_file} {self.firmware_so_file}\n"
+                f"pipeline RX build lib {self.firmware_so_file} io {self.rx_tx_1_file} numa 0\n"
+                f"pipeline TX build lib {self.firmware_so_file} io {self.rx_tx_2_file} numa 0\n"
+                "thread 2 pipeline RX enable\n"
+                "thread 2 pipeline TX enable\n"
+            )
+        with self.spec_file.open("w+") as fh:
+            fh.write(
+                "struct metadata_t {\n"
+                "	bit<32> port\n"
+                "}\n"
+                "metadata instanceof metadata_t\n"
+                "apply {\n"
+                "	rx m.port\n"
+                "	tx m.port\n"
+                "}\n"
+            )
+        with self.rx_tx_1_file.open("w+") as fh:
+            fh.write(
+                f"port in 0 ethdev {self.sut_node.config.ports[0].pci} rxq 0 bsz 32\n"
+                "port out 0 ring RXQ0 bsz 32\n"
+            )
+        with self.rx_tx_2_file.open("w+") as fh:
+            fh.write(
+                "port in 0 ring TXQ0 bsz 32\n"
+                f"port out 1 ethdev {self.sut_node.config.ports[1].pci} txq 0 bsz 32\n"
+            )
 
     @func_test
     def softnic(self) -> None:
-- 
2.43.0



