From: Juraj Linkeš
Date: Thu, 6 Jul 2023 16:53:48 +0200
Subject: Re: [PATCH v1 1/2] dts: add smoke tests
To: jspewock@iol.unh.edu
Cc: Honnappa.Nagarahalli@arm.com, thomas@monjalon.net, lijuan.tu@intel.com, wathsala.vithanage@arm.com, probb@iol.unh.edu, dev@dpdk.org
In-Reply-To: <20230615201318.13359-3-jspewock@iol.unh.edu>
References: <20230615201318.13359-2-jspewock@iol.unh.edu> <20230615201318.13359-3-jspewock@iol.unh.edu>

There are mypy errors related to paramiko:

framework/remote_session/remote/interactive_remote_session.py:8: error: Library stubs not installed for "paramiko" (or incompatible with Python 3.10)
framework/remote_session/remote/interactive_remote_session.py:9: error: Library stubs not installed for "paramiko.ssh_exception" (or incompatible with Python 3.10)

We do this for pexpect:
import pexpect  # type: ignore
from pexpect import pxssh  # type: ignore

We should do the same for paramiko.

There are also some changes which may require coordination. I'm not sure
how we'd do that; the best option may be to align on it off-list.

More inline.

On Thu, Jun 15, 2023 at 10:13 PM wrote:
>
> From: Jeremy Spewock
>
> Adds a new test suite as well as configuration changes needed for running
> smoke tests that verify general environment aspects of the system
> under test. If any of these tests fail, the DTS execution terminates
> as part of a "fail-fast" model.
>
> Signed-off-by: Jeremy Spewock
> ---
>  dts/conf.yaml                                 |  13 ++
>  dts/framework/config/__init__.py              | 114 +++++++++++++--
>  dts/framework/config/conf_yaml_schema.json    | 135 +++++++++++++++++-
>  dts/framework/dts.py                          | 101 ++++++++++---
>  dts/framework/exception.py                    |  12 ++
>  dts/framework/remote_session/__init__.py      |  10 +-
>  dts/framework/remote_session/os_session.py    |  34 ++++-
>  dts/framework/remote_session/posix_session.py |  30 ++++
>  .../remote_session/remote/__init__.py         |  12 ++
>  .../remote/interactive_remote_session.py      | 113 +++++++++++++
>  .../remote/interactive_shell.py               |  98 +++++++++++++
>  .../remote_session/remote/testpmd_shell.py    |  58 ++++++++
>  dts/framework/test_result.py                  |  38 ++++-
>  dts/framework/test_suite.py                   |  31 +++-
>  dts/framework/testbed_model/node.py           |   2 +
>  dts/framework/testbed_model/sut_node.py       | 110 +++++++++++++-
>  dts/tests/TestSuite_smoke_tests.py            | 101 +++++++++++++
>  17 files changed, 962 insertions(+), 50 deletions(-)
>  create mode 100644 dts/framework/remote_session/remote/interactive_remote_session.py
>  create mode 100644 dts/framework/remote_session/remote/interactive_shell.py
>  create mode 100644 dts/framework/remote_session/remote/testpmd_shell.py
>  create mode 100644 dts/tests/TestSuite_smoke_tests.py
>
> diff --git a/dts/conf.yaml b/dts/conf.yaml
> index a9bd8a3e..03fd57e1 100644
> --- a/dts/conf.yaml
> +++ b/dts/conf.yaml
> @@ -10,6 +10,8 @@ executions:
>          compiler_wrapper: ccache
>      perf: false
>      func: true
> +    vdevs: #names of virtual devices to be used for testing

Vdevs are optional; let's mention that in the comment, as with the
hugepages config.

>
> +      - "crypto_openssl"

How (or where) is this configured in the original DTS?

The vdevs are only going to be used on SUTs, so it may make sense to put
them in the node config. On the other hand, each execution using the SUT
may use different vdevs, so this is probably the best place.

Maybe we could move it under system_under_test in the execution? That
would make more sense, I think - it would be an addition to the existing
SUT configuration (as in, here's where we augment the SUT config specific
to the execution).

>
>      test_suites:
>        - hello_world
>      system_under_test: "SUT 1"
> @@ -20,6 +22,17 @@ nodes:
>      arch: x86_64
>      os: linux
>      lcores: ""
> +    ports:
> +      - pci: "0000:00:08.0"
> +        dpdk_os_driver: vfio-pci

We could find a better name for this driver (and maybe even for the
regular os driver). It's the OS driver that DPDK uses, and there could be
better names - os_dpdk_driver, os_driver_dpdk, os_driver_for_dpdk. Which
one do you like the most? Or maybe some other name?

In either case, we probably want to add a comment to both drivers
explaining what we mean here; then the name doesn't need to be perfect. :-)

I'll make sure to align the change with my patch if we change this.
> > + os_driver: i40e > + peer_node: "TG 1" > + peer_pci: "0000:00:08.0" > + - pci: "0000:00:08.1" > + dpdk_os_driver: vfio-pci > + os_driver: i40e > + peer_node: "TG 1" > + peer_pci: "0000:00:08.1" > use_first_core: false > memory_channels: 4 > hugepages: # optional; if removed, will use system hugepage configuration > diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py > index ebb0823f..74e5158d 100644 > --- a/dts/framework/config/__init__.py > +++ b/dts/framework/config/__init__.py > @@ -72,6 +72,21 @@ class HugepageConfiguration: > force_first_numa: bool > > > +@dataclass(slots=3DTrue, frozen=3DTrue) > +class PortConfig: > + id: int I removed id in my latest local version as I didn't see a real use for it. > > + node: str > + pci: str > + dpdk_os_driver: str > + os_driver: str > + peer_node: str > + peer_pci: str > + > + @staticmethod > + def from_dict(id: int, node: str, d: dict) -> "PortConfig": > + return PortConfig(id=3Did, node=3Dnode, **d) This would need to be changed if we remove the id. > + > + > @dataclass(slots=3DTrue, frozen=3DTrue) > class NodeConfiguration: > name: str > @@ -84,6 +99,7 @@ class NodeConfiguration: > use_first_core: bool > memory_channels: int > hugepages: HugepageConfiguration | None > + ports: list[PortConfig] > > @staticmethod > def from_dict(d: dict) -> "NodeConfiguration": > @@ -92,18 +108,46 @@ def from_dict(d: dict) -> "NodeConfiguration": > if "force_first_numa" not in hugepage_config: > hugepage_config["force_first_numa"] =3D False > hugepage_config =3D HugepageConfiguration(**hugepage_config) > + common_config =3D { > + "name": d["name"], > + "hostname": d["hostname"], > + "user": d["user"], > + "password": d.get("password"), > + "arch": Architecture(d["arch"]), > + "os": OS(d["os"]), > + "lcores": d.get("lcores", "1"), > + "use_first_core": d.get("use_first_core", False), > + "memory_channels": d.get("memory_channels", 1), > + "hugepages": hugepage_config, > + "ports": [ > + PortConfig.from_dict(i, d["name"], port) > + for i, port in enumerate(d["ports"]) > + ], This also needs to be modified when removing PortConfig.id: "ports": [PortConfig.from_dict(d["name"], port) for port in d["ports"]], > > + } > + > + return NodeConfiguration(**common_config) > + > + > +@dataclass(slots=3DTrue) > +class NodeVersionInfo: This looks like we could use it for all sorts of things in the future, so I'd just call it NodeInfo. > > + """Class to hold important versions within the node. > + > + This class, unlike the NodeConfiguration class, cannot be generated at the start. > + This is because we need to initialize a connection with the node before we can > + collet the information needed in this class. Therefore, it cannot be a part of Typo: collect > + the configuration class above. 
> + """ > > - return NodeConfiguration( > - name=3Dd["name"], > - hostname=3Dd["hostname"], > - user=3Dd["user"], > - password=3Dd.get("password"), > - arch=3DArchitecture(d["arch"]), > - os=3DOS(d["os"]), > - lcores=3Dd.get("lcores", "1"), > - use_first_core=3Dd.get("use_first_core", False), > - memory_channels=3Dd.get("memory_channels", 1), > - hugepages=3Dhugepage_config, > + os_name: str > + os_version: str > + kernel_version: str > + > + @staticmethod > + def from_dict(d: dict): > + return NodeVersionInfo( > + os_name=3Dd["os_name"], > + os_version=3Dd["os_version"], > + kernel_version=3Dd["kernel_version"], > ) > > > @@ -128,6 +172,24 @@ def from_dict(d: dict) -> "BuildTargetConfiguration"= : > ) > > > +@dataclass(slots=3DTrue) > +class BuildTargetVersionInfo: > + """Class to hold important versions within the build target. > + > + This is very similar to the NodeVersionInfo class, it just instead holds information > + for the build target. > + """ > + > + dpdk_version: str > + compiler_version: str > + > + @staticmethod > + def from_dict(d: dict): > + return BuildTargetVersionInfo( > + dpdk_version=3Dd["dpdk_version"], compiler_version=3Dd["compiler_version"] > + ) > + Do you expect we'll be storing more fields than just these two (if so, then we can again rename the class to BuildTargetInfo)? If it's just these two, the extra class may not help that much (we could pass around a tuple). On the other hand, there's not really a reason to not have this class. I could see us adding to it and I like BuildTargetInfo because of that. > > + > class TestSuiteConfigDict(TypedDict): > suite: str > cases: list[str] > @@ -157,6 +219,8 @@ class ExecutionConfiguration: > func: bool > test_suites: list[TestSuiteConfig] > system_under_test: NodeConfiguration > + vdevs: list[str] > + skip_smoke_tests: bool This should be added to conf.yaml to demonstrate the availability of the option. > > > @staticmethod > def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration": > @@ -167,14 +231,18 @@ def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration": > map(TestSuiteConfig.from_dict, d["test_suites"]) > ) > sut_name =3D d["system_under_test"] > + list_of_vdevs =3D d["vdevs"] > + skip_smoke_tests =3D d["skip_smoke_tests"] if "skip_smoke_tests" in d else False Just as with hugepages, we could skip the else branch if we set the default of skip_smoke_tests to False. I'm not sure which is the better approach. > > assert sut_name in node_map, f"Unknown SUT {sut_name} in execution {d}" > > return ExecutionConfiguration( > build_targets=3Dbuild_targets, > perf=3Dd["perf"], > func=3Dd["func"], > + skip_smoke_tests=3Dskip_smoke_tests, > test_suites=3Dtest_suites, > system_under_test=3Dnode_map[sut_name], > + vdevs=3Dlist_of_vdevs, > ) > > > @@ -221,3 +289,27 @@ def load_config() -> Configuration: > > > CONFIGURATION =3D load_config() > + > + > +@unique > +class InteractiveApp(Enum): > + """An enum that represents different supported interactive applications > + > + The values in this enum must all be set to objects that have a key called > + "default_path" where "default_path" represents an array for the path to the > + application. This default path will be passed into the handler class for the > + application so that it can start the application. For every key other than > + the default shell option, the path will be appended to the path to the DPDK > + build directory for the current SUT node. 
> + """ > + > + shell =3D {"default_path": [""]} > + testpmd =3D {"default_path": ["app", "dpdk-testpmd"]} The stored path could be a PurePath object. It'll be instantiated as an OS-specific path, but main_session.join_remote_path will convert it properly. > + > + def get_path(self): > + """A method for getting the default paths of an application > + > + Returns: > + String array that represents an OS agnostic path to the application. > + """ > + return self.value["default_path"] > diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json > index ca2d4a1e..3f7c301a 100644 > --- a/dts/framework/config/conf_yaml_schema.json > +++ b/dts/framework/config/conf_yaml_schema.json > @@ -6,6 +6,76 @@ > "type": "string", > "description": "A unique identifier for a node" > }, > + "NIC": { > + "type": "string", > + "enum": [ > + "ALL", > + "ConnectX3_MT4103", > + "ConnectX4_LX_MT4117", > + "ConnectX4_MT4115", > + "ConnectX5_MT4119", > + "ConnectX5_MT4121", > + "I40E_10G-10G_BASE_T_BC", > + "I40E_10G-10G_BASE_T_X722", > + "I40E_10G-SFP_X722", > + "I40E_10G-SFP_XL710", > + "I40E_10G-X722_A0", > + "I40E_1G-1G_BASE_T_X722", > + "I40E_25G-25G_SFP28", > + "I40E_40G-QSFP_A", > + "I40E_40G-QSFP_B", > + "IAVF-ADAPTIVE_VF", > + "IAVF-VF", > + "IAVF_10G-X722_VF", > + "ICE_100G-E810C_QSFP", > + "ICE_25G-E810C_SFP", > + "ICE_25G-E810_XXV_SFP", > + "IGB-I350_VF", > + "IGB_1G-82540EM", > + "IGB_1G-82545EM_COPPER", > + "IGB_1G-82571EB_COPPER", > + "IGB_1G-82574L", > + "IGB_1G-82576", > + "IGB_1G-82576_QUAD_COPPER", > + "IGB_1G-82576_QUAD_COPPER_ET2", > + "IGB_1G-82580_COPPER", > + "IGB_1G-I210_COPPER", > + "IGB_1G-I350_COPPER", > + "IGB_1G-I354_SGMII", > + "IGB_1G-PCH_LPTLP_I218_LM", > + "IGB_1G-PCH_LPTLP_I218_V", > + "IGB_1G-PCH_LPT_I217_LM", > + "IGB_1G-PCH_LPT_I217_V", > + "IGB_2.5G-I354_BACKPLANE_2_5GBPS", > + "IGC-I225_LM", > + "IGC-I226_LM", > + "IXGBE_10G-82599_SFP", > + "IXGBE_10G-82599_SFP_SF_QP", > + "IXGBE_10G-82599_T3_LOM", > + "IXGBE_10G-82599_VF", > + "IXGBE_10G-X540T", > + "IXGBE_10G-X540_VF", > + "IXGBE_10G-X550EM_A_SFP", > + "IXGBE_10G-X550EM_X_10G_T", > + "IXGBE_10G-X550EM_X_SFP", > + "IXGBE_10G-X550EM_X_VF", > + "IXGBE_10G-X550T", > + "IXGBE_10G-X550_VF", > + "brcm_57414", > + "brcm_P2100G", > + "cavium_0011", > + "cavium_a034", > + "cavium_a063", > + "cavium_a064", > + "fastlinq_ql41000", > + "fastlinq_ql41000_vf", > + "fastlinq_ql45000", > + "fastlinq_ql45000_vf", > + "hi1822", > + "virtio" > + ] > + }, > + All these NICs may be overkill, do we want to trim them? > > "ARCH": { > "type": "string", > "enum": [ > @@ -94,10 +164,24 @@ > "amount" > ] > }, > + "pci_address": { > + "type": "string", > + "pattern": "^[\\da-fA-F]{4}:[\\da-fA-F]{2}:[\\da-fA-F]{2}.\\d:?\\w*$" > + }, > + "port_peer_address": { > + "description": "Peer is a TRex port, and IXIA port or a PCI address", > + "oneOf": [ > + { > + "description": "PCI peer port", > + "$ref": "#/definitions/pci_address" > + } > + ] > + }, > "test_suite": { > "type": "string", > "enum": [ > - "hello_world" > + "hello_world", > + "smoke_tests" Is there a reason to provide both this and the execution.skip_smoke_tests option? If we provide both, we may need to think about what happens when both are present. I'd say it's best for everyone (users and devs) to have only one option and I'd lean towards execution.skip_smoke_tests. 
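Coming back to the PurePath idea for InteractiveApp above - a rough,
untested sketch of what I had in mind, just to illustrate (join_remote_path
would then convert it to the SUT's OS-specific path as mentioned):

from enum import Enum, unique
from pathlib import PurePath


@unique
class InteractiveApp(Enum):
    """Supported interactive applications mapped to their default relative paths."""

    # a plain shell needs no path; the others are relative to the DPDK build dir
    shell = PurePath("")
    testpmd = PurePath("app", "dpdk-testpmd")

    def get_path(self) -> PurePath:
        return self.value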
> ] > }, > "test_target": { > @@ -165,6 +249,44 @@ > }, > "hugepages": { > "$ref": "#/definitions/hugepages" > + }, > + "ports": { > + "type": "array", > + "items": { > + "type": "object", > + "description": "Each port should be described on both sides of the connection. This makes configuration slightly more verbose but greatly simplifies implementation. If there are an inconsistencies, then DTS will not run until that issue is fixed. An example inconsistency would be port 1, node 1 says it is connected to port 1, node 2, but port 1, node 2 says it is connected to port 2, node 1.", > + "properties": { > + "pci": { > + "$ref": "#/definitions/pci_address", > + "description": "The local PCI address of the port" > + }, > + "dpdk_os_driver": { > + "type": "string", > + "description": "The driver that the kernel should bind this device to for DPDK to use it. (ex: vfio-pci)" > + }, > + "os_driver": { > + "type": "string", > + "description": "The driver normally used by this port (ex: i40e)" > + }, > + "peer_node": { > + "type": "string", > + "description": "The name of the node the peer port is on" > + }, > + "peer_pci": { > + "$ref": "#/definitions/pci_address", > + "description": "The PCI address of the peer port" > + } > + }, > + "additionalProperties": false, > + "required": [ > + "pci", > + "dpdk_os_driver", > + "os_driver", > + "peer_node", > + "peer_pci" > + ] > + }, > + "minimum": 1 > } > }, > "additionalProperties": false, > @@ -211,6 +333,17 @@ > ] > } > }, > + "vdevs": { > + "description": "Names of vdevs to be used in execution", > + "type": "array", > + "items": { > + "type":"string" Missing space before "string". > + } > + }, > + "skip_smoke_tests": { > + "description": "Optional field that allows you to skip smoke testing", > + "type":"boolean" Also the missing space. > + }, > "system_under_test": { > "$ref": "#/definitions/node_name" > } > diff --git a/dts/framework/dts.py b/dts/framework/dts.py > index 05022845..f76c9ceb 100644 > --- a/dts/framework/dts.py > +++ b/dts/framework/dts.py > @@ -5,7 +5,13 @@ > > import sys > > -from .config import CONFIGURATION, BuildTargetConfiguration, ExecutionConfiguration > +from .config import ( > + CONFIGURATION, > + BuildTargetConfiguration, > + ExecutionConfiguration, > + TestSuiteConfig, > +) > +from .exception import BlockingTestSuiteError > from .logger import DTSLOG, getLogger > from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result > from .test_suite import get_test_suites > @@ -82,7 +88,9 @@ def _run_execution( > running all build targets in the given execution. 
> """ > dts_logger.info(f"Running execution with SUT '{ execution.system_under_test.name}'.") > - execution_result =3D result.add_execution(sut_node.config) > + execution_result =3D result.add_execution( > + sut_node.config, sut_node.get_node_versions() > + ) > > try: > sut_node.set_up_execution(execution) > @@ -118,14 +126,17 @@ def _run_build_target( > > try: > sut_node.set_up_build_target(build_target) > - result.dpdk_version =3D sut_node.dpdk_version > + # result.dpdk_version =3D sut_node.dpdk_version > + build_target_result.add_build_target_versions( > + sut_node.get_build_target_versions() > + ) > build_target_result.update_setup(Result.PASS) > except Exception as e: > dts_logger.exception("Build target setup failed.") > build_target_result.update_setup(Result.FAIL, e) > > else: > - _run_suites(sut_node, execution, build_target_result) > + _run_all_suites(sut_node, execution, build_target_result) > > finally: > try: > @@ -136,7 +147,46 @@ def _run_build_target( > build_target_result.update_teardown(Result.FAIL, e) > > > -def _run_suites( > +def _run_single_suite( > + sut_node: SutNode, > + execution: ExecutionConfiguration, > + build_target_result: BuildTargetResult, > + test_suite_config: TestSuiteConfig, > +) -> None: > + """Runs a single test suite. > + > + Args: > + sut_node: Node to run tests on. > + execution: Execution the test case belongs to. > + build_target_result: Build target configuration test case is run on > + test_suite_config: Test suite configuration > + > + Raises: > + BlockingTestSuiteError: If a test suite that was marked as blocking fails. > + """ > + try: > + full_suite_path =3D f"tests.TestSuite_{test_suite_config.test_suite}" > + test_suite_classes =3D get_test_suites(full_suite_path) > + suites_str =3D ", ".join((x.__name__ for x in test_suite_classes= )) > + dts_logger.debug(f"Found test suites '{suites_str}' in '{full_suite_path}'.") > + except Exception as e: > + dts_logger.exception("An error occurred when searching for test suites.") > + result.update_setup(Result.ERROR, e) > + > + else: > + for test_suite_class in test_suite_classes: > + test_suite =3D test_suite_class( > + sut_node, > + test_suite_config.test_cases, > + execution.func, > + build_target_result, > + sut_node._build_target_config, > + result, > + ) > + test_suite.run() > + > + > +def _run_all_suites( > sut_node: SutNode, > execution: ExecutionConfiguration, > build_target_result: BuildTargetResult, > @@ -146,27 +196,32 @@ def _run_suites( > with possibly only a subset of test cases. > If no subset is specified, run all test cases. > """ > + end_build_target =3D False > + smoke_test_config =3D TestSuiteConfig.from_dict("smoke_tests") > + if not execution.skip_smoke_tests: > + try: > + _run_single_suite( > + sut_node, execution, build_target_result, smoke_test_config > + ) > + except BlockingTestSuiteError as e: > + dts_logger.exception("Smoke tests failed, stopping build target execution.") > + result.add_error(e) > + return > for test_suite_config in execution.test_suites: > try: > - full_suite_path =3D f"tests.TestSuite_{test_suite_config.test_suite}" > - test_suite_classes =3D get_test_suites(full_suite_path) > - suites_str =3D ", ".join((x.__name__ for x in test_suite_classes)) > - dts_logger.debug( > - f"Found test suites '{suites_str}' in '{full_suite_path}'." 
> + _run_single_suite( > + sut_node, execution, build_target_result, test_suite_config > ) > - except Exception as e: > - dts_logger.exception("An error occurred when searching for test suites.") > - result.update_setup(Result.ERROR, e) > - > - else: > - for test_suite_class in test_suite_classes: > - test_suite =3D test_suite_class( > - sut_node, > - test_suite_config.test_cases, > - execution.func, > - build_target_result, > - ) > - test_suite.run() > + except BlockingTestSuiteError as e: > + dts_logger.exception( > + f"An error occurred within {test_suite_config.test_suite}. " > + "Skipping build target..." > + ) > + result.add_error(e) > + end_build_target =3D True > + # if a blocking test failed and we need to bail out of suite executions > + if end_build_target: > + break > This doesn't look great to me. I think we could just add a list[TestSuiteConfig] parameter to _run_suites. We could then just pass a list with/without smoke TestSuiteConfig, something like: test_suites =3D execution.test_suites if not execution.skip_smoke_tests: test_suites[:0] =3D [TestSuiteConfig.from_dict("smoke_tests")] _run_suites(sut_node, execution, build_target_result, test_suites) A note on method ordering: _run_single_suite before _run_all_suites was confusing to me, as the methods are used in the reverse order - _run_build_target calls _run_all_suites which only then calls _run_single_suite. I'd like the methods to be ordered in the order they're used in the code. > > def _exit_dts() -> None: > diff --git a/dts/framework/exception.py b/dts/framework/exception.py > index ca353d98..dfb12df4 100644 > --- a/dts/framework/exception.py > +++ b/dts/framework/exception.py > @@ -25,6 +25,7 @@ class ErrorSeverity(IntEnum): > SSH_ERR =3D 4 > DPDK_BUILD_ERR =3D 10 > TESTCASE_VERIFY_ERR =3D 20 > + BLOCKING_TESTSUITE_ERR =3D 25 > > > class DTSError(Exception): > @@ -144,3 +145,14 @@ def __init__(self, value: str): > > def __str__(self) -> str: > return repr(self.value) > + > + > +class BlockingTestSuiteError(DTSError): > + suite_name: str > + severity: ClassVar[ErrorSeverity] =3D ErrorSeverity.BLOCKING_TESTSUITE_ERR > + > + def __init__(self, suite_name: str) -> None: > + self.suite_name =3D suite_name > + > + def __str__(self) -> str: > + return f"Blocking suite {self.suite_name} failed." > diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py > index ee221503..04342b8d 100644 > --- a/dts/framework/remote_session/__init__.py > +++ b/dts/framework/remote_session/__init__.py > @@ -1,5 +1,6 @@ > # SPDX-License-Identifier: BSD-3-Clause > # Copyright(c) 2023 PANTHEON.tech s.r.o. > +# Copyright(c) 2022-2023 University of New Hampshire Should the year be just 2023? I'm not sure about these copyright statements= . 
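Coming back to the _run_suites point above, expanded a bit into a rough,
untested sketch of what I mean:

def _run_suites(
    sut_node: SutNode,
    execution: ExecutionConfiguration,
    build_target_result: BuildTargetResult,
    test_suites: list[TestSuiteConfig],
) -> None:
    for test_suite_config in test_suites:
        try:
            _run_single_suite(
                sut_node, execution, build_target_result, test_suite_config
            )
        except BlockingTestSuiteError as e:
            dts_logger.exception(
                f"An error occurred within {test_suite_config.test_suite}. "
                "Skipping the rest of the build target..."
            )
            result.add_error(e)
            break


# and in _run_build_target:
test_suites = list(execution.test_suites)
if not execution.skip_smoke_tests:
    test_suites[:0] = [TestSuiteConfig.from_dict("smoke_tests")]
_run_suites(sut_node, execution, build_target_result, test_suites)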
> > """ > The package provides modules for managing remote connections to a remote host (node), > @@ -17,7 +18,14 @@ > > from .linux_session import LinuxSession > from .os_session import OSSession > -from .remote import CommandResult, RemoteSession, SSHSession > +from .remote import ( > + CommandResult, > + InteractiveRemoteSession, > + InteractiveShell, > + RemoteSession, > + SSHSession, > + TestPmdShell, > +) > > > def create_session( > diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py > index 4c48ae25..f5f53923 100644 > --- a/dts/framework/remote_session/os_session.py > +++ b/dts/framework/remote_session/os_session.py > @@ -12,7 +12,13 @@ > from framework.testbed_model import LogicalCore > from framework.utils import EnvVarsDict, MesonArgs > > -from .remote import CommandResult, RemoteSession, create_remote_session > +from .remote import ( > + CommandResult, > + InteractiveRemoteSession, > + RemoteSession, > + create_interactive_session, > + create_remote_session, > +) > > > class OSSession(ABC): > @@ -26,6 +32,7 @@ class OSSession(ABC): > name: str > _logger: DTSLOG > remote_session: RemoteSession > + interactive_session: InteractiveRemoteSession > > def __init__( > self, > @@ -37,6 +44,7 @@ def __init__( > self.name =3D name > self._logger =3D logger > self.remote_session =3D create_remote_session(node_config, name, logger) > + self.interactive_session =3D create_interactive_session(node_config, name, logger) We may not want to create the interactive session at this point. This does create a connection to the node which we don't want (it is extra time consumed) when creating an extra session on top of the main_session (with Node.create_session). I think we could move this to OSSession.create_interactive_shell. More below. > > def close(self, force: bool =3D False) -> None: > """ > @@ -173,3 +181,27 @@ def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None: > if needed and mount the hugepages if needed. > If force_first_numa is True, configure hugepages just on the first socket. > """ > + > + @abstractmethod > + def get_compiler_version(self, compiler_name: str) -> str: > + """ > + Get installed version of compiler used for DPDK > + """ > + > + @abstractmethod > + def get_os_name(self) -> str: > + """ > + Get name of OS for the node > + """ > + > + @abstractmethod > + def get_os_version(self) -> str: > + """ > + Get version of OS for the node > + """ > + > + @abstractmethod > + def get_kernel_version(self) -> str: > + """ > + Get kernel version for the node > + """ These could be merged into one method (returning NodeVersionInfo), is there a reason to separate them? We could get some performance improvements with one method (e.g. getting /etc/os-release just once and then process locally or we could use a script we execute remotely to have just one remote call, but that's something to think about in the future). 
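To illustrate the merged method - just a sketch, using the NodeInfo name
suggested earlier, and assuming NAME appears before VERSION in
/etc/os-release:

# in OSSession:
@abstractmethod
def get_node_info(self) -> NodeInfo:
    """Collect OS name, OS version and kernel version in one go."""


# in PosixSession, roughly:
def get_node_info(self) -> NodeInfo:
    os_release = self.send_command(
        "awk -F= '$1==\"NAME\" || $1==\"VERSION\" {print $2}' /etc/os-release",
        SETTINGS.timeout,
    ).stdout.splitlines()
    kernel_version = self.send_command("uname -r", SETTINGS.timeout).stdout.strip()
    return NodeInfo(
        os_name=os_release[0].strip('" '),
        os_version=os_release[1].strip('" '),
        kernel_version=kernel_version,
    )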
> diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/remote_session/posix_session.py > index d38062e8..dc6689f5 100644 > --- a/dts/framework/remote_session/posix_session.py > +++ b/dts/framework/remote_session/posix_session.py > @@ -219,3 +219,33 @@ def _remove_dpdk_runtime_dirs( > > def get_dpdk_file_prefix(self, dpdk_prefix) -> str: > return "" > + > + def get_compiler_version(self, compiler_name: str) -> str: > + match compiler_name: > + case "gcc": > + return self.send_command(f"{compiler_name} --version", 60).stdout.split( > + "\n" > + )[0] > + case "clang": > + return self.send_command(f"{compiler_name} --version", 60).stdout.split( > + "\n" > + )[0] > + case "msvc": > + return self.send_command("cl", 60).stdout > + case "icc": > + return self.send_command(f"{compiler_name} -V", 60).stdout > + case _: > + raise ValueError(f"Unknown compiler {compiler_name}") > + > + def get_os_name(self) -> str: > + return self.send_command( > + "awk -F=3D '$1=3D=3D\"NAME\" {print $2}' /etc/os-release", 6= 0 > + ).stdout > + > + def get_os_version(self) -> str: > + return self.send_command( > + "awk -F=3D '$1=3D=3D\"VERSION\" {print $2}' /etc/os-release"= , 60, True > + ).stdout > + > + def get_kernel_version(self) -> str: > + return self.send_command("uname -r", 60).stdout These 60 second timeouts seem high to me. All of these commands should execute quickly so the default timeout seems fine to me. > diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py > index 8a151221..ef7b4851 100644 > --- a/dts/framework/remote_session/remote/__init__.py > +++ b/dts/framework/remote_session/remote/__init__.py > @@ -1,16 +1,28 @@ > # SPDX-License-Identifier: BSD-3-Clause > # Copyright(c) 2023 PANTHEON.tech s.r.o. 
> +# Copyright(c) 2022-2023 University of New Hampshire > > # pylama:ignore=3DW0611 > > +from paramiko import AutoAddPolicy, SSHClient > + > from framework.config import NodeConfiguration > from framework.logger import DTSLOG > > +from .interactive_remote_session import InteractiveRemoteSession > +from .interactive_shell import InteractiveShell > from .remote_session import CommandResult, RemoteSession > from .ssh_session import SSHSession > +from .testpmd_shell import TestPmdShell > > > def create_remote_session( > node_config: NodeConfiguration, name: str, logger: DTSLOG > ) -> RemoteSession: > return SSHSession(node_config, name, logger) > + > + > +def create_interactive_session( > + node_config: NodeConfiguration, name: str, logger: DTSLOG > +) -> InteractiveRemoteSession: > + return InteractiveRemoteSession(node_config, logger) > diff --git a/dts/framework/remote_session/remote/interactive_remote_session.py b/dts/framework/remote_session/remote/interactive_remote_session.py > new file mode 100644 > index 00000000..4e88308a > --- /dev/null > +++ b/dts/framework/remote_session/remote/interactive_remote_session.py > @@ -0,0 +1,113 @@ > +# SPDX-License-Identifier: BSD-3-Clause > +# Copyright(c) 2023 University of New Hampshire > + > +import socket > +import traceback > +from pathlib import PurePath > + > +from paramiko import AutoAddPolicy, SSHClient, Transport > +from paramiko.ssh_exception import ( > + AuthenticationException, > + BadHostKeyException, > + NoValidConnectionsError, > + SSHException, > +) > + > +from framework.config import InteractiveApp, NodeConfiguration > +from framework.exception import SSHConnectionError > +from framework.logger import DTSLOG > +from framework.settings import SETTINGS > + > +from .interactive_shell import InteractiveShell > +from .testpmd_shell import TestPmdShell > + > + > +class InteractiveRemoteSession: > + hostname: str > + ip: str > + port: int > + username: str > + password: str > + _logger: DTSLOG > + _node_config: NodeConfiguration > + session: SSHClient > + _transport: Transport | None > + > + def __init__(self, node_config: NodeConfiguration, _logger: DTSLOG) -> None: > + self._node_config =3D node_config > + self._logger =3D _logger > + self.hostname =3D node_config.hostname > + self.username =3D node_config.user > + self.password =3D node_config.password if node_config.password else "" > + port =3D 22 framework/remote_session/remote/interactive_remote_session.py:45: error: Incompatible types in assignment (expression has type "str", variable has type "int") We can address this by making 22 a string (port =3D "22"). 
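Or, if we want to keep the int, split the string into a separate variable,
e.g.:

self.ip = node_config.hostname
self.port = 22
if ":" in node_config.hostname:
    self.ip, port_str = node_config.hostname.split(":")
    self.port = int(port_str)

Either way works for me.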
> + self.ip =3D node_config.hostname > + if ":" in node_config.hostname: > + self.ip, port =3D node_config.hostname.split(":") > + self.port =3D int(port) > + self._logger.info( > + f"Initializing interactive connection for {self.username}@ {self.hostname}" > + ) > + self._connect() > + self._logger.info( > + f"Interactive connection successful for {self.username}@ {self.hostname}" > + ) > + > + def _connect(self) -> None: > + client =3D SSHClient() > + client.set_missing_host_key_policy(AutoAddPolicy) > + self.session =3D client > + retry_attempts =3D 10 > + for retry_attempt in range(retry_attempts): > + try: > + client.connect( > + self.ip, > + username=3Dself.username, > + port=3Dself.port, > + password=3Dself.password, > + timeout=3D20 if self.port else 10, > + ) > + except (TypeError, BadHostKeyException, AuthenticationException) as e: > + self._logger.exception(e) > + raise SSHConnectionError(self.hostname) from e > + except (NoValidConnectionsError, socket.error, SSHException) as e: > + self._logger.debug(traceback.format_exc()) > + self._logger.warning(e) > + self._logger.info( > + "Retrying interactive session connection: " > + f"retry number {retry_attempt +1}" > + ) > + else: > + break > + else: > + raise SSHConnectionError(self.hostname) > + # Interactive sessions are used on an "as needed" basis so we have > + # to set a keepalive > + self._transport =3D self.session.get_transport() > + if self._transport is not None: > + self._transport.set_keepalive(30) > + > + def create_interactive_shell( > + self, > + shell_type: InteractiveApp, > + path_to_app: PurePath, > + timeout: float =3D SETTINGS.timeout, > + eal_parameters: str =3D "", > + app_parameters=3D"", > + ) -> InteractiveShell: > + """ > + See "create_interactive_shell" in SutNode > + """ > + match (shell_type.name): > + case "shell": Comparing with the enum values seems better. > + return InteractiveShell( > + self.session, self._logger, path_to_app, timeout > + ) > + case "testpmd": > + return TestPmdShell( > + self.session, > + self._logger, > + path_to_app, > + timeout=3Dtimeout, > + eal_flags=3Deal_parameters, > + cmd_line_options=3Dapp_parameters, > + ) framework/remote_session/remote/interactive_remote_session.py:89: error: Missing return statement This is probably because we don't have a case for the rest of the values. 
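Something like this sketch should address both points - matching on the
members directly and making the match exhaustive so mypy sees a return (or
raise) on every path:

match shell_type:
    case InteractiveApp.shell:
        return InteractiveShell(self.session, self._logger, path_to_app, timeout)
    case InteractiveApp.testpmd:
        return TestPmdShell(
            self.session,
            self._logger,
            path_to_app,
            timeout=timeout,
            eal_flags=eal_parameters,
            cmd_line_options=app_parameters,
        )
    case _:
        raise ValueError(f"Unsupported interactive application: {shell_type}")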
> diff --git a/dts/framework/remote_session/remote/interactive_shell.py b/dts/framework/remote_session/remote/interactive_shell.py > new file mode 100644 > index 00000000..d98503be > --- /dev/null > +++ b/dts/framework/remote_session/remote/interactive_shell.py > @@ -0,0 +1,98 @@ > +from pathlib import PurePath > + > +from paramiko import Channel, SSHClient, channel > + > +from framework.logger import DTSLOG > +from framework.settings import SETTINGS > + > + > +class InteractiveShell: > + > + _interactive_session: SSHClient > + _stdin: channel.ChannelStdinFile > + _stdout: channel.ChannelFile > + _ssh_channel: Channel > + _logger: DTSLOG > + _timeout: float > + > + def __init__( > + self, > + interactive_session: SSHClient, > + logger: DTSLOG, > + path_to_app: PurePath | None =3D None, > + timeout: float =3D SETTINGS.timeout, > + ) -> None: > + self._interactive_session =3D interactive_session > + self._ssh_channel =3D self._interactive_session.invoke_shell() > + self._stdin =3D self._ssh_channel.makefile_stdin("w") > + self._stdout =3D self._ssh_channel.makefile("r") > + self._ssh_channel.settimeout(timeout) > + self._ssh_channel.set_combine_stderr(True) # combines stdout and stderr streams > + self._logger =3D logger > + self._timeout =3D timeout > + if path_to_app: > + self.send_command_no_output(str(path_to_app)) # starts the application This should be part of the derived class - the code to start an app is going to be app-specific. Since this is an interactive session, the assumption is that we're always going to start an app and then work with it, so starting the app in the constructor should be safe. We actually have that for testpmd, so we just need to unify the above with what we have. > + > + def empty_stdout_buffer(self) -> None: Same comment on ordering as above. > + """Removes all data from the stdout buffer. > + > + Because of the way paramiko handles read buffers, there is no way to effectively > + remove data from, or "flush", read buffers. This method essentially moves our > + offset on the buffer to the end and thus "removes" the data from the buffer. > + Timeouts are thrown on read operations of paramiko pipes based on whether data > + had been received before timeout so we assume that if we reach the timeout then > + we are at the end of the buffer. > + """ > + self._ssh_channel.settimeout(1) Waiting a whole second seems to be a lot. We actually may not need this method from the use in the code - that is we change how the app starts. > + try: > + for line in self._stdout: > + pass > + except TimeoutError: > + pass > + self._ssh_channel.settimeout(self._timeout) # reset timeout > + > + def send_command_no_output(self, command: str) -> None: > + """Send command to channel without recording output. > + > + This method will not verify any input or output, it will simply assume the > + command succeeded. This method will also consume all output in the buffer > + after executing the command. > + """ > + self._logger.info( > + f"Sending command {command.strip()} and not collecting output" > + ) > + self._stdin.write(f"{command}\n") > + self._stdin.flush() > + self.empty_stdout_buffer() > + > + def send_command_get_output(self, command: str, prompt: str) -> str: > + """Send a command and get all output before the expected ending string. > + > + Lines that expect input are not included in the stdout buffer so they cannot be > + used for expect. 
For example, if you were prompted to log into something > + with a username and password, you cannot expect "username:" because it won't > + yet be in the stdout buffer. A work around for this could be consuming an > + extra newline character to force the current prompt into the stdout buffer. > + > + Returns: > + All output in the buffer before expected string > + """ > + self._logger.info(f"Sending command {command.strip()}...") > + self._stdin.write(f"{command}\n") > + self._stdin.flush() > + out: str =3D "" > + for line in self._stdout: > + out +=3D line > + if prompt in line and not line.rstrip().endswith( > + command.rstrip() > + ): # ignore line that sent command > + break > + self._logger.debug(f"Got output: {out}") > + return out > + > + def close(self) -> None: > + self._stdin.close() > + self._ssh_channel.close() > + > + def __del__(self) -> None: > + self.close() > diff --git a/dts/framework/remote_session/remote/testpmd_shell.py b/dts/framework/remote_session/remote/testpmd_shell.py > new file mode 100644 > index 00000000..873f3c82 > --- /dev/null > +++ b/dts/framework/remote_session/remote/testpmd_shell.py > @@ -0,0 +1,58 @@ > +# SPDX-License-Identifier: BSD-3-Clause > +# Copyright(c) 2023 University of New Hampshire > + > + > +from pathlib import PurePath > + > +from paramiko import SSHClient > + > +from framework.logger import DTSLOG > +from framework.settings import SETTINGS > + > +from .interactive_shell import InteractiveShell > + > + > +class TestPmdShell(InteractiveShell): > + expected_prompt: str =3D "testpmd>" > + > + def __init__( > + self, > + interactive_session: SSHClient, > + logger: DTSLOG, > + path_to_testpmd: PurePath, > + eal_flags: str =3D "", > + cmd_line_options: str =3D "", > + timeout: float =3D SETTINGS.timeout, > + ) -> None: > + """Initializes an interactive testpmd session using specified parameters.""" > + super(TestPmdShell, self).__init__( > + interactive_session, logger=3Dlogger, timeout=3Dtimeout > + ) > + self.send_command_get_output( > + f"{path_to_testpmd} {eal_flags} -- -i {cmd_line_options}\n", > + self.expected_prompt, > + ) > + > + def send_command(self, command: str, prompt: str =3D expected_prompt= ) -> str: > + """Specific way of handling the command for testpmd > + > + An extra newline character is consumed in order to force the current line into > + the stdout buffer. > + """ > + return self.send_command_get_output(f"{command}\n", prompt) > + > + def get_devices(self) -> list[str]: > + """Get a list of device names that are known to testpmd > + > + Uses the device info listed in testpmd and then parses the output to > + return only the names of the devices. > + > + Returns: > + A list of strings representing device names (e.g. 0000:14:00.1) > + """ > + dev_info: str =3D self.send_command("show device info all") > + dev_list: list[str] =3D [] > + for line in dev_info.split("\n"): > + if "device name:" in line.lower(): > + dev_list.append(line.strip().split(": ")[1].strip()) > + return dev_list This could be an object (or a list of TestPmdDevice object or something similar). We could start with just the name and add to it in the future. > diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py > index 74391982..029665fb 100644 > --- a/dts/framework/test_result.py > +++ b/dts/framework/test_result.py > @@ -1,5 +1,6 @@ > # SPDX-License-Identifier: BSD-3-Clause > # Copyright(c) 2023 PANTHEON.tech s.r.o. 
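For a start, something as small as this would do (just a sketch):

from dataclasses import dataclass


@dataclass(slots=True, frozen=True)
class TestPmdDevice:
    pci_address: str


# in TestPmdShell:
def get_devices(self) -> list[TestPmdDevice]:
    dev_info = self.send_command("show device info all")
    return [
        TestPmdDevice(line.strip().split(": ")[1].strip())
        for line in dev_info.split("\n")
        if "device name:" in line.lower()
    ]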
> +# Copyright(c) 2022-2023 University of New Hampshire > > """ > Generic result container and reporters > @@ -8,14 +9,17 @@ > import os.path > from collections.abc import MutableSequence > from enum import Enum, auto > +from typing import Dict No need, we can just use built-in dict (since Python3.9). > > from .config import ( > OS, > Architecture, > BuildTargetConfiguration, > + BuildTargetVersionInfo, > Compiler, > CPUType, > NodeConfiguration, > + NodeVersionInfo, > ) > from .exception import DTSError, ErrorSeverity > from .logger import DTSLOG > @@ -67,12 +71,14 @@ class Statistics(dict): > Using a dict provides a convenient way to format the data. > """ > > - def __init__(self, dpdk_version): > + def __init__(self, output_info: Dict[str, str] | None): > super(Statistics, self).__init__() > for result in Result: > self[result.name] =3D 0 > self["PASS RATE"] =3D 0.0 > - self["DPDK VERSION"] =3D dpdk_version > + if output_info: > + for info_key, info_val in output_info.items(): > + self[info_key] =3D info_val > > def __iadd__(self, other: Result) -> "Statistics": > """ > @@ -206,6 +212,8 @@ class BuildTargetResult(BaseResult): > os: OS > cpu: CPUType > compiler: Compiler > + compiler_version: str | None > + dpdk_version: str | None > > def __init__(self, build_target: BuildTargetConfiguration): > super(BuildTargetResult, self).__init__() > @@ -213,6 +221,12 @@ def __init__(self, build_target: BuildTargetConfiguration): > self.os =3D build_target.os > self.cpu =3D build_target.cpu > self.compiler =3D build_target.compiler > + self.compiler_version =3D None > + self.dpdk_version =3D None > + > + def add_build_target_versions(self, versions: BuildTargetVersionInfo) -> None: > + self.compiler_version =3D versions.compiler_version > + self.dpdk_version =3D versions.dpdk_version > > def add_test_suite(self, test_suite_name: str) -> TestSuiteResult: > test_suite_result =3D TestSuiteResult(test_suite_name) > @@ -228,10 +242,17 @@ class ExecutionResult(BaseResult): > """ > > sut_node: NodeConfiguration > + sut_os_name: str > + sut_os_version: str > + sut_kernel_version: str > > - def __init__(self, sut_node: NodeConfiguration): > + def __init__(self, sut_node: NodeConfiguration, sut_version_info: NodeVersionInfo): > super(ExecutionResult, self).__init__() > self.sut_node =3D sut_node > + self.sut_version_info =3D sut_version_info > + self.sut_os_name =3D sut_version_info.os_name > + self.sut_os_version =3D sut_version_info.os_version > + self.sut_kernel_version =3D sut_version_info.kernel_version > > def add_build_target( > self, build_target: BuildTargetConfiguration > @@ -258,6 +279,7 @@ class DTSResult(BaseResult): > """ > > dpdk_version: str | None > + output: dict | None > _logger: DTSLOG > _errors: list[Exception] > _return_code: ErrorSeverity > @@ -267,14 +289,17 @@ class DTSResult(BaseResult): > def __init__(self, logger: DTSLOG): > super(DTSResult, self).__init__() > self.dpdk_version =3D None > + self.output =3D None > self._logger =3D logger > self._errors =3D [] > self._return_code =3D ErrorSeverity.NO_ERR > self._stats_result =3D None > self._stats_filename =3D os.path.join(SETTINGS.output_dir, "statistics.txt") > > - def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult: > - execution_result =3D ExecutionResult(sut_node) > + def add_execution( > + self, sut_node: NodeConfiguration, sut_version_info: NodeVersionInfo > + ) -> ExecutionResult: > + execution_result =3D ExecutionResult(sut_node, sut_version_info) > self._inner_results.append(execution_result) > return 
execution_result > > @@ -296,7 +321,8 @@ def process(self) -> None: > for error in self._errors: > self._logger.debug(repr(error)) > > - self._stats_result =3D Statistics(self.dpdk_version) > + self._stats_result =3D Statistics(self.output) > + # add information gathered from the smoke tests to the statistic= s > self.add_stats(self._stats_result) > with open(self._stats_filename, "w+") as stats_file: > stats_file.write(str(self._stats_result)) > diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py > index 0705f38f..ad66014c 100644 > --- a/dts/framework/test_suite.py > +++ b/dts/framework/test_suite.py > @@ -10,11 +10,24 @@ > import inspect > import re > from types import MethodType > +from typing import Dict > > -from .exception import ConfigurationError, SSHTimeoutError, TestCaseVerifyError > +from .config import BuildTargetConfiguration > +from .exception import ( > + BlockingTestSuiteError, > + ConfigurationError, > + SSHTimeoutError, > + TestCaseVerifyError, > +) > from .logger import DTSLOG, getLogger > from .settings import SETTINGS > -from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult > +from .test_result import ( > + BuildTargetResult, > + DTSResult, > + Result, > + TestCaseResult, > + TestSuiteResult, > +) > from .testbed_model import SutNode > > > @@ -37,10 +50,12 @@ class TestSuite(object): > """ > > sut_node: SutNode > + is_blocking =3D False > _logger: DTSLOG > _test_cases_to_run: list[str] > _func: bool > _result: TestSuiteResult > + _dts_result: DTSResult > > def __init__( > self, > @@ -48,6 +63,8 @@ def __init__( > test_cases: list[str], > func: bool, > build_target_result: BuildTargetResult, > + build_target_conf: BuildTargetConfiguration, > + dts_result: DTSResult, > ): > self.sut_node =3D sut_node > self._logger =3D getLogger(self.__class__.__name__) > @@ -55,6 +72,8 @@ def __init__( > self._test_cases_to_run.extend(SETTINGS.test_cases) > self._func =3D func > self._result =3D build_target_result.add_test_suite(self.__class__.__name__) > + self.build_target_info =3D build_target_conf > + self._dts_result =3D dts_result > > def set_up_suite(self) -> None: > """ > @@ -118,6 +137,8 @@ def run(self) -> None: > f"the next test suite may be affected." > ) > self._result.update_setup(Result.ERROR, e) > + if len(self._result.get_errors()) > 0 and self.is_blocking: > + raise BlockingTestSuiteError(test_suite_name) > > def _execute_test_suite(self) -> None: > """ > @@ -232,6 +253,12 @@ def _execute_test_case( > test_case_result.update(Result.SKIP) > raise KeyboardInterrupt("Stop DTS") > > + def write_to_statistics_file(self, output: Dict[str, str]): This isn't used anywhere as far as I can tell. > + if self._dts_result.output is not None: > + self._dts_result.output.update(output) > + else: > + self._dts_result.output =3D output > + This could be simplified if we init output to an empty dict in the constructor. 
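i.e. something like:

# in DTSResult.__init__
self.output: dict = {}

# which would let the helper (if we keep it at all) shrink to:
def write_to_statistics_file(self, output: dict[str, str]) -> None:
    self._dts_result.output.update(output)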
> > def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]= : > def is_test_suite(object) -> bool: > diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py > index d48fafe6..c5147e0e 100644 > --- a/dts/framework/testbed_model/node.py > +++ b/dts/framework/testbed_model/node.py > @@ -40,6 +40,7 @@ class Node(object): > lcores: list[LogicalCore] > _logger: DTSLOG > _other_sessions: list[OSSession] > + _execution_config: ExecutionConfiguration > > def __init__(self, node_config: NodeConfiguration): > self.config =3D node_config > @@ -64,6 +65,7 @@ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None: > """ > self._setup_hugepages() > self._set_up_execution(execution_config) > + self._execution_config =3D execution_config > > def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None: > """ > diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py > index 2b2b50d9..27849da7 100644 > --- a/dts/framework/testbed_model/sut_node.py > +++ b/dts/framework/testbed_model/sut_node.py > @@ -1,14 +1,21 @@ > # SPDX-License-Identifier: BSD-3-Clause > # Copyright(c) 2010-2014 Intel Corporation > # Copyright(c) 2023 PANTHEON.tech s.r.o. > +# Copyright(c) 2022-2023 University of New Hampshire > > import os > import tarfile > import time > from pathlib import PurePath > > -from framework.config import BuildTargetConfiguration, NodeConfiguration > -from framework.remote_session import CommandResult, OSSession > +from framework.config import ( > + BuildTargetConfiguration, > + BuildTargetVersionInfo, > + InteractiveApp, > + NodeConfiguration, > + NodeVersionInfo, > +) > +from framework.remote_session import CommandResult, InteractiveShell, OSSession > from framework.settings import SETTINGS > from framework.utils import EnvVarsDict, MesonArgs > > @@ -26,13 +33,17 @@ class SutNode(Node): > > _dpdk_prefix_list: list[str] > _dpdk_timestamp: str > - _build_target_config: BuildTargetConfiguration | None > + _build_target_config: BuildTargetConfiguration What's the reason for this change? 
My guess would be mypy, but the change still results in mypy complaining: framework/testbed_model/sut_node.py:51: error: Incompatible types in assignment (expression has type "None", variable has type "BuildTargetConfiguration") > _env_vars: EnvVarsDict > _remote_tmp_dir: PurePath > __remote_dpdk_dir: PurePath | None > - _dpdk_version: str | None > _app_compile_timeout: float > _dpdk_kill_session: OSSession | None > + _dpdk_version: str | None > + _os_name: str | None > + _os_version: str | None > + _kernel_version: str | None > + _compiler_version: str | None > > def __init__(self, node_config: NodeConfiguration): > super(SutNode, self).__init__(node_config) > @@ -41,12 +52,16 @@ def __init__(self, node_config: NodeConfiguration): > self._env_vars =3D EnvVarsDict() > self._remote_tmp_dir =3D self.main_session.get_remote_tmp_dir() > self.__remote_dpdk_dir =3D None > - self._dpdk_version =3D None > self._app_compile_timeout =3D 90 > self._dpdk_kill_session =3D None > self._dpdk_timestamp =3D ( > f"{str(os.getpid())}_{time.strftime('%Y%m%d%H%M%S', time.localtime())}" > ) > + self._dpdk_version =3D None > + self._os_name =3D None > + self._os_version =3D None > + self._kernel_version =3D None > + self._compiler_version =3D None > > @property > def _remote_dpdk_dir(self) -> PurePath: > @@ -75,6 +90,44 @@ def dpdk_version(self) -> str: > ) > return self._dpdk_version > > + @property > + def os_name(self) -> str: > + if self._os_name is None: > + self._os_name =3D self.main_session.get_os_name() > + return self._os_name > + > + @property > + def os_version(self) -> str: > + if self._os_version is None: > + self._os_version =3D self.main_session.get_os_version() > + return self._os_version > + > + @property > + def kernel_version(self) -> str: > + if self._kernel_version is None: > + self._kernel_version =3D self.main_session.get_kernel_versio= n() > + return self._kernel_version > + > + @property > + def compiler_version(self) -> str: > + if self._compiler_version is None: > + self._compiler_version =3D self.main_session.get_compiler_version( > + self._build_target_config.compiler.name > + ) > + return self._compiler_version > + > + def get_node_versions(self) -> NodeVersionInfo: > + return NodeVersionInfo( > + os_name=3Dself.os_name, > + os_version=3Dself.os_version, > + kernel_version=3Dself.kernel_version, > + ) > + > + def get_build_target_versions(self) -> BuildTargetVersionInfo: > + return BuildTargetVersionInfo( > + dpdk_version=3Dself.dpdk_version, compiler_version=3Dself.compiler_version > + ) > + > def _guess_dpdk_remote_dir(self) -> PurePath: > return self.main_session.guess_dpdk_remote_dir(self._remote_tmp_dir) > > @@ -84,6 +137,10 @@ def _set_up_build_target( > """ > Setup DPDK on the SUT node. > """ > + # we want to ensure that dpdk_version and compiler_version is reset for new > + # build targets > + self._dpdk_version =3D None > + self._compiler_version =3D None > self._configure_build_target(build_target_config) > self._copy_dpdk_tarball() > self._build_dpdk() > @@ -262,6 +319,49 @@ def run_dpdk_app( > f"{app_path} {eal_args}", timeout, verify=3DTrue > ) > > + def create_interactive_shell( > + self, > + shell_type: InteractiveApp, > + path_to_app: PurePath | None =3D None, Can we join shell_type and path_to_app together? Both contain the same type of information. If we can't, let's drop path_to_app, as we don't provide (in this patch) a way for users to modify this. We can add it later if needed. 
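If we drop path_to_app, the signature could then collapse to something
like this (a sketch; it doesn't account for the Union return type
mentioned above):

def create_interactive_shell(
    self,
    shell_type: InteractiveApp,
    timeout: float = SETTINGS.timeout,
    eal_parameters: str = "",
    app_parameters: str = "",
) -> InteractiveShell:
    path_to_app = None
    if shell_type != InteractiveApp.shell:
        # derive the remote path from the enum instead of taking it as an argument
        path_to_app = self.main_session.join_remote_path(
            self.remote_dpdk_build_dir, *shell_type.get_path()
        )
    return self.main_session.interactive_session.create_interactive_shell(
        shell_type, path_to_app, timeout, eal_parameters, app_parameters
    )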
> + timeout: float =3D SETTINGS.timeout, > + eal_parameters: str =3D "", Any reason we're not using the EalParameters class here? > + app_parameters=3D"", This is also currently not used, so maybe we should drop it and add it in the future, probably as a dedicated object representing the parameters. > + ) -> InteractiveShell: This should return a Union of all the different sessions we support, otherwise mypy complains: tests/TestSuite_smoke_tests.py:68: error: Incompatible types in assignment (expression has type "InteractiveShell", variable has type "TestPmdShell") > + """Create a handler for an interactive session. > + > + This method is a factory that calls a method in OSSession to create shells for > + different DPDK applications. > + > + Args: > + shell_type: Enum value representing the desired application. > + path_to_app: Represents a path to the application you are attempting to > + launch. This path will be executed at the start of the app > + initialization. If one isn't provided, the default specified in the > + enumeration will be used. > + timeout: Timeout for the ssh channel We should specify what sort of timeout this is - for creating the channel or for sent commands? > + eal_parameters: List of EAL parameters to use to launch the app. This is > + ignored for base "shell" types. > + app_parameters: Command-line flags to pass into the application on launch. > + Returns: > + Instance of the desired interactive application. > + """ > + default_path: PurePath | None > + # if we just want a default shell, there is no need to append the DPDK build > + # directory to the path > + if shell_type.name =3D=3D "shell": > + default_path =3D None > + else: > + default_path =3D self.main_session.join_remote_path( > + self.remote_dpdk_build_dir, *shell_type.get_path() > + ) All this path handling looks clunky, it should be part of the InteractiveApp class. Would adding something like update_path to InteractiveApp work? > + return self.main_session.interactive_session.create_interactive_shell( > + shell_type, > + path_to_app or default_path, > + timeout, > + eal_parameters, > + app_parameters, > + ) > + > > class EalParameters(object): > def __init__( > diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py > new file mode 100644 > index 00000000..a2fd9fae > --- /dev/null > +++ b/dts/tests/TestSuite_smoke_tests.py > @@ -0,0 +1,101 @@ > +# SPDX-License-Identifier: BSD-3-Clause > +# Copyright(c) 2023 University of New Hampshire > + > +from framework.config import InteractiveApp > +from framework.remote_session import TestPmdShell > +from framework.test_suite import TestSuite > + > + > +class SmokeTests(TestSuite): > + is_blocking =3D True > + # in this list, the first index is the address of the nic and the second is > + # the driver for that nic. > + list_of_nics: list[tuple[str, str]] =3D [] Would using a dict make more sense? > + > + def set_up_suite(self) -> None: > + """ > + Setup: > + build all DPDK This needs to be updated. > + """ > + self.dpdk_build_dir_path =3D self.sut_node.remote_dpdk_build_dir > + for nic in self.sut_node.config.ports: > + new_tuple =3D (nic.pci, nic.os_driver.strip()) > + self.list_of_nics.append(new_tuple) It would be better to rename this based on what list of nics this is - the one we expect to find or the one we're testing? 
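For example, with a dict and a more explicit name (the name below is just
a placeholder), set_up_suite could do:

self.nics_in_node_config: dict[str, str] = {
    port.pci: port.os_driver.strip() for port in self.sut_node.config.ports
}

and the test cases below would then iterate over .items(), which also
reads better than nic[0]/nic[1].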
> +
> +    def test_unit_tests(self) -> None:
> +        """
> +        Test:
> +            run the fast-test unit-test suite through meson
> +        """
> +        self.sut_node.main_session.send_command(
> +            f"meson test -C {self.dpdk_build_dir_path} --suite fast-tests",
> +            300,
> +            verify=True,
> +        )
> +
> +    def test_driver_tests(self) -> None:
> +        """
> +        Test:
> +            run the driver-test unit-test suite through meson
> +        """
> +        list_of_vdevs = ""
> +        for dev in self.sut_node._execution_config.vdevs:
> +            list_of_vdevs += f"--vdev {dev} "
> +        list_of_vdevs = list_of_vdevs[:-1]
> +        if list_of_vdevs:
> +            self._logger.info(
> +                "Running driver tests with the following virtual "
> +                f"devices: {list_of_vdevs}"
> +            )
> +            self.sut_node.main_session.send_command(
> +                f"meson test -C {self.dpdk_build_dir_path} --suite driver-tests "
> +                f'--test-args "{list_of_vdevs}"',
> +                300,
> +                verify=True,
> +            )
> +        else:
> +            self.sut_node.main_session.send_command(
> +                f"meson test -C {self.dpdk_build_dir_path} --suite driver-tests",
> +                300,
> +                verify=True,
> +            )
> +
> +    def test_devices_listed_in_testpmd(self) -> None:
> +        """
> +        Test:
> +            Uses testpmd driver to verify that devices have been found by testpmd
> +        """

Every other test description is written in imperative and with a lowercase first letter. Let's unify everything to imperative with an uppercase first letter, properly ending the sentence with a dot.

> +        testpmd_driver: TestPmdShell = self.sut_node.create_interactive_shell(
> +            InteractiveApp.testpmd
> +        )
> +        dev_list: list[str] = testpmd_driver.get_devices()
> +        for nic in self.list_of_nics:

Here's where the more explicit naming would be beneficial - I had to think about what's what.

> +            self.verify(
> +                nic[0] in dev_list,
> +                f"Device {nic[0]} was not listed in testpmd's available devices, "
> +                "please check your configuration",
> +            )
> +
> +    def test_device_bound_to_driver(self) -> None:
> +        """
> +        Test:
> +            ensure that all drivers listed in the config are bound to the correct driver
> +        """
> +        path_to_dev = self.sut_node.main_session.join_remote_path(
> +            self.sut_node._remote_dpdk_dir, "usertools", "dpdk-devbind.py"
> +        )
> +        for nic in self.list_of_nics:
> +            out = self.sut_node.main_session.send_command(
> +                f"{path_to_dev} --status | grep {nic[0]}", 60
> +            )

This may be something we want to put into SutNode, something like get_device_info. We'd only send the command once (not for every NIC as we're doing here) and parse the output.

> +            self.verify(
> +                len(out.stdout) != 0,
> +                f"Failed to find configured device ({nic[0]}) using dpdk-devbind.py",
> +            )
> +            for string in out.stdout.split(" "):
> +                if "drv=" in string:
> +                    self.verify(
> +                        string.split("=")[1] == nic[1],
> +                        f"Driver for device {nic[0]} does not match driver listed in "
> +                        f'configuration (bound to {string.split("=")[1]})',
> +                    )
> --
> 2.41.0
>
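To expand a bit on the get_device_info suggestion above, a rough sketch of what I mean (the method name and the parsing are only an illustration, not a finished implementation):

# a method that could live in SutNode
def get_device_info(self) -> dict[str, str]:
    """Sketch: map each PCI address to the driver it's currently bound to."""
    devbind_path = self.main_session.join_remote_path(
        self._remote_dpdk_dir, "usertools", "dpdk-devbind.py"
    )
    # a single remote call for all devices instead of one grep per NIC
    out = self.main_session.send_command(f"{devbind_path} --status")
    bound_drivers: dict[str, str] = {}
    for line in out.stdout.splitlines():
        if "drv=" not in line:
            continue
        pci_address = line.split()[0]
        driver = line.split("drv=")[1].split()[0]
        bound_drivers[pci_address] = driver
    return bound_drivers

The test case would then just look up each configured NIC in that dict.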
There are mypy errors related to paramiko:
framework/r= emote_session/remote/interactive_remote_session.py:8: error: Library stubs = not installed for "paramiko" (or incompatible with Python 3.10)framework/remote_session/remote/interactive_remote_session.py:9: error: L= ibrary stubs not installed for "paramiko.ssh_exception" (or incom= patible with Python 3.10)

We do this for pexpect:<= /div>
import pexpect =C2=A0# type: ignore
from pexpect import pxssh = =C2=A0# type: ignore

We should do the same for paramiko.

Ther= e are also some changes which may require coordination. I'm not sure ho= w we'd do that, the best may be to align that off-list.

More inl= ine.

On Thu, Jun 15, 2023 at 10:13=E2=80=AFPM <jspewock@iol.unh.edu> wrote:
>
> F= rom: Jeremy Spewock <jspewock@io= l.unh.edu>
>
> Adds a new test suite as well as configur= ation changes needed for running
> smoke tests that verify general en= vironment aspects of the system
> under test. If any of these tests f= ail, the DTS execution terminates
> as part of a "fail-fast"= ; model.
>
> Signed-off-by: Jeremy Spewock <jspewock@iol.unh.edu>
> ---
> = =C2=A0dts/conf.yaml =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0= =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 | =C2=A013 ++
&= gt; =C2=A0dts/framework/config/__init__.py =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0| 114 +++++++++++++--
> =C2=A0dts/framework/config/c= onf_yaml_schema.json =C2=A0 =C2=A0| 135 +++++++++++++++++-
> =C2=A0dt= s/framework/dts.py =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0| 101 ++++++++++---
> =C2=A0dts/fra= mework/exception.py =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0= =C2=A0 =C2=A0| =C2=A012 ++
> =C2=A0dts/framework/remote_session/__in= it__.py =C2=A0 =C2=A0 =C2=A0| =C2=A010 +-
> =C2=A0dts/framework/remot= e_session/os_session.py =C2=A0 =C2=A0| =C2=A034 ++++-
> =C2=A0dts/fra= mework/remote_session/posix_session.py | =C2=A030 ++++
> =C2=A0.../re= mote_session/remote/__init__.py =C2=A0 =C2=A0 =C2=A0 =C2=A0 | =C2=A012 ++> =C2=A0.../remote/interactive_remote_session.py =C2=A0 =C2=A0 =C2=A0|= 113 +++++++++++++++
> =C2=A0.../remote/interactive_shell.py =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 | =C2=A098 +++++++++++++
> = =C2=A0.../remote_session/remote/testpmd_shell.py =C2=A0 =C2=A0| =C2=A058 ++= ++++++
> =C2=A0dts/framework/test_result.py =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0| =C2=A038 ++++-
> =C2=A0dts/fr= amework/test_suite.py =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 | =C2=A031 +++-
> =C2=A0dts/framework/testbed_model/node.p= y =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 | =C2=A0 2 +
> =C2=A0dts/framewo= rk/testbed_model/sut_node.py =C2=A0 =C2=A0 =C2=A0 | 110 +++++++++++++-
&= gt; =C2=A0dts/tests/TestSuite_smoke_tests.py =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0| 101 +++++++++++++
> =C2=A017 files changed, 962 insert= ions(+), 50 deletions(-)
> =C2=A0create mode 100644 dts/framework/rem= ote_session/remote/interactive_remote_session.py
> =C2=A0create mode = 100644 dts/framework/remote_session/remote/interactive_shell.py
> =C2= =A0create mode 100644 dts/framework/remote_session/remote/testpmd_shell.py<= br>> =C2=A0create mode 100644 dts/tests/TestSuite_smoke_tests.py
>=
> diff --git a/dts/conf.yaml b/dts/conf.yaml
> index a9bd8a3e.= .03fd57e1 100644
> --- a/dts/conf.yaml
> +++ b/dts/conf.yaml> @@ -10,6 +10,8 @@ executions:
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0compiler_wrapper: ccache
> =C2=A0 =C2=A0 =C2=A0perf: false
>= =C2=A0 =C2=A0 =C2=A0func: true
> + =C2=A0 =C2=A0vdevs: #names of vir= tual devices to be used for testing

Vdevs are optional, let's me= ntion=C2=A0that in the comment as with the hugepages config.
=C2=A0
&= gt;
> + =C2=A0 =C2=A0 =C2=A0- "crypto_openssl"

How (= or where) is this configured in the original DTS? The vdevs are only going = to be used on SUTs, so it may make sense to put them to node config. On the= other hand, each execution using the SUT may use different vdevs, so this = is probably the best place. Maybe we could move it under system_under_test = in execution? That would make more sense I think - it would be an addition = to the existing SUT configuration (as in here's where we augment SUT co= nfig specific to the execution).
=C2=A0
>
> =C2=A0 =C2=A0 = =C2=A0test_suites:
> =C2=A0 =C2=A0 =C2=A0 =C2=A0- hello_world
>= =C2=A0 =C2=A0 =C2=A0system_under_test: "SUT 1"
> @@ -20,6 = +22,17 @@ nodes:
> =C2=A0 =C2=A0 =C2=A0arch: x86_64
> =C2=A0 = =C2=A0 =C2=A0os: linux
> =C2=A0 =C2=A0 =C2=A0lcores: ""
= > + =C2=A0 =C2=A0ports:
> + =C2=A0 =C2=A0 =C2=A0- pci: "0000:= 00:08.0"
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0dpdk_os_driver: vfio-pci=

We could find a better name for this driver (and maybe even the reg= ular os driver). It's the OS driver that dpdk uses and there could be b= etter names - os_dpdk_driver, os_driver_dpdk, os_driver_for_dpdk. Which one= do you like the most? Or maybe some other name?
In either case we proba= bly want to add a comment to both drivers that explain what we mean here. T= he name doesn't need to be perfect then. :-) I'll make sure to alig= n the change with my patch if we change this.
=C2=A0
>
> + = =C2=A0 =C2=A0 =C2=A0 =C2=A0os_driver: i40e
> + =C2=A0 =C2=A0 =C2=A0 = =C2=A0peer_node: "TG 1"
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0peer= _pci: "0000:00:08.0"
> + =C2=A0 =C2=A0 =C2=A0- pci: "0= 000:00:08.1"
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0dpdk_os_driver: vfio= -pci
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0os_driver: i40e
> + =C2=A0 = =C2=A0 =C2=A0 =C2=A0peer_node: "TG 1"
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0peer_pci: "0000:00:08.1"
> =C2=A0 =C2=A0 =C2=A0us= e_first_core: false
> =C2=A0 =C2=A0 =C2=A0memory_channels: 4
> = =C2=A0 =C2=A0 =C2=A0hugepages: =C2=A0# optional; if removed, will use syste= m hugepage configuration
> diff --git a/dts/framework/config/__init__= .py b/dts/framework/config/__init__.py
> index ebb0823f..74e5158d 100= 644
> --- a/dts/framework/config/__init__.py
> +++ b/dts/framew= ork/config/__init__.py
> @@ -72,6 +72,21 @@ class HugepageConfigurati= on:
> =C2=A0 =C2=A0 =C2=A0force_first_numa: bool
>
>
&= gt; +@dataclass(slots=3DTrue, frozen=3DTrue)
> +class PortConfig:
= > + =C2=A0 =C2=A0id: int

I removed id in my latest local version = as I didn't see a real use for it.
=C2=A0
>
> + =C2=A0 = =C2=A0node: str
> + =C2=A0 =C2=A0pci: str
> + =C2=A0 =C2=A0dpdk= _os_driver: str
> + =C2=A0 =C2=A0os_driver: str
> + =C2=A0 =C2= =A0peer_node: str
> + =C2=A0 =C2=A0peer_pci: str
> +
> + = =C2=A0 =C2=A0@staticmethod
> + =C2=A0 =C2=A0def from_dict(id: int, no= de: str, d: dict) -> "PortConfig":
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0return PortConfig(id=3Did, node=3Dnode, **d)

=
This would need to be changed if we remove the id.

> = +
> +
> =C2=A0@dataclass(slots=3DTrue, frozen=3DTrue)
> = =C2=A0class NodeConfiguration:
> =C2=A0 =C2=A0 =C2=A0name: str
>= ; @@ -84,6 +99,7 @@ class NodeConfiguration:
> =C2=A0 =C2=A0 =C2=A0us= e_first_core: bool
> =C2=A0 =C2=A0 =C2=A0memory_channels: int
>= =C2=A0 =C2=A0 =C2=A0hugepages: HugepageConfiguration | None
> + =C2= =A0 =C2=A0ports: list[PortConfig]
>
> =C2=A0 =C2=A0 =C2=A0@stat= icmethod
> =C2=A0 =C2=A0 =C2=A0def from_dict(d: dict) -> "Nod= eConfiguration":
> @@ -92,18 +108,46 @@ def from_dict(d: dict) -= > "NodeConfiguration":
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0if "force_first_numa" not in hugepage_config:> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0hugepag= e_config["force_first_numa"] =3D False
> =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0hugepage_config =3D HugepageConfiguration(**= hugepage_config)
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0common_config =3D {> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"name": d["= ;name"],
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"host= name": d["hostname"],
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0"user": d["user"],
> + =C2=A0 =C2=A0= =C2=A0 =C2=A0 =C2=A0 =C2=A0"password": d.get("password"= ;),
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"arch": Ar= chitecture(d["arch"]),
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0"os": OS(d["os"]),
> + =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0"lcores": d.get("lcores", &q= uot;1"),
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"use_= first_core": d.get("use_first_core", False),
> + =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"memory_channels": d.get(&q= uot;memory_channels", 1),
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0= =C2=A0"hugepages": hugepage_config,
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0"ports": [
> + =C2=A0 =C2=A0 =C2=A0= =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0PortConfig.from_dict(i, d["name&quo= t;], port)
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0for i, port in enumerate(d["ports"])
> + =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0],

This also needs to be modified when re= moving PortConfig.id:
"ports": [PortConfig.from_dict(d["n= ame"], port) for port in d["ports"]],
=C2=A0
>
&= gt; + =C2=A0 =C2=A0 =C2=A0 =C2=A0}
> +
> + =C2=A0 =C2=A0 =C2=A0= =C2=A0return NodeConfiguration(**common_config)
> +
> +
>= ; +@dataclass(slots=3DTrue)
> +class NodeVersionInfo:

This loo= ks like we could use it for all sorts of things in the future, so I'd j= ust call it NodeInfo.
=C2=A0
>
> + =C2=A0 =C2=A0""= "Class to hold important versions within the node.
> +
> += =C2=A0 =C2=A0This class, unlike the NodeConfiguration class, cannot be gen= erated at the start.
> + =C2=A0 =C2=A0This is because we need to init= ialize a connection with the node before we can
> + =C2=A0 =C2=A0coll= et the information needed in this class. Therefore, it cannot be a part of<= div>
Typo: collect

> + =C2=A0 =C2=A0the conf= iguration class above.
> + =C2=A0 =C2=A0"""
>> - =C2=A0 =C2=A0 =C2=A0 =C2=A0return NodeConfiguration(
> - =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0name=3Dd["name"],
> -= =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0hostname=3Dd["hostname"= ],
> - =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0user=3Dd["user&q= uot;],
> - =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0password=3Dd.get(= "password"),
> - =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0a= rch=3DArchitecture(d["arch"]),
> - =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0os=3DOS(d["os"]),
> - =C2=A0 =C2=A0 =C2=A0= =C2=A0 =C2=A0 =C2=A0lcores=3Dd.get("lcores", "1"),
= > - =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0use_first_core=3Dd.get(&quo= t;use_first_core", False),
> - =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0memory_channels=3Dd.get("memory_channels", 1),
> = - =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0hugepages=3Dhugepage_config,
= > + =C2=A0 =C2=A0os_name: str
> + =C2=A0 =C2=A0os_version: str
= > + =C2=A0 =C2=A0kernel_version: str
> +
> + =C2=A0 =C2=A0@s= taticmethod
> + =C2=A0 =C2=A0def from_dict(d: dict):
> + =C2=A0= =C2=A0 =C2=A0 =C2=A0return NodeVersionInfo(
> + =C2=A0 =C2=A0 =C2=A0= =C2=A0 =C2=A0 =C2=A0os_name=3Dd["os_name"],
> + =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0os_version=3Dd["os_version"],
&= gt; + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0kernel_version=3Dd["ker= nel_version"],
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0)
>
= >
> @@ -128,6 +172,24 @@ def from_dict(d: dict) -> "BuildT= argetConfiguration":
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0)
&g= t;
>
> +@dataclass(slots=3DTrue)
> +class BuildTargetVers= ionInfo:
> + =C2=A0 =C2=A0"""Class to hold important v= ersions within the build target.
> +
> + =C2=A0 =C2=A0This is v= ery similar to the NodeVersionInfo class, it just instead holds information=
> + =C2=A0 =C2=A0for the build target.
> + =C2=A0 =C2=A0"= ""
> +
> + =C2=A0 =C2=A0dpdk_version: str
> + = =C2=A0 =C2=A0compiler_version: str
> +
> + =C2=A0 =C2=A0@static= method
> + =C2=A0 =C2=A0def from_dict(d: dict):
> + =C2=A0 =C2= =A0 =C2=A0 =C2=A0return BuildTargetVersionInfo(
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0dpdk_version=3Dd["dpdk_version"], compile= r_version=3Dd["compiler_version"]
> + =C2=A0 =C2=A0 =C2=A0 = =C2=A0)
> +

Do you expect we'll be storing more fields tha= n just these two (if so, then we can again rename the class to BuildTargetI= nfo)? If it's just these two, the extra class may not help that much (w= e could pass around a tuple).

On the other=C2=A0ha= nd, there's not really a reason to not have this class. I could see us = adding to it and I like BuildTargetInfo because of that.
=C2=A0
><= br>> +
> =C2=A0class TestSuiteConfigDict(TypedDict):
> =C2= =A0 =C2=A0 =C2=A0suite: str
> =C2=A0 =C2=A0 =C2=A0cases: list[str]> @@ -157,6 +219,8 @@ class ExecutionConfiguration:
> =C2=A0 =C2= =A0 =C2=A0func: bool
> =C2=A0 =C2=A0 =C2=A0test_suites: list[TestSuit= eConfig]
> =C2=A0 =C2=A0 =C2=A0system_under_test: NodeConfiguration> + =C2=A0 =C2=A0vdevs: list[str]
> + =C2=A0 =C2=A0skip_smoke_te= sts: bool

This should be added to conf.yaml to demonstrate the avail= ability of the option.
=C2=A0
>
>
> =C2=A0 =C2=A0 =C2= =A0@staticmethod
> =C2=A0 =C2=A0 =C2=A0def from_dict(d: dict, node_ma= p: dict) -> "ExecutionConfiguration":
> @@ -167,14 +231,= 18 @@ def from_dict(d: dict, node_map: dict) -> "ExecutionConfigura= tion":
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0map(Tes= tSuiteConfig.from_dict, d["test_suites"])
> =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0)
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0sut_name = =3D d["system_under_test"]
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0l= ist_of_vdevs =3D d["vdevs"]
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0= skip_smoke_tests =3D d["skip_smoke_tests"] if "skip_smoke_te= sts" in d else False

Just as with hugepages, we could skip the = else branch if we set the default of skip_smoke_tests to False. I'm not= sure which is the better approach.
=C2=A0
>
> =C2=A0 =C2=A0= =C2=A0 =C2=A0 =C2=A0assert sut_name in node_map, f"Unknown SUT {sut_n= ame} in execution {d}"
>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0return ExecutionConfiguration(
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0build_targets=3Dbuild_targets,
> =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0perf=3Dd["perf"],
> =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0func=3Dd["func"],
>= ; + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0skip_smoke_tests=3Dskip_smoke_= tests,
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0test_suites= =3Dtest_suites,
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0sys= tem_under_test=3Dnode_map[sut_name],
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0vdevs=3Dlist_of_vdevs,
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0)
>
>
> @@ -221,3 +289,27 @@ def load_config() -> C= onfiguration:
>
>
> =C2=A0CONFIGURATION =3D load_config()=
> +
> +
> +@unique
> +class InteractiveApp(Enum):<= br>> + =C2=A0 =C2=A0"""An enum that represents different = supported interactive applications
> +
> + =C2=A0 =C2=A0The val= ues in this enum must all be set to objects that have a key called
> = + =C2=A0 =C2=A0"default_path" where "default_path" repr= esents an array for the path to the
> + =C2=A0 =C2=A0application. Thi= s default path will be passed into the handler class for the
> + =C2= =A0 =C2=A0application so that it can start the application. For every key o= ther than
> + =C2=A0 =C2=A0the default shell option, the path will be= appended to the path to the DPDK
> + =C2=A0 =C2=A0build directory fo= r the current SUT node.
> + =C2=A0 =C2=A0"""
> +=
> + =C2=A0 =C2=A0shell =3D {"default_path": [""]= }
> + =C2=A0 =C2=A0testpmd =3D {"default_path": ["app&= quot;, "dpdk-testpmd"]}

The stored path = could be a PurePath object. It'll be instantiated as an OS-specific pat= h, but main_session.join_remote_path will convert it properly.
> +
> + =C2=A0 =C2=A0def get_path(self):
> + = =C2=A0 =C2=A0 =C2=A0 =C2=A0"""A method for getting the defau= lt paths of an application
> +
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0R= eturns:
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0String array tha= t represents an OS agnostic path to the application.
> + =C2=A0 =C2= =A0 =C2=A0 =C2=A0"""
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0re= turn self.value["default_path"]
> diff --git a/dts/framewor= k/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json=
> index ca2d4a1e..3f7c301a 100644
> --- a/dts/framework/config= /conf_yaml_schema.json
> +++ b/dts/framework/config/conf_yaml_schema.= json
> @@ -6,6 +6,76 @@
> =C2=A0 =C2=A0 =C2=A0 =C2=A0"type= ": "string",
> =C2=A0 =C2=A0 =C2=A0 =C2=A0"descri= ption": "A unique identifier for a node"
> =C2=A0 =C2= =A0 =C2=A0},
> + =C2=A0 =C2=A0"NIC": {
> + =C2=A0 =C2= =A0 =C2=A0"type": "string",
> + =C2=A0 =C2=A0 =C2= =A0"enum": [
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"ALL"= ,
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"ConnectX3_MT4103",
>= ; + =C2=A0 =C2=A0 =C2=A0 =C2=A0"ConnectX4_LX_MT4117",
> + = =C2=A0 =C2=A0 =C2=A0 =C2=A0"ConnectX4_MT4115",
> + =C2=A0 = =C2=A0 =C2=A0 =C2=A0"ConnectX5_MT4119",
> + =C2=A0 =C2=A0 = =C2=A0 =C2=A0"ConnectX5_MT4121",
> + =C2=A0 =C2=A0 =C2=A0 = =C2=A0"I40E_10G-10G_BASE_T_BC",
> + =C2=A0 =C2=A0 =C2=A0 = =C2=A0"I40E_10G-10G_BASE_T_X722",
> + =C2=A0 =C2=A0 =C2=A0 = =C2=A0"I40E_10G-SFP_X722",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0&= quot;I40E_10G-SFP_XL710",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"I= 40E_10G-X722_A0",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"I40E_1G-1= G_BASE_T_X722",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"I40E_25G-25= G_SFP28",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"I40E_40G-QSFP_A&q= uot;,
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"I40E_40G-QSFP_B",
= > + =C2=A0 =C2=A0 =C2=A0 =C2=A0"IAVF-ADAPTIVE_VF",
> + = =C2=A0 =C2=A0 =C2=A0 =C2=A0"IAVF-VF",
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0"IAVF_10G-X722_VF",
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0"ICE_100G-E810C_QSFP",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0&q= uot;ICE_25G-E810C_SFP",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"ICE= _25G-E810_XXV_SFP",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"IGB-I35= 0_VF",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"IGB_1G-82540EM"= ,
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"IGB_1G-82545EM_COPPER",> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"IGB_1G-82571EB_COPPER",
&= gt; + =C2=A0 =C2=A0 =C2=A0 =C2=A0"IGB_1G-82574L",
> + =C2= =A0 =C2=A0 =C2=A0 =C2=A0"IGB_1G-82576",
> + =C2=A0 =C2=A0 = =C2=A0 =C2=A0"IGB_1G-82576_QUAD_COPPER",
> + =C2=A0 =C2=A0 = =C2=A0 =C2=A0"IGB_1G-82576_QUAD_COPPER_ET2",
> + =C2=A0 =C2= =A0 =C2=A0 =C2=A0"IGB_1G-82580_COPPER",
> + =C2=A0 =C2=A0 = =C2=A0 =C2=A0"IGB_1G-I210_COPPER",
> + =C2=A0 =C2=A0 =C2=A0= =C2=A0"IGB_1G-I350_COPPER",
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0"IGB_1G-I354_SGMII",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0&quo= t;IGB_1G-PCH_LPTLP_I218_LM",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0&quo= t;IGB_1G-PCH_LPTLP_I218_V",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"= ;IGB_1G-PCH_LPT_I217_LM",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"I= GB_1G-PCH_LPT_I217_V",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"IGB_= 2.5G-I354_BACKPLANE_2_5GBPS",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0&qu= ot;IGC-I225_LM",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"IGC-I226_L= M",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"IXGBE_10G-82599_SFP&quo= t;,
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"IXGBE_10G-82599_SFP_SF_QP&qu= ot;,
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"IXGBE_10G-82599_T3_LOM"= ;,
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"IXGBE_10G-82599_VF",
= > + =C2=A0 =C2=A0 =C2=A0 =C2=A0"IXGBE_10G-X540T",
> + = =C2=A0 =C2=A0 =C2=A0 =C2=A0"IXGBE_10G-X540_VF",
> + =C2=A0 = =C2=A0 =C2=A0 =C2=A0"IXGBE_10G-X550EM_A_SFP",
> + =C2=A0 = =C2=A0 =C2=A0 =C2=A0"IXGBE_10G-X550EM_X_10G_T",
> + =C2=A0 = =C2=A0 =C2=A0 =C2=A0"IXGBE_10G-X550EM_X_SFP",
> + =C2=A0 = =C2=A0 =C2=A0 =C2=A0"IXGBE_10G-X550EM_X_VF",
> + =C2=A0 =C2= =A0 =C2=A0 =C2=A0"IXGBE_10G-X550T",
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0"IXGBE_10G-X550_VF",
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0"brcm_57414",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"brcm_= P2100G",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"cavium_0011",=
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"cavium_a034",
> + = =C2=A0 =C2=A0 =C2=A0 =C2=A0"cavium_a063",
> + =C2=A0 =C2=A0= =C2=A0 =C2=A0"cavium_a064",
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0"fastlinq_ql41000",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"= ;fastlinq_ql41000_vf",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"fast= linq_ql45000",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"fastlinq_ql4= 5000_vf",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"hi1822",
= > + =C2=A0 =C2=A0 =C2=A0 =C2=A0"virtio"
> + =C2=A0 =C2= =A0 =C2=A0]
> + =C2=A0 =C2=A0},
> +

All these NICs may b= e overkill, do we want to trim them?
=C2=A0
>
> =C2=A0 =C2= =A0 =C2=A0"ARCH": {
> =C2=A0 =C2=A0 =C2=A0 =C2=A0"type= ": "string",
> =C2=A0 =C2=A0 =C2=A0 =C2=A0"enum&q= uot;: [
> @@ -94,10 +164,24 @@
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0"amount"
> =C2=A0 =C2=A0 =C2=A0 =C2=A0]
> =C2= =A0 =C2=A0 =C2=A0},
> + =C2=A0 =C2=A0"pci_address": {
&g= t; + =C2=A0 =C2=A0 =C2=A0"type": "string",
> + = =C2=A0 =C2=A0 =C2=A0"pattern": "^[\\da-fA-F]{4}:[\\da-fA-F]{= 2}:[\\da-fA-F]{2}.\\d:?\\w*$"
> + =C2=A0 =C2=A0},
> + =C2= =A0 =C2=A0"port_peer_address": {
> + =C2=A0 =C2=A0 =C2=A0&q= uot;description": "Peer is a TRex port, and IXIA port or a PCI ad= dress",
> + =C2=A0 =C2=A0 =C2=A0"oneOf": [
> + = =C2=A0 =C2=A0 =C2=A0 =C2=A0{
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&q= uot;description": "PCI peer port",
> + =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0"$ref": "#/definitions/pci_address"=
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0}
> + =C2=A0 =C2=A0 =C2=A0]
= > + =C2=A0 =C2=A0},
> =C2=A0 =C2=A0 =C2=A0"test_suite": = {
> =C2=A0 =C2=A0 =C2=A0 =C2=A0"type": "string",<= br>> =C2=A0 =C2=A0 =C2=A0 =C2=A0"enum": [
> - =C2=A0 =C2= =A0 =C2=A0 =C2=A0"hello_world"
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0"hello_world",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"smok= e_tests"

Is there a reason to provide both th= is and the execution.skip_smoke_tests option? If we provide both, we may ne= ed to think about what happens when both are present. I'd say it's = best for everyone (users and devs) to have only one option and I'd lean= towards execution.skip_smoke_tests.

> =C2=A0 =C2=A0 =C2= =A0 =C2=A0]
> =C2=A0 =C2=A0 =C2=A0},
> =C2=A0 =C2=A0 =C2=A0&quo= t;test_target": {
> @@ -165,6 +249,44 @@
> =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0},
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0"hugepages": {
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0"$ref": "#/definitions/hugepages"
> = + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0},
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0"ports": {
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0= =C2=A0"type": "array",
> + =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0"items": {
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0"type": "object",
> + =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"description": "= ;Each port should be described on both sides of the connection. This makes = configuration slightly more verbose but greatly simplifies implementation. = If there are an inconsistencies, then DTS will not run until that issue is = fixed. An example inconsistency would be port 1, node 1 says it is connecte= d to port 1, node 2, but port 1, node 2 says it is connected to port 2, nod= e 1.",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"= ;properties": {
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0"pci": {
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"$ref": "#/definitions/pci_ad= dress",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0"description": "The local PCI address of the port&= quot;
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0},> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"dpdk_= os_driver": {
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0"type": "string",
> + =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"description"= : "The driver that the kernel should bind this device to for DPDK to u= se it. (ex: vfio-pci)"
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0},
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0"os_driver": {
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0= =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"type": "string",> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"= ;description": "The driver normally used by this port (ex: i40e)&= quot;
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0},> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"peer_= node": {
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0"type": "string",
> + =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"description": &q= uot;The name of the node the peer port is on"
> + =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0},
> + =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"peer_pci": {
> + =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"$ref": &q= uot;#/definitions/pci_address",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"description": "The PCI ad= dress of the peer port"
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0}
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0},
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"ad= ditionalProperties": false,
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0"required": [
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"pci",
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"dpdk_os_driver",
> += =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"os_driver&quo= t;,
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"= peer_node",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0= =C2=A0"peer_pci"
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0]
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0},
>= ; + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"minimum": 1
>= =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0}
> =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0},
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"additionalP= roperties": false,
> @@ -211,6 +333,17 @@
> =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0]
> =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0}
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0= },
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"vdevs": {
>= + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"description": "= Names of vdevs to be used in execution",
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0"type": "array",
> + =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"items": {
> + =C2=A0= =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"type":"string&quo= t;

Missing space before "string".

> + =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0}
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0},
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"skip_smoke_tests&qu= ot;: {
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"description= ": "Optional field that allows you to skip smoke testing",> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"type":"bo= olean"

Also the missing space.

> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0},
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"= system_under_test": {
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0"$ref": "#/definitions/node_name"
> =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0}
> diff --git a/dts/framework/= dts.py b/dts/framework/dts.py
> index 05022845..f76c9ceb 100644
&g= t; --- a/dts/framework/dts.py
> +++ b/dts/framework/dts.py
> @@= -5,7 +5,13 @@
>
> =C2=A0import sys
>
> -from .conf= ig import CONFIGURATION, BuildTargetConfiguration, ExecutionConfiguration> +from .config import (
> + =C2=A0 =C2=A0CONFIGURATION,
>= + =C2=A0 =C2=A0BuildTargetConfiguration,
> + =C2=A0 =C2=A0ExecutionC= onfiguration,
> + =C2=A0 =C2=A0TestSuiteConfig,
> +)
> +f= rom .exception import BlockingTestSuiteError
> =C2=A0from .logger imp= ort DTSLOG, getLogger
> =C2=A0from .test_result import BuildTargetRes= ult, DTSResult, ExecutionResult, Result
> =C2=A0from .test_suite impo= rt get_test_suites
> @@ -82,7 +88,9 @@ def _run_execution(
> = =C2=A0 =C2=A0 =C2=A0running all build targets in the given execution.
&g= t; =C2=A0 =C2=A0 =C2=A0"""
> =C2=A0 =C2=A0 =C2=A0dts_logger.info(f"Running execution = with SUT '{executio= n.system_under_test.name}'.")
> - =C2=A0 =C2=A0execution= _result =3D result.add_execution(sut_node.config)
> + =C2=A0 =C2=A0ex= ecution_result =3D result.add_execution(
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0sut_node.config, sut_node.get_node_versions()
> + =C2=A0 =C2=A0)>
> =C2=A0 =C2=A0 =C2=A0try:
> =C2=A0 =C2=A0 =C2=A0 =C2=A0= =C2=A0sut_node.set_up_execution(execution)
> @@ -118,14 +126,17 @@ d= ef _run_build_target(
>
> =C2=A0 =C2=A0 =C2=A0try:
> =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0sut_node.set_up_build_target(build_target)> - =C2=A0 =C2=A0 =C2=A0 =C2=A0result.dpdk_version =3D sut_node.dpdk_v= ersion
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0# result.dpdk_version =3D sut_n= ode.dpdk_version
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0build_target_result.a= dd_build_target_versions(
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0sut_node.get_build_target_versions()
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0)
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0build_target_result.update_s= etup(Result.PASS)
> =C2=A0 =C2=A0 =C2=A0except Exception as e:
>= ; =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0dts_logger.exception("Build target= setup failed.")
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0build_targe= t_result.update_setup(Result.FAIL, e)
>
> =C2=A0 =C2=A0 =C2=A0e= lse:
> - =C2=A0 =C2=A0 =C2=A0 =C2=A0_run_suites(sut_node, execution, = build_target_result)
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0_run_all_suites(s= ut_node, execution, build_target_result)
>
> =C2=A0 =C2=A0 =C2= =A0finally:
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0try:
> @@ -136,= 7 +147,46 @@ def _run_build_target(
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0build_target_result.update_teardown(Result.FAIL, e)
>= ;
>
> -def _run_suites(
> +def _run_single_suite(
>= + =C2=A0 =C2=A0sut_node: SutNode,
> + =C2=A0 =C2=A0execution: Execut= ionConfiguration,
> + =C2=A0 =C2=A0build_target_result: BuildTargetRe= sult,
> + =C2=A0 =C2=A0test_suite_config: TestSuiteConfig,
> +)= -> None:
> + =C2=A0 =C2=A0"""Runs a single test su= ite.
> +
> + =C2=A0 =C2=A0Args:
> + =C2=A0 =C2=A0 =C2=A0 = =C2=A0sut_node: Node to run tests on.
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0= execution: Execution the test case belongs to.
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0build_target_result: Build target configuration test case is run = on
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0test_suite_config: Test suite confi= guration
> +
> + =C2=A0 =C2=A0Raises:
> + =C2=A0 =C2=A0 = =C2=A0 =C2=A0BlockingTestSuiteError: If a test suite that was marked as blo= cking fails.
> + =C2=A0 =C2=A0"""
> + =C2=A0 =C2= =A0try:
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0full_suite_path =3D f"tes= ts.TestSuite_{test_suite_config.test_suite}"
> + =C2=A0 =C2=A0 = =C2=A0 =C2=A0test_suite_classes =3D get_test_suites(full_suite_path)
>= ; + =C2=A0 =C2=A0 =C2=A0 =C2=A0suites_str =3D ", ".join((x.__name= __ for x in test_suite_classes))
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0dts_l= ogger.debug(f"Found test suites '{suites_str}' in '{full_s= uite_path}'.")
> + =C2=A0 =C2=A0except Exception as e:
&g= t; + =C2=A0 =C2=A0 =C2=A0 =C2=A0dts_logger.exception("An error occurre= d when searching for test suites.")
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0result.update_setup(Result.ERROR, e)
> +
> + =C2=A0 =C2=A0el= se:
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0for test_suite_class in test_suite= _classes:
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0test_suite =3D= test_suite_class(
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0sut_node,
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0test_suite_config.test_cases,
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0execution.func,
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0build_target_result,
> + =C2=A0= =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0sut_node._build_target_con= fig,
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0resul= t,
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0)
> + =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0test_suite.run()
> +
> +
&= gt; +def _run_all_suites(
> =C2=A0 =C2=A0 =C2=A0sut_node: SutNode,> =C2=A0 =C2=A0 =C2=A0execution: ExecutionConfiguration,
> =C2=A0= =C2=A0 =C2=A0build_target_result: BuildTargetResult,
> @@ -146,27 +1= 96,32 @@ def _run_suites(
> =C2=A0 =C2=A0 =C2=A0with possibly only a = subset of test cases.
> =C2=A0 =C2=A0 =C2=A0If no subset is specified= , run all test cases.
> =C2=A0 =C2=A0 =C2=A0"""
>= ; + =C2=A0 =C2=A0end_build_target =3D False
> + =C2=A0 =C2=A0smoke_te= st_config =3D TestSuiteConfig.from_dict("smoke_tests")
> + = =C2=A0 =C2=A0if not execution.skip_smoke_tests:
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0try:
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0_run_sing= le_suite(
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0= sut_node, execution, build_target_result, smoke_test_config
> + =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0)
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0except BlockingTestSuiteError as e:
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0dts_logger.exception("Smoke tests failed, stopping bu= ild target execution.")
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0result.add_error(e)
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0return
> =C2=A0 =C2=A0 =C2=A0for test_suite_config in execution.te= st_suites:
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0try:
> - =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0full_suite_path =3D f"tests.TestSuit= e_{test_suite_config.test_suite}"
> - =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0test_suite_classes =3D get_test_suites(full_suite_path)> - =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0suites_str =3D ", &qu= ot;.join((x.__name__ for x in test_suite_classes))
> - =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0dts_logger.debug(
> - =C2=A0 =C2=A0 =C2=A0= =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0f"Found test suites '{suites_st= r}' in '{full_suite_path}'."
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0_run_single_suite(
> + =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0sut_node, execution, build_target_result,= test_suite_config
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0= )
> - =C2=A0 =C2=A0 =C2=A0 =C2=A0except Exception as e:
> - =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0dts_logger.exception("An error o= ccurred when searching for test suites.")
> - =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0result.update_setup(Result.ERROR, e)
> -
&= gt; - =C2=A0 =C2=A0 =C2=A0 =C2=A0else:
> - =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0for test_suite_class in test_suite_classes:
> - =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0test_suite =3D test_sui= te_class(
> - =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0= =C2=A0 =C2=A0sut_node,
> - =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0= =C2=A0 =C2=A0 =C2=A0 =C2=A0test_suite_config.test_cases,
> - =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0execution.fun= c,
> - =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0= =C2=A0build_target_result,
> - =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0)
> - =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0test_suite.run()
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0except B= lockingTestSuiteError as e:
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0dts_logger.exception(
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0f"An error occurred within {test_suite_config.test= _suite}. "
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0"Skipping build target..."
> + =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0)
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0re= sult.add_error(e)
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0end_bu= ild_target =3D True
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0# if a blocking te= st failed and we need to bail out of suite executions
> + =C2=A0 =C2= =A0 =C2=A0 =C2=A0if end_build_target:
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0= =C2=A0 =C2=A0break
>

This doesn't look great to me. I thi= nk we could just add a list[TestSuiteConfig] parameter to _run_suites. We c= ould then just pass a list with/without smoke TestSuiteConfig, something li= ke:
test_suites =3D execution.test_suites<= br>if not execution.skip_smoke_tests:
=C2=A0 =C2=A0 test_suites[:0] =3D = [TestSuiteConfig.from_dict("smoke_tests")]
_run_suites(sut_nod= e, execution, build_target_result, test_suites)

A note on method ordering:
_run_single_suite before= =C2=A0_run_all_suites was confusing to me, as the methods are used in the r= everse order -=C2=A0_run_build_target calls=C2=A0_run_all_suites which only= then calls=C2=A0_run_single_suite. I'd like the methods to be ordered = in the order they're used in the code.

>
> = =C2=A0def _exit_dts() -> None:
> diff --git a/dts/framework/except= ion.py b/dts/framework/exception.py
> index ca353d98..dfb12df4 100644=
> --- a/dts/framework/exception.py
> +++ b/dts/framework/excep= tion.py
> @@ -25,6 +25,7 @@ class ErrorSeverity(IntEnum):
> =C2= =A0 =C2=A0 =C2=A0SSH_ERR =3D 4
> =C2=A0 =C2=A0 =C2=A0DPDK_BUILD_ERR = =3D 10
> =C2=A0 =C2=A0 =C2=A0TESTCASE_VERIFY_ERR =3D 20
> + =C2= =A0 =C2=A0BLOCKING_TESTSUITE_ERR =3D 25
>
>
> =C2=A0class= DTSError(Exception):
> @@ -144,3 +145,14 @@ def __init__(self, value= : str):
>
> =C2=A0 =C2=A0 =C2=A0def __str__(self) -> str:> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0return repr(self.value)
> +> +
> +class BlockingTestSuiteError(DTSError):
> + =C2=A0 = =C2=A0suite_name: str
> + =C2=A0 =C2=A0severity: ClassVar[ErrorSeveri= ty] =3D ErrorSeverity.BLOCKING_TESTSUITE_ERR
> +
> + =C2=A0 =C2= =A0def __init__(self, suite_name: str) -> None:
> + =C2=A0 =C2=A0 = =C2=A0 =C2=A0self.suite_name =3D suite_name
> +
> + =C2=A0 =C2= =A0def __str__(self) -> str:
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0return= f"Blocking suite {self.suite_name} failed."
> diff --git a= /dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__= init__.py
> index ee221503..04342b8d 100644
> --- a/dts/framewo= rk/remote_session/__init__.py
> +++ b/dts/framework/remote_session/__= init__.py
> @@ -1,5 +1,6 @@
> =C2=A0# SPDX-License-Identifier: = BSD-3-Clause
> =C2=A0# Copyright(c) 2023 PANTHEON.tech s.r.o.
>= +# Copyright(c) 2022-2023 University of New Hampshire

=
Should the year be just 2023? I'm not sure about these copyright s= tatements.

>
> =C2=A0"""
> =C2= =A0The package provides modules for managing remote connections to a remote= host (node),
> @@ -17,7 +18,14 @@
>
> =C2=A0from .linux_= session import LinuxSession
> =C2=A0from .os_session import OSSession=
> -from .remote import CommandResult, RemoteSession, SSHSession
&= gt; +from .remote import (
> + =C2=A0 =C2=A0CommandResult,
> + = =C2=A0 =C2=A0InteractiveRemoteSession,
> + =C2=A0 =C2=A0InteractiveSh= ell,
> + =C2=A0 =C2=A0RemoteSession,
> + =C2=A0 =C2=A0SSHSessio= n,
> + =C2=A0 =C2=A0TestPmdShell,
> +)
>
>
> = =C2=A0def create_session(
> diff --git a/dts/framework/remote_session= /os_session.py b/dts/framework/remote_session/os_session.py
> index 4= c48ae25..f5f53923 100644
> --- a/dts/framework/remote_session/os_sess= ion.py
> +++ b/dts/framework/remote_session/os_session.py
> @@ = -12,7 +12,13 @@
> =C2=A0from framework.testbed_model import LogicalCo= re
> =C2=A0from framework.utils import EnvVarsDict, MesonArgs
>=
> -from .remote import CommandResult, RemoteSession, create_remote_s= ession
> +from .remote import (
> + =C2=A0 =C2=A0CommandResult,=
> + =C2=A0 =C2=A0InteractiveRemoteSession,
> + =C2=A0 =C2=A0Re= moteSession,
> + =C2=A0 =C2=A0create_interactive_session,
> + = =C2=A0 =C2=A0create_remote_session,
> +)
>
>
> =C2= =A0class OSSession(ABC):
> @@ -26,6 +32,7 @@ class OSSession(ABC):> =C2=A0 =C2=A0 =C2=A0name: str
> =C2=A0 =C2=A0 =C2=A0_logger: DT= SLOG
> =C2=A0 =C2=A0 =C2=A0remote_session: RemoteSession
> + = =C2=A0 =C2=A0interactive_session: InteractiveRemoteSession
>
> = =C2=A0 =C2=A0 =C2=A0def __init__(
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0self,
> @@ -37,6 +44,7 @@ def __init__(
> =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0self.name =3D name
>= ; =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0self._logger =3D logger
> =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0self.remote_session =3D create_remote_session(no= de_config, name, logger)
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0self.interact= ive_session =3D create_interactive_session(node_config, name, logger)
=

We may not want to create the interactive session at th= is point. This does create a connection to the node which we don't want= (it is extra time consumed) when creating an extra session on top of the m= ain_session (with Node.create_session). I think we could move this to OSSes= sion.create_interactive_shell. More below.

>
> =C2= =A0 =C2=A0 =C2=A0def close(self, force: bool =3D False) -> None:
>= =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"""
> @@ -173,3 +18= 1,27 @@ def setup_hugepages(self, hugepage_amount: int, force_first_numa: b= ool) -> None:
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0if needed and mo= unt the hugepages if needed.
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0If f= orce_first_numa is True, configure hugepages just on the first socket.
&= gt; =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"""
> +
> = + =C2=A0 =C2=A0@abstractmethod
> + =C2=A0 =C2=A0def get_compiler_vers= ion(self, compiler_name: str) -> str:
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0"""
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0Get installed ve= rsion of compiler used for DPDK
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"= ""
> +
> + =C2=A0 =C2=A0@abstractmethod
> + =C2= =A0 =C2=A0def get_os_name(self) -> str:
> + =C2=A0 =C2=A0 =C2=A0 = =C2=A0"""
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0Get name of O= S for the node
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"""
&= gt; +
> + =C2=A0 =C2=A0@abstractmethod
> + =C2=A0 =C2=A0def get= _os_version(self) -> str:
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"&qu= ot;"
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0Get version of OS for the no= de
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"""
> +
>= ; + =C2=A0 =C2=A0@abstractmethod
> + =C2=A0 =C2=A0def get_kernel_vers= ion(self) -> str:
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"""= ;
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0Get kernel version for the node
&= gt; + =C2=A0 =C2=A0 =C2=A0 =C2=A0"""

These could be merged into one method (returning NodeVersionInfo), is the= re a reason to separate them? We could get some performance improvements wi= th one method (e.g. getting /etc/os-release just once and then process loca= lly or we could use a script we execute remotely to have just one remote ca= ll, but that's something to think about in the future).

&= gt; diff --git a/dts/framework/remote_session/posix_session.py b/dts/framew= ork/remote_session/posix_session.py
> index d38062e8..dc6689f5 100644=
> --- a/dts/framework/remote_session/posix_session.py
> +++ b/= dts/framework/remote_session/posix_session.py
> @@ -219,3 +219,33 @@ = def _remove_dpdk_runtime_dirs(
>
> =C2=A0 =C2=A0 =C2=A0def get_= dpdk_file_prefix(self, dpdk_prefix) -> str:
> =C2=A0 =C2=A0 =C2=A0= =C2=A0 =C2=A0return ""
> +
> + =C2=A0 =C2=A0def get_= compiler_version(self, compiler_name: str) -> str:
> + =C2=A0 =C2= =A0 =C2=A0 =C2=A0match compiler_name:
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0= =C2=A0 =C2=A0case "gcc":
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0return self.send_command(f"{compiler_name} = --version", 60).stdout.split(
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"\n"
> + =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0)[0]
> + =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0case "clang":
> + =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0return self.send_command(f&quo= t;{compiler_name} --version", 60).stdout.split(
> + =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"\n"> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0)[0]
>= ; + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0case "msvc":
>= + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0return self.send_= command("cl", 60).stdout
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0case "icc":
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0return self.send_command(f"{compiler_name} = -V", 60).stdout
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0cas= e _:
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0raise= ValueError(f"Unknown compiler {compiler_name}")
> +
>= ; + =C2=A0 =C2=A0def get_os_name(self) -> str:
> + =C2=A0 =C2=A0 = =C2=A0 =C2=A0return self.send_command(
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0"awk -F=3D '$1=3D=3D\"NAME\" {print $2}= ' /etc/os-release", 60
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0).stdo= ut
> +
> + =C2=A0 =C2=A0def get_os_version(self) -> str:
= > + =C2=A0 =C2=A0 =C2=A0 =C2=A0return self.send_command(
> + =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"awk -F=3D '$1=3D=3D\"V= ERSION\" {print $2}' /etc/os-release", 60, True
> + =C2= =A0 =C2=A0 =C2=A0 =C2=A0).stdout
> +
> + =C2=A0 =C2=A0def get_k= ernel_version(self) -> str:
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0return = self.send_command("uname -r", 60).stdout

These 60 second timeouts seem high to me. All of these commands should exe= cute quickly so the default timeout seems fine to me.

> di= ff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/= remote_session/remote/__init__.py
> index 8a151221..ef7b4851 100644> --- a/dts/framework/remote_session/remote/__init__.py
> +++ b/= dts/framework/remote_session/remote/__init__.py
> @@ -1,16 +1,28 @@> =C2=A0# SPDX-License-Identifier: BSD-3-Clause
> =C2=A0# Copyri= ght(c) 2023 PANTHEON.tech s.r.o.
> +# Copyright(c) 2022-2023 Universi= ty of New Hampshire
>
> =C2=A0# pylama:ignore=3DW0611
>> +from paramiko import AutoAddPolicy, SSHClient
> +
> =C2= =A0from framework.config import NodeConfiguration
> =C2=A0from framew= ork.logger import DTSLOG
>
> +from .interactive_remote_session = import InteractiveRemoteSession
> +from .interactive_shell import Int= eractiveShell
> =C2=A0from .remote_session import CommandResult, Remo= teSession
> =C2=A0from .ssh_session import SSHSession
> +from .= testpmd_shell import TestPmdShell
>
>
> =C2=A0def create_= remote_session(
> =C2=A0 =C2=A0 =C2=A0node_config: NodeConfiguration,= name: str, logger: DTSLOG
> =C2=A0) -> RemoteSession:
> =C2= =A0 =C2=A0 =C2=A0return SSHSession(node_config, name, logger)
> +
= > +
> +def create_interactive_session(
> + =C2=A0 =C2=A0node= _config: NodeConfiguration, name: str, logger: DTSLOG
> +) -> Inte= ractiveRemoteSession:
> + =C2=A0 =C2=A0return InteractiveRemoteSessio= n(node_config, logger)
> diff --git a/dts/framework/remote_session/re= mote/interactive_remote_session.py b/dts/framework/remote_session/remote/in= teractive_remote_session.py
> new file mode 100644
> index 0000= 0000..4e88308a
> --- /dev/null
> +++ b/dts/framework/remote_ses= sion/remote/interactive_remote_session.py
> @@ -0,0 +1,113 @@
>= +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2023 Univ= ersity of New Hampshire
> +
> +import socket
> +import tr= aceback
> +from pathlib import PurePath
> +
> +from param= iko import AutoAddPolicy, SSHClient, Transport
> +from paramiko.ssh_e= xception import (
> + =C2=A0 =C2=A0AuthenticationException,
> += =C2=A0 =C2=A0BadHostKeyException,
> + =C2=A0 =C2=A0NoValidConnection= sError,
> + =C2=A0 =C2=A0SSHException,
> +)
> +
> += from framework.config import InteractiveApp, NodeConfiguration
> +fro= m framework.exception import SSHConnectionError
> +from framework.log= ger import DTSLOG
> +from framework.settings import SETTINGS
> = +
> +from .interactive_shell import InteractiveShell
> +from .t= estpmd_shell import TestPmdShell
> +
> +
> +class Interac= tiveRemoteSession:
> + =C2=A0 =C2=A0hostname: str
> + =C2=A0 = =C2=A0ip: str
> + =C2=A0 =C2=A0port: int
> + =C2=A0 =C2=A0usern= ame: str
> + =C2=A0 =C2=A0password: str
> + =C2=A0 =C2=A0_logge= r: DTSLOG
> + =C2=A0 =C2=A0_node_config: NodeConfiguration
> + = =C2=A0 =C2=A0session: SSHClient
> + =C2=A0 =C2=A0_transport: Transpor= t | None
> +
> + =C2=A0 =C2=A0def __init__(self, node_config: N= odeConfiguration, _logger: DTSLOG) -> None:
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0self._node_config =3D node_config
> + =C2=A0 =C2=A0 =C2=A0 = =C2=A0self._logger =3D _logger
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0self.ho= stname =3D node_config.hostname
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0self.u= sername =3D node_config.user
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0self.pass= word =3D node_config.password if node_config.password else ""
= > + =C2=A0 =C2=A0 =C2=A0 =C2=A0port =3D 22

fram= ework/remote_session/remote/interactive_remote_session.py:45: error: Incomp= atible types in assignment (expression has type "str", variable h= as type "int")

We can address this b= y making 22 a string (port =3D "22").

> + =C2=A0= =C2=A0 =C2=A0 =C2=A0self.ip =3D node_config.hostname
> + =C2=A0 =C2= =A0 =C2=A0 =C2=A0if ":" in node_config.hostname:
> + =C2=A0= =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0self.ip, port =3D node_config.hostname.s= plit(":")
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0self.port =3D int(= port)
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0self._logger.info(
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0f&= quot;Initializing interactive connection for {self.username}@{self.hostname= }"
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0)
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0self._connect()
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0self._logger.info(
> + =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0f"Interactive connection successful for {self.user= name}@{self.hostname}"
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0)
> = +
> + =C2=A0 =C2=A0def _connect(self) -> None:
> + =C2=A0 = =C2=A0 =C2=A0 =C2=A0client =3D SSHClient()
> + =C2=A0 =C2=A0 =C2=A0 = =C2=A0client.set_missing_host_key_policy(AutoAddPolicy)
> + =C2=A0 = =C2=A0 =C2=A0 =C2=A0self.session =3D client
> + =C2=A0 =C2=A0 =C2=A0 = =C2=A0retry_attempts =3D 10
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0for retry_= attempt in range(retry_attempts):
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0try:
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0client.connect(
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0self.ip,
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0username=3Dself.username,
> = + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0port= =3Dself.port,
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0password=3Dself.password,
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0timeout=3D20 if self.po= rt else 10,
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0)
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0except (TypeError, = BadHostKeyException, AuthenticationException) as e:
> + =C2=A0 =C2=A0= =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0self._logger.exception(e)
>= + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0raise SSHConnecti= onError(self.hostname) from e
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0except (NoValidConnectionsError, socket.error, SSHException) as e:> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0self._logger= .debug(traceback.format_exc())
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0= =C2=A0 =C2=A0 =C2=A0self._logger.warning(e)
> + =C2=A0 =C2=A0 =C2=A0= =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0self._log= ger.info(
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0"Retrying interactive session connection: "> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0f"retry number {retry_attempt +1}"
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0)
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0else:
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0break
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0else:
> + =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0raise SSHConnectionError(self.hostnam= e)
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0# Interactive sessions are used on = an "as needed" basis so we have
> + =C2=A0 =C2=A0 =C2=A0 = =C2=A0# to set a keepalive
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0self._trans= port =3D self.session.get_transport()
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0= if self._transport is not None:
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0self._transport.set_keepalive(30)
> +
> + =C2=A0 =C2= =A0def create_interactive_shell(
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0self,=
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0shell_type: InteractiveApp,
> += =C2=A0 =C2=A0 =C2=A0 =C2=A0path_to_app: PurePath,
> + =C2=A0 =C2=A0 = =C2=A0 =C2=A0timeout: float =3D SETTINGS.timeout,
> + =C2=A0 =C2=A0 = =C2=A0 =C2=A0eal_parameters: str =3D "",
> + =C2=A0 =C2=A0 = =C2=A0 =C2=A0app_parameters=3D"",
> + =C2=A0 =C2=A0) -> = InteractiveShell:
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"""> + =C2=A0 =C2=A0 =C2=A0 =C2=A0See "create_interactive_shell"= ; in SutNode
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"""
>= ; + =C2=A0 =C2=A0 =C2=A0 =C2=A0match (sh= ell_type.name):
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0case= "shell":

Comparing with the enum values= seems better.

> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0return InteractiveShell(
> + =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0self.session, self._logger,= path_to_app, timeout
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0)
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0case &quo= t;testpmd":
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0= =C2=A0return TestPmdShell(
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0self.session,
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0self._logger,
> += =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0path_= to_app,
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0timeout=3Dtimeout,
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0eal_flags=3Deal_parameters,
> += =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0cmd_l= ine_options=3Dapp_parameters,
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0)

framework/remote_session/rem= ote/interactive_remote_session.py:89: error: Missing return statement

This is probably because we don't have a case f= or the rest of the values.

> diff --git a/dts/framework/remote_session/remote/interactive_shell.py b/dts/framework/remote_session/remote/interactive_shell.py
> new file mode 100644
> index 00000000..d98503be
> --- /dev/null
> +++ b/dts/framework/remote_session/remote/interactive_shell.py
> @@ -0,0 +1,98 @@
> +from pathlib import PurePath
> +
> +from paramiko import Channel, SSHClient, channel
> +
> +from framework.logger import DTSLOG
> +from framework.settings import SETTINGS
> +
> +
> +class InteractiveShell:
> +
> +    _interactive_session: SSHClient
> +    _stdin: channel.ChannelStdinFile
> +    _stdout: channel.ChannelFile
> +    _ssh_channel: Channel
> +    _logger: DTSLOG
> +    _timeout: float
> +
> +    def __init__(
> +        self,
> +        interactive_session: SSHClient,
> +        logger: DTSLOG,
> +        path_to_app: PurePath | None = None,
> +        timeout: float = SETTINGS.timeout,
> +    ) -> None:
> +        self._interactive_session = interactive_session
> +        self._ssh_channel = self._interactive_session.invoke_shell()
> +        self._stdin = self._ssh_channel.makefile_stdin("w")
> +        self._stdout = self._ssh_channel.makefile("r")
> +        self._ssh_channel.settimeout(timeout)
> +        self._ssh_channel.set_combine_stderr(True)  # combines stdout and stderr streams
> +        self._logger = logger
> +        self._timeout = timeout
> +        if path_to_app:
> +            self.send_command_no_output(str(path_to_app))  # starts the application

This should be part of the derived class - the code to start an app is going to be app-specific. Since this is an interactive session, the assumption is that we're always going to start an app and then work with it, so starting the app in the constructor should be safe.
We actually have that for testpmd, so we just need to unify the above with what we have.

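Roughly what I have in mind (a sketch only; _path_to_app, _eal_flags and _cmd_line_options are made-up attribute names the constructors would have to store first):

class InteractiveShell:
    def _start_application(self) -> None:
        # default behaviour: just run the binary; called at the end of __init__
        self.send_command_no_output(str(self._path_to_app))


class TestPmdShell(InteractiveShell):
    def _start_application(self) -> None:
        # testpmd needs the EAL flags, the interactive flag and its prompt
        self.send_command_get_output(
            f"{self._path_to_app} {self._eal_flags} -- -i {self._cmd_line_options}",
            self.expected_prompt,
        )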
> +
> +    def empty_stdout_buffer(self) -> None:

Same comment on ordering as above.

> +        """Removes all data from the stdout buffer.
> +
> +        Because of the way paramiko handles read buffers, there is no way to effectively
> +        remove data from, or "flush", read buffers. This method essentially moves our
> +        offset on the buffer to the end and thus "removes" the data from the buffer.
> +        Timeouts are thrown on read operations of paramiko pipes based on whether data
> +        had been received before timeout so we assume that if we reach the timeout then
> +        we are at the end of the buffer.
> +        """
> +        self._ssh_channel.settimeout(1)
<= br>
Waiting a whole second seems like a lot. Judging from how the method is used in the code, we may not actually need it at all - that is, if we change how the app starts.

> +        try:
> +            for line in self._stdout:
> +                pass
> +        except TimeoutError:
> +            pass
> +        self._ssh_channel.settimeout(self._timeout)  # reset timeout
> +
> +    def send_command_no_output(self, command: str) -> None:
> +        """Send command to channel without recording output.
> +
> +        This method will not verify any input or output, it will simply assume the
> +        command succeeded. This method will also consume all output in the buffer
> +        after executing the command.
> +        """
> +        self._logger.info(
> +            f"Sending command {command.strip()} and not collecting output"
> +        )
> +        self._stdin.write(f"{command}\n")
> +        self._stdin.flush()
> +        self.empty_stdout_buffer()
> +
> +    def send_command_get_output(self, command: str, prompt: str) -> str:
> +        """Send a command and get all output before the expected ending string.
> +
> +        Lines that expect input are not included in the stdout buffer so they cannot be
> +        used for expect. For example, if you were prompted to log into something
> +        with a username and password, you cannot expect "username:" because it won't
> +        yet be in the stdout buffer. A work around for this could be consuming an
> +        extra newline character to force the current prompt into the stdout buffer.
> +
> +        Returns:
> +            All output in the buffer before expected string
> +        """
> +        self._logger.info(f"Sending command {command.strip()}...")
> +        self._stdin.write(f"{command}\n")
> +        self._stdin.flush()
> +        out: str = ""
> +        for line in self._stdout:
> +            out += line
> +            if prompt in line and not line.rstrip().endswith(
> +                command.rstrip()
> +            ):  # ignore line that sent command
> +                break
> +        self._logger.debug(f"Got output: {out}")
> +        return out
> +
> +    def close(self) -> None:
> +        self._stdin.close()
> +        self._ssh_channel.close()
> +
> +    def __del__(self) -> None:
> +        self.close()
> diff --git a/dts/framework/remote_session/remote/testpmd_shell.py b/dts/framework/remote_session/remote/testpmd_shell.py
> new file mode 100644
> index 00000000..873f3c82
> --- /dev/null
> +++ b/dts/framework/remote_session/remote/testpmd_shell.py
> @@ -0,0 +1,58 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2023 University of New Hampshire
> +
> +
> +from pathlib import PurePath
> +
> +from paramiko import SSHClient
> +
> +from framework.logger import DTSLOG
> +from framework.settings import SETTINGS
> +
> +from .interactive_shell import InteractiveShell
> +
> +
> +class TestPmdShell(InteractiveShell):
> +    expected_prompt: str = "testpmd>"
> +
> +    def __init__(
> +        self,
> +        interactive_session: SSHClient,
> +        logger: DTSLOG,
> +        path_to_testpmd: PurePath,
> +        eal_flags: str = "",
> +        cmd_line_options: str = "",
> +        timeout: float = SETTINGS.timeout,
> +    ) -> None:
> +        """Initializes an interactive testpmd session using specified parameters."""
> +        super(TestPmdShell, self).__init__(
> +            interactive_session, logger=logger, timeout=timeout
> +        )
> +        self.send_command_get_output(
> +            f"{path_to_testpmd} {eal_flags} -- -i {cmd_line_options}\n",
> +            self.expected_prompt,
> +        )
> +
> +    def send_command(self, command: str, prompt: str = expected_prompt) -> str:
> +        """Specific way of handling the command for testpmd
> +
> +        An extra newline character is consumed in order to force the current line into
> +        the stdout buffer.
> +        """
> +        return self.send_command_get_output(f"{command}\n", prompt)
> +
> +    def get_devices(self) -> list[str]:
> +        """Get a list of device names that are known to testpmd
> +
> +        Uses the device info listed in testpmd and then parses the output to
> +        return only the names of the devices.
> +
> +        Returns:
> +            A list of strings representing device names (e.g. 0000:14:00.1)
> +        """
> +        dev_info: str = self.send_command("show device info all")
> +        dev_list: list[str] = []
> +        for line in dev_info.split("\n"):
> +            if "device name:" in line.lower():
> +                dev_list.append(line.strip().split(": ")[1].strip())
> +        return dev_list

This could be an object (or a list of TestPmdDevice objects or something similar). We could start with just the name and add to it in the future.

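For example, something as simple as this to start with (the name and fields are just a suggestion):

from dataclasses import dataclass


@dataclass
class TestPmdDevice:
    pci_address: str
    # driver, numa socket etc. can be added as we need them

get_devices() would then return list[TestPmdDevice] instead of list[str].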
> diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
> index 74391982..029665fb 100644
> --- a/dts/framework/test_result.py
> +++ b/dts/framework/test_result.py
> @@ -1,5 +1,6 @@
>  # SPDX-License-Identifier: BSD-3-Clause
>  # Copyright(c) 2023 PANTHEON.tech s.r.o.
> +# Copyright(c) 2022-2023 University of New Hampshire
>
>  """
>  Generic result container and reporters
> @@ -8,14 +9,17 @@
>  import os.path
>  from collections.abc import MutableSequence
>  from enum import Enum, auto
> +from typing import Dict

No need, we can just use built-in dict (since Python 3.9).

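I.e. just:

    def __init__(self, output_info: dict[str, str] | None):

(and the same goes for the Dict import and annotation added in test_suite.py below).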
>
>  from .config import (
>      OS,
>      Architecture,
>      BuildTargetConfiguration,
> +    BuildTargetVersionInfo,
>      Compiler,
>      CPUType,
>      NodeConfiguration,
> +    NodeVersionInfo,
>  )
>  from .exception import DTSError, ErrorSeverity
>  from .logger import DTSLOG
> @@ -67,12 +71,14 @@ class Statistics(dict):
>      Using a dict provides a convenient way to format the data.
>      """
>
> -    def __init__(self, dpdk_version):
> +    def __init__(self, output_info: Dict[str, str] | None):
>          super(Statistics, self).__init__()
>          for result in Result:
>              self[result.name] = 0
>          self["PASS RATE"] = 0.0
> -        self["DPDK VERSION"] = dpdk_version
> +        if output_info:
> +            for info_key, info_val in output_info.items():
> +                self[info_key] = info_val
>
>      def __iadd__(self, other: Result) -> "Statistics":
>          """
> @@ -206,6 +212,8 @@ class BuildTargetResult(BaseResult):
>      os: OS
>      cpu: CPUType
>      compiler: Compiler
> +    compiler_version: str | None
> +    dpdk_version: str | None
>
>      def __init__(self, build_target: BuildTargetConfiguration):
>          super(BuildTargetResult, self).__init__()
> @@ -213,6 +221,12 @@ def __init__(self, build_target: BuildTargetConfiguration):
>          self.os = build_target.os
>          self.cpu = build_target.cpu
>          self.compiler = build_target.compiler
> +        self.compiler_version = None
> +        self.dpdk_version = None
> +
> +    def add_build_target_versions(self, versions: BuildTargetVersionInfo) -> None:
> +        self.compiler_version = versions.compiler_version
> +        self.dpdk_version = versions.dpdk_version
>
>      def add_test_suite(self, test_suite_name: str) -> TestSuiteResult:
>          test_suite_result = TestSuiteResult(test_suite_name)
> @@ -228,10 +242,17 @@ class ExecutionResult(BaseResult):
>      """
>
>      sut_node: NodeConfiguration
> +    sut_os_name: str
> +    sut_os_version: str
> +    sut_kernel_version: str
>
> -    def __init__(self, sut_node: NodeConfiguration):
> +    def __init__(self, sut_node: NodeConfiguration, sut_version_info: NodeVersionInfo):
>          super(ExecutionResult, self).__init__()
>          self.sut_node = sut_node
> +        self.sut_version_info = sut_version_info
> +        self.sut_os_name = sut_version_info.os_name
> +        self.sut_os_version = sut_version_info.os_version
> +        self.sut_kernel_version = sut_version_info.kernel_version
>
>      def add_build_target(
>          self, build_target: BuildTargetConfiguration
> @@ -258,6 +279,7 @@ class DTSResult(BaseResult):
>      """
>
>      dpdk_version: str | None
> +    output: dict | None
>      _logger: DTSLOG
>      _errors: list[Exception]
>      _return_code: ErrorSeverity
> @@ -267,14 +289,17 @@ class DTSResult(BaseResult):
>      def __init__(self, logger: DTSLOG):
>          super(DTSResult, self).__init__()
>          self.dpdk_version = None
> +        self.output = None
>          self._logger = logger
>          self._errors = []
>          self._return_code = ErrorSeverity.NO_ERR
>          self._stats_result = None
>          self._stats_filename = os.path.join(SETTINGS.output_dir, "statistics.txt")
>
> -    def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult:
> -        execution_result = ExecutionResult(sut_node)
> +    def add_execution(
> +        self, sut_node: NodeConfiguration, sut_version_info: NodeVersionInfo
> +    ) -> ExecutionResult:
> +        execution_result = ExecutionResult(sut_node, sut_version_info)
>          self._inner_results.append(execution_result)
>          return execution_result
>
> @@ -296,7 +321,8 @@ def process(self) -> None:
>              for error in self._errors:
>                  self._logger.debug(repr(error))
>
> -        self._stats_result = Statistics(self.dpdk_version)
> +        self._stats_result = Statistics(self.output)
> +        # add information gathered from the smoke tests to the statistics
>          self.add_stats(self._stats_result)
>          with open(self._stats_filename, "w+") as stats_file:
>              stats_file.write(str(self._stats_result))
> diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
> index 0705f38f..ad66014c 100644
> --- a/dts/framework/test_suite.py
> +++ b/dts/framework/test_suite.py
> @@ -10,11 +10,24 @@
>  import inspect
>  import re
>  from types import MethodType
> +from typing import Dict
>
> -from .exception import ConfigurationError, SSHTimeoutError, TestCaseVerifyError
> +from .config import BuildTargetConfiguration
> +from .exception import (
> +    BlockingTestSuiteError,
> +    ConfigurationError,
> +    SSHTimeoutError,
> +    TestCaseVerifyError,
> +)
>  from .logger import DTSLOG, getLogger
>  from .settings import SETTINGS
> -from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult
> +from .test_result import (
> +    BuildTargetResult,
> +    DTSResult,
> +    Result,
> +    TestCaseResult,
> +    TestSuiteResult,
> +)
>  from .testbed_model import SutNode
>
>
> @@ -37,10 +50,12 @@ class TestSuite(object):
>      """
>
>      sut_node: SutNode
> +    is_blocking = False
>      _logger: DTSLOG
>      _test_cases_to_run: list[str]
>      _func: bool
>      _result: TestSuiteResult
> +    _dts_result: DTSResult
>
>      def __init__(
>          self,
> @@ -48,6 +63,8 @@ def __init__(
>          test_cases: list[str],
>          func: bool,
>          build_target_result: BuildTargetResult,
> +        build_target_conf: BuildTargetConfiguration,
> +        dts_result: DTSResult,
>      ):
>          self.sut_node = sut_node
>          self._logger = getLogger(self.__class__.__name__)
> @@ -55,6 +72,8 @@ def __init__(
>          self._test_cases_to_run.extend(SETTINGS.test_cases)
>          self._func = func
>          self._result = build_target_result.add_test_suite(self.__class__.__name__)
> +        self.build_target_info = build_target_conf
> +        self._dts_result = dts_result
>
>      def set_up_suite(self) -> None:
>          """
> @@ -118,6 +137,8 @@ def run(self) -> None:
>                      f"the next test suite may be affected."
>                  )
>                  self._result.update_setup(Result.ERROR, e)
> +            if len(self._result.get_errors()) > 0 and self.is_blocking:
> +                raise BlockingTestSuiteError(test_suite_name)
>
>      def _execute_test_suite(self) -> None:
>          """
> @@ -232,6 +253,12 @@ def _execute_test_case(
>              test_case_result.update(Result.SKIP)
>              raise KeyboardInterrupt("Stop DTS")
>
> +    def write_to_statistics_file(self, output: Dict[str, str]):

This isn't used anywhere as far as I can tell.

> +        if self._dts_result.output is not None:
> +            self._dts_result.output.update(output)
> +        else:
> +            self._dts_result.output = output
> +

This could be simplified if we init output to an empty dict in the constructor.

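I.e. set self.output = {} in DTSResult.__init__ and then (if we keep this method at all) it could be just:

    def write_to_statistics_file(self, output: dict[str, str]) -> None:
        self._dts_result.output.update(output)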
>
>  def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
>      def is_test_suite(object) -> bool:
> diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
> index d48fafe6..c5147e0e 100644
> --- a/dts/framework/testbed_model/node.py
> +++ b/dts/framework/testbed_model/node.py
> @@ -40,6 +40,7 @@ class Node(object):
>      lcores: list[LogicalCore]
>      _logger: DTSLOG
>      _other_sessions: list[OSSession]
> +    _execution_config: ExecutionConfiguration
>
>      def __init__(self, node_config: NodeConfiguration):
>          self.config = node_config
> @@ -64,6 +65,7 @@ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
>          """
>          self._setup_hugepages()
>          self._set_up_execution(execution_config)
> +        self._execution_config = execution_config
>
>      def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
>          """
> diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
> index 2b2b50d9..27849da7 100644
> --- a/dts/framework/testbed_model/sut_node.py
> +++ b/dts/framework/testbed_model/sut_node.py
> @@ -1,14 +1,21 @@
>  # SPDX-License-Identifier: BSD-3-Clause
>  # Copyright(c) 2010-2014 Intel Corporation
>  # Copyright(c) 2023 PANTHEON.tech s.r.o.
> +# Copyright(c) 2022-2023 University of New Hampshire
>
>  import os
>  import tarfile
>  import time
>  from pathlib import PurePath
>
> -from framework.config import BuildTargetConfiguration, NodeConfiguration
> -from framework.remote_session import CommandResult, OSSession
> +from framework.config import (
> +    BuildTargetConfiguration,
> +    BuildTargetVersionInfo,
> +    InteractiveApp,
> +    NodeConfiguration,
> +    NodeVersionInfo,
> +)
> +from framework.remote_session import CommandResult, InteractiveShell, OSSession
>  from framework.settings import SETTINGS
>  from framework.utils import EnvVarsDict, MesonArgs
>
> @@ -26,13 +33,17 @@ class SutNode(Node):
>
>      _dpdk_prefix_list: list[str]
>      _dpdk_timestamp: str
> -    _build_target_config: BuildTargetConfiguration | None
> +    _build_target_config: BuildTargetConfiguration

What's the reason for this change? My guess would be mypy, but the change still results in mypy complaining:
framework/testbed_model/sut_node.py:51: error: Incompatible types in assignment (expression has type "None", variable has type "BuildTargetConfiguration")

>      _env_vars: EnvVarsDict
>      _remote_tmp_dir: PurePath
>      __remote_dpdk_dir: PurePath | None
> -    _dpdk_version: str | None
>      _app_compile_timeout: float
>      _dpdk_kill_session: OSSession | None
> +    _dpdk_version: str | None
> +    _os_name: str | None
> +    _os_version: str | None
> +    _kernel_version: str | None
> +    _compiler_version: str | None
>
>      def __init__(self, node_config: NodeConfiguration):
>          super(SutNode, self).__init__(node_config)
> @@ -41,12 +52,16 @@ def __init__(self, node_config: NodeConfiguration):
>          self._env_vars = EnvVarsDict()
>          self._remote_tmp_dir = self.main_session.get_remote_tmp_dir()
>          self.__remote_dpdk_dir = None
> -        self._dpdk_version = None
>          self._app_compile_timeout = 90
>          self._dpdk_kill_session = None
>          self._dpdk_timestamp = (
>              f"{str(os.getpid())}_{time.strftime('%Y%m%d%H%M%S', time.localtime())}"
>          )
> +        self._dpdk_version = None
> +        self._os_name = None
> +        self._os_version = None
> +        self._kernel_version = None
> +        self._compiler_version = None
>
>      @property
>      def _remote_dpdk_dir(self) -> PurePath:
> @@ -75,6 +90,44 @@ def dpdk_version(self) -> str:
>              )
>          return self._dpdk_version
>
> +    @property
> +    def os_name(self) -> str:
> +        if self._os_name is None:
> +            self._os_name = self.main_session.get_os_name()
> +        return self._os_name
> +
> +    @property
> +    def os_version(self) -> str:
> +        if self._os_version is None:
> +            self._os_version = self.main_session.get_os_version()
> +        return self._os_version
> +
> +    @property
> +    def kernel_version(self) -> str:
> +        if self._kernel_version is None:
> +            self._kernel_version = self.main_session.get_kernel_version()
> +        return self._kernel_version
> +
> +    @property
> +    def compiler_version(self) -> str:
> +        if self._compiler_version is None:
> +            self._compiler_version = self.main_session.get_compiler_version(
> +                self._build_target_config.compiler.name
> +            )
> +        return self._compiler_version
> +
> +    def get_node_versions(self) -> NodeVersionInfo:
> +        return NodeVersionInfo(
> +            os_name=self.os_name,
> +            os_version=self.os_version,
> +            kernel_version=self.kernel_version,
> +        )
> +
> +    def get_build_target_versions(self) -> BuildTargetVersionInfo:
> +        return BuildTargetVersionInfo(
> +            dpdk_version=self.dpdk_version, compiler_version=self.compiler_version
> +        )
> +
>      def _guess_dpdk_remote_dir(self) -> PurePath:
>          return self.main_session.guess_dpdk_remote_dir(self._remote_tmp_dir)
>
> @@ -84,6 +137,10 @@ def _set_up_build_target(
>          """
>          Setup DPDK on the SUT node.
>          """
> +        # we want to ensure that dpdk_version and compiler_version is reset for new
> +        # build targets
> +        self._dpdk_version = None
> +        self._compiler_version = None
>          self._configure_build_target(build_target_config)
>          self._copy_dpdk_tarball()
>          self._build_dpdk()
> @@ -262,6 +319,49 @@ def run_dpdk_app(
>              f"{app_path} {eal_args}", timeout, verify=True
>          )
>
> +    def create_interactive_shell(
> +        self,
> +        shell_type: InteractiveApp,
> +        path_to_app: PurePath | None = None,

Can we join shell_type and path_to_app together? Both contain the same type of information.
If we can't, let's drop path_to_app, as we don't provide (in this patch) a way for users to modify this. We can add it later if needed.

> +        timeout: float = SETTINGS.timeout,
> +        eal_parameters: str = "",

Any reason we're not using the EalParameters class here?

> +        app_parameters="",

This is also currently not used, so maybe we should drop it and add it in the future, probably as a dedicated object representing the parameters.

> +    ) -> InteractiveShell:

This should return a Union of all the different sessions we support, otherwise mypy complains:
tests/TestSuite_smoke_tests.py:68: error: Incompatible types in assignment (expression has type "InteractiveShell", variable has type "TestPmdShell")

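E.g. (sketch):

    def create_interactive_shell(
        self,
        shell_type: InteractiveApp,
        path_to_app: PurePath | None = None,
        timeout: float = SETTINGS.timeout,
        eal_parameters: str = "",
        app_parameters="",
    ) -> InteractiveShell | TestPmdShell:

The union will have to grow as we add more shells, unless we come up with a common bound for them.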
> +        """Create a handler for an interactive session.
> +
> +        This method is a factory that calls a method in OSSession to create shells for
> +        different DPDK applications.
> +
> +        Args:
> +            shell_type: Enum value representing the desired application.
> +            path_to_app: Represents a path to the application you are attempting to
> +                launch. This path will be executed at the start of the app
> +                initialization. If one isn't provided, the default specified in the
> +                enumeration will be used.
> +            timeout: Timeout for the ssh channel

We should specify what sort of timeout this is - for creating the channel or for sent commands?

> +            eal_parameters: List of EAL parameters to use to launch the app. This is
> +                ignored for base "shell" types.
> +            app_parameters: Command-line flags to pass into the application on launch.
> +        Returns:
> +            Instance of the desired interactive application.
> +        """
> +        default_path: PurePath | None
> +        # if we just want a default shell, there is no need to append the DPDK build
> +        # directory to the path
> +        if shell_type.name == "shell":
> +            default_path = None
> +        else:
> +            default_path = self.main_session.join_remote_path(
> +                self.remote_dpdk_build_dir, *shell_type.get_path()
> +            )

All this path handling looks clunky; it should be part of the InteractiveApp class. Would adding something like update_path to InteractiveApp work?
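Maybe something along these lines (an untested sketch; the member values and stashing the resolved path on the member are my assumptions, not what the patch does):

from enum import Enum
from pathlib import PurePath


class InteractiveApp(Enum):
    shell = ()
    testpmd = ("app", "dpdk-testpmd")

    def update_path(self, prefix: PurePath) -> None:
        # remember the full remote path on the member itself
        self._path = PurePath(prefix, *self.value)

    def get_path(self) -> PurePath:
        # fall back to the default relative path if update_path wasn't called
        return getattr(self, "_path", PurePath(*self.value))

create_interactive_shell would then only need to call shell_type.update_path(self.remote_dpdk_build_dir) for apps that live in the build dir and pass shell_type along.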

> +        return self.main_session.interactive_session.create_interactive_shell(
> +            shell_type,
> +            path_to_app or default_path,
> +            timeout,
> +            eal_parameters,
> +            app_parameters,
> +        )
> +
>
>  class EalParameters(object):
>      def __init__(
> diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py
> new file mode 100644
> index 00000000..a2fd9fae
> --- /dev/null
> +++ b/dts/tests/TestSuite_smoke_tests.py
> @@ -0,0 +1,101 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2023 University of New Hampshire
> +
> +from framework.config import InteractiveApp
> +from framework.remote_session import TestPmdShell
> +from framework.test_suite import TestSuite
> +
> +
> +class SmokeTests(TestSuite):
> +    is_blocking = True
> +    # in this list, the first index is the address of the nic and the second is
> +    # the driver for that nic.
> +    list_of_nics: list[tuple[str, str]] = []

Would using a dict make more sense?

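E.g. (sketch; the name is just a suggestion, see also the naming comment below):

    # PCI address -> driver we expect the port to be bound to
    nics_in_config: dict[str, str] = {}

and set_up_suite would fill it with self.nics_in_config[nic.pci] = nic.os_driver.strip().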
> +
> +    def set_up_suite(self) -> None:
> +        """
> +        Setup:
> +            build all DPDK

This needs to be updated.

> +        """
> +        self.dpdk_build_dir_path = self.sut_node.remote_dpdk_build_dir
> +        for nic in self.sut_node.config.ports:
> +            new_tuple = (nic.pci, nic.os_driver.strip())
> +            self.list_of_nics.append(new_tuple)

It would be better to rename this based on what list of nics this is - the one we expect to find or the one we're testing?

> +
> +    def test_unit_tests(self) -> None:
> +        """
> +        Test:
> +            run the fast-test unit-test suite through meson
> +        """
> +        self.sut_node.main_session.send_command(
> +            f"meson test -C {self.dpdk_build_dir_path} --suite fast-tests",
> +            300,
> +            verify=True,
> +        )
> +
> +    def test_driver_tests(self) -> None:
> +        """
> +        Test:
> +            run the driver-test unit-test suite through meson
> +        """
> +        list_of_vdevs = ""
> +        for dev in self.sut_node._execution_config.vdevs:
> +            list_of_vdevs += f"--vdev {dev} "
> +        list_of_vdevs = list_of_vdevs[:-1]
> +        if list_of_vdevs:
> +            self._logger.info(
> +                "Running driver tests with the following virtual "
> +                f"devices: {list_of_vdevs}"
> +            )
> +            self.sut_node.main_session.send_command(
> +                f"meson test -C {self.dpdk_build_dir_path} --suite driver-tests "
> +                f'--test-args "{list_of_vdevs}"',
> +                300,
> +                verify=True,
> +            )
> +        else:
> +            self.sut_node.main_session.send_command(
> +                f"meson test -C {self.dpdk_build_dir_path} --suite driver-tests",
> +                300,
> +                verify=True,
> +            )
> +
> +    def test_devices_listed_in_testpmd(self) -> None:
> +        """
> +        Test:
> +            Uses testpmd driver to verify that devices have been found by testpmd
> +        """

Every other test description is written in the imperative and with a lowercase first letter. Let's unite everything to the imperative with an uppercase first letter and properly end each sentence with a dot.

> +        testpmd_driver: TestPmdShell = self.sut_node.create_interactive_shell(
> +            InteractiveApp.testpmd
> +        )
> +        dev_list: list[str] = testpmd_driver.get_devices()
> +        for nic in self.list_of_nics:

Here's where the more explicit naming would be beneficial - I had to think about what's what.

> +            self.verify(
> +                nic[0] in dev_list,
> +                f"Device {nic[0]} was not listed in testpmd's available devices, "
> +                "please check your configuration",
> +            )
> +
> +    def test_device_bound_to_driver(self) -> None:
> +        """
> +        Test:
> +            ensure that all drivers listed in the config are bound to the correct driver
> +        """
> +        path_to_dev = self.sut_node.main_session.join_remote_path(
> +            self.sut_node._remote_dpdk_dir, "usertools", "dpdk-devbind.py"
> +        )
> +        for nic in self.list_of_nics:
> +            out = self.sut_node.main_session.send_command(
> +                f"{path_to_dev} --status | grep {nic[0]}", 60
> +            )

This may be something we want to put into SutNode, something like get_device_info. We'd only send the command once (not for every NIC as we're doing here) and parse the output.

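A rough sketch of what that could look like in SutNode (the method name and the parsing are illustrative only):

    def get_device_info(self) -> dict[str, str]:
        # map each PCI address dpdk-devbind.py reports to the driver it's bound to
        devbind = self.main_session.join_remote_path(
            self._remote_dpdk_dir, "usertools", "dpdk-devbind.py"
        )
        out = self.main_session.send_command(f"{devbind} --status", 60)
        bound_devices: dict[str, str] = {}
        for line in out.stdout.splitlines():
            if "drv=" in line:
                pci_address = line.split()[0]
                driver = line.split("drv=")[1].split()[0]
                bound_devices[pci_address] = driver
        return bound_devices

The test would then call it once and just look up each configured NIC in the returned dict.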
> +            self.verify(
> +                len(out.stdout) != 0,
> +                f"Failed to find configured device ({nic[0]}) using dpdk-devbind.py",
> +            )
> +            for string in out.stdout.split(" "):
> +                if "drv=" in string:
> +                    self.verify(
> +                        string.split("=")[1] == nic[1],
> +                        f"Driver for device {nic[0]} does not match driver listed in "
> +                        f'configuration (bound to {string.split("=")[1]})',
> +                    )
> --
> 2.41.0
>
--000000000000d160e405ffd2af52--