From: Jeremy Spewock
Date: Fri, 7 Jul 2023 19:06:53 -0400
Subject: Re: [PATCH v1 1/2] dts: add smoke tests
To: Juraj Linkeš
Cc: Honnappa.Nagarahalli@arm.com, thomas@monjalon.net, lijuan.tu@intel.com, wathsala.vithanage@arm.com, probb@iol.unh.edu, dev@dpdk.org

On Thu, Jul 6, 2023 at 10:54 AM Juraj Linkeš wrote:

> There are mypy errors related to paramiko:
> framework/remote_session/remote/interactive_remote_session.py:8: error:
> Library stubs not installed for "paramiko" (or incompatible with Python
> 3.10)
> framework/remote_session/remote/interactive_remote_session.py:9: error:
> Library stubs not installed for "paramiko.ssh_exception" (or incompatible
> with Python 3.10)
>
> We do this for pexpect:
> import pexpect # type: ignore
> from pexpect import pxssh # type: ignore
>
> We should do the same for paramiko.
>
Good to know, I'll be sure to change that to make sure it passes.

> There are also some changes which may require coordination. I'm not sure
> how we'd do that, the best may be to align that off-list.
>
> More inline.
>
> On Thu, Jun 15, 2023 at 10:13 PM wrote:
> >
> > From: Jeremy Spewock
> >
> > Adds a new test suite as well as configuration changes needed for running
> > smoke tests that verify general environment aspects of the system
> > under test. If any of these tests fail, the DTS execution terminates
> > as part of a "fail-fast" model.
> >
> > Signed-off-by: Jeremy Spewock
> > ---
> >  dts/conf.yaml                                 |  13 ++
> >  dts/framework/config/__init__.py              | 114 +++++++++++++--
> >  dts/framework/config/conf_yaml_schema.json    | 135 +++++++++++++++++-
> >  dts/framework/dts.py                          | 101 ++++++++++---
> >  dts/framework/exception.py                    |  12 ++
> >  dts/framework/remote_session/__init__.py      |  10 +-
> >  dts/framework/remote_session/os_session.py    |  34 ++++-
> >  dts/framework/remote_session/posix_session.py |  30 ++++
> >  .../remote_session/remote/__init__.py         |  12 ++
> >  .../remote/interactive_remote_session.py      | 113 +++++++++++++++
> >  .../remote/interactive_shell.py               |  98 +++++++++++++
> >  .../remote_session/remote/testpmd_shell.py    |  58 ++++++++
> >  dts/framework/test_result.py                  |  38 ++++-
> >  dts/framework/test_suite.py                   |  31 +++-
> >  dts/framework/testbed_model/node.py           |   2 +
> >  dts/framework/testbed_model/sut_node.py       | 110 +++++++++++++-
> >  dts/tests/TestSuite_smoke_tests.py            | 101 +++++++++++++
> >  17 files changed, 962 insertions(+), 50 deletions(-)
> >  create mode 100644 dts/framework/remote_session/remote/interactive_remote_session.py
> >  create mode 100644 dts/framework/remote_session/remote/interactive_shell.py
> >  create mode 100644 dts/framework/remote_session/remote/testpmd_shell.py
> >  create mode 100644 dts/tests/TestSuite_smoke_tests.py
> >
> > diff --git a/dts/conf.yaml b/dts/conf.yaml
> > index a9bd8a3e..03fd57e1 100644
> > --- a/dts/conf.yaml
> > +++ b/dts/conf.yaml
> > @@ -10,6 +10,8 @@ executions:
> >          compiler_wrapper: ccache
> >      perf: false
> >      func: true
> > +    vdevs: #names of virtual devices to be used for testing
>
> Vdevs are optional, let's mention that in the comment as with the
> hugepages config.
>

That's a good point, I'll be sure to include that for clarity.

> >
> > +      - "crypto_openssl"
>
> How (or where) is this configured in the original DTS? The vdevs are only
> going to be used on SUTs, so it may make sense to put them to node config.
> On the other hand, each execution using the SUT may use different vdevs, so
> this is probably the best place. Maybe we could move it under
> system_under_test in execution? That would make more sense I think - it
> would be an addition to the existing SUT configuration (as in here's where
> we augment SUT config specific to the execution).
>

I added it under execution following that same line of thinking: different executions might call for different vdevs. That said, you make a good point that these are only ever used on the SUT, so it makes sense to add them under that section. I think this could also help with other things we discussed, like how to handle ports on the SUT node and whether those should be listed under the SUT or the execution. If we turn the system_under_test key into an object, we can add things to the SUT based on the execution.
Then, in the future, we could create a devices object that could be used for CPU devices, NICs, or things of that nature and add them to the SUT based on the execution. > > > test_suites: > > - hello_world > > system_under_test: "SUT 1" > > @@ -20,6 +22,17 @@ nodes: > > arch: x86_64 > > os: linux > > lcores: "" > > + ports: > > + - pci: "0000:00:08.0" > > + dpdk_os_driver: vfio-pci > > We could find a better name for this driver (and maybe even the regular o= s > driver). It's the OS driver that dpdk uses and there could be better name= s > - os_dpdk_driver, os_driver_dpdk, os_driver_for_dpdk. Which one do you li= ke > the most? Or maybe some other name? > In either case we probably want to add a comment to both drivers that > explain what we mean here. The name doesn't need to be perfect then. :-) > I'll make sure to align the change with my patch if we change this. > I think out of those I like os_driver_for_dpdk which makes it a little more clear that it is just an OS driver that DPDK is going to use, not something specific to DPDK. I agree that this still isn't perfect but it is at least more clear and the comment will help clear it the rest of the way. > > > > + os_driver: i40e > > + peer_node: "TG 1" > > + peer_pci: "0000:00:08.0" > > + - pci: "0000:00:08.1" > > + dpdk_os_driver: vfio-pci > > + os_driver: i40e > > + peer_node: "TG 1" > > + peer_pci: "0000:00:08.1" > > use_first_core: false > > memory_channels: 4 > > hugepages: # optional; if removed, will use system hugepage > configuration > > diff --git a/dts/framework/config/__init__.py > b/dts/framework/config/__init__.py > > index ebb0823f..74e5158d 100644 > > --- a/dts/framework/config/__init__.py > > +++ b/dts/framework/config/__init__.py > > @@ -72,6 +72,21 @@ class HugepageConfiguration: > > force_first_numa: bool > > > > > > +@dataclass(slots=3DTrue, frozen=3DTrue) > > +class PortConfig: > > + id: int > > I removed id in my latest local version as I didn't see a real use for it= . > > > > > + node: str > > + pci: str > > + dpdk_os_driver: str > > + os_driver: str > > + peer_node: str > > + peer_pci: str > > + > > + @staticmethod > > + def from_dict(id: int, node: str, d: dict) -> "PortConfig": > > + return PortConfig(id=3Did, node=3Dnode, **d) > > This would need to be changed if we remove the id. > > That makes sense, I don't really see a need for it either so I will also remove it. 
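
Just so we're on the same page, here's roughly what I'm picturing for the dataclass once the id is dropped (a sketch only; field names match v1, and the dpdk_os_driver rename we discussed above would apply here too):

from dataclasses import dataclass


@dataclass(slots=True, frozen=True)
class PortConfig:
    node: str
    pci: str
    dpdk_os_driver: str
    os_driver: str
    peer_node: str
    peer_pci: str

    @staticmethod
    def from_dict(node: str, d: dict) -> "PortConfig":
        # node comes from the enclosing NodeConfiguration, the rest from the yaml entry
        return PortConfig(node=node, **d)
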
> > + > > + > > @dataclass(slots=3DTrue, frozen=3DTrue) > > class NodeConfiguration: > > name: str > > @@ -84,6 +99,7 @@ class NodeConfiguration: > > use_first_core: bool > > memory_channels: int > > hugepages: HugepageConfiguration | None > > + ports: list[PortConfig] > > > > @staticmethod > > def from_dict(d: dict) -> "NodeConfiguration": > > @@ -92,18 +108,46 @@ def from_dict(d: dict) -> "NodeConfiguration": > > if "force_first_numa" not in hugepage_config: > > hugepage_config["force_first_numa"] =3D False > > hugepage_config =3D HugepageConfiguration(**hugepage_confi= g) > > + common_config =3D { > > + "name": d["name"], > > + "hostname": d["hostname"], > > + "user": d["user"], > > + "password": d.get("password"), > > + "arch": Architecture(d["arch"]), > > + "os": OS(d["os"]), > > + "lcores": d.get("lcores", "1"), > > + "use_first_core": d.get("use_first_core", False), > > + "memory_channels": d.get("memory_channels", 1), > > + "hugepages": hugepage_config, > > + "ports": [ > > + PortConfig.from_dict(i, d["name"], port) > > + for i, port in enumerate(d["ports"]) > > + ], > > This also needs to be modified when removing PortConfig.id: > "ports": [PortConfig.from_dict(d["name"], port) for port in d["ports"]], > > > > > + } > > + > > + return NodeConfiguration(**common_config) > > + > > + > > +@dataclass(slots=3DTrue) > > +class NodeVersionInfo: > > This looks like we could use it for all sorts of things in the future, so > I'd just call it NodeInfo. > > I agree that it could be used for a lot of things, I think I just added the version so it would be more specific but in hindsight I agree that this could be shortened and used for multiple things. > > > > + """Class to hold important versions within the node. > > + > > + This class, unlike the NodeConfiguration class, cannot be generate= d > at the start. > > + This is because we need to initialize a connection with the node > before we can > > + collet the information needed in this class. Therefore, it cannot > be a part of > > Typo: collect > > Oops, good catch. > > + the configuration class above. > > + """ > > > > - return NodeConfiguration( > > - name=3Dd["name"], > > - hostname=3Dd["hostname"], > > - user=3Dd["user"], > > - password=3Dd.get("password"), > > - arch=3DArchitecture(d["arch"]), > > - os=3DOS(d["os"]), > > - lcores=3Dd.get("lcores", "1"), > > - use_first_core=3Dd.get("use_first_core", False), > > - memory_channels=3Dd.get("memory_channels", 1), > > - hugepages=3Dhugepage_config, > > + os_name: str > > + os_version: str > > + kernel_version: str > > + > > + @staticmethod > > + def from_dict(d: dict): > > + return NodeVersionInfo( > > + os_name=3Dd["os_name"], > > + os_version=3Dd["os_version"], > > + kernel_version=3Dd["kernel_version"], > > ) > > > > > > @@ -128,6 +172,24 @@ def from_dict(d: dict) -> > "BuildTargetConfiguration": > > ) > > > > > > +@dataclass(slots=3DTrue) > > +class BuildTargetVersionInfo: > > + """Class to hold important versions within the build target. > > + > > + This is very similar to the NodeVersionInfo class, it just instead > holds information > > + for the build target. > > + """ > > + > > + dpdk_version: str > > + compiler_version: str > > + > > + @staticmethod > > + def from_dict(d: dict): > > + return BuildTargetVersionInfo( > > + dpdk_version=3Dd["dpdk_version"], > compiler_version=3Dd["compiler_version"] > > + ) > > + > > Do you expect we'll be storing more fields than just these two (if so, > then we can again rename the class to BuildTargetInfo)? 
If it's just thes= e > two, the extra class may not help that much (we could pass around a tuple= ). > > On the other hand, there's not really a reason to not have this class. I > could see us adding to it and I like BuildTargetInfo because of that. > I figured that it could be useful for storing any extra information in the future and like you said, does no harm so I made it a class. I think keeping it a class just makes it easier to extend it in the future but I'll rename it like the above info class. > > > > > + > > class TestSuiteConfigDict(TypedDict): > > suite: str > > cases: list[str] > > @@ -157,6 +219,8 @@ class ExecutionConfiguration: > > func: bool > > test_suites: list[TestSuiteConfig] > > system_under_test: NodeConfiguration > > + vdevs: list[str] > > + skip_smoke_tests: bool > > This should be added to conf.yaml to demonstrate the availability of the > option. > Very good point, it would be hard for people to know it was an option otherwise. > > > > > > @staticmethod > > def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration"= : > > @@ -167,14 +231,18 @@ def from_dict(d: dict, node_map: dict) -> > "ExecutionConfiguration": > > map(TestSuiteConfig.from_dict, d["test_suites"]) > > ) > > sut_name =3D d["system_under_test"] > > + list_of_vdevs =3D d["vdevs"] > > + skip_smoke_tests =3D d["skip_smoke_tests"] if "skip_smoke_test= s" > in d else False > > Just as with hugepages, we could skip the else branch if we set the > default of skip_smoke_tests to False. I'm not sure which is the better > approach. > I am unsure of which is better and it doesn't make a massive difference, but I will change this to a .get() call where the default is false to more closely align with the huge pages configuration. > > > > > assert sut_name in node_map, f"Unknown SUT {sut_name} in > execution {d}" > > > > return ExecutionConfiguration( > > build_targets=3Dbuild_targets, > > perf=3Dd["perf"], > > func=3Dd["func"], > > + skip_smoke_tests=3Dskip_smoke_tests, > > test_suites=3Dtest_suites, > > system_under_test=3Dnode_map[sut_name], > > + vdevs=3Dlist_of_vdevs, > > ) > > > > > > @@ -221,3 +289,27 @@ def load_config() -> Configuration: > > > > > > CONFIGURATION =3D load_config() > > + > > + > > +@unique > > +class InteractiveApp(Enum): > > + """An enum that represents different supported interactive > applications > > + > > + The values in this enum must all be set to objects that have a key > called > > + "default_path" where "default_path" represents an array for the > path to the > > + application. This default path will be passed into the handler > class for the > > + application so that it can start the application. For every key > other than > > + the default shell option, the path will be appended to the path to > the DPDK > > + build directory for the current SUT node. > > + """ > > + > > + shell =3D {"default_path": [""]} > > + testpmd =3D {"default_path": ["app", "dpdk-testpmd"]} > > The stored path could be a PurePath object. It'll be instantiated as an > OS-specific path, but main_session.join_remote_path will convert it > properly. > > I actually was thinking about this because I saw it done when I was looking through a different snippet of DTS code in one of your patches I believe and I realized it would be a better approach. I much prefer using PurePath here just because that is more clear and feels cleaner so I will make this change. 
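
Roughly what I have in mind for that (a sketch, keeping the enum shape and just storing a PurePath instead of the dict; the exact member values can still change):

from enum import Enum, unique
from pathlib import PurePath


@unique
class InteractiveApp(Enum):
    """Supported interactive applications, each storing its default path."""

    shell = PurePath("")  # nothing to start for a plain shell
    testpmd = PurePath("app", "dpdk-testpmd")  # relative to the DPDK build dir

    def get_path(self) -> PurePath:
        return self.value

Then main_session.join_remote_path can combine that with the remote DPDK build directory for everything other than the plain shell, as you mentioned.
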
> > + > > + def get_path(self): > > + """A method for getting the default paths of an application > > + > > + Returns: > > + String array that represents an OS agnostic path to the > application. > > + """ > > + return self.value["default_path"] > > diff --git a/dts/framework/config/conf_yaml_schema.json > b/dts/framework/config/conf_yaml_schema.json > > index ca2d4a1e..3f7c301a 100644 > > --- a/dts/framework/config/conf_yaml_schema.json > > +++ b/dts/framework/config/conf_yaml_schema.json > > @@ -6,6 +6,76 @@ > > "type": "string", > > "description": "A unique identifier for a node" > > }, > > + "NIC": { > > + "type": "string", > > + "enum": [ > > + "ALL", > > + "ConnectX3_MT4103", > > + "ConnectX4_LX_MT4117", > > + "ConnectX4_MT4115", > > + "ConnectX5_MT4119", > > + "ConnectX5_MT4121", > > + "I40E_10G-10G_BASE_T_BC", > > + "I40E_10G-10G_BASE_T_X722", > > + "I40E_10G-SFP_X722", > > + "I40E_10G-SFP_XL710", > > + "I40E_10G-X722_A0", > > + "I40E_1G-1G_BASE_T_X722", > > + "I40E_25G-25G_SFP28", > > + "I40E_40G-QSFP_A", > > + "I40E_40G-QSFP_B", > > + "IAVF-ADAPTIVE_VF", > > + "IAVF-VF", > > + "IAVF_10G-X722_VF", > > + "ICE_100G-E810C_QSFP", > > + "ICE_25G-E810C_SFP", > > + "ICE_25G-E810_XXV_SFP", > > + "IGB-I350_VF", > > + "IGB_1G-82540EM", > > + "IGB_1G-82545EM_COPPER", > > + "IGB_1G-82571EB_COPPER", > > + "IGB_1G-82574L", > > + "IGB_1G-82576", > > + "IGB_1G-82576_QUAD_COPPER", > > + "IGB_1G-82576_QUAD_COPPER_ET2", > > + "IGB_1G-82580_COPPER", > > + "IGB_1G-I210_COPPER", > > + "IGB_1G-I350_COPPER", > > + "IGB_1G-I354_SGMII", > > + "IGB_1G-PCH_LPTLP_I218_LM", > > + "IGB_1G-PCH_LPTLP_I218_V", > > + "IGB_1G-PCH_LPT_I217_LM", > > + "IGB_1G-PCH_LPT_I217_V", > > + "IGB_2.5G-I354_BACKPLANE_2_5GBPS", > > + "IGC-I225_LM", > > + "IGC-I226_LM", > > + "IXGBE_10G-82599_SFP", > > + "IXGBE_10G-82599_SFP_SF_QP", > > + "IXGBE_10G-82599_T3_LOM", > > + "IXGBE_10G-82599_VF", > > + "IXGBE_10G-X540T", > > + "IXGBE_10G-X540_VF", > > + "IXGBE_10G-X550EM_A_SFP", > > + "IXGBE_10G-X550EM_X_10G_T", > > + "IXGBE_10G-X550EM_X_SFP", > > + "IXGBE_10G-X550EM_X_VF", > > + "IXGBE_10G-X550T", > > + "IXGBE_10G-X550_VF", > > + "brcm_57414", > > + "brcm_P2100G", > > + "cavium_0011", > > + "cavium_a034", > > + "cavium_a063", > > + "cavium_a064", > > + "fastlinq_ql41000", > > + "fastlinq_ql41000_vf", > > + "fastlinq_ql45000", > > + "fastlinq_ql45000_vf", > > + "hi1822", > > + "virtio" > > + ] > > + }, > > + > > All these NICs may be overkill, do we want to trim them? > > I think in general that the more we have the better to make it more universally usable. If a NIC isn't supported by DTS anymore we could pull it out but I don't see a problem with maintaining a list that has all supported NICs even if it does end up being long. > > > > "ARCH": { > > "type": "string", > > "enum": [ > > @@ -94,10 +164,24 @@ > > "amount" > > ] > > }, > > + "pci_address": { > > + "type": "string", > > + "pattern": > "^[\\da-fA-F]{4}:[\\da-fA-F]{2}:[\\da-fA-F]{2}.\\d:?\\w*$" > > + }, > > + "port_peer_address": { > > + "description": "Peer is a TRex port, and IXIA port or a PCI > address", > > + "oneOf": [ > > + { > > + "description": "PCI peer port", > > + "$ref": "#/definitions/pci_address" > > + } > > + ] > > + }, > > "test_suite": { > > "type": "string", > > "enum": [ > > - "hello_world" > > + "hello_world", > > + "smoke_tests" > > Is there a reason to provide both this and the execution.skip_smoke_tests > option? If we provide both, we may need to think about what happens when > both are present. 
I'd say it's best for everyone (users and devs) to have > only one option and I'd lean towards execution.skip_smoke_tests. > Good catch, I meant to remove this. In theory if you added it in both places it wouldn't cause too much of an issue, you would just be running smoke tests twice in your execution. I will remove it anyway though. > > ] > > }, > > "test_target": { > > @@ -165,6 +249,44 @@ > > }, > > "hugepages": { > > "$ref": "#/definitions/hugepages" > > + }, > > + "ports": { > > + "type": "array", > > + "items": { > > + "type": "object", > > + "description": "Each port should be described on both > sides of the connection. This makes configuration slightly more verbose b= ut > greatly simplifies implementation. If there are an inconsistencies, then > DTS will not run until that issue is fixed. An example inconsistency woul= d > be port 1, node 1 says it is connected to port 1, node 2, but port 1, nod= e > 2 says it is connected to port 2, node 1.", > > + "properties": { > > + "pci": { > > + "$ref": "#/definitions/pci_address", > > + "description": "The local PCI address of the port" > > + }, > > + "dpdk_os_driver": { > > + "type": "string", > > + "description": "The driver that the kernel should > bind this device to for DPDK to use it. (ex: vfio-pci)" > > + }, > > + "os_driver": { > > + "type": "string", > > + "description": "The driver normally used by this por= t > (ex: i40e)" > > + }, > > + "peer_node": { > > + "type": "string", > > + "description": "The name of the node the peer port i= s > on" > > + }, > > + "peer_pci": { > > + "$ref": "#/definitions/pci_address", > > + "description": "The PCI address of the peer port" > > + } > > + }, > > + "additionalProperties": false, > > + "required": [ > > + "pci", > > + "dpdk_os_driver", > > + "os_driver", > > + "peer_node", > > + "peer_pci" > > + ] > > + }, > > + "minimum": 1 > > } > > }, > > "additionalProperties": false, > > @@ -211,6 +333,17 @@ > > ] > > } > > }, > > + "vdevs": { > > + "description": "Names of vdevs to be used in execution", > > + "type": "array", > > + "items": { > > + "type":"string" > > Missing space before "string". > > > + } > > + }, > > + "skip_smoke_tests": { > > + "description": "Optional field that allows you to skip > smoke testing", > > + "type":"boolean" > > Also the missing space. > Good call, I'll correct these. > > > + }, > > "system_under_test": { > > "$ref": "#/definitions/node_name" > > } > > diff --git a/dts/framework/dts.py b/dts/framework/dts.py > > index 05022845..f76c9ceb 100644 > > --- a/dts/framework/dts.py > > +++ b/dts/framework/dts.py > > @@ -5,7 +5,13 @@ > > > > import sys > > > > -from .config import CONFIGURATION, BuildTargetConfiguration, > ExecutionConfiguration > > +from .config import ( > > + CONFIGURATION, > > + BuildTargetConfiguration, > > + ExecutionConfiguration, > > + TestSuiteConfig, > > +) > > +from .exception import BlockingTestSuiteError > > from .logger import DTSLOG, getLogger > > from .test_result import BuildTargetResult, DTSResult, ExecutionResult= , > Result > > from .test_suite import get_test_suites > > @@ -82,7 +88,9 @@ def _run_execution( > > running all build targets in the given execution. 
> > """ > > dts_logger.info(f"Running execution with SUT '{ > execution.system_under_test.name}'.") > > - execution_result =3D result.add_execution(sut_node.config) > > + execution_result =3D result.add_execution( > > + sut_node.config, sut_node.get_node_versions() > > + ) > > > > try: > > sut_node.set_up_execution(execution) > > @@ -118,14 +126,17 @@ def _run_build_target( > > > > try: > > sut_node.set_up_build_target(build_target) > > - result.dpdk_version =3D sut_node.dpdk_version > > + # result.dpdk_version =3D sut_node.dpdk_version > > + build_target_result.add_build_target_versions( > > + sut_node.get_build_target_versions() > > + ) > > build_target_result.update_setup(Result.PASS) > > except Exception as e: > > dts_logger.exception("Build target setup failed.") > > build_target_result.update_setup(Result.FAIL, e) > > > > else: > > - _run_suites(sut_node, execution, build_target_result) > > + _run_all_suites(sut_node, execution, build_target_result) > > > > finally: > > try: > > @@ -136,7 +147,46 @@ def _run_build_target( > > build_target_result.update_teardown(Result.FAIL, e) > > > > > > -def _run_suites( > > +def _run_single_suite( > > + sut_node: SutNode, > > + execution: ExecutionConfiguration, > > + build_target_result: BuildTargetResult, > > + test_suite_config: TestSuiteConfig, > > +) -> None: > > + """Runs a single test suite. > > + > > + Args: > > + sut_node: Node to run tests on. > > + execution: Execution the test case belongs to. > > + build_target_result: Build target configuration test case is > run on > > + test_suite_config: Test suite configuration > > + > > + Raises: > > + BlockingTestSuiteError: If a test suite that was marked as > blocking fails. > > + """ > > + try: > > + full_suite_path =3D > f"tests.TestSuite_{test_suite_config.test_suite}" > > + test_suite_classes =3D get_test_suites(full_suite_path) > > + suites_str =3D ", ".join((x.__name__ for x in test_suite_class= es)) > > + dts_logger.debug(f"Found test suites '{suites_str}' in > '{full_suite_path}'.") > > + except Exception as e: > > + dts_logger.exception("An error occurred when searching for tes= t > suites.") > > + result.update_setup(Result.ERROR, e) > > + > > + else: > > + for test_suite_class in test_suite_classes: > > + test_suite =3D test_suite_class( > > + sut_node, > > + test_suite_config.test_cases, > > + execution.func, > > + build_target_result, > > + sut_node._build_target_config, > > + result, > > + ) > > + test_suite.run() > > + > > + > > +def _run_all_suites( > > sut_node: SutNode, > > execution: ExecutionConfiguration, > > build_target_result: BuildTargetResult, > > @@ -146,27 +196,32 @@ def _run_suites( > > with possibly only a subset of test cases. > > If no subset is specified, run all test cases. > > """ > > + end_build_target =3D False > > + smoke_test_config =3D TestSuiteConfig.from_dict("smoke_tests") > > + if not execution.skip_smoke_tests: > > + try: > > + _run_single_suite( > > + sut_node, execution, build_target_result, > smoke_test_config > > + ) > > + except BlockingTestSuiteError as e: > > + dts_logger.exception("Smoke tests failed, stopping build > target execution.") > > + result.add_error(e) > > + return > > for test_suite_config in execution.test_suites: > > try: > > - full_suite_path =3D > f"tests.TestSuite_{test_suite_config.test_suite}" > > - test_suite_classes =3D get_test_suites(full_suite_path) > > - suites_str =3D ", ".join((x.__name__ for x in > test_suite_classes)) > > - dts_logger.debug( > > - f"Found test suites '{suites_str}' in > '{full_suite_path}'." 
> > +            _run_single_suite(
> > +                sut_node, execution, build_target_result, test_suite_config
> >              )
> > -        except Exception as e:
> > -            dts_logger.exception("An error occurred when searching for test suites.")
> > -            result.update_setup(Result.ERROR, e)
> > -
> > -        else:
> > -            for test_suite_class in test_suite_classes:
> > -                test_suite = test_suite_class(
> > -                    sut_node,
> > -                    test_suite_config.test_cases,
> > -                    execution.func,
> > -                    build_target_result,
> > -                )
> > -                test_suite.run()
> > +        except BlockingTestSuiteError as e:
> > +            dts_logger.exception(
> > +                f"An error occurred within {test_suite_config.test_suite}. "
> > +                "Skipping build target..."
> > +            )
> > +            result.add_error(e)
> > +            end_build_target = True
> > +        # if a blocking test failed and we need to bail out of suite executions
> > +        if end_build_target:
> > +            break
> >
>
> This doesn't look great to me. I think we could just add a
> list[TestSuiteConfig] parameter to _run_suites. We could then just pass a
> list with/without smoke TestSuiteConfig, something like:
> test_suites = execution.test_suites
> if not execution.skip_smoke_tests:
>     test_suites[:0] = [TestSuiteConfig.from_dict("smoke_tests")]
> _run_suites(sut_node, execution, build_target_result, test_suites)
>
> A note on method ordering:
> _run_single_suite before _run_all_suites was confusing to me, as the
> methods are used in the reverse order - _run_build_target
> calls _run_all_suites which only then calls _run_single_suite. I'd like the
> methods to be ordered in the order they're used in the code.
>

I agree this was ugly; that's a good way to handle it, and I'll switch to prepending the smoke test config to the list. I think I can actually use the execution.test_suites list that's already there and prepend to that, so the list of test suites in the execution configuration stays accurate to what actually ran. I don't recall why the method ordering ended up reversed either, but I'll get that corrected as well. I'm thinking it'll look something like this:
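
(Very rough sketch, naming aside; the idea is just to prepend the smoke test config to the execution's suite list and bail out of the loop when a blocking suite fails.)

def _run_suites(
    sut_node: SutNode,
    execution: ExecutionConfiguration,
    build_target_result: BuildTargetResult,
) -> None:
    """Run the execution's test suites, with smoke tests prepended unless skipped."""
    if not execution.skip_smoke_tests:
        # prepend so smoke tests run first and the config reflects what actually ran
        execution.test_suites[:0] = [TestSuiteConfig.from_dict("smoke_tests")]
    for test_suite_config in execution.test_suites:
        try:
            _run_single_suite(
                sut_node, execution, build_target_result, test_suite_config
            )
        except BlockingTestSuiteError as e:
            dts_logger.exception(
                f"An error occurred within {test_suite_config.test_suite}. "
                "Skipping the rest of the build target..."
            )
            result.add_error(e)
            break
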
> I think just 2023 makes sense, I wasn't completely sure either but I was trying to align more with older copyright statements throughout the code and wasn't sure why some others had a range. In anycase, just 2023 makes sense to me and it matches the one above so I have no problem making it that. > > > > """ > > The package provides modules for managing remote connections to a > remote host (node), > > @@ -17,7 +18,14 @@ > > > > from .linux_session import LinuxSession > > from .os_session import OSSession > > -from .remote import CommandResult, RemoteSession, SSHSession > > +from .remote import ( > > + CommandResult, > > + InteractiveRemoteSession, > > + InteractiveShell, > > + RemoteSession, > > + SSHSession, > > + TestPmdShell, > > +) > > > > > > def create_session( > > diff --git a/dts/framework/remote_session/os_session.py > b/dts/framework/remote_session/os_session.py > > index 4c48ae25..f5f53923 100644 > > --- a/dts/framework/remote_session/os_session.py > > +++ b/dts/framework/remote_session/os_session.py > > @@ -12,7 +12,13 @@ > > from framework.testbed_model import LogicalCore > > from framework.utils import EnvVarsDict, MesonArgs > > > > -from .remote import CommandResult, RemoteSession, create_remote_sessio= n > > +from .remote import ( > > + CommandResult, > > + InteractiveRemoteSession, > > + RemoteSession, > > + create_interactive_session, > > + create_remote_session, > > +) > > > > > > class OSSession(ABC): > > @@ -26,6 +32,7 @@ class OSSession(ABC): > > name: str > > _logger: DTSLOG > > remote_session: RemoteSession > > + interactive_session: InteractiveRemoteSession > > > > def __init__( > > self, > > @@ -37,6 +44,7 @@ def __init__( > > self.name =3D name > > self._logger =3D logger > > self.remote_session =3D create_remote_session(node_config, nam= e, > logger) > > + self.interactive_session =3D > create_interactive_session(node_config, name, logger) > > We may not want to create the interactive session at this point. This doe= s > create a connection to the node which we don't want (it is extra time > consumed) when creating an extra session on top of the main_session (with > Node.create_session). I think we could move this to > OSSession.create_interactive_shell. More below. > I think the idea of initializing it here was what we had discussed before about having an open SSH session for interactive shells open in the background throughout the entire run. If I understand what you're saying, you're suggesting that we would only initialize this connection when we need it the first time and then leave it in the background afterwards. I can see how this would be more efficient if you had a run where the interactive session was never used, but it would add extra logic to make sure that the connection is only initialized once and that it doesn't happen every time you create an interactive shell. This is something that could be done, but considering that this will always be initialized with smoke tests, the only way you wouldn't have an interactive remote session created at the start is if you disable smoke tests. I think it is easier to just have it running in the background rather than spawn it when it's used and have to worry about if a connection currently exists or not. > > > > > def close(self, force: bool =3D False) -> None: > > """ > > @@ -173,3 +181,27 @@ def setup_hugepages(self, hugepage_amount: int, > force_first_numa: bool) -> None: > > if needed and mount the hugepages if needed. > > If force_first_numa is True, configure hugepages just on the > first socket. 
> > """ > > + > > + @abstractmethod > > + def get_compiler_version(self, compiler_name: str) -> str: > > + """ > > + Get installed version of compiler used for DPDK > > + """ > > + > > + @abstractmethod > > + def get_os_name(self) -> str: > > + """ > > + Get name of OS for the node > > + """ > > + > > + @abstractmethod > > + def get_os_version(self) -> str: > > + """ > > + Get version of OS for the node > > + """ > > + > > + @abstractmethod > > + def get_kernel_version(self) -> str: > > + """ > > + Get kernel version for the node > > + """ > > These could be merged into one method (returning NodeVersionInfo), is > there a reason to separate them? We could get some performance improvemen= ts > with one method (e.g. getting /etc/os-release just once and then process > locally or we could use a script we execute remotely to have just one > remote call, but that's something to think about in the future). > That's a good point. Originally I left them separate so that during the run you could just access what you need, but you can just access the data you need from the object. > > diff --git a/dts/framework/remote_session/posix_session.py > b/dts/framework/remote_session/posix_session.py > > index d38062e8..dc6689f5 100644 > > --- a/dts/framework/remote_session/posix_session.py > > +++ b/dts/framework/remote_session/posix_session.py > > @@ -219,3 +219,33 @@ def _remove_dpdk_runtime_dirs( > > > > def get_dpdk_file_prefix(self, dpdk_prefix) -> str: > > return "" > > + > > + def get_compiler_version(self, compiler_name: str) -> str: > > + match compiler_name: > > + case "gcc": > > + return self.send_command(f"{compiler_name} --version", > 60).stdout.split( > > + "\n" > > + )[0] > > + case "clang": > > + return self.send_command(f"{compiler_name} --version", > 60).stdout.split( > > + "\n" > > + )[0] > > + case "msvc": > > + return self.send_command("cl", 60).stdout > > + case "icc": > > + return self.send_command(f"{compiler_name} -V", > 60).stdout > > + case _: > > + raise ValueError(f"Unknown compiler {compiler_name}") > > + > > + def get_os_name(self) -> str: > > + return self.send_command( > > + "awk -F=3D '$1=3D=3D\"NAME\" {print $2}' /etc/os-release",= 60 > > + ).stdout > > + > > + def get_os_version(self) -> str: > > + return self.send_command( > > + "awk -F=3D '$1=3D=3D\"VERSION\" {print $2}' /etc/os-releas= e", 60, > True > > + ).stdout > > + > > + def get_kernel_version(self) -> str: > > + return self.send_command("uname -r", 60).stdout > > These 60 second timeouts seem high to me. All of these commands should > execute quickly so the default timeout seems fine to me. > That makes sense to me, no sense in making it too long. > > > diff --git a/dts/framework/remote_session/remote/__init__.py > b/dts/framework/remote_session/remote/__init__.py > > index 8a151221..ef7b4851 100644 > > --- a/dts/framework/remote_session/remote/__init__.py > > +++ b/dts/framework/remote_session/remote/__init__.py > > @@ -1,16 +1,28 @@ > > # SPDX-License-Identifier: BSD-3-Clause > > # Copyright(c) 2023 PANTHEON.tech s.r.o. 
> > +# Copyright(c) 2022-2023 University of New Hampshire > > > > # pylama:ignore=3DW0611 > > > > +from paramiko import AutoAddPolicy, SSHClient > > + > > from framework.config import NodeConfiguration > > from framework.logger import DTSLOG > > > > +from .interactive_remote_session import InteractiveRemoteSession > > +from .interactive_shell import InteractiveShell > > from .remote_session import CommandResult, RemoteSession > > from .ssh_session import SSHSession > > +from .testpmd_shell import TestPmdShell > > > > > > def create_remote_session( > > node_config: NodeConfiguration, name: str, logger: DTSLOG > > ) -> RemoteSession: > > return SSHSession(node_config, name, logger) > > + > > + > > +def create_interactive_session( > > + node_config: NodeConfiguration, name: str, logger: DTSLOG > > +) -> InteractiveRemoteSession: > > + return InteractiveRemoteSession(node_config, logger) > > diff --git > a/dts/framework/remote_session/remote/interactive_remote_session.py > b/dts/framework/remote_session/remote/interactive_remote_session.py > > new file mode 100644 > > index 00000000..4e88308a > > --- /dev/null > > +++ b/dts/framework/remote_session/remote/interactive_remote_session.py > > @@ -0,0 +1,113 @@ > > +# SPDX-License-Identifier: BSD-3-Clause > > +# Copyright(c) 2023 University of New Hampshire > > + > > +import socket > > +import traceback > > +from pathlib import PurePath > > + > > +from paramiko import AutoAddPolicy, SSHClient, Transport > > +from paramiko.ssh_exception import ( > > + AuthenticationException, > > + BadHostKeyException, > > + NoValidConnectionsError, > > + SSHException, > > +) > > + > > +from framework.config import InteractiveApp, NodeConfiguration > > +from framework.exception import SSHConnectionError > > +from framework.logger import DTSLOG > > +from framework.settings import SETTINGS > > + > > +from .interactive_shell import InteractiveShell > > +from .testpmd_shell import TestPmdShell > > + > > + > > +class InteractiveRemoteSession: > > + hostname: str > > + ip: str > > + port: int > > + username: str > > + password: str > > + _logger: DTSLOG > > + _node_config: NodeConfiguration > > + session: SSHClient > > + _transport: Transport | None > > + > > + def __init__(self, node_config: NodeConfiguration, _logger: DTSLOG= ) > -> None: > > + self._node_config =3D node_config > > + self._logger =3D _logger > > + self.hostname =3D node_config.hostname > > + self.username =3D node_config.user > > + self.password =3D node_config.password if node_config.password > else "" > > + port =3D 22 > > framework/remote_session/remote/interactive_remote_session.py:45: error: > Incompatible types in assignment (expression has type "str", variable has > type "int") > > We can address this by making 22 a string (port =3D "22"). > I'll fix this as well. 
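
For reference, the change I'm picturing is just initializing it as a string so the later split doesn't change its type (only the first line below differs from what's in the patch):

port = "22"  # keep as str; .split(":") below may overwrite it with another str
self.ip = node_config.hostname
if ":" in node_config.hostname:
    self.ip, port = node_config.hostname.split(":")
self.port = int(port)
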
> > > + self.ip =3D node_config.hostname > > + if ":" in node_config.hostname: > > + self.ip, port =3D node_config.hostname.split(":") > > + self.port =3D int(port) > > + self._logger.info( > > + f"Initializing interactive connection for {self.username}@ > {self.hostname}" > > + ) > > + self._connect() > > + self._logger.info( > > + f"Interactive connection successful for {self.username}@ > {self.hostname}" > > + ) > > + > > + def _connect(self) -> None: > > + client =3D SSHClient() > > + client.set_missing_host_key_policy(AutoAddPolicy) > > + self.session =3D client > > + retry_attempts =3D 10 > > + for retry_attempt in range(retry_attempts): > > + try: > > + client.connect( > > + self.ip, > > + username=3Dself.username, > > + port=3Dself.port, > > + password=3Dself.password, > > + timeout=3D20 if self.port else 10, > > + ) > > + except (TypeError, BadHostKeyException, > AuthenticationException) as e: > > + self._logger.exception(e) > > + raise SSHConnectionError(self.hostname) from e > > + except (NoValidConnectionsError, socket.error, > SSHException) as e: > > + self._logger.debug(traceback.format_exc()) > > + self._logger.warning(e) > > + self._logger.info( > > + "Retrying interactive session connection: " > > + f"retry number {retry_attempt +1}" > > + ) > > + else: > > + break > > + else: > > + raise SSHConnectionError(self.hostname) > > + # Interactive sessions are used on an "as needed" basis so we > have > > + # to set a keepalive > > + self._transport =3D self.session.get_transport() > > + if self._transport is not None: > > + self._transport.set_keepalive(30) > > + > > + def create_interactive_shell( > > + self, > > + shell_type: InteractiveApp, > > + path_to_app: PurePath, > > + timeout: float =3D SETTINGS.timeout, > > + eal_parameters: str =3D "", > > + app_parameters=3D"", > > + ) -> InteractiveShell: > > + """ > > + See "create_interactive_shell" in SutNode > > + """ > > + match (shell_type.name): > > + case "shell": > > Comparing with the enum values seems better. > Good point, that does seem like it would be more consistent, I'll make the change. > > > + return InteractiveShell( > > + self.session, self._logger, path_to_app, timeout > > + ) > > + case "testpmd": > > + return TestPmdShell( > > + self.session, > > + self._logger, > > + path_to_app, > > + timeout=3Dtimeout, > > + eal_flags=3Deal_parameters, > > + cmd_line_options=3Dapp_parameters, > > + ) > > framework/remote_session/remote/interactive_remote_session.py:89: error: > Missing return statement > > This is probably because we don't have a case for the rest of the values. > You're right, I didn't handle the default case. I'll change it to defaulting to just returning a base interactive shell with a message explaining that they didn't handle the type. 
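
Something like this is what I mean, assuming we also compare against the enum members as you suggested (the exact log message is just a placeholder):

match shell_type:
    case InteractiveApp.shell:
        return InteractiveShell(self.session, self._logger, path_to_app, timeout)
    case InteractiveApp.testpmd:
        return TestPmdShell(
            self.session,
            self._logger,
            path_to_app,
            timeout=timeout,
            eal_flags=eal_parameters,
            cmd_line_options=app_parameters,
        )
    case _:
        # unhandled app type: fall back to a plain shell and let the user know
        self._logger.info(
            f"Unhandled app type {shell_type.name}, defaulting to a plain shell."
        )
        return InteractiveShell(self.session, self._logger, path_to_app, timeout)

This also takes care of the missing return statement mypy was flagging, since every branch returns.
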
> > > diff --git a/dts/framework/remote_session/remote/interactive_shell.py > b/dts/framework/remote_session/remote/interactive_shell.py > > new file mode 100644 > > index 00000000..d98503be > > --- /dev/null > > +++ b/dts/framework/remote_session/remote/interactive_shell.py > > @@ -0,0 +1,98 @@ > > +from pathlib import PurePath > > + > > +from paramiko import Channel, SSHClient, channel > > + > > +from framework.logger import DTSLOG > > +from framework.settings import SETTINGS > > + > > + > > +class InteractiveShell: > > + > > + _interactive_session: SSHClient > > + _stdin: channel.ChannelStdinFile > > + _stdout: channel.ChannelFile > > + _ssh_channel: Channel > > + _logger: DTSLOG > > + _timeout: float > > + > > + def __init__( > > + self, > > + interactive_session: SSHClient, > > + logger: DTSLOG, > > + path_to_app: PurePath | None =3D None, > > + timeout: float =3D SETTINGS.timeout, > > + ) -> None: > > + self._interactive_session =3D interactive_session > > + self._ssh_channel =3D self._interactive_session.invoke_shell() > > + self._stdin =3D self._ssh_channel.makefile_stdin("w") > > + self._stdout =3D self._ssh_channel.makefile("r") > > + self._ssh_channel.settimeout(timeout) > > + self._ssh_channel.set_combine_stderr(True) # combines stdout > and stderr streams > > + self._logger =3D logger > > + self._timeout =3D timeout > > + if path_to_app: > > + self.send_command_no_output(str(path_to_app)) # starts th= e > application > > This should be part of the derived class - the code to start an app is > going to be app-specific. Since this is an interactive session, the > assumption is that we're always going to start an app and then work with > it, so starting the app in the constructor should be safe. > We actually have that for testpmd, so we just need to unify the above wit= h > what we have. > That makes sense, the only reason I didn't do this before was because I was starting the application differently in test_pmd (with EAL flags and command line options) so I would call it without this path when I called the super init(). However, I can change this by just making a method called _start_application() that can be overridden in the subclasses and that will clean this process up quite a bit. > > > + > > + def empty_stdout_buffer(self) -> None: > > Same comment on ordering as above. > > > + """Removes all data from the stdout buffer. > > + > > + Because of the way paramiko handles read buffers, there is no > way to effectively > > + remove data from, or "flush", read buffers. This method > essentially moves our > > + offset on the buffer to the end and thus "removes" the data > from the buffer. > > + Timeouts are thrown on read operations of paramiko pipes based > on whether data > > + had been received before timeout so we assume that if we reach > the timeout then > > + we are at the end of the buffer. > > + """ > > + self._ssh_channel.settimeout(1) > > Waiting a whole second seems to be a lot. We actually may not need this > method from the use in the code - that is we change how the app starts. > We will still need this code whenever you send a command and don't get its output. What it is doing is essentially moving the pointer for the output buffer to the end of the file which is needed because if you send a command and you don't want any output, if we don't still move past the output in the buffer then it will persist and bleed into when you send a command and do want to collect output. 
Having this method allows you to know you are starting from an empty buffer when you are getting the output of the commands. In the case of timing however, I could cut this down to half a second, it just gives the chance that a command that takes longer to produce its output will still contaminate the buffer. If this causes a problem we could always increase the time in the future. > > > + try: > > + for line in self._stdout: > > + pass > > + except TimeoutError: > > + pass > > + self._ssh_channel.settimeout(self._timeout) # reset timeout > > + > > + def send_command_no_output(self, command: str) -> None: > > + """Send command to channel without recording output. > > + > > + This method will not verify any input or output, it will simpl= y > assume the > > + command succeeded. This method will also consume all output in > the buffer > > + after executing the command. > > + """ > > + self._logger.info( > > + f"Sending command {command.strip()} and not collecting > output" > > + ) > > + self._stdin.write(f"{command}\n") > > + self._stdin.flush() > > + self.empty_stdout_buffer() > > + > > + def send_command_get_output(self, command: str, prompt: str) -> st= r: > > + """Send a command and get all output before the expected endin= g > string. > > + > > + Lines that expect input are not included in the stdout buffer > so they cannot be > > + used for expect. For example, if you were prompted to log into > something > > + with a username and password, you cannot expect "username:" > because it won't > > + yet be in the stdout buffer. A work around for this could be > consuming an > > + extra newline character to force the current prompt into the > stdout buffer. > > + > > + Returns: > > + All output in the buffer before expected string > > + """ > > + self._logger.info(f"Sending command {command.strip()}...") > > + self._stdin.write(f"{command}\n") > > + self._stdin.flush() > > + out: str =3D "" > > + for line in self._stdout: > > + out +=3D line > > + if prompt in line and not line.rstrip().endswith( > > + command.rstrip() > > + ): # ignore line that sent command > > + break > > + self._logger.debug(f"Got output: {out}") > > + return out > > + > > + def close(self) -> None: > > + self._stdin.close() > > + self._ssh_channel.close() > > + > > + def __del__(self) -> None: > > + self.close() > > diff --git a/dts/framework/remote_session/remote/testpmd_shell.py > b/dts/framework/remote_session/remote/testpmd_shell.py > > new file mode 100644 > > index 00000000..873f3c82 > > --- /dev/null > > +++ b/dts/framework/remote_session/remote/testpmd_shell.py > > @@ -0,0 +1,58 @@ > > +# SPDX-License-Identifier: BSD-3-Clause > > +# Copyright(c) 2023 University of New Hampshire > > + > > + > > +from pathlib import PurePath > > + > > +from paramiko import SSHClient > > + > > +from framework.logger import DTSLOG > > +from framework.settings import SETTINGS > > + > > +from .interactive_shell import InteractiveShell > > + > > + > > +class TestPmdShell(InteractiveShell): > > + expected_prompt: str =3D "testpmd>" > > + > > + def __init__( > > + self, > > + interactive_session: SSHClient, > > + logger: DTSLOG, > > + path_to_testpmd: PurePath, > > + eal_flags: str =3D "", > > + cmd_line_options: str =3D "", > > + timeout: float =3D SETTINGS.timeout, > > + ) -> None: > > + """Initializes an interactive testpmd session using specified > parameters.""" > > + super(TestPmdShell, self).__init__( > > + interactive_session, logger=3Dlogger, timeout=3Dtimeout > > + ) > > + self.send_command_get_output( > > + f"{path_to_testpmd} 
{eal_flags} -- -i {cmd_line_options}\n= ", > > + self.expected_prompt, > > + ) > > + > > + def send_command(self, command: str, prompt: str =3D expected_prom= pt) > -> str: > > + """Specific way of handling the command for testpmd > > + > > + An extra newline character is consumed in order to force the > current line into > > + the stdout buffer. > > + """ > > + return self.send_command_get_output(f"{command}\n", prompt) > > + > > + def get_devices(self) -> list[str]: > > + """Get a list of device names that are known to testpmd > > + > > + Uses the device info listed in testpmd and then parses the > output to > > + return only the names of the devices. > > + > > + Returns: > > + A list of strings representing device names (e.g. > 0000:14:00.1) > > + """ > > + dev_info: str =3D self.send_command("show device info all") > > + dev_list: list[str] =3D [] > > + for line in dev_info.split("\n"): > > + if "device name:" in line.lower(): > > + dev_list.append(line.strip().split(": ")[1].strip()) > > + return dev_list > > This could be an object (or a list of TestPmdDevice object or something > similar). We could start with just the name and add to it in the future. > I think for the scope of smoke tests it would make sense to leave this as just the names for now because all we are doing is making sure we can discover them and that they appear. I do agree that it could be useful to have more info, but it would likely make more sense to add to it in the future when we need that additional information. > > > diff --git a/dts/framework/test_result.py b/dts/framework/test_result.p= y > > index 74391982..029665fb 100644 > > --- a/dts/framework/test_result.py > > +++ b/dts/framework/test_result.py > > @@ -1,5 +1,6 @@ > > # SPDX-License-Identifier: BSD-3-Clause > > # Copyright(c) 2023 PANTHEON.tech s.r.o. > > +# Copyright(c) 2022-2023 University of New Hampshire > > > > """ > > Generic result container and reporters > > @@ -8,14 +9,17 @@ > > import os.path > > from collections.abc import MutableSequence > > from enum import Enum, auto > > +from typing import Dict > > No need, we can just use built-in dict (since Python3.9). > > > > > from .config import ( > > OS, > > Architecture, > > BuildTargetConfiguration, > > + BuildTargetVersionInfo, > > Compiler, > > CPUType, > > NodeConfiguration, > > + NodeVersionInfo, > > ) > > from .exception import DTSError, ErrorSeverity > > from .logger import DTSLOG > > @@ -67,12 +71,14 @@ class Statistics(dict): > > Using a dict provides a convenient way to format the data. 
> > """ > > > > - def __init__(self, dpdk_version): > > + def __init__(self, output_info: Dict[str, str] | None): > > super(Statistics, self).__init__() > > for result in Result: > > self[result.name] =3D 0 > > self["PASS RATE"] =3D 0.0 > > - self["DPDK VERSION"] =3D dpdk_version > > + if output_info: > > + for info_key, info_val in output_info.items(): > > + self[info_key] =3D info_val > > > > def __iadd__(self, other: Result) -> "Statistics": > > """ > > @@ -206,6 +212,8 @@ class BuildTargetResult(BaseResult): > > os: OS > > cpu: CPUType > > compiler: Compiler > > + compiler_version: str | None > > + dpdk_version: str | None > > > > def __init__(self, build_target: BuildTargetConfiguration): > > super(BuildTargetResult, self).__init__() > > @@ -213,6 +221,12 @@ def __init__(self, build_target: > BuildTargetConfiguration): > > self.os =3D build_target.os > > self.cpu =3D build_target.cpu > > self.compiler =3D build_target.compiler > > + self.compiler_version =3D None > > + self.dpdk_version =3D None > > + > > + def add_build_target_versions(self, versions: > BuildTargetVersionInfo) -> None: > > + self.compiler_version =3D versions.compiler_version > > + self.dpdk_version =3D versions.dpdk_version > > > > def add_test_suite(self, test_suite_name: str) -> TestSuiteResult: > > test_suite_result =3D TestSuiteResult(test_suite_name) > > @@ -228,10 +242,17 @@ class ExecutionResult(BaseResult): > > """ > > > > sut_node: NodeConfiguration > > + sut_os_name: str > > + sut_os_version: str > > + sut_kernel_version: str > > > > - def __init__(self, sut_node: NodeConfiguration): > > + def __init__(self, sut_node: NodeConfiguration, sut_version_info: > NodeVersionInfo): > > super(ExecutionResult, self).__init__() > > self.sut_node =3D sut_node > > + self.sut_version_info =3D sut_version_info > > + self.sut_os_name =3D sut_version_info.os_name > > + self.sut_os_version =3D sut_version_info.os_version > > + self.sut_kernel_version =3D sut_version_info.kernel_version > > > > def add_build_target( > > self, build_target: BuildTargetConfiguration > > @@ -258,6 +279,7 @@ class DTSResult(BaseResult): > > """ > > > > dpdk_version: str | None > > + output: dict | None > > _logger: DTSLOG > > _errors: list[Exception] > > _return_code: ErrorSeverity > > @@ -267,14 +289,17 @@ class DTSResult(BaseResult): > > def __init__(self, logger: DTSLOG): > > super(DTSResult, self).__init__() > > self.dpdk_version =3D None > > + self.output =3D None > > self._logger =3D logger > > self._errors =3D [] > > self._return_code =3D ErrorSeverity.NO_ERR > > self._stats_result =3D None > > self._stats_filename =3D os.path.join(SETTINGS.output_dir, > "statistics.txt") > > > > - def add_execution(self, sut_node: NodeConfiguration) -> > ExecutionResult: > > - execution_result =3D ExecutionResult(sut_node) > > + def add_execution( > > + self, sut_node: NodeConfiguration, sut_version_info: > NodeVersionInfo > > + ) -> ExecutionResult: > > + execution_result =3D ExecutionResult(sut_node, sut_version_inf= o) > > self._inner_results.append(execution_result) > > return execution_result > > > > @@ -296,7 +321,8 @@ def process(self) -> None: > > for error in self._errors: > > self._logger.debug(repr(error)) > > > > - self._stats_result =3D Statistics(self.dpdk_version) > > + self._stats_result =3D Statistics(self.output) > > + # add information gathered from the smoke tests to the > statistics > > self.add_stats(self._stats_result) > > with open(self._stats_filename, "w+") as stats_file: > > stats_file.write(str(self._stats_result)) > > diff 
--git a/dts/framework/test_suite.py b/dts/framework/test_suite.py > > index 0705f38f..ad66014c 100644 > > --- a/dts/framework/test_suite.py > > +++ b/dts/framework/test_suite.py > > @@ -10,11 +10,24 @@ > > import inspect > > import re > > from types import MethodType > > +from typing import Dict > > > > -from .exception import ConfigurationError, SSHTimeoutError, > TestCaseVerifyError > > +from .config import BuildTargetConfiguration > > +from .exception import ( > > + BlockingTestSuiteError, > > + ConfigurationError, > > + SSHTimeoutError, > > + TestCaseVerifyError, > > +) > > from .logger import DTSLOG, getLogger > > from .settings import SETTINGS > > -from .test_result import BuildTargetResult, Result, TestCaseResult, > TestSuiteResult > > +from .test_result import ( > > + BuildTargetResult, > > + DTSResult, > > + Result, > > + TestCaseResult, > > + TestSuiteResult, > > +) > > from .testbed_model import SutNode > > > > > > @@ -37,10 +50,12 @@ class TestSuite(object): > > """ > > > > sut_node: SutNode > > + is_blocking =3D False > > _logger: DTSLOG > > _test_cases_to_run: list[str] > > _func: bool > > _result: TestSuiteResult > > + _dts_result: DTSResult > > > > def __init__( > > self, > > @@ -48,6 +63,8 @@ def __init__( > > test_cases: list[str], > > func: bool, > > build_target_result: BuildTargetResult, > > + build_target_conf: BuildTargetConfiguration, > > + dts_result: DTSResult, > > ): > > self.sut_node =3D sut_node > > self._logger =3D getLogger(self.__class__.__name__) > > @@ -55,6 +72,8 @@ def __init__( > > self._test_cases_to_run.extend(SETTINGS.test_cases) > > self._func =3D func > > self._result =3D > build_target_result.add_test_suite(self.__class__.__name__) > > + self.build_target_info =3D build_target_conf > > + self._dts_result =3D dts_result > > > > def set_up_suite(self) -> None: > > """ > > @@ -118,6 +137,8 @@ def run(self) -> None: > > f"the next test suite may be affected." > > ) > > self._result.update_setup(Result.ERROR, e) > > + if len(self._result.get_errors()) > 0 and self.is_blocking= : > > + raise BlockingTestSuiteError(test_suite_name) > > > > def _execute_test_suite(self) -> None: > > """ > > @@ -232,6 +253,12 @@ def _execute_test_case( > > test_case_result.update(Result.SKIP) > > raise KeyboardInterrupt("Stop DTS") > > > > + def write_to_statistics_file(self, output: Dict[str, str]): > > This isn't used anywhere as far as I can tell. > It was previously but when we discussed removing the statistics output I left this in but you're right, thinking about it now it doesn't make sense to leave it in until we reformat statistics.txt. > > > + if self._dts_result.output is not None: > > + self._dts_result.output.update(output) > > + else: > > + self._dts_result.output =3D output > > + > > This could be simplified if we init output to an empty dict in the > constructor. 
>
> >
> >  def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
> >      def is_test_suite(object) -> bool:
> > diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
> > index d48fafe6..c5147e0e 100644
> > --- a/dts/framework/testbed_model/node.py
> > +++ b/dts/framework/testbed_model/node.py
> > @@ -40,6 +40,7 @@ class Node(object):
> >      lcores: list[LogicalCore]
> >      _logger: DTSLOG
> >      _other_sessions: list[OSSession]
> > +    _execution_config: ExecutionConfiguration
> >
> >      def __init__(self, node_config: NodeConfiguration):
> >          self.config = node_config
> > @@ -64,6 +65,7 @@ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
> >          """
> >          self._setup_hugepages()
> >          self._set_up_execution(execution_config)
> > +        self._execution_config = execution_config
> >
> >      def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
> >          """
> > diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
> > index 2b2b50d9..27849da7 100644
> > --- a/dts/framework/testbed_model/sut_node.py
> > +++ b/dts/framework/testbed_model/sut_node.py
> > @@ -1,14 +1,21 @@
> >  # SPDX-License-Identifier: BSD-3-Clause
> >  # Copyright(c) 2010-2014 Intel Corporation
> >  # Copyright(c) 2023 PANTHEON.tech s.r.o.
> > +# Copyright(c) 2022-2023 University of New Hampshire
> >
> >  import os
> >  import tarfile
> >  import time
> >  from pathlib import PurePath
> >
> > -from framework.config import BuildTargetConfiguration, NodeConfiguration
> > -from framework.remote_session import CommandResult, OSSession
> > +from framework.config import (
> > +    BuildTargetConfiguration,
> > +    BuildTargetVersionInfo,
> > +    InteractiveApp,
> > +    NodeConfiguration,
> > +    NodeVersionInfo,
> > +)
> > +from framework.remote_session import CommandResult, InteractiveShell, OSSession
> >  from framework.settings import SETTINGS
> >  from framework.utils import EnvVarsDict, MesonArgs
> >
> > @@ -26,13 +33,17 @@ class SutNode(Node):
> >
> >      _dpdk_prefix_list: list[str]
> >      _dpdk_timestamp: str
> > -    _build_target_config: BuildTargetConfiguration | None
> > +    _build_target_config: BuildTargetConfiguration
>
> What's the reason for this change? My guess would be mypy, but the change
> still results in mypy complaining:
> framework/testbed_model/sut_node.py:51: error: Incompatible types in
> assignment (expression has type "None", variable has type
> "BuildTargetConfiguration")
>

I would guess it was for mypy as well; I had mypy passing before I submitted the patch, but if there is still an error with it then I will revert this.

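(For illustration, a standalone sketch of what mypy is objecting to; the names mirror the patch, but the bodies are trimmed to the relevant lines. Either the annotation keeps the "| None" or the None assignment in __init__ has to go.)

    class BuildTargetConfiguration:
        pass


    class SutNode:
        # mypy rejects the narrowed annotation because __init__ assigns None:
        # _build_target_config: BuildTargetConfiguration
        _build_target_config: BuildTargetConfiguration | None

        def __init__(self) -> None:
            self._build_target_config = None
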
> >
> >      _env_vars: EnvVarsDict
> >      _remote_tmp_dir: PurePath
> >      __remote_dpdk_dir: PurePath | None
> > -    _dpdk_version: str | None
> >      _app_compile_timeout: float
> >      _dpdk_kill_session: OSSession | None
> > +    _dpdk_version: str | None
> > +    _os_name: str | None
> > +    _os_version: str | None
> > +    _kernel_version: str | None
> > +    _compiler_version: str | None
> >
> >      def __init__(self, node_config: NodeConfiguration):
> >          super(SutNode, self).__init__(node_config)
> > @@ -41,12 +52,16 @@ def __init__(self, node_config: NodeConfiguration):
> >          self._env_vars = EnvVarsDict()
> >          self._remote_tmp_dir = self.main_session.get_remote_tmp_dir()
> >          self.__remote_dpdk_dir = None
> > -        self._dpdk_version = None
> >          self._app_compile_timeout = 90
> >          self._dpdk_kill_session = None
> >          self._dpdk_timestamp = (
> >              f"{str(os.getpid())}_{time.strftime('%Y%m%d%H%M%S', time.localtime())}"
> >          )
> > +        self._dpdk_version = None
> > +        self._os_name = None
> > +        self._os_version = None
> > +        self._kernel_version = None
> > +        self._compiler_version = None
> >
> >      @property
> >      def _remote_dpdk_dir(self) -> PurePath:
> > @@ -75,6 +90,44 @@ def dpdk_version(self) -> str:
> >              )
> >          return self._dpdk_version
> >
> > +    @property
> > +    def os_name(self) -> str:
> > +        if self._os_name is None:
> > +            self._os_name = self.main_session.get_os_name()
> > +        return self._os_name
> > +
> > +    @property
> > +    def os_version(self) -> str:
> > +        if self._os_version is None:
> > +            self._os_version = self.main_session.get_os_version()
> > +        return self._os_version
> > +
> > +    @property
> > +    def kernel_version(self) -> str:
> > +        if self._kernel_version is None:
> > +            self._kernel_version = self.main_session.get_kernel_version()
> > +        return self._kernel_version
> > +
> > +    @property
> > +    def compiler_version(self) -> str:
> > +        if self._compiler_version is None:
> > +            self._compiler_version = self.main_session.get_compiler_version(
> > +                self._build_target_config.compiler.name
> > +            )
> > +        return self._compiler_version
> > +
> > +    def get_node_versions(self) -> NodeVersionInfo:
> > +        return NodeVersionInfo(
> > +            os_name=self.os_name,
> > +            os_version=self.os_version,
> > +            kernel_version=self.kernel_version,
> > +        )
> > +
> > +    def get_build_target_versions(self) -> BuildTargetVersionInfo:
> > +        return BuildTargetVersionInfo(
> > +            dpdk_version=self.dpdk_version, compiler_version=self.compiler_version
> > +        )
> > +
> >      def _guess_dpdk_remote_dir(self) -> PurePath:
> >          return self.main_session.guess_dpdk_remote_dir(self._remote_tmp_dir)
> >
> > @@ -84,6 +137,10 @@ def _set_up_build_target(
> >          """
> >          Setup DPDK on the SUT node.
> >          """
> > +        # we want to ensure that dpdk_version and compiler_version is reset for new
> > +        # build targets
> > +        self._dpdk_version = None
> > +        self._compiler_version = None
> >          self._configure_build_target(build_target_config)
> >          self._copy_dpdk_tarball()
> >          self._build_dpdk()
> > @@ -262,6 +319,49 @@ def run_dpdk_app(
> >              f"{app_path} {eal_args}", timeout, verify=True
> >          )
> >
> > +    def create_interactive_shell(
> > +        self,
> > +        shell_type: InteractiveApp,
> > +        path_to_app: PurePath | None = None,
>
> Can we join shell_type and path_to_app together? Both contain the same
> type of information.
> If we can't, let's drop path_to_app, as we don't provide (in this patch) a
> way for users to modify this. We can add it later if needed.
>

The idea behind it was allowing users to specify where their application is when they call this method. Users can pass a path into this method if their application isn't at the default path, which I think is a good addition to have right now for the case where a user keeps their binary in a different location.

> > +        timeout: float = SETTINGS.timeout,
> > +        eal_parameters: str = "",
>
> Any reason we're not using the EalParameters class here?
>

I didn't use it originally because it requires an lcore list and memory channels if you provide them. However, I could use EalParameters and just make those fields optional, which would have the same effect but be more structured, so I will make that change.

> > +        app_parameters="",
>
> This is also currently not used, so maybe we should drop it and add it in
> the future, probably as a dedicated object representing the parameters.
>

That makes sense, you're right that a more structured class for providing parameters would work better. The difficulty with a dedicated object, however, is that different applications take different parameters, so it couldn't get very granular. Regardless, it can be removed from this patch for now and added back in the future.

> > +    ) -> InteractiveShell:
>
> This should return a Union of all the different sessions we support,
> otherwise mypy complains:
> tests/TestSuite_smoke_tests.py:68: error: Incompatible types in assignment
> (expression has type "InteractiveShell", variable has type "TestPmdShell")
>

I figured this would be allowed because they were subclasses and didn't realize mypy would have a problem with it. I will fix this in the next version.

> > +        """Create a handler for an interactive session.
> > +
> > +        This method is a factory that calls a method in OSSession to create shells for
> > +        different DPDK applications.
> > +
> > +        Args:
> > +            shell_type: Enum value representing the desired application.
> > +            path_to_app: Represents a path to the application you are attempting to
> > +                launch. This path will be executed at the start of the app
> > +                initialization. If one isn't provided, the default specified in the
> > +                enumeration will be used.
> > +            timeout: Timeout for the ssh channel
>
> We should specify what sort of timeout this is - for creating the channel
> or for sent commands?
>

Very good point, this is pretty vague if you weren't the one who wrote it. I'll make sure to clear this up.

> > +            eal_parameters: List of EAL parameters to use to launch the app. This is
> > +                ignored for base "shell" types.
> > +            app_parameters: Command-line flags to pass into the application on launch.
> > +        Returns:
> > +            Instance of the desired interactive application.
> > +        """
> > +        default_path: PurePath | None
> > +        # if we just want a default shell, there is no need to append the DPDK build
> > +        # directory to the path
> > +        if shell_type.name == "shell":
> > +            default_path = None
> > +        else:
> > +            default_path = self.main_session.join_remote_path(
> > +                self.remote_dpdk_build_dir, *shell_type.get_path()
> > +            )
>
> All this path handling looks clunky, it should be part of
> the InteractiveApp class. Would adding something like update_path to
> InteractiveApp work?
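(To make that concrete, here is one possible shape of an update_path-style helper on the enum. This is a standalone sketch with illustrative names, using PurePosixPath in place of the framework's join_remote_path; it is not the patch's API.)

    from enum import Enum, unique
    from pathlib import PurePosixPath


    @unique
    class InteractiveApp(Enum):
        shell = PurePosixPath("")
        testpmd = PurePosixPath("app", "dpdk-testpmd")

        def update_path(self, dpdk_build_dir: PurePosixPath) -> PurePosixPath:
            # plain "shell" is not a DPDK app, so it does not live in the build dir
            if self is InteractiveApp.shell:
                return self.value
            return dpdk_build_dir / self.value


    build_dir = PurePosixPath("/tmp/dpdk/build")
    for app in InteractiveApp:
        print(app.name, "->", app.update_path(build_dir))
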
>

I agree this was a little messy, but after changing the default paths to PurePaths, I could clean this up by just defaulting to shell_type.get_path() and then having a single if statement that checks whether it isn't a shell type and, if not, sets the default path by appending the build dir. It would look something more like this:

    default_path = shell_type.get_path()
    if shell_type != InteractiveApp.shell:
        default_path = self.main_session.join_remote_path(
            self.remote_dpdk_build_dir, shell_type.get_path()
        )

> > +        return self.main_session.interactive_session.create_interactive_shell(
> > +            shell_type,
> > +            path_to_app or default_path,
> > +            timeout,
> > +            eal_parameters,
> > +            app_parameters,
> > +        )
> > +
> >
> >  class EalParameters(object):
> >      def __init__(
> > diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py
> > new file mode 100644
> > index 00000000..a2fd9fae
> > --- /dev/null
> > +++ b/dts/tests/TestSuite_smoke_tests.py
> > @@ -0,0 +1,101 @@
> > +# SPDX-License-Identifier: BSD-3-Clause
> > +# Copyright(c) 2023 University of New Hampshire
> > +
> > +from framework.config import InteractiveApp
> > +from framework.remote_session import TestPmdShell
> > +from framework.test_suite import TestSuite
> > +
> > +
> > +class SmokeTests(TestSuite):
> > +    is_blocking = True
> > +    # in this list, the first index is the address of the nic and the second is
> > +    # the driver for that nic.
> > +    list_of_nics: list[tuple[str, str]] = []
>
> Would using a dict make more sense?
>

I thought a tuple was a good option because of the two pieces of data, but I think a dict would also be useful because it would be clearer when it is accessed later. I think turning this into a dict (and maybe migrating to an object later) is a good idea.

> > +
> > +    def set_up_suite(self) -> None:
> > +        """
> > +        Setup:
> > +            build all DPDK
>
> This needs to be updated.
>

You're right, I guess I missed updating this.

> > +        """
> > +        self.dpdk_build_dir_path = self.sut_node.remote_dpdk_build_dir
> > +        for nic in self.sut_node.config.ports:
> > +            new_tuple = (nic.pci, nic.os_driver.strip())
> > +            self.list_of_nics.append(new_tuple)
>
> It would be better to rename this based on what list of nics this is - the
> one we expect to find or the one we're testing?
>

Another spot where you're right: it is not nearly as clear if you weren't the one who wrote it. I'll update this as well.

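(For reference, a standalone sketch of what the rename plus the dict change could look like; the attribute name and port values are illustrative, and the real suite would keep reading self.sut_node.config.ports.)

    class SmokeTests:
        # maps a configured PCI address to the driver we expect it to be bound to
        expected_nics_in_config: dict[str, str] = {}

        def set_up_suite(self) -> None:
            # stand-in for iterating over self.sut_node.config.ports
            ports = [("0000:00:08.0", "vfio-pci"), ("0000:00:08.1", "vfio-pci")]
            for pci, driver in ports:
                self.expected_nics_in_config[pci] = driver.strip()


    suite = SmokeTests()
    suite.set_up_suite()
    for pci, driver in suite.expected_nics_in_config.items():
        print(f"expecting {pci} to be bound to {driver}")
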
> > +
> > +    def test_unit_tests(self) -> None:
> > +        """
> > +        Test:
> > +            run the fast-test unit-test suite through meson
> > +        """
> > +        self.sut_node.main_session.send_command(
> > +            f"meson test -C {self.dpdk_build_dir_path} --suite fast-tests",
> > +            300,
> > +            verify=True,
> > +        )
> > +
> > +    def test_driver_tests(self) -> None:
> > +        """
> > +        Test:
> > +            run the driver-test unit-test suite through meson
> > +        """
> > +        list_of_vdevs = ""
> > +        for dev in self.sut_node._execution_config.vdevs:
> > +            list_of_vdevs += f"--vdev {dev} "
> > +        list_of_vdevs = list_of_vdevs[:-1]
> > +        if list_of_vdevs:
> > +            self._logger.info(
> > +                "Running driver tests with the following virtual "
> > +                f"devices: {list_of_vdevs}"
> > +            )
> > +            self.sut_node.main_session.send_command(
> > +                f"meson test -C {self.dpdk_build_dir_path} --suite driver-tests "
> > +                f'--test-args "{list_of_vdevs}"',
> > +                300,
> > +                verify=True,
> > +            )
> > +        else:
> > +            self.sut_node.main_session.send_command(
> > +                f"meson test -C {self.dpdk_build_dir_path} --suite driver-tests",
> > +                300,
> > +                verify=True,
> > +            )
> > +
> > +    def test_devices_listed_in_testpmd(self) -> None:
> > +        """
> > +        Test:
> > +            Uses testpmd driver to verify that devices have been found by testpmd
> > +        """
>
> Every other test description is written in the imperative and with a lowercase
> first letter. Let's unify everything to the imperative with an uppercase first
> letter and a sentence that properly ends with a dot.
>

Sounds good to me, I'll reformat them.

> > +        testpmd_driver: TestPmdShell = self.sut_node.create_interactive_shell(
> > +            InteractiveApp.testpmd
> > +        )
> > +        dev_list: list[str] = testpmd_driver.get_devices()
> > +        for nic in self.list_of_nics:
>
> Here's where the more explicit naming would be beneficial - I had to think
> about what's what.
>
> > +            self.verify(
> > +                nic[0] in dev_list,
> > +                f"Device {nic[0]} was not listed in testpmd's available devices, "
> > +                "please check your configuration",
> > +            )
> > +
> > +    def test_device_bound_to_driver(self) -> None:
> > +        """
> > +        Test:
> > +            ensure that all drivers listed in the config are bound to the correct driver
> > +        """
> > +        path_to_dev = self.sut_node.main_session.join_remote_path(
> > +            self.sut_node._remote_dpdk_dir, "usertools", "dpdk-devbind.py"
> > +        )
> > +        for nic in self.list_of_nics:
> > +            out = self.sut_node.main_session.send_command(
> > +                f"{path_to_dev} --status | grep {nic[0]}", 60
> > +            )
>
> This may be something we want to put into SutNode, something like
> get_device_info. We'd only send the command once (not for every NIC as
> we're doing here) and parse the output.
>

Similar to the testpmd test above, I think it makes sense to keep the check that devices are bound to the right driver in smoke tests. Once smoke tests have verified the bindings, that check would never be used in SutNode again, because you could then trust the config file. Also, if we later narrow this to check only the NICs used in a given execution, as we had mentioned, you would want that restriction in the test rather than having SutNode check all of the node's NICs again, so I think keeping it in smoke tests makes more sense. I do agree, though, that this can be done with only one call to dpdk-devbind, which I wasn't thinking about when writing it; I can refactor that for the next version of the patch.

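(A standalone sketch of that refactor: call dpdk-devbind.py --status once, build a PCI-to-driver map from the "drv=" tokens the patch already greps for, and verify every configured NIC against it. The sample output and names here are made up for illustration.)

    sample_status_output = """
    0000:00:08.0 'Ethernet Adapter' drv=vfio-pci unused=i40e
    0000:00:08.1 'Ethernet Adapter' drv=vfio-pci unused=i40e
    """

    expected_nics_in_config = {"0000:00:08.0": "vfio-pci", "0000:00:08.1": "vfio-pci"}


    def parse_bound_drivers(devbind_output: str) -> dict[str, str]:
        # build a pci address -> bound driver map from one --status listing
        bound = {}
        for line in devbind_output.splitlines():
            tokens = line.split()
            drivers = [tok.split("=", 1)[1] for tok in tokens if tok.startswith("drv=")]
            if tokens and drivers:
                bound[tokens[0]] = drivers[0]
        return bound


    bound = parse_bound_drivers(sample_status_output)
    for pci, expected in expected_nics_in_config.items():
        assert bound.get(pci) == expected, f"{pci} is not bound to {expected}"
    print("all configured devices are bound to the expected driver")
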
> > +            self.verify(
> > +                len(out.stdout) != 0,
> > +                f"Failed to find configured device ({nic[0]}) using dpdk-devbind.py",
> > +            )
> > +            for string in out.stdout.split(" "):
> > +                if "drv=" in string:
> > +                    self.verify(
> > +                        string.split("=")[1] == nic[1],
> > +                        f"Driver for device {nic[0]} does not match driver listed in "
> > +                        f'configuration (bound to {string.split("=")[1]})',
> > +                    )
> > --
> > 2.41.0
> >

Thank you for the feedback, it's always very helpful.

-Jeremy
<= div dir=3D"ltr" class=3D"gmail_attr">On Thu, Jul 6, 2023 at 10:54=E2=80=AFA= M Juraj Linke=C5=A1 <juraj.linkes@pantheon.tech> wrote:
There are my= py errors related to paramiko:
framework/remote_session/remote/interact= ive_remote_session.py:8: error: Library stubs not installed for "param= iko" (or incompatible with Python 3.10)
framework/remote_session/re= mote/interactive_remote_session.py:9: error: Library stubs not installed fo= r "paramiko.ssh_exception" (or incompatible with Python 3.10)

We do this for pexpect:
import pexpect =C2= =A0# type: ignore
from pexpect import pxssh =C2=A0# type: ignore

= We should do the same for paramiko.

Good to know, I'll be sure to change that to make sure it passes.
=
=C2=A0

There are also some changes which may require= coordination. I'm not sure how we'd do that, the best may be to al= ign that off-list.

More inline.

On Thu, Jun 15, 2023 at 10:13= =E2=80=AFPM <j= spewock@iol.unh.edu> wrote:
>
> From: Jeremy Spewock <= ;jspewock@iol.unh= .edu>
>
> Adds a new test suite as well as configuration= changes needed for running
> smoke tests that verify general environ= ment aspects of the system
> under test. If any of these tests fail, = the DTS execution terminates
> as part of a "fail-fast" mod= el.
>
> Signed-off-by: Jeremy Spewock <jspewock@iol.unh.edu>
> -= --
> =C2=A0dts/conf.yaml =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 | =C2= =A013 ++
> =C2=A0dts/framework/config/__init__.py =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0| 114 +++++++++++++--
> =C2=A0dts/fram= ework/config/conf_yaml_schema.json =C2=A0 =C2=A0| 135 +++++++++++++++++-> =C2=A0dts/framework/dts.py =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0| 101 ++++++++++---
>= =C2=A0dts/framework/exception.py =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0= =C2=A0 =C2=A0 =C2=A0 =C2=A0| =C2=A012 ++
> =C2=A0dts/framework/remot= e_session/__init__.py =C2=A0 =C2=A0 =C2=A0| =C2=A010 +-
> =C2=A0dts/f= ramework/remote_session/os_session.py =C2=A0 =C2=A0| =C2=A034 ++++-
>= =C2=A0dts/framework/remote_session/posix_session.py | =C2=A030 ++++
>= ; =C2=A0.../remote_session/remote/__init__.py =C2=A0 =C2=A0 =C2=A0 =C2=A0 |= =C2=A012 ++
> =C2=A0.../remote/interactive_remote_session.py =C2=A0 = =C2=A0 =C2=A0| 113 +++++++++++++++
> =C2=A0.../remote/interactive_she= ll.py =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 | =C2=A098 +++++++++= ++++
> =C2=A0.../remote_session/remote/testpmd_shell.py =C2=A0 =C2=A0= | =C2=A058 ++++++++
> =C2=A0dts/framework/test_result.py =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0| =C2=A038 ++++-
>= ; =C2=A0dts/framework/test_suite.py =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 | =C2=A031 +++-
> =C2=A0dts/framework/testbe= d_model/node.py =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 | =C2=A0 2 +
> =C2= =A0dts/framework/testbed_model/sut_node.py =C2=A0 =C2=A0 =C2=A0 | 110 +++++= ++++++++-
> =C2=A0dts/tests/TestSuite_smoke_tests.py =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0| 101 +++++++++++++
> =C2=A017 files chang= ed, 962 insertions(+), 50 deletions(-)
> =C2=A0create mode 100644 dts= /framework/remote_session/remote/interactive_remote_session.py
> =C2= =A0create mode 100644 dts/framework/remote_session/remote/interactive_shell= .py
> =C2=A0create mode 100644 dts/framework/remote_session/remote/te= stpmd_shell.py
> =C2=A0create mode 100644 dts/tests/TestSuite_smoke_t= ests.py
>
> diff --git a/dts/conf.yaml b/dts/conf.yaml
> = index a9bd8a3e..03fd57e1 100644
> --- a/dts/conf.yaml
> +++ b/d= ts/conf.yaml
> @@ -10,6 +10,8 @@ executions:
> =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0compiler_wrapper: ccache
> =C2=A0 =C2=A0 =C2=A0pe= rf: false
> =C2=A0 =C2=A0 =C2=A0func: true
> + =C2=A0 =C2=A0vde= vs: #names of virtual devices to be used for testing

Vdevs are optio= nal, let's mention=C2=A0that in the comment as with the hugepages confi= g.

That's a good point, I'= ll be sure to include that for clarity.
=C2=A0
=C2=A0
= >
> + =C2=A0 =C2=A0 =C2=A0- "crypto_openssl"

How = (or where) is this configured in the original DTS? The vdevs are only going= to be used on SUTs, so it may make sense to put them to node config. On th= e other hand, each execution using the SUT may use different vdevs, so this= is probably the best place. Maybe we could move it under system_under_test= in execution? That would make more sense I think - it would be an addition= to the existing SUT configuration (as in here's where we augment SUT c= onfig specific to the execution).
=C2=A0

I added it under execution under that same line of thinking that you m= entioned: different executions might call for use of different vdevs. I thi= nk that is a good point however that these are only going to be used on the= SUT so it makes sense to add it under that section.=C2=A0

I think this w= ould potentially be good for other things we mentioned as well like how to = handle ports on the SUT node and if those should be listed under SUT or exe= cution. I think if we turn this system_under_test key into an object that w= e can add things to the SUT based on the execution. Then, in the future, we= could create a devices object that could be used for CPU devices, NICs, or= things of that nature and add them to the SUT based on the execution.

= >
> =C2=A0 =C2=A0 =C2=A0test_suites:
> =C2=A0 =C2=A0 =C2=A0 = =C2=A0- hello_world
> =C2=A0 =C2=A0 =C2=A0system_under_test: "SU= T 1"
> @@ -20,6 +22,17 @@ nodes:
> =C2=A0 =C2=A0 =C2=A0arc= h: x86_64
> =C2=A0 =C2=A0 =C2=A0os: linux
> =C2=A0 =C2=A0 =C2= =A0lcores: ""
> + =C2=A0 =C2=A0ports:
> + =C2=A0 =C2= =A0 =C2=A0- pci: "0000:00:08.0"
> + =C2=A0 =C2=A0 =C2=A0 = =C2=A0dpdk_os_driver: vfio-pci

We could find a better name for this = driver (and maybe even the regular os driver). It's the OS driver that = dpdk uses and there could be better names - os_dpdk_driver, os_driver_dpdk,= os_driver_for_dpdk. Which one do you like the most? Or maybe some other na= me?
In either case we probably want to add a comment to both drivers tha= t explain what we mean here. The name doesn't need to be perfect then. = :-) I'll make sure to align the change with my patch if we change this.=

I think out of those I like os_driver_= for_dpdk which makes it a little more clear that it is just an OS driver th= at DPDK is going to use, not something specific to DPDK. I agree that this = still isn't perfect but it is at least more clear and the comment will = help clear it the rest of the way.

=C2=A0
>
> + =C2=A0 =C2= =A0 =C2=A0 =C2=A0os_driver: i40e
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0peer_= node: "TG 1"
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0peer_pci: "= ;0000:00:08.0"
> + =C2=A0 =C2=A0 =C2=A0- pci: "0000:00:08.1= "
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0dpdk_os_driver: vfio-pci
>= ; + =C2=A0 =C2=A0 =C2=A0 =C2=A0os_driver: i40e
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0peer_node: "TG 1"
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0= peer_pci: "0000:00:08.1"
> =C2=A0 =C2=A0 =C2=A0use_first_co= re: false
> =C2=A0 =C2=A0 =C2=A0memory_channels: 4
> =C2=A0 =C2= =A0 =C2=A0hugepages: =C2=A0# optional; if removed, will use system hugepage= configuration
> diff --git a/dts/framework/config/__init__.py b/dts/= framework/config/__init__.py
> index ebb0823f..74e5158d 100644
>= ; --- a/dts/framework/config/__init__.py
> +++ b/dts/framework/config= /__init__.py
> @@ -72,6 +72,21 @@ class HugepageConfiguration:
>= ; =C2=A0 =C2=A0 =C2=A0force_first_numa: bool
>
>
> +@data= class(slots=3DTrue, frozen=3DTrue)
> +class PortConfig:
> + =C2= =A0 =C2=A0id: int

I removed id in my latest local version as I didn&= #39;t see a real use for it.
=C2=A0
>
> + =C2=A0 =C2=A0node:= str
> + =C2=A0 =C2=A0pci: str
> + =C2=A0 =C2=A0dpdk_os_driver:= str
> + =C2=A0 =C2=A0os_driver: str
> + =C2=A0 =C2=A0peer_node= : str
> + =C2=A0 =C2=A0peer_pci: str
> +
> + =C2=A0 =C2= =A0@staticmethod
> + =C2=A0 =C2=A0def from_dict(id: int, node: str, d= : dict) -> "PortConfig":
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0= return PortConfig(id=3Did, node=3Dnode, **d)

This = would need to be changed if we remove the id.


That makes sense, I don't really see a need for = it either so I will also remove it.
=C2=A0
> +
= > +
> =C2=A0@dataclass(slots=3DTrue, frozen=3DTrue)
> =C2=A0= class NodeConfiguration:
> =C2=A0 =C2=A0 =C2=A0name: str
> @@ -= 84,6 +99,7 @@ class NodeConfiguration:
> =C2=A0 =C2=A0 =C2=A0use_firs= t_core: bool
> =C2=A0 =C2=A0 =C2=A0memory_channels: int
> =C2= =A0 =C2=A0 =C2=A0hugepages: HugepageConfiguration | None
> + =C2=A0 = =C2=A0ports: list[PortConfig]
>
> =C2=A0 =C2=A0 =C2=A0@staticme= thod
> =C2=A0 =C2=A0 =C2=A0def from_dict(d: dict) -> "NodeCon= figuration":
> @@ -92,18 +108,46 @@ def from_dict(d: dict) ->= "NodeConfiguration":
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0if "force_first_numa" not in hugepage_config:
>= ; =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0hugepage_co= nfig["force_first_numa"] =3D False
> =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0hugepage_config =3D HugepageConfiguration(**huge= page_config)
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0common_config =3D {
&g= t; + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"name": d["nam= e"],
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"hostname= ": d["hostname"],
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0"user": d["user"],
> + =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0"password": d.get("password"= ),
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"arch": Arc= hitecture(d["arch"]),
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0"os": OS(d["os"]),
> + =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0"lcores": d.get("lcores", &q= uot;1"),
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"use_= first_core": d.get("use_first_core", False),
> + =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"memory_channels": d.get(&q= uot;memory_channels", 1),
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0= =C2=A0"hugepages": hugepage_config,
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0"ports": [
> + =C2=A0 =C2=A0 =C2=A0= =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0PortConfig.from_dict(i, d["name&quo= t;], port)
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0for i, port in enumerate(d["ports"])
> + =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0],

This also needs to be modified when re= moving PortConfig.id:
"ports": [PortConfig.from_dict(d["n= ame"], port) for port in d["ports"]],
=C2=A0
>
&= gt; + =C2=A0 =C2=A0 =C2=A0 =C2=A0}
> +
> + =C2=A0 =C2=A0 =C2=A0= =C2=A0return NodeConfiguration(**common_config)
> +
> +
>= ; +@dataclass(slots=3DTrue)
> +class NodeVersionInfo:

This loo= ks like we could use it for all sorts of things in the future, so I'd j= ust call it NodeInfo.
=C2=A0

=
I = agree that it could be used for a lot of things, I think I just added the v= ersion so it would be more specific but in hindsight I agree that this coul= d be shortened and used for multiple things.
=C2=A0
&= gt;
> + =C2=A0 =C2=A0"""Class to hold important versio= ns within the node.
> +
> + =C2=A0 =C2=A0This class, unlike the= NodeConfiguration class, cannot be generated at the start.
> + =C2= =A0 =C2=A0This is because we need to initialize a connection with the node = before we can
> + =C2=A0 =C2=A0collet the information needed in this = class. Therefore, it cannot be a part of

Typo: collect


Oops, good catch.=
=C2=A0
> + =C2=A0 =C2=A0the configuration cl= ass above.
> + =C2=A0 =C2=A0"""
>
> - =C2= =A0 =C2=A0 =C2=A0 =C2=A0return NodeConfiguration(
> - =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0name=3Dd["name"],
> - =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0hostname=3Dd["hostname"],
> = - =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0user=3Dd["user"],
&= gt; - =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0password=3Dd.get("passw= ord"),
> - =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0arch=3DArchi= tecture(d["arch"]),
> - =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0os=3DOS(d["os"]),
> - =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0lcores=3Dd.get("lcores", "1"),
> - =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0use_first_core=3Dd.get("use_firs= t_core", False),
> - =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0me= mory_channels=3Dd.get("memory_channels", 1),
> - =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0hugepages=3Dhugepage_config,
> + =C2= =A0 =C2=A0os_name: str
> + =C2=A0 =C2=A0os_version: str
> + =C2= =A0 =C2=A0kernel_version: str
> +
> + =C2=A0 =C2=A0@staticmetho= d
> + =C2=A0 =C2=A0def from_dict(d: dict):
> + =C2=A0 =C2=A0 = =C2=A0 =C2=A0return NodeVersionInfo(
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0os_name=3Dd["os_name"],
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0os_version=3Dd["os_version"],
> + = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0kernel_version=3Dd["kernel_ve= rsion"],
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0)
>
>> @@ -128,6 +172,24 @@ def from_dict(d: dict) -> "BuildTargetC= onfiguration":
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0)
>
= >
> +@dataclass(slots=3DTrue)
> +class BuildTargetVersionInf= o:
> + =C2=A0 =C2=A0"""Class to hold important version= s within the build target.
> +
> + =C2=A0 =C2=A0This is very si= milar to the NodeVersionInfo class, it just instead holds information
&g= t; + =C2=A0 =C2=A0for the build target.
> + =C2=A0 =C2=A0""= "
> +
> + =C2=A0 =C2=A0dpdk_version: str
> + =C2=A0 = =C2=A0compiler_version: str
> +
> + =C2=A0 =C2=A0@staticmethod<= br>> + =C2=A0 =C2=A0def from_dict(d: dict):
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0return BuildTargetVersionInfo(
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0dpdk_version=3Dd["dpdk_version"], compiler_versi= on=3Dd["compiler_version"]
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0)=
> +

Do you expect we'll be storing more fields than just = these two (if so, then we can again rename the class to BuildTargetInfo)? I= f it's just these two, the extra class may not help that much (we could= pass around a tuple).

On the other=C2=A0hand, the= re's not really a reason to not have this class. I could see us adding = to it and I like BuildTargetInfo because of that.

I figured that it could be useful for storing any ext= ra information in the future and like you said, does no harm so I made it a= class. I think keeping it a class just makes it easier to extend it in the= future but I'll rename it like the above info class.
=C2=A0
=C2=A0
>
> +
> =C2=A0class TestSuiteCo= nfigDict(TypedDict):
> =C2=A0 =C2=A0 =C2=A0suite: str
> =C2=A0 = =C2=A0 =C2=A0cases: list[str]
> @@ -157,6 +219,8 @@ class ExecutionCo= nfiguration:
> =C2=A0 =C2=A0 =C2=A0func: bool
> =C2=A0 =C2=A0 = =C2=A0test_suites: list[TestSuiteConfig]
> =C2=A0 =C2=A0 =C2=A0system= _under_test: NodeConfiguration
> + =C2=A0 =C2=A0vdevs: list[str]
&= gt; + =C2=A0 =C2=A0skip_smoke_tests: bool

This should be added to co= nf.yaml to demonstrate the availability of the option.

Very good point, it would be hard for people to know = it was an option otherwise.

=C2=A0
>
>
> =C2=A0= =C2=A0 =C2=A0@staticmethod
> =C2=A0 =C2=A0 =C2=A0def from_dict(d: di= ct, node_map: dict) -> "ExecutionConfiguration":
> @@ -1= 67,14 +231,18 @@ def from_dict(d: dict, node_map: dict) -> "Executi= onConfiguration":
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0map(TestSuiteConfig.from_dict, d["test_suites"])
> = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0)
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0sut_name =3D d["system_under_test"]
> + =C2=A0 =C2=A0 = =C2=A0 =C2=A0list_of_vdevs =3D d["vdevs"]
> + =C2=A0 =C2=A0= =C2=A0 =C2=A0skip_smoke_tests =3D d["skip_smoke_tests"] if "= ;skip_smoke_tests" in d else False

Just as with hugepages, we c= ould skip the else branch if we set the default of skip_smoke_tests to Fals= e. I'm not sure which is the better approach.

I am unsure of which is better and it doesn't mak= e a massive difference, but I will change this to a .get() call where the d= efault is false to more closely align with the huge pages configuration.
=C2=A0
=C2=A0
>
> =C2=A0 =C2=A0 =C2=A0= =C2=A0 =C2=A0assert sut_name in node_map, f"Unknown SUT {sut_name} in= execution {d}"
>
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0retu= rn ExecutionConfiguration(
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0build_targets=3Dbuild_targets,
> =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0perf=3Dd["perf"],
> =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0func=3Dd["func"],
> + =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0skip_smoke_tests=3Dskip_smoke_tests,<= br>> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0test_suites=3Dtest_= suites,
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0system_unde= r_test=3Dnode_map[sut_name],
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0vdevs=3Dlist_of_vdevs,
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0)>
>
> @@ -221,3 +289,27 @@ def load_config() -> Configur= ation:
>
>
> =C2=A0CONFIGURATION =3D load_config()
>= ; +
> +
> +@unique
> +class InteractiveApp(Enum):
>= + =C2=A0 =C2=A0"""An enum that represents different support= ed interactive applications
> +
> + =C2=A0 =C2=A0The values in = this enum must all be set to objects that have a key called
> + =C2= =A0 =C2=A0"default_path" where "default_path" represent= s an array for the path to the
> + =C2=A0 =C2=A0application. This def= ault path will be passed into the handler class for the
> + =C2=A0 = =C2=A0application so that it can start the application. For every key other= than
> + =C2=A0 =C2=A0the default shell option, the path will be app= ended to the path to the DPDK
> + =C2=A0 =C2=A0build directory for th= e current SUT node.
> + =C2=A0 =C2=A0"""
> +
= > + =C2=A0 =C2=A0shell =3D {"default_path": [""]}> + =C2=A0 =C2=A0testpmd =3D {"default_path": ["app"= ;, "dpdk-testpmd"]}

The stored path coul= d be a PurePath object. It'll be instantiated as an OS-specific path, b= ut main_session.join_remote_path will convert it properly.


I actually was thinking about thi= s because I saw it done when I was looking through a different snippet of D= TS code in one of your patches I believe and I realized it would be a bette= r approach. I much prefer using PurePath here just because that is more cle= ar and feels cleaner so I will make this change.
=C2=A0=
> +
> + =C2=A0 =C2=A0def get_path(self):
>= ; + =C2=A0 =C2=A0 =C2=A0 =C2=A0"""A method for getting the d= efault paths of an application
> +
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0Returns:
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0String array= that represents an OS agnostic path to the application.
> + =C2=A0 = =C2=A0 =C2=A0 =C2=A0"""
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0return self.value["default_path"]
> diff --git a/dts/fra= mework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema= .json
> index ca2d4a1e..3f7c301a 100644
> --- a/dts/framework/c= onfig/conf_yaml_schema.json
> +++ b/dts/framework/config/conf_yaml_sc= hema.json
> @@ -6,6 +6,76 @@
> =C2=A0 =C2=A0 =C2=A0 =C2=A0"= ;type": "string",
> =C2=A0 =C2=A0 =C2=A0 =C2=A0"d= escription": "A unique identifier for a node"
> =C2=A0= =C2=A0 =C2=A0},
> + =C2=A0 =C2=A0"NIC": {
> + =C2=A0= =C2=A0 =C2=A0"type": "string",
> + =C2=A0 =C2=A0= =C2=A0"enum": [
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"ALL&q= uot;,
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"ConnectX3_MT4103",> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"ConnectX4_LX_MT4117",
>= + =C2=A0 =C2=A0 =C2=A0 =C2=A0"ConnectX4_MT4115",
> + =C2= =A0 =C2=A0 =C2=A0 =C2=A0"ConnectX5_MT4119",
> + =C2=A0 =C2= =A0 =C2=A0 =C2=A0"ConnectX5_MT4121",
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0"I40E_10G-10G_BASE_T_BC",
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0"I40E_10G-10G_BASE_T_X722",
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0"I40E_10G-SFP_X722",
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0"I40E_10G-SFP_XL710",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0&qu= ot;I40E_10G-X722_A0",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"I40E_= 1G-1G_BASE_T_X722",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"I40E_25= G-25G_SFP28",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"I40E_40G-QSFP= _A",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"I40E_40G-QSFP_B",=
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"IAVF-ADAPTIVE_VF",
>= + =C2=A0 =C2=A0 =C2=A0 =C2=A0"IAVF-VF",
> + =C2=A0 =C2=A0 = =C2=A0 =C2=A0"IAVF_10G-X722_VF",
> + =C2=A0 =C2=A0 =C2=A0 = =C2=A0"ICE_100G-E810C_QSFP",
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0"ICE_25G-E810C_SFP",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0&quo= t;ICE_25G-E810_XXV_SFP",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"IG= B-I350_VF",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"IGB_1G-82540EM&= quot;,
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"IGB_1G-82545EM_COPPER&quo= t;,
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"IGB_1G-82571EB_COPPER",=
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"IGB_1G-82574L",
> + = =C2=A0 =C2=A0 =C2=A0 =C2=A0"IGB_1G-82576",
> + =C2=A0 =C2= =A0 =C2=A0 =C2=A0"IGB_1G-82576_QUAD_COPPER",
> + =C2=A0 =C2= =A0 =C2=A0 =C2=A0"IGB_1G-82576_QUAD_COPPER_ET2",
> + =C2=A0= =C2=A0 =C2=A0 =C2=A0"IGB_1G-82580_COPPER",
> + =C2=A0 =C2= =A0 =C2=A0 =C2=A0"IGB_1G-I210_COPPER",
> + =C2=A0 =C2=A0 = =C2=A0 =C2=A0"IGB_1G-I350_COPPER",
> + =C2=A0 =C2=A0 =C2=A0= =C2=A0"IGB_1G-I354_SGMII",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0= "IGB_1G-PCH_LPTLP_I218_LM",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0= "IGB_1G-PCH_LPTLP_I218_V",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0&= quot;IGB_1G-PCH_LPT_I217_LM",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0&qu= ot;IGB_1G-PCH_LPT_I217_V",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"= IGB_2.5G-I354_BACKPLANE_2_5GBPS",
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0"IGC-I225_LM",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"IGC-= I226_LM",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"IXGBE_10G-82599_S= FP",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"IXGBE_10G-82599_SFP_SF= _QP",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"IXGBE_10G-82599_T3_LO= M",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"IXGBE_10G-82599_VF"= ;,
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"IXGBE_10G-X540T",
>= ; + =C2=A0 =C2=A0 =C2=A0 =C2=A0"IXGBE_10G-X540_VF",
> + =C2= =A0 =C2=A0 =C2=A0 =C2=A0"IXGBE_10G-X550EM_A_SFP",
> + =C2= =A0 =C2=A0 =C2=A0 =C2=A0"IXGBE_10G-X550EM_X_10G_T",
> + =C2= =A0 =C2=A0 =C2=A0 =C2=A0"IXGBE_10G-X550EM_X_SFP",
> + =C2= =A0 =C2=A0 =C2=A0 =C2=A0"IXGBE_10G-X550EM_X_VF",
> + =C2=A0= =C2=A0 =C2=A0 =C2=A0"IXGBE_10G-X550T",
> + =C2=A0 =C2=A0 = =C2=A0 =C2=A0"IXGBE_10G-X550_VF",
> + =C2=A0 =C2=A0 =C2=A0 = =C2=A0"brcm_57414",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"br= cm_P2100G",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"cavium_0011&quo= t;,
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"cavium_a034",
> += =C2=A0 =C2=A0 =C2=A0 =C2=A0"cavium_a063",
> + =C2=A0 =C2= =A0 =C2=A0 =C2=A0"cavium_a064",
> + =C2=A0 =C2=A0 =C2=A0 = =C2=A0"fastlinq_ql41000",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0&q= uot;fastlinq_ql41000_vf",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"f= astlinq_ql45000",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"fastlinq_= ql45000_vf",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"hi1822",<= br>> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"virtio"
> + =C2=A0 = =C2=A0 =C2=A0]
> + =C2=A0 =C2=A0},
> +

All these NICs ma= y be overkill, do we want to trim them?
=C2=A0

I think in general that the more we have the better t= o make it more universally usable. If a NIC isn't supported by DTS anym= ore we could pull it out but I don't see a problem with maintaining a l= ist that has all supported NICs even if it does end up being long.
=C2=A0
<= div dir=3D"ltr">
>
> =C2=A0 =C2=A0 =C2=A0"ARCH"= : {
> =C2=A0 =C2=A0 =C2=A0 =C2=A0"type": "string"= ,
> =C2=A0 =C2=A0 =C2=A0 =C2=A0"enum": [
> @@ -94,10 = +164,24 @@
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"amount"
= > =C2=A0 =C2=A0 =C2=A0 =C2=A0]
> =C2=A0 =C2=A0 =C2=A0},
> + = =C2=A0 =C2=A0"pci_address": {
> + =C2=A0 =C2=A0 =C2=A0"= ;type": "string",
> + =C2=A0 =C2=A0 =C2=A0"patter= n": "^[\\da-fA-F]{4}:[\\da-fA-F]{2}:[\\da-fA-F]{2}.\\d:?\\w*$&quo= t;
> + =C2=A0 =C2=A0},
> + =C2=A0 =C2=A0"port_peer_address= ": {
> + =C2=A0 =C2=A0 =C2=A0"description": "Peer= is a TRex port, and IXIA port or a PCI address",
> + =C2=A0 =C2= =A0 =C2=A0"oneOf": [
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0{
&g= t; + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"description": "PCI p= eer port",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"$ref"= ;: "#/definitions/pci_address"
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0}
> + =C2=A0 =C2=A0 =C2=A0]
> + =C2=A0 =C2=A0},
> =C2= =A0 =C2=A0 =C2=A0"test_suite": {
> =C2=A0 =C2=A0 =C2=A0 =C2= =A0"type": "string",
> =C2=A0 =C2=A0 =C2=A0 =C2= =A0"enum": [
> - =C2=A0 =C2=A0 =C2=A0 =C2=A0"hello_wor= ld"
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0"hello_world",
&= gt; + =C2=A0 =C2=A0 =C2=A0 =C2=A0"smoke_tests"

Is there a reason to provide both this and the execution.skip_smoke_= tests option? If we provide both, we may need to think about what happens w= hen both are present. I'd say it's best for everyone (users and dev= s) to have only one option and I'd lean towards execution.skip_smoke_te= sts.

Good catch, I meant to remo= ve this. In theory if you added it in both places it wouldn't cause too= much of an issue, you would just be running smoke tests twice in your exec= ution. I will remove it anyway though.


> =C2=A0 =C2= =A0 =C2=A0 =C2=A0]
> =C2=A0 =C2=A0 =C2=A0},
> =C2=A0 =C2=A0 =C2= =A0"test_target": {
> @@ -165,6 +249,44 @@
> =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0},
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0"hugepages": {
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0"$ref": "#/definitions/hugepages"> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0},
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0"ports": {
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0= =C2=A0 =C2=A0"type": "array",
> + =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0"items": {
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"type": "object",
>= ; + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"description"= : "Each port should be described on both sides of the connection. This= makes configuration slightly more verbose but greatly simplifies implement= ation. If there are an inconsistencies, then DTS will not run until that is= sue is fixed. An example inconsistency would be port 1, node 1 says it is c= onnected to port 1, node 2, but port 1, node 2 says it is connected to port= 2, node 1.",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0"properties": {
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0"pci": {
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"$ref": "#/definitions= /pci_address",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0"description": "The local PCI address of th= e port"
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0},
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&quo= t;dpdk_os_driver": {
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0"type": "string",
> + =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"descriptio= n": "The driver that the kernel should bind this device to for DP= DK to use it. (ex: vfio-pci)"
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0},
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0"os_driver": {
> + =C2=A0 =C2=A0 =C2=A0= =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"type": "string&qu= ot;,
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0"description": "The driver normally used by this port (ex= : i40e)"
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0},
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&= quot;peer_node": {
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0= =C2=A0 =C2=A0 =C2=A0"type": "string",
> + =C2=A0= =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"description&q= uot;: "The name of the node the peer port is on"
> + =C2=A0= =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0},
> + =C2=A0 =C2=A0= =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"peer_pci": {
> += =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"$ref&q= uot;: "#/definitions/pci_address",
> + =C2=A0 =C2=A0 =C2=A0= =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"description": "Th= e PCI address of the peer port"
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0}
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0},
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&= quot;additionalProperties": false,
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0"required": [
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"pci",
> + =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"dpdk_os_driver",> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"os_dri= ver",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0"peer_node",
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0"peer_pci"
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0= =C2=A0 =C2=A0 =C2=A0]
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0}= ,
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"minimum": 1=
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0}
> =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0},
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"addi= tionalProperties": false,
> @@ -211,6 +333,17 @@
> =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0]
> =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0}
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0= =C2=A0},
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"vdevs": {<= br>> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"description":= "Names of vdevs to be used in execution",
> + =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"type": "array",
>= + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"items": {
> + = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"type":"stri= ng"

Missing space before "string".

> + =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0}
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0},
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"skip_smoke_t= ests": {
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"desc= ription": "Optional field that allows you to skip smoke testing&q= uot;,
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"type":&= quot;boolean"

Also the missing space.

Good call, I'll correct these.
=C2=A0

> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0},
> =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"system_under_test": {
&= gt; =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0"$ref": "= ;#/definitions/node_name"
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0}
> diff --git a/dts/framework/dts.py b/dts/framework/dts.py> index 05022845..f76c9ceb 100644
> --- a/dts/framework/dts.py> +++ b/dts/framework/dts.py
> @@ -5,7 +5,13 @@
>
> = =C2=A0import sys
>
> -from .config import CONFIGURATION, BuildT= argetConfiguration, ExecutionConfiguration
> +from .config import (> + =C2=A0 =C2=A0CONFIGURATION,
> + =C2=A0 =C2=A0BuildTargetConf= iguration,
> + =C2=A0 =C2=A0ExecutionConfiguration,
> + =C2=A0 = =C2=A0TestSuiteConfig,
> +)
> +from .exception import BlockingT= estSuiteError
> =C2=A0from .logger import DTSLOG, getLogger
> = =C2=A0from .test_result import BuildTargetResult, DTSResult, ExecutionResul= t, Result
> =C2=A0from .test_suite import get_test_suites
> @@ = -82,7 +88,9 @@ def _run_execution(
> =C2=A0 =C2=A0 =C2=A0running all = build targets in the given execution.
> =C2=A0 =C2=A0 =C2=A0"&qu= ot;"
> =C2=A0 =C2=A0 =C2=A0dts_logger.info(f"Running execution with SUT '= {exec= ution.system_under_test.name}'.")
> - =C2=A0 =C2=A0execu= tion_result =3D result.add_execution(sut_node.config)
> + =C2=A0 =C2= =A0execution_result =3D result.add_execution(
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0sut_node.config, sut_node.get_node_versions()
> + =C2=A0 = =C2=A0)
>
> =C2=A0 =C2=A0 =C2=A0try:
> =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0sut_node.set_up_execution(execution)
> @@ -118,14 +1= 26,17 @@ def _run_build_target(
>
> =C2=A0 =C2=A0 =C2=A0try:> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0sut_node.set_up_build_target(build_= target)
> - =C2=A0 =C2=A0 =C2=A0 =C2=A0result.dpdk_version =3D sut_no= de.dpdk_version
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0# result.dpdk_version = =3D sut_node.dpdk_version
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0build_target= _result.add_build_target_versions(
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0sut_node.get_build_target_versions()
> + =C2=A0 =C2=A0 = =C2=A0 =C2=A0)
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0build_target_resul= t.update_setup(Result.PASS)
> =C2=A0 =C2=A0 =C2=A0except Exception as= e:
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0dts_logger.exception("Bu= ild target setup failed.")
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0b= uild_target_result.update_setup(Result.FAIL, e)
>
> =C2=A0 =C2= =A0 =C2=A0else:
> - =C2=A0 =C2=A0 =C2=A0 =C2=A0_run_suites(sut_node, = execution, build_target_result)
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0_run_a= ll_suites(sut_node, execution, build_target_result)
>
> =C2=A0 = =C2=A0 =C2=A0finally:
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0try:
>= ; @@ -136,7 +147,46 @@ def _run_build_target(
> =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0build_target_result.update_teardown(Result.FAIL,= e)
>
>
> -def _run_suites(
> +def _run_single_suit= e(
> + =C2=A0 =C2=A0sut_node: SutNode,
> + =C2=A0 =C2=A0executi= on: ExecutionConfiguration,
> + =C2=A0 =C2=A0build_target_result: Bui= ldTargetResult,
> + =C2=A0 =C2=A0test_suite_config: TestSuiteConfig,<= br>> +) -> None:
> + =C2=A0 =C2=A0"""Runs a sing= le test suite.
> +
> + =C2=A0 =C2=A0Args:
> + =C2=A0 =C2= =A0 =C2=A0 =C2=A0sut_node: Node to run tests on.
> + =C2=A0 =C2=A0 = =C2=A0 =C2=A0execution: Execution the test case belongs to.
> + =C2= =A0 =C2=A0 =C2=A0 =C2=A0build_target_result: Build target configuration tes= t case is run on
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0test_suite_config: Te= st suite configuration
> +
> + =C2=A0 =C2=A0Raises:
> + = =C2=A0 =C2=A0 =C2=A0 =C2=A0BlockingTestSuiteError: If a test suite that was= marked as blocking fails.
> + =C2=A0 =C2=A0"""
>= ; + =C2=A0 =C2=A0try:
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0full_suite_path = =3D f"tests.TestSuite_{test_suite_config.test_suite}"
> + = =C2=A0 =C2=A0 =C2=A0 =C2=A0test_suite_classes =3D get_test_suites(full_suit= e_path)
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0suites_str =3D ", ".= join((x.__name__ for x in test_suite_classes))
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0dts_logger.debug(f"Found test suites '{suites_str}' = in '{full_suite_path}'.")
> + =C2=A0 =C2=A0except Except= ion as e:
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0dts_logger.exception("A= n error occurred when searching for test suites.")
> + =C2=A0 = =C2=A0 =C2=A0 =C2=A0result.update_setup(Result.ERROR, e)
> +
> = + =C2=A0 =C2=A0else:
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0for test_suite_cl= ass in test_suite_classes:
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0test_suite =3D test_suite_class(
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0sut_node,
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0test_suite_config.test_cases,
> + =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0execution.func,
> + = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0build_target_result,=
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0sut_node.= _build_target_config,
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0result,
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0)> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0test_suite.run()
> = +
> +
> +def _run_all_suites(
> =C2=A0 =C2=A0 =C2=A0sut_n= ode: SutNode,
> =C2=A0 =C2=A0 =C2=A0execution: ExecutionConfiguration= ,
> =C2=A0 =C2=A0 =C2=A0build_target_result: BuildTargetResult,
&g= t; @@ -146,27 +196,32 @@ def _run_suites(
> =C2=A0 =C2=A0 =C2=A0with = possibly only a subset of test cases.
> =C2=A0 =C2=A0 =C2=A0If no sub= set is specified, run all test cases.
> =C2=A0 =C2=A0 =C2=A0"&qu= ot;"
> + =C2=A0 =C2=A0end_build_target =3D False
> + =C2= =A0 =C2=A0smoke_test_config =3D TestSuiteConfig.from_dict("smoke_tests= ")
> + =C2=A0 =C2=A0if not execution.skip_smoke_tests:
> += =C2=A0 =C2=A0 =C2=A0 =C2=A0try:
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0_run_single_suite(
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0sut_node, execution, build_target_result, smoke_test_co= nfig
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0)
> + =C2=A0 = =C2=A0 =C2=A0 =C2=A0except BlockingTestSuiteError as e:
> + =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0dts_logger.exception("Smoke tests fa= iled, stopping build target execution.")
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0result.add_error(e)
> + =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0return
> =C2=A0 =C2=A0 =C2=A0for test_suite_confi= g in execution.test_suites:
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0try:<= br>> - =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0full_suite_path =3D f&qu= ot;tests.TestSuite_{test_suite_config.test_suite}"
> - =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0test_suite_classes =3D get_test_suites(fu= ll_suite_path)
> - =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0suites_st= r =3D ", ".join((x.__name__ for x in test_suite_classes))
>= - =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0dts_logger.debug(
> - =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0f"Found test suite= s '{suites_str}' in '{full_suite_path}'."
> + = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0_run_single_suite(
> + =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0sut_node, execution, bu= ild_target_result, test_suite_config
> =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0)
> - =C2=A0 =C2=A0 =C2=A0 =C2=A0except Exception= as e:
> - =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0dts_logger.except= ion("An error occurred when searching for test suites.")
> = - =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0result.update_setup(Result.ERROR= , e)
> -
> - =C2=A0 =C2=A0 =C2=A0 =C2=A0else:
> - =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0for test_suite_class in test_suite_classe= s:
> - =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0test_su= ite =3D test_suite_class(
> - =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0sut_node,
> - =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0test_suite_config.test_cases,<= br>> - =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0execution.func,
> - =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0build_target_result,
> - =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0)
> - =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0test_suite.run()
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0except BlockingTestSuiteError as e:
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0dts_logger.exception(
> + =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0f"An error occurred within {test= _suite_config.test_suite}. "
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0"Skipping build target..."
> + =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0)
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0result.add_error(e)
> + =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0end_build_target =3D True
> + =C2=A0 =C2=A0 =C2=A0 =C2= =A0# if a blocking test failed and we need to bail out of suite executions<= br>> + =C2=A0 =C2=A0 =C2=A0 =C2=A0if end_build_target:
> + =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0break
>

This doesn't loo= k great to me. I think we could just add a list[TestSuiteConfig] parameter = to _run_suites. We could then just pass a list with/without smoke TestSuite= Config, something like:
test_suites =3D ex= ecution.test_suites
if not execution.skip_smoke_tests:
=C2=A0 =C2=A0 = test_suites[:0] =3D [TestSuiteConfig.from_dict("smoke_tests")]_run_suites(sut_node, execution, build_target_result, test_suites)

A note on method ordering:
_run_= single_suite before=C2=A0_run_all_suites was confusing to me, as the method= s are used in the reverse order -=C2=A0_run_build_target calls=C2=A0_run_al= l_suites which only then calls=C2=A0_run_single_suite. I'd like the met= hods to be ordered in the order they're used in the code.

I agree this was ugly, that's a= good way to handle it and I'll migrate to just adding it to the front = of the list. I think I can actually just use the execution.test_suites arra= y directly that is already being used and just append it to the front of th= at. This way the list of test suites in the execution configuration is accu= rate to what actually ran. I don't recall why this happened with the me= thod ordering either but I'll get that corrected as well. I'm think= ing it'll look something more like this:


>
>  def _exit_dts() -> None:
> diff --git a/dts/framework/exception.py b/dts/framework/exception.py
> index ca353d98..dfb12df4 100644
> --- a/dts/framework/exception.py
> +++ b/dts/framework/exception.py
> @@ -25,6 +25,7 @@ class ErrorSeverity(IntEnum):
>      SSH_ERR = 4
>      DPDK_BUILD_ERR = 10
>      TESTCASE_VERIFY_ERR = 20
> +    BLOCKING_TESTSUITE_ERR = 25
>
>
>  class DTSError(Exception):
> @@ -144,3 +145,14 @@ def __init__(self, value: str):
>
>      def __str__(self) -> str:
>          return repr(self.value)
> +
> +
> +class BlockingTestSuiteError(DTSError):
> +    suite_name: str
> +    severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR
> +
> +    def __init__(self, suite_name: str) -> None:
> +        self.suite_name = suite_name
> +
> +    def __str__(self) -> str:
> +        return f"Blocking suite {self.suite_name} failed."
> diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
> index ee221503..04342b8d 100644
> --- a/dts/framework/remote_session/__init__.py
> +++ b/dts/framework/remote_session/__init__.py
> @@ -1,5 +1,6 @@
>  # SPDX-License-Identifier: BSD-3-Clause
>  # Copyright(c) 2023 PANTHEON.tech s.r.o.
> +# Copyright(c) 2022-2023 University of New Hampshire

Should the year be just 2023? I'm not sure about these copyright statements.

I think just 2023 makes sense. I wasn't completely sure either, but I was trying to align with older copyright statements throughout the code and wasn't sure why some of them had a range. In any case, just 2023 makes sense to me and it matches the one above, so I have no problem making it that.

>
>  """
>  The package provides modules for managing remote connections to a remote host (node),
> @@ -17,7 +18,14 @@
>
>  from .linux_session import LinuxSession
>  from .os_session import OSSession
> -from .remote import CommandResult, RemoteSession, SSHSession
> +from .remote import (
> +    CommandResult,
> +    InteractiveRemoteSession,
> +    InteractiveShell,
> +    RemoteSession,
> +    SSHSession,
> +    TestPmdShell,
> +)
>
>
>  def create_session(
> diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
> index 4c48ae25..f5f53923 100644
> --- a/dts/framework/remote_session/os_session.py
> +++ b/dts/framework/remote_session/os_session.py
> @@ -12,7 +12,13 @@
>  from framework.testbed_model import LogicalCore
>  from framework.utils import EnvVarsDict, MesonArgs
>
> -from .remote import CommandResult, RemoteSession, create_remote_session
> +from .remote import (
> +    CommandResult,
> +    InteractiveRemoteSession,
> +    RemoteSession,
> +    create_interactive_session,
> +    create_remote_session,
> +)
>
>
>  class OSSession(ABC):
> @@ -26,6 +32,7 @@ class OSSession(ABC):
>      name: str
>      _logger: DTSLOG
>      remote_session: RemoteSession
> +    interactive_session: InteractiveRemoteSession
>
>      def __init__(
>          self,
> @@ -37,6 +44,7 @@ def __init__(
>          self.name = name
>          self._logger = logger
>          self.remote_session = create_remote_session(node_config, name, logger)
> +        self.interactive_session = create_interactive_session(node_config, name, logger)

We may not want to create the interactive session at this point. This does create a connection to the node, which we don't want (it is extra time consumed) when creating an extra session on top of the main_session (with Node.create_session). I think we could move this to OSSession.create_interactive_shell. More below.

I think the idea of initializing it here came from what we had discussed before: keeping an SSH session for interactive shells open in the background throughout the entire run. If I understand what you're saying, you're suggesting that we only initialize this connection the first time it is needed and then leave it in the background afterwards. I can see how this would be more efficient for a run where the interactive session is never used, but it would add extra logic to make sure that the connection is initialized only once and not every time an interactive shell is created. This could be done, but considering that this session will always be initialized with smoke tests, the only way you wouldn't have an interactive remote session created at the start is if you disable smoke tests. I think it is easier to just have it running in the background rather than spawn it on first use and have to worry about whether a connection already exists.

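For reference, the lazy version I think you're describing would look something like this (just a sketch; it assumes OSSession keeps a reference to the node config):

@property
def interactive_session(self) -> InteractiveRemoteSession:
    # create the interactive connection on first use and cache it afterwards
    if self._interactive_session is None:
        self._interactive_session = create_interactive_session(
            self._node_config, self.name, self._logger
        )
    return self._interactive_session
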
>
>      def close(self, force: bool = False) -> None:
>          """
> @@ -173,3 +181,27 @@ def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
>          if needed and mount the hugepages if needed.
>          If force_first_numa is True, configure hugepages just on the first socket.
>          """
> +
> +    @abstractmethod
> +    def get_compiler_version(self, compiler_name: str) -> str:
> +        """
> +        Get installed version of compiler used for DPDK
> +        """
> +
> +    @abstractmethod
> +    def get_os_name(self) -> str:
> +        """
> +        Get name of OS for the node
> +        """
> +
> +    @abstractmethod
> +    def get_os_version(self) -> str:
> +        """
> +        Get version of OS for the node
> +        """
> +
> +    @abstractmethod
> +    def get_kernel_version(self) -> str:
> +        """
> +        Get kernel version for the node
> +        """
These could be merged into one method (returning NodeVersionInfo), is there a reason to separate them? We could get some performance improvements with one method (e.g. getting /etc/os-release just once and then processing it locally, or using a script we execute remotely to have just one remote call), but that's something to think about in the future.

That's a good point. Originally I left them separate so that during the run you could query just what you need, but you can get the same data by accessing the fields of the returned object.

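A merged version could look roughly like this (sketch only; it assumes NodeVersionInfo is a simple container with os_name, os_version and kernel_version fields):

def get_node_version_info(self) -> NodeVersionInfo:
    # one remote call for the os-release fields, one for the kernel version
    os_release = self.send_command(
        "awk -F= '$1==\"NAME\" || $1==\"VERSION\" {print $2}' /etc/os-release"
    ).stdout.splitlines()
    return NodeVersionInfo(
        os_name=os_release[0].strip('"'),
        os_version=os_release[1].strip('"'),
        kernel_version=self.send_command("uname -r").stdout.strip(),
    )
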
> diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/remote_session/posix_session.py
> index d38062e8..dc6689f5 100644
> --- a/dts/framework/remote_session/posix_session.py
> +++ b/dts/framework/remote_session/posix_session.py
> @@ -219,3 +219,33 @@ def _remove_dpdk_runtime_dirs(
>
>      def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
>          return ""
> +
> +    def get_compiler_version(self, compiler_name: str) -> str:
> +        match compiler_name:
> +            case "gcc":
> +                return self.send_command(f"{compiler_name} --version", 60).stdout.split(
> +                    "\n"
> +                )[0]
> +            case "clang":
> +                return self.send_command(f"{compiler_name} --version", 60).stdout.split(
> +                    "\n"
> +                )[0]
> +            case "msvc":
> +                return self.send_command("cl", 60).stdout
> +            case "icc":
> +                return self.send_command(f"{compiler_name} -V", 60).stdout
> +            case _:
> +                raise ValueError(f"Unknown compiler {compiler_name}")
> +
> +    def get_os_name(self) -> str:
> +        return self.send_command(
> +            "awk -F= '$1==\"NAME\" {print $2}' /etc/os-release", 60
> +        ).stdout
> +
> +    def get_os_version(self) -> str:
> +        return self.send_command(
> +            "awk -F= '$1==\"VERSION\" {print $2}' /etc/os-release", 60, True
> +        ).stdout
> +
> +    def get_kernel_version(self) -> str:
> +        return self.send_command("uname -r", 60).stdout

These 60 second timeouts seem high to me. All of these commands should execute quickly, so the default timeout seems fine to me.

That makes sense to me, no sense in making it too long.

> diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
> index 8a151221..ef7b4851 100644
> --- a/dts/framework/remote_session/remote/__init__.py
> +++ b/dts/framework/remote_session/remote/__init__.py
> @@ -1,16 +1,28 @@
>  # SPDX-License-Identifier: BSD-3-Clause
>  # Copyright(c) 2023 PANTHEON.tech s.r.o.
> +# Copyright(c) 2022-2023 University of New Hampshire
>
>  # pylama:ignore=W0611
>
> +from paramiko import AutoAddPolicy, SSHClient
> +
>  from framework.config import NodeConfiguration
>  from framework.logger import DTSLOG
>
> +from .interactive_remote_session import InteractiveRemoteSession
> +from .interactive_shell import InteractiveShell
>  from .remote_session import CommandResult, RemoteSession
>  from .ssh_session import SSHSession
> +from .testpmd_shell import TestPmdShell
>
>
>  def create_remote_session(
>      node_config: NodeConfiguration, name: str, logger: DTSLOG
>  ) -> RemoteSession:
>      return SSHSession(node_config, name, logger)
> +
> +
> +def create_interactive_session(
> +    node_config: NodeConfiguration, name: str, logger: DTSLOG
> +) -> InteractiveRemoteSession:
> +    return InteractiveRemoteSession(node_config, logger)
> diff --git a/dts/framework/remote_session/remote/interactive_remote_session.py b/dts/framework/remote_session/remote/interactive_remote_session.py
> new file mode 100644
> index 00000000..4e88308a
> --- /dev/null
> +++ b/dts/framework/remote_session/remote/interactive_remote_session.py
> @@ -0,0 +1,113 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2023 University of New Hampshire
> +
> +import socket
> +import traceback
> +from pathlib import PurePath
> +
> +from paramiko import AutoAddPolicy, SSHClient, Transport
> +from paramiko.ssh_exception import (
> +    AuthenticationException,
> +    BadHostKeyException,
> +    NoValidConnectionsError,
> +    SSHException,
> +)
> +
> +from framework.config import InteractiveApp, NodeConfiguration
> +from framework.exception import SSHConnectionError
> +from framework.logger import DTSLOG
> +from framework.settings import SETTINGS
> +
> +from .interactive_shell import InteractiveShell
> +from .testpmd_shell import TestPmdShell
> +
> +
> +class InteractiveRemoteSession:
> +    hostname: str
> +    ip: str
> +    port: int
> +    username: str
> +    password: str
> +    _logger: DTSLOG
> +    _node_config: NodeConfiguration
> +    session: SSHClient
> +    _transport: Transport | None
> +
> +    def __init__(self, node_config: NodeConfiguration, _logger: DTSLOG) -> None:
> +        self._node_config = node_config
> +        self._logger = _logger
> +        self.hostname = node_config.hostname
> +        self.username = node_config.user
> +        self.password = node_config.password if node_config.password else ""
> +        port = 22

framework/remote_session/remote/interactive_remote_session.py:45: error: Incompatible types in assignment (expression has type "str", variable has type "int")

We can address this by making 22 a string (port = "22").

I'll fix this as well.

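With that change the relevant lines would look like this:

port = "22"  # default kept as a string so the split() assignment below type-checks
self.ip = node_config.hostname
if ":" in node_config.hostname:
    self.ip, port = node_config.hostname.split(":")
self.port = int(port)
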

> +        self.ip = node_config.hostname
> +        if ":" in node_config.hostname:
> +            self.ip, port = node_config.hostname.split(":")
> +        self.port = int(port)
> +        self._logger.info(
> +            f"Initializing interactive connection for {self.username}@{self.hostname}"
> +        )
> +        self._connect()
> +        self._logger.info(
> +            f"Interactive connection successful for {self.username}@{self.hostname}"
> +        )
> +
> +    def _connect(self) -> None:
> +        client = SSHClient()
> +        client.set_missing_host_key_policy(AutoAddPolicy)
> +        self.session = client
> +        retry_attempts = 10
> +        for retry_attempt in range(retry_attempts):
> +            try:
> +                client.connect(
> +                    self.ip,
> +                    username=self.username,
> +                    port=self.port,
> +                    password=self.password,
> +                    timeout=20 if self.port else 10,
> +                )
> +            except (TypeError, BadHostKeyException, AuthenticationException) as e:
> +                self._logger.exception(e)
> +                raise SSHConnectionError(self.hostname) from e
> +            except (NoValidConnectionsError, socket.error, SSHException) as e:
> +                self._logger.debug(traceback.format_exc())
> +                self._logger.warning(e)
> +                self._logger.info(
> +                    "Retrying interactive session connection: "
> +                    f"retry number {retry_attempt + 1}"
> +                )
> +            else:
> +                break
> +        else:
> +            raise SSHConnectionError(self.hostname)
> +        # Interactive sessions are used on an "as needed" basis so we have
> +        # to set a keepalive
> +        self._transport = self.session.get_transport()
> +        if self._transport is not None:
> +            self._transport.set_keepalive(30)
> +
> +    def create_interactive_shell(
> +        self,
> +        shell_type: InteractiveApp,
> +        path_to_app: PurePath,
> +        timeout: float = SETTINGS.timeout,
> +        eal_parameters: str = "",
> +        app_parameters="",
> +    ) -> InteractiveShell:
> +        """
> +        See "create_interactive_shell" in SutNode
> +        """
> +        match (shell_type.name):
> +            case "shell":

Comparing with the enum values seems better.

Good point, that does seem like it would be more consistent; I'll make the change.

> +                return InteractiveShell(
> +                    self.session, self._logger, path_to_app, timeout
> +                )
> +            case "testpmd":
> +                return TestPmdShell(
> +                    self.session,
> +                    self._logger,
> +                    path_to_app,
> +                    timeout=timeout,
> +                    eal_flags=eal_parameters,
> +                    cmd_line_options=app_parameters,
> +                )

framework/remote_session/remote/interactive_remote_session.py:89: error: Missing return statement

This is probably because we don't have a case for the rest of the values.

You're right, I didn't handle the default case. I'll change it to default to returning a base interactive shell, with a message explaining that the type wasn't handled.

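Putting this and the previous comment about comparing enum values together, the dispatch could end up looking like this (a sketch; the log message wording is just an example):

match shell_type:
    case InteractiveApp.testpmd:
        return TestPmdShell(
            self.session,
            self._logger,
            path_to_app,
            timeout=timeout,
            eal_flags=eal_parameters,
            cmd_line_options=app_parameters,
        )
    case _:
        # every branch returns, which also addresses the mypy error above
        self._logger.info(f"Unhandled app type {shell_type}, defaulting to a plain shell.")
        return InteractiveShell(self.session, self._logger, path_to_app, timeout)
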
> diff --git a/dts/framework/remote_session/remote/interactive_shell.py b/dts/framework/remote_session/remote/interactive_shell.py
> new file mode 100644
> index 00000000..d98503be
> --- /dev/null
> +++ b/dts/framework/remote_session/remote/interactive_shell.py
> @@ -0,0 +1,98 @@
> +from pathlib import PurePath
> +
> +from paramiko import Channel, SSHClient, channel
> +
> +from framework.logger import DTSLOG
> +from framework.settings import SETTINGS
> +
> +
> +class InteractiveShell:
> +
> +    _interactive_session: SSHClient
> +    _stdin: channel.ChannelStdinFile
> +    _stdout: channel.ChannelFile
> +    _ssh_channel: Channel
> +    _logger: DTSLOG
> +    _timeout: float
> +
> +    def __init__(
> +        self,
> +        interactive_session: SSHClient,
> +        logger: DTSLOG,
> +        path_to_app: PurePath | None = None,
> +        timeout: float = SETTINGS.timeout,
> +    ) -> None:
> +        self._interactive_session = interactive_session
> +        self._ssh_channel = self._interactive_session.invoke_shell()
> +        self._stdin = self._ssh_channel.makefile_stdin("w")
> +        self._stdout = self._ssh_channel.makefile("r")
> +        self._ssh_channel.settimeout(timeout)
> +        self._ssh_channel.set_combine_stderr(True)  # combines stdout and stderr streams
> +        self._logger = logger
> +        self._timeout = timeout
> +        if path_to_app:
> +            self.send_command_no_output(str(path_to_app))  # starts the application

This should be part of the derived class - the code to start an app is going to be app-specific. Since this is an interactive session, the assumption is that we're always going to start an app and then work with it, so starting the app in the constructor should be safe.
We actually have that for testpmd, so we just need to unify the above with what we have.

That makes sense. The only reason I didn't do this before was that I was starting the application differently in testpmd (with EAL flags and command line options), so I would call the super init() without this path. However, I can change this by just adding a method called _start_application() that can be overridden in the subclasses, which will clean this process up quite a bit.

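Sketched out, the hook would be something like this (attribute names here are only illustrative):

# InteractiveShell: called at the end of __init__
def _start_application(self) -> None:
    self.send_command_no_output(str(self._path_to_app))

# TestPmdShell: overridden to add the EAL flags and the interactive -i option
def _start_application(self) -> None:
    self.send_command_get_output(
        f"{self._path_to_app} {self._eal_flags} -- -i {self._cmd_line_options}",
        self.expected_prompt,
    )
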
> +
> +    def empty_stdout_buffer(self) -> None:

Same comment on ordering as above.

> +        """Removes all data from the stdout buffer.
> +
> +        Because of the way paramiko handles read buffers, there is no way to effectively
> +        remove data from, or "flush", read buffers. This method essentially moves our
> +        offset on the buffer to the end and thus "removes" the data from the buffer.
> +        Timeouts are thrown on read operations of paramiko pipes based on whether data
> +        had been received before timeout so we assume that if we reach the timeout then
> +        we are at the end of the buffer.
> +        """
> +        self._ssh_channel.settimeout(1)

Waiting a whole second seems to be a lot. We actually may not need this method from the use in the code - that is, if we change how the app starts.

We will still need this code whenever we send a command and don't collect its output. What it is doing is essentially moving the pointer for the output buffer to the end of the file. This is needed because, if you send a command and don't want its output but we don't move past that output in the buffer, it will persist and bleed into the next command where you do want to collect output. Having this method lets you know you are starting from an empty buffer when you collect the output of a command.

As for the timing, I could cut this down to half a second; it just increases the chance that a command that takes longer to produce its output will still contaminate the buffer. If this causes a problem, we could always increase the time in the future.

> +        try:
> +            for line in self._stdout:
> +                pass
> +        except TimeoutError:
> +            pass
> +        self._ssh_channel.settimeout(self._timeout)  # reset timeout
> +
> +    def send_command_no_output(self, command: str) -> None:
> +        """Send command to channel without recording output.
> +
> +        This method will not verify any input or output, it will simply assume the
> +        command succeeded. This method will also consume all output in the buffer
> +        after executing the command.
> +        """
> +        self._logger.info(
> +            f"Sending command {command.strip()} and not collecting output"
> +        )
> +        self._stdin.write(f"{command}\n")
> +        self._stdin.flush()
> +        self.empty_stdout_buffer()
> +
> +    def send_command_get_output(self, command: str, prompt: str) -> str:
> +        """Send a command and get all output before the expected ending string.
> +
> +        Lines that expect input are not included in the stdout buffer so they cannot be
> +        used for expect. For example, if you were prompted to log into something
> +        with a username and password, you cannot expect "username:" because it won't
> +        yet be in the stdout buffer. A work around for this could be consuming an
> +        extra newline character to force the current prompt into the stdout buffer.
> +
> +        Returns:
> +            All output in the buffer before expected string
> +        """
> +        self._logger.info(f"Sending command {command.strip()}...")
> +        self._stdin.write(f"{command}\n")
> +        self._stdin.flush()
> +        out: str = ""
> +        for line in self._stdout:
> +            out += line
> +            if prompt in line and not line.rstrip().endswith(
> +                command.rstrip()
> +            ):  # ignore line that sent command
> +                break
> +        self._logger.debug(f"Got output: {out}")
> +        return out
> +
> +    def close(self) -> None:
> +        self._stdin.close()
> +        self._ssh_channel.close()
> +
> +    def __del__(self) -> None:
> +        self.close()
> diff --git a/dts/framework/remote_session/remote/testpmd_shell.py b/dts/framework/remote_session/remote/testpmd_shell.py
> new file mode 100644
> index 00000000..873f3c82
> --- /dev/null
> +++ b/dts/framework/remote_session/remote/testpmd_shell.py
> @@ -0,0 +1,58 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2023 University of New Hampshire
> +
> +
> +from pathlib import PurePath
> +
> +from paramiko import SSHClient
> +
> +from framework.logger import DTSLOG
> +from framework.settings import SETTINGS
> +
> +from .interactive_shell import InteractiveShell
> +
> +
> +class TestPmdShell(InteractiveShell):
> +    expected_prompt: str = "testpmd>"
> +
> +    def __init__(
> +        self,
> +        interactive_session: SSHClient,
> +        logger: DTSLOG,
> +        path_to_testpmd: PurePath,
> +        eal_flags: str = "",
> +        cmd_line_options: str = "",
> +        timeout: float = SETTINGS.timeout,
> +    ) -> None:
> +        """Initializes an interactive testpmd session using specified parameters."""
> +        super(TestPmdShell, self).__init__(
> +            interactive_session, logger=logger, timeout=timeout
> +        )
> +        self.send_command_get_output(
> +            f"{path_to_testpmd} {eal_flags} -- -i {cmd_line_options}\n",
> +            self.expected_prompt,
> +        )
> +
> +    def send_command(self, command: str, prompt: str = expected_prompt) -> str:
> +        """Specific way of handling the command for testpmd
> +
> +        An extra newline character is consumed in order to force the current line into
> +        the stdout buffer.
> +        """
> +        return self.send_command_get_output(f"{command}\n", prompt)
> +
> +    def get_devices(self) -> list[str]:
> +        """Get a list of device names that are known to testpmd
> +
> +        Uses the device info listed in testpmd and then parses the output to
> +        return only the names of the devices.
> +
> +        Returns:
> +            A list of strings representing device names (e.g. 0000:14:00.1)
> +        """
> +        dev_info: str = self.send_command("show device info all")
> +        dev_list: list[str] = []
> +        for line in dev_info.split("\n"):
> +            if "device name:" in line.lower():
> +                dev_list.append(line.strip().split(": ")[1].strip())
> +        return dev_list

This could be an object (or a list of TestPmdDevice objects or something similar). We could start with just the name and add to it in the future.

I think for the scope of smoke tests it would make sense to leave this as just the names for now, because all we are doing is making sure we can discover them and that they appear. I do agree that it could be useful to have more info, but it would likely make more sense to add to it in the future when we need that additional information.

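If we do extend it later, a minimal version could be something like this (hypothetical, not part of this patch):

from dataclasses import dataclass

@dataclass
class TestPmdDevice:
    pci_address: str  # e.g. "0000:14:00.1"; more fields can be added when needed

def get_devices(self) -> list[TestPmdDevice]:
    dev_info = self.send_command("show device info all")
    return [
        TestPmdDevice(line.strip().split(": ")[1].strip())
        for line in dev_info.split("\n")
        if "device name:" in line.lower()
    ]
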
> diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
> index 74391982..029665fb 100644
> --- a/dts/framework/test_result.py
> +++ b/dts/framework/test_result.py
> @@ -1,5 +1,6 @@
>  # SPDX-License-Identifier: BSD-3-Clause
>  # Copyright(c) 2023 PANTHEON.tech s.r.o.
> +# Copyright(c) 2022-2023 University of New Hampshire
>
>  """
>  Generic result container and reporters
> @@ -8,14 +9,17 @@
>  import os.path
>  from collections.abc import MutableSequence
>  from enum import Enum, auto
> +from typing import Dict

No need, we can just use the built-in dict (since Python 3.9).

>
>  from .config import (
>      OS,
>      Architecture,
>      BuildTargetConfiguration,
> +    BuildTargetVersionInfo,
>      Compiler,
>      CPUType,
>      NodeConfiguration,
> +    NodeVersionInfo,
>  )
>  from .exception import DTSError, ErrorSeverity
>  from .logger import DTSLOG
> @@ -67,12 +71,14 @@ class Statistics(dict):
>      Using a dict provides a convenient way to format the data.
>      """
>
> -    def __init__(self, dpdk_version):
> +    def __init__(self, output_info: Dict[str, str] | None):
>          super(Statistics, self).__init__()
>          for result in Result:
>              self[result.name] = 0
>          self["PASS RATE"] = 0.0
> -        self["DPDK VERSION"] = dpdk_version
> +        if output_info:
> +            for info_key, info_val in output_info.items():
> +                self[info_key] = info_val
>
>      def __iadd__(self, other: Result) -> "Statistics":
>          """
> @@ -206,6 +212,8 @@ class BuildTargetResult(BaseResult):
>      os: OS
>      cpu: CPUType
>      compiler: Compiler
> +    compiler_version: str | None
> +    dpdk_version: str | None
>
>      def __init__(self, build_target: BuildTargetConfiguration):
>          super(BuildTargetResult, self).__init__()
> @@ -213,6 +221,12 @@ def __init__(self, build_target: BuildTargetConfiguration):
>          self.os = build_target.os
>          self.cpu = build_target.cpu
>          self.compiler = build_target.compiler
> +        self.compiler_version = None
> +        self.dpdk_version = None
> +
> +    def add_build_target_versions(self, versions: BuildTargetVersionInfo) -> None:
> +        self.compiler_version = versions.compiler_version
> +        self.dpdk_version = versions.dpdk_version
>
>      def add_test_suite(self, test_suite_name: str) -> TestSuiteResult:
>          test_suite_result = TestSuiteResult(test_suite_name)
> @@ -228,10 +242,17 @@ class ExecutionResult(BaseResult):
>      """
>
>      sut_node: NodeConfiguration
> +    sut_os_name: str
> +    sut_os_version: str
> +    sut_kernel_version: str
>
> -    def __init__(self, sut_node: NodeConfiguration):
> +    def __init__(self, sut_node: NodeConfiguration, sut_version_info: NodeVersionInfo):
>          super(ExecutionResult, self).__init__()
>          self.sut_node = sut_node
> +        self.sut_version_info = sut_version_info
> +        self.sut_os_name = sut_version_info.os_name
> +        self.sut_os_version = sut_version_info.os_version
> +        self.sut_kernel_version = sut_version_info.kernel_version
>
>      def add_build_target(
>          self, build_target: BuildTargetConfiguration
> @@ -258,6 +279,7 @@ class DTSResult(BaseResult):
>      """
>
>      dpdk_version: str | None
> +    output: dict | None
>      _logger: DTSLOG
>      _errors: list[Exception]
>      _return_code: ErrorSeverity
> @@ -267,14 +289,17 @@ class DTSResult(BaseResult):
>      def __init__(self, logger: DTSLOG):
>          super(DTSResult, self).__init__()
>          self.dpdk_version = None
> +        self.output = None
>          self._logger = logger
>          self._errors = []
>          self._return_code = ErrorSeverity.NO_ERR
>          self._stats_result = None
>          self._stats_filename = os.path.join(SETTINGS.output_dir, "statistics.txt")
>
> -    def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult:
> -        execution_result = ExecutionResult(sut_node)
> +    def add_execution(
> +        self, sut_node: NodeConfiguration, sut_version_info: NodeVersionInfo
> +    ) -> ExecutionResult:
> +        execution_result = ExecutionResult(sut_node, sut_version_info)
>          self._inner_results.append(execution_result)
>          return execution_result
>
> @@ -296,7 +321,8 @@ def process(self) -> None:
>              for error in self._errors:
>                  self._logger.debug(repr(error))
>
> -        self._stats_result = Statistics(self.dpdk_version)
> +        self._stats_result = Statistics(self.output)
> +        # add information gathered from the smoke tests to the statistics
>          self.add_stats(self._stats_result)
>          with open(self._stats_filename, "w+") as stats_file:
>              stats_file.write(str(self._stats_result))
> diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
> index 0705f38f..ad66014c 100644
> --- a/dts/framework/test_suite.py
> +++ b/dts/framework/test_suite.py
> @@ -10,11 +10,24 @@
>  import inspect
>  import re
>  from types import MethodType
> +from typing import Dict
>
> -from .exception import ConfigurationError, SSHTimeoutError, TestCaseVerifyError
> +from .config import BuildTargetConfiguration
> +from .exception import (
> +    BlockingTestSuiteError,
> +    ConfigurationError,
> +    SSHTimeoutError,
> +    TestCaseVerifyError,
> +)
>  from .logger import DTSLOG, getLogger
>  from .settings import SETTINGS
> -from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult
> +from .test_result import (
> +    BuildTargetResult,
> +    DTSResult,
> +    Result,
> +    TestCaseResult,
> +    TestSuiteResult,
> +)
>  from .testbed_model import SutNode
>
>
> @@ -37,10 +50,12 @@ class TestSuite(object):
>      """
>
>      sut_node: SutNode
> +    is_blocking = False
>      _logger: DTSLOG
>      _test_cases_to_run: list[str]
>      _func: bool
>      _result: TestSuiteResult
> +    _dts_result: DTSResult
>
>      def __init__(
>          self,
> @@ -48,6 +63,8 @@ def __init__(
>          test_cases: list[str],
>          func: bool,
>          build_target_result: BuildTargetResult,
> +        build_target_conf: BuildTargetConfiguration,
> +        dts_result: DTSResult,
>      ):
>          self.sut_node = sut_node
>          self._logger = getLogger(self.__class__.__name__)
> @@ -55,6 +72,8 @@ def __init__(
>          self._test_cases_to_run.extend(SETTINGS.test_cases)
>          self._func = func
>          self._result = build_target_result.add_test_suite(self.__class__.__name__)
> +        self.build_target_info = build_target_conf
> +        self._dts_result = dts_result
>
>      def set_up_suite(self) -> None:
>          """
> @@ -118,6 +137,8 @@ def run(self) -> None:
>                      f"the next test suite may be affected."
>                  )
>                  self._result.update_setup(Result.ERROR, e)
> +            if len(self._result.get_errors()) > 0 and self.is_blocking:
> +                raise BlockingTestSuiteError(test_suite_name)
>
>      def _execute_test_suite(self) -> None:
>          """
> @@ -232,6 +253,12 @@ def _execute_test_case(
>              test_case_result.update(Result.SKIP)
>              raise KeyboardInterrupt("Stop DTS")
>
> +    def write_to_statistics_file(self, output: Dict[str, str]):

This isn't used anywhere as far as I can tell.

It was previously, but when we discussed removing the statistics output I left this in. You're right though; thinking about it now, it doesn't make sense to leave it in until we reformat statistics.txt.

> +        if self._dts_result.output is not None:
> +            self._dts_result.output.update(output)
> +        else:
> +            self._dts_result.output = output
> +

This could be simplified if we init output to an empty dict in the constructor.

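e.g. (sketch of that simplification):

# in DTSResult.__init__:
self.output: dict[str, str] = {}

# the update in TestSuite then becomes unconditional:
self._dts_result.output.update(output)
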
>
>  def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
>      def is_test_suite(object) -> bool:
> diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
> index d48fafe6..c5147e0e 100644
> --- a/dts/framework/testbed_model/node.py
> +++ b/dts/framework/testbed_model/node.py
> @@ -40,6 +40,7 @@ class Node(object):
>      lcores: list[LogicalCore]
>      _logger: DTSLOG
>      _other_sessions: list[OSSession]
> +    _execution_config: ExecutionConfiguration
>
>      def __init__(self, node_config: NodeConfiguration):
>          self.config = node_config
> @@ -64,6 +65,7 @@ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
>          """
>          self._setup_hugepages()
>          self._set_up_execution(execution_config)
> +        self._execution_config = execution_config
>
>      def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
>          """
> diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
> index 2b2b50d9..27849da7 100644
> --- a/dts/framework/testbed_model/sut_node.py
> +++ b/dts/framework/testbed_model/sut_node.py
> @@ -1,14 +1,21 @@
>  # SPDX-License-Identifier: BSD-3-Clause
>  # Copyright(c) 2010-2014 Intel Corporation
>  # Copyright(c) 2023 PANTHEON.tech s.r.o.
> +# Copyright(c) 2022-2023 University of New Hampshire
>
>  import os
>  import tarfile
>  import time
>  from pathlib import PurePath
>
> -from framework.config import BuildTargetConfiguration, NodeConfiguration
> -from framework.remote_session import CommandResult, OSSession
> +from framework.config import (
> +    BuildTargetConfiguration,
> +    BuildTargetVersionInfo,
> +    InteractiveApp,
> +    NodeConfiguration,
> +    NodeVersionInfo,
> +)
> +from framework.remote_session import CommandResult, InteractiveShell, OSSession
>  from framework.settings import SETTINGS
>  from framework.utils import EnvVarsDict, MesonArgs
>
> @@ -26,13 +33,17 @@ class SutNode(Node):
>
>      _dpdk_prefix_list: list[str]
>      _dpdk_timestamp: str
> -    _build_target_config: BuildTargetConfiguration | None
> +    _build_target_config: BuildTargetConfiguration

What's the reason for this change? My guess would be mypy, but the change still results in mypy complaining:
framework/testbed_model/sut_node.py:51: error: Incompatible types in assignment (expression has type "None", variable has type "BuildTargetConfiguration")

I would guess it was for mypy as well, which I got to pass before I submitted the patch, but if there is still an error with it then I will revert.

>      _env_vars: EnvVarsDict
>      _remote_tmp_dir: PurePath
>      __remote_dpdk_dir: PurePath | None
> -    _dpdk_version: str | None
>      _app_compile_timeout: float
>      _dpdk_kill_session: OSSession | None
> +    _dpdk_version: str | None
> +    _os_name: str | None
> +    _os_version: str | None
> +    _kernel_version: str | None
> +    _compiler_version: str | None
>
>      def __init__(self, node_config: NodeConfiguration):
>          super(SutNode, self).__init__(node_config)
> @@ -41,12 +52,16 @@ def __init__(self, node_config: NodeConfiguration):
>          self._env_vars = EnvVarsDict()
>          self._remote_tmp_dir = self.main_session.get_remote_tmp_dir()
>          self.__remote_dpdk_dir = None
> -        self._dpdk_version = None
>          self._app_compile_timeout = 90
>          self._dpdk_kill_session = None
>          self._dpdk_timestamp = (
>              f"{str(os.getpid())}_{time.strftime('%Y%m%d%H%M%S', time.localtime())}"
>          )
> +        self._dpdk_version = None
> +        self._os_name = None
> +        self._os_version = None
> +        self._kernel_version = None
> +        self._compiler_version = None
>
>      @property
>      def _remote_dpdk_dir(self) -> PurePath:
> @@ -75,6 +90,44 @@ def dpdk_version(self) -> str:
>              )
>          return self._dpdk_version
>
> +    @property
> +    def os_name(self) -> str:
> +        if self._os_name is None:
> +            self._os_name = self.main_session.get_os_name()
> +        return self._os_name
> +
> +    @property
> +    def os_version(self) -> str:
> +        if self._os_version is None:
> +            self._os_version = self.main_session.get_os_version()
> +        return self._os_version
> +
> +    @property
> +    def kernel_version(self) -> str:
> +        if self._kernel_version is None:
> +            self._kernel_version = self.main_session.get_kernel_version()
> +        return self._kernel_version
> +
> +    @property
> +    def compiler_version(self) -> str:
> +        if self._compiler_version is None:
> +            self._compiler_version = self.main_session.get_compiler_version(
> +                self._build_target_config.compiler.name
> +            )
> +        return self._compiler_version
> +
> +    def get_node_versions(self) -> NodeVersionInfo:
> +        return NodeVersionInfo(
> +            os_name=self.os_name,
> +            os_version=self.os_version,
> +            kernel_version=self.kernel_version,
> +        )
> +
> +    def get_build_target_versions(self) -> BuildTargetVersionInfo:
> +        return BuildTargetVersionInfo(
> +            dpdk_version=self.dpdk_version, compiler_version=self.compiler_version
> +        )
> +
>      def _guess_dpdk_remote_dir(self) -> PurePath:
>          return self.main_session.guess_dpdk_remote_dir(self._remote_tmp_dir)
>
> @@ -84,6 +137,10 @@ def _set_up_build_target(
>          """
>          Setup DPDK on the SUT node.
>          """
> +        # we want to ensure that dpdk_version and compiler_version is reset for new
> +        # build targets
> +        self._dpdk_version = None
> +        self._compiler_version = None
>          self._configure_build_target(build_target_config)
>          self._copy_dpdk_tarball()
>          self._build_dpdk()
> @@ -262,6 +319,49 @@ def run_dpdk_app(
>              f"{app_path} {eal_args}", timeout, verify=True
>          )
>
> +    def create_interactive_shell(
> +        self,
> +        shell_type: InteractiveApp,
> +        path_to_app: PurePath | None = None,

Can we join shell_type and path_to_app together? Both contain the same type of information.
If we can't, let's drop path_to_app, as we don't provide (in this patch) a way for users to modify this. We can add it later if needed.

The idea behind it was allowing users to specify where their application is when they call this method. Users can pass a path into this method if their application isn't at the default path, which I think is a good addition to have right now in case a user has their binary in a different location.

> +        timeout: float = SETTINGS.timeout,
> +        eal_parameters: str = "",
Any reason we're not using the EalParameters class here?

I didn't use it originally because it requires an lcore list and memory channels if you provide them. Although, I could use EalParameters and just make those parameters optional, which would have the same effect but be more structured, so I will make that change.

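A minimal sketch of what making them optional could look like (only the two parameters mentioned above are shown; the real class has more):

class EalParameters(object):
    def __init__(
        self,
        lcore_list: str | None = None,
        memory_channels: int | None = None,
    ) -> None:
        self._lcore_list = f"-l {lcore_list}" if lcore_list else ""
        self._memory_channels = f"-n {memory_channels}" if memory_channels else ""

    def __str__(self) -> str:
        return f"{self._lcore_list} {self._memory_channels}".strip()
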
> +        app_parameters="",

This is also currently not used, so maybe we should drop it and add it in the future, probably as a dedicated object representing the parameters.

That makes sense; you're right that a more structured class for providing parameters would work better. The difficulty with a dedicated object, however, is that different applications take different parameters, so you couldn't get very granular. Regardless, it can be removed from this patch for now and added back in the future.

> +    ) -> InteractiveShell:

This should return a Union of all the different sessions we support, otherwise mypy complains:
tests/TestSuite_smoke_tests.py:68: error: Incompatible types in assignment (expression has type "InteractiveShell", variable has type "TestPmdShell")

I figured this would be allowed because they are subclasses and didn't realize mypy would have a problem with it. I will fix this in the next version.

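For illustration, the suggested annotation would be along these lines (signature trimmed):

# from typing import Union
def create_interactive_shell(
    self, shell_type: InteractiveApp, timeout: float = SETTINGS.timeout
) -> Union[InteractiveShell, TestPmdShell]:
    ...
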
> +        """Create a handler for an interactive session.
> +
> +        This method is a factory that calls a method in OSSession to create shells for
> +        different DPDK applications.
> +
> +        Args:
> +            shell_type: Enum value representing the desired application.
> +            path_to_app: Represents a path to the application you are attempting to
> +                launch. This path will be executed at the start of the app
> +                initialization. If one isn't provided, the default specified in the
> +                enumeration will be used.
> +            timeout: Timeout for the ssh channel

We should specify what sort of timeout this is - for creating the channel or for sent commands?

Very good point, this is pretty vague if you weren't the one who wrote it. I'll make sure to clear this up.

> +            eal_parameters: List of EAL parameters to use to launch the app. This is
> +                ignored for base "shell" types.
> +            app_parameters: Command-line flags to pass into the application on launch.
> +        Returns:
> +            Instance of the desired interactive application.
> +        """
> +        default_path: PurePath | None
> +        # if we just want a default shell, there is no need to append the DPDK build
> +        # directory to the path
> +        if shell_type.name == "shell":
> +            default_path = None
> +        else:
> +            default_path = self.main_session.join_remote_path(
> +                self.remote_dpdk_build_dir, *shell_type.get_path()
> +            )

All this path handling looks clunky, it should be part of the InteractiveApp class. Would adding something like update_path to InteractiveApp work?

I agree this was a little messy, but after changing the default paths to PurePaths I could clean this up more by just defaulting to shell_type.get_path() and then having a single if statement that checks if it isn't the plain shell type and, if so, sets the default path by appending the build dir. It would look something more like this:

default_path = shell_type.get_path()

if shell_type != InteractiveApp.shell:
    default_path = self.main_session.join_remote_path(
        self.remote_dpdk_build_dir, shell_type.get_path()
    )

> +        return self.main_session.interactive_session.create_interactive_shell(
> +            shell_type,
> +            path_to_app or default_path,
> +            timeout,
> +            eal_parameters,
> +            app_parameters,
> +        )
> +
>
>  class EalParameters(object):
>      def __init__(
> diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py
> new file mode 100644
> index 00000000..a2fd9fae
> --- /dev/null
> +++ b/dts/tests/TestSuite_smoke_tests.py
> @@ -0,0 +1,101 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2023 University of New Hampshire
> +
> +from framework.config import InteractiveApp
> +from framework.remote_session import TestPmdShell
> +from framework.test_suite import TestSuite
> +
> +
> +class SmokeTests(TestSuite):
> +    is_blocking = True
> +    # in this list, the first index is the address of the nic and the second is
> +    # the driver for that nic.
> +    list_of_nics: list[tuple[str, str]] = []

Would using a dict make more sense?

I thought a tuple was a good option because of the two pieces of data, but I think a dict would also be useful because it would be clearer when it is accessed later. I think turning this into a dict (and maybe migrating to an object later) is a good idea.

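For example, keyed by PCI address with the driver as the value (the name expected_nics is just illustrative, and renaming would also address the naming comment further down):

# PCI address -> expected driver, replacing the list of tuples
expected_nics: dict[str, str] = {}

def set_up_suite(self) -> None:
    for port in self.sut_node.config.ports:
        self.expected_nics[port.pci] = port.os_driver.strip()
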
> +
> +    def set_up_suite(self) -> None:
> +        """
> +        Setup:
> +            build all DPDK

This needs to be updated.

You're right, I guess I missed updating this.

> +        """
> +        self.dpdk_build_dir_path = self.sut_node.remote_dpdk_build_dir
> +        for nic in self.sut_node.config.ports:
> +            new_tuple = (nic.pci, nic.os_driver.strip())
> +            self.list_of_nics.append(new_tuple)

It would be better to rename this based on what list of nics this is - the one we expect to find or the one we're testing?

Another spot where you're right; it is not nearly as clear if you weren't the one who wrote it. I'll update this as well.

> +
> +    def test_unit_tests(self) -> None:
> +        """
> +        Test:
> +            run the fast-test unit-test suite through meson
> +        """
> +        self.sut_node.main_session.send_command(
> +            f"meson test -C {self.dpdk_build_dir_path} --suite fast-tests",
> +            300,
> +            verify=True,
> +        )
> +
> +    def test_driver_tests(self) -> None:
> +        """
> +        Test:
> +            run the driver-test unit-test suite through meson
> +        """
> +        list_of_vdevs = ""
> +        for dev in self.sut_node._execution_config.vdevs:
> +            list_of_vdevs += f"--vdev {dev} "
> +        list_of_vdevs = list_of_vdevs[:-1]
> +        if list_of_vdevs:
> +            self._logger.info(
> +                "Running driver tests with the following virtual "
> +                f"devices: {list_of_vdevs}"
> +            )
> +            self.sut_node.main_session.send_command(
> +                f"meson test -C {self.dpdk_build_dir_path} --suite driver-tests "
> +                f'--test-args "{list_of_vdevs}"',
> +                300,
> +                verify=True,
> +            )
> +        else:
> +            self.sut_node.main_session.send_command(
> +                f"meson test -C {self.dpdk_build_dir_path} --suite driver-tests",
> +                300,
> +                verify=True,
> +            )
> +
> +    def test_devices_listed_in_testpmd(self) -> None:
> +        """
> +        Test:
> +            Uses testpmd driver to verify that devices have been found by testpmd
> +        """

Every other test description is written in the imperative with a lowercase first letter. Let's unify everything to the imperative with an uppercase first letter, ending each sentence properly with a dot.

Sounds good to me, I'll reformat them.

> +        testpmd_driver: TestPmdShell = self.sut_node.create_interactive_shell(
> +            InteractiveApp.testpmd
> +        )
> +        dev_list: list[str] = testpmd_driver.get_devices()
> +        for nic in self.list_of_nics:

Here's where the more explicit naming would be beneficial - I had to think about what's what.

> +            self.verify(
> +                nic[0] in dev_list,
> +                f"Device {nic[0]} was not listed in testpmd's available devices, "
> +                "please check your configuration",
> +            )
> +
> +    def test_device_bound_to_driver(self) -> None:
> +        """
> +        Test:
> +            ensure that all drivers listed in the config are bound to the correct driver
> +        """
> +        path_to_dev = self.sut_node.main_session.join_remote_path(
> +            self.sut_node._remote_dpdk_dir, "usertools", "dpdk-devbind.py"
> +        )
> +        for nic in self.list_of_nics:
> +            out = self.sut_node.main_session.send_command(
> +                f"{path_to_dev} --status | grep {nic[0]}", 60
> +            )

This may be something we want to put into SutNode, something like get_device_info. We'd only send the command once (not for every NIC as we're doing here) and parse the output.

Similar to the testpmd test above, I think it makes sense to keep the check that devices are bound to the right driver in smoke tests. Once smoke tests have verified the bindings, that check would never be used in SutNode again, because at that point you can trust the config file. Also, if we later change this to check only the NICs used in this execution, as we discussed, you would want to restrict the check to those NICs rather than all of the NICs in the SutNode, so keeping it in smoke tests makes more sense to me.

I do agree, though, that this can be done with only one call to dpdk-devbind, which I wasn't thinking about when writing it; I can refactor that for the next version of the patch.

> +            self.verify(
> +                len(out.stdout) != 0,
> +                f"Failed to find configured device ({nic[0]}) using dpdk-devbind.py",
> +            )
> +            for string in out.stdout.split(" "):
> +                if "drv=" in string:
> +                    self.verify(
> +                        string.split("=")[1] == nic[1],
> +                        f"Driver for device {nic[0]} does not match driver listed in "
> +                        f'configuration (bound to {string.split("=")[1]})',
> +                    )
> --
> 2.41.0
>

Thank you for the feedback, it's always very helpful.

-Jeremy