From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jeremy Spewock
Date: Tue, 11 Jul 2023 14:09:21 -0400
Subject: Re: [PATCH v1 1/2] dts: add smoke tests
To: Juraj Linkeš
Cc: Honnappa.Nagarahalli@arm.com, thomas@monjalon.net, lijuan.tu@intel.com,
 wathsala.vithanage@arm.com, probb@iol.unh.edu, dev@dpdk.org
References: <20230615201318.13359-2-jspewock@iol.unh.edu>
 <20230615201318.13359-3-jspewock@iol.unh.edu>
List-Id: DPDK patches and discussions

Just a couple of responses, but anywhere I didn't respond I also agree.

On Tue, Jul 11, 2023 at 4:17 AM Juraj Linkeš wrote:

> Some more comments below. I agree where I didn't reply.
>
> On Sat, Jul 8, 2023 at 1:07 AM Jeremy Spewock wrote:
> >
> > On Thu, Jul 6, 2023 at 10:54 AM Juraj Linkeš wrote:
> >>
> >> There are mypy errors related to paramiko:
> >> framework/remote_session/remote/interactive_remote_session.py:8: error: Library stubs not installed for "paramiko" (or incompatible with Python 3.10)
> >> framework/remote_session/remote/interactive_remote_session.py:9: error: Library stubs not installed for "paramiko.ssh_exception" (or incompatible with Python 3.10)
> >>
> >> We do this for pexpect:
> >> import pexpect  # type: ignore
> >> from pexpect import pxssh  # type: ignore
> >>
> >> We should do the same for paramiko.
> >
> > Good to know, I'll be sure to change that to make sure it passes.
> >
> >>
> >> There are also some changes which may require coordination. I'm not
> >> sure how we'd do that; the best may be to align that off-list.
> >>
> >> More inline.
> >>
> >> On Thu, Jun 15, 2023 at 10:13 PM wrote:
> >> >
> >> > From: Jeremy Spewock
> >> >
> >> > Adds a new test suite as well as configuration changes needed for running
> >> > smoke tests that verify general environment aspects of the system
> >> > under test. If any of these tests fail, the DTS execution terminates
> >> > as part of a "fail-fast" model.
> >> >
> >> > Signed-off-by: Jeremy Spewock
> >> > ---
> >> >  dts/conf.yaml                                 |  13 ++
> >> >  dts/framework/config/__init__.py              | 114 +++++++++++++--
> >> >  dts/framework/config/conf_yaml_schema.json    | 135 +++++++++++++++++-
> >> >  dts/framework/dts.py                          | 101 ++++++++++---
> >> >  dts/framework/exception.py                    |  12 ++
> >> >  dts/framework/remote_session/__init__.py      |  10 +-
> >> >  dts/framework/remote_session/os_session.py    |  34 ++++-
> >> >  dts/framework/remote_session/posix_session.py |  30 ++++
> >> >  .../remote_session/remote/__init__.py         |  12 ++
> >> >  .../remote/interactive_remote_session.py      | 113 +++++++++++++++
> >> >  .../remote/interactive_shell.py               |  98 +++++++++++++
> >> >  .../remote_session/remote/testpmd_shell.py    |  58 ++++++++
> >> >  dts/framework/test_result.py                  |  38 ++++-
> >> >  dts/framework/test_suite.py                   |  31 +++-
> >> >  dts/framework/testbed_model/node.py           |   2 +
> >> >  dts/framework/testbed_model/sut_node.py       | 110 +++++++++++++-
> >> >  dts/tests/TestSuite_smoke_tests.py            | 101 +++++++++++++
> >> >  17 files changed, 962 insertions(+), 50 deletions(-)
> >> >  create mode 100644 dts/framework/remote_session/remote/interactive_remote_session.py
> >> >  create mode 100644 dts/framework/remote_session/remote/interactive_shell.py
> >> >  create mode 100644 dts/framework/remote_session/remote/testpmd_shell.py
> >> >  create mode 100644 dts/tests/TestSuite_smoke_tests.py
> >> >
> >> > diff --git a/dts/conf.yaml b/dts/conf.yaml
> >> > index a9bd8a3e..03fd57e1 100644
> >> > --- a/dts/conf.yaml
> >> > +++ b/dts/conf.yaml
> >> > @@ -10,6 +10,8 @@ executions:
> >> >          compiler_wrapper: ccache
> >> >      perf: false
> >> >      func: true
> >> > +    vdevs: # names of virtual devices to be used for testing
> >>
> >> Vdevs are optional, let's mention that in the comment as with the
> >> hugepages config.
> >
> > That's a good point, I'll be sure to include that for clarity.
> >
> >> > +      - "crypto_openssl"
> >>
> >> How (or where) is this configured in the original DTS? The vdevs are
> >> only going to be used on SUTs, so it may make sense to put them in node
> >> config. On the other hand, each execution using the SUT may use different
> >> vdevs, so this is probably the best place. Maybe we could move it under
> >> system_under_test in execution? That would make more sense I think - it
> >> would be an addition to the existing SUT configuration (as in, here's where
> >> we augment SUT config specific to the execution).
> >
> > I added it under execution under that same line of thinking that you
> > mentioned: different executions might call for use of different vdevs. It
> > is a good point, however, that these are only going to be used on the SUT,
> > so it makes sense to add it under that section.
> >
> > I think this would potentially be good for other things we mentioned as
> > well, like how to handle ports on the SUT node and whether those should be
> > listed under SUT or execution. I think if we turn this system_under_test
> > key into an object, we can add things to the SUT based on the execution.
> > Then, in the future, we could create a devices object that could be used
> > for CPU devices, NICs, or things of that nature and add them to the SUT
> > based on the execution.

> It seems we have an agreement.
> Let's put vdevs under system_under_test
> and we'll put other execution-specific SUT config there as we add new
> features.

> >> >
> >> >  test_suites:
> >> >    - hello_world
> >> >  system_under_test: "SUT 1"
> >> > @@ -20,6 +22,17 @@ nodes:
> >> >      arch: x86_64
> >> >      os: linux
> >> >      lcores: ""
> >> > +    ports:
> >> > +      - pci: "0000:00:08.0"
> >> > +        dpdk_os_driver: vfio-pci
> >>
> >> We could find a better name for this driver (and maybe even the regular
> >> os driver). It's the OS driver that DPDK uses and there could be better
> >> names - os_dpdk_driver, os_driver_dpdk, os_driver_for_dpdk. Which one do
> >> you like the most? Or maybe some other name? In either case we probably
> >> want to add a comment to both drivers that explains what we mean here.
> >> The name doesn't need to be perfect then. :-) I'll make sure to align
> >> the change with my patch if we change this.
> >
> > I think out of those I like os_driver_for_dpdk, which makes it a little
> > more clear that it is just an OS driver that DPDK is going to use, not
> > something specific to DPDK. I agree that this still isn't perfect, but it
> > is at least more clear, and the comment will help clear it up the rest of
> > the way.

> Makes sense, I'll rename it to os_driver_for_dpdk in my patch.
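For concreteness, the agreed-on shape might look like the following in conf.yaml. This is only a sketch of the proposals under discussion, not merged syntax: the nesting of vdevs under system_under_test and the os_driver_for_dpdk key are the ideas from this thread, and the node_name key is a hypothetical placeholder for however the SUT reference ends up being spelled.

```yaml
executions:
  - system_under_test:
      node_name: "SUT 1"          # hypothetical key for the SUT reference
      vdevs:  # optional; virtual devices to use for this execution
        - "crypto_openssl"
nodes:
  - name: "SUT 1"
    ports:
      - pci: "0000:00:08.0"
        os_driver_for_dpdk: vfio-pci  # driver the kernel binds so DPDK can use the port
        os_driver: i40e               # driver the port normally uses outside DPDK
```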
> >> > +        os_driver: i40e
> >> > +        peer_node: "TG 1"
> >> > +        peer_pci: "0000:00:08.0"
> >> > +      - pci: "0000:00:08.1"
> >> > +        dpdk_os_driver: vfio-pci
> >> > +        os_driver: i40e
> >> > +        peer_node: "TG 1"
> >> > +        peer_pci: "0000:00:08.1"
> >> >      use_first_core: false
> >> >      memory_channels: 4
> >> >      hugepages:  # optional; if removed, will use system hugepage configuration
> >> > diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
> >> > index ebb0823f..74e5158d 100644
> >> > --- a/dts/framework/config/__init__.py
> >> > +++ b/dts/framework/config/__init__.py
> >> > @@ -72,6 +72,21 @@ class HugepageConfiguration:
> >> >      force_first_numa: bool
> >> >
> >> >
> >> > +@dataclass(slots=True, frozen=True)
> >> > +class PortConfig:
> >> > +    id: int
> >>
> >> I removed id in my latest local version as I didn't see a real use for it.
> >>
> >> > +    node: str
> >> > +    pci: str
> >> > +    dpdk_os_driver: str
> >> > +    os_driver: str
> >> > +    peer_node: str
> >> > +    peer_pci: str
> >> > +
> >> > +    @staticmethod
> >> > +    def from_dict(id: int, node: str, d: dict) -> "PortConfig":
> >> > +        return PortConfig(id=id, node=node, **d)
> >>
> >> This would need to be changed if we remove the id.
> >
> > That makes sense, I don't really see a need for it either, so I will
> > also remove it.
> >
> >> > +
> >> > +
> >> >  @dataclass(slots=True, frozen=True)
> >> >  class NodeConfiguration:
> >> >      name: str
> >> > @@ -84,6 +99,7 @@ class NodeConfiguration:
> >> >      use_first_core: bool
> >> >      memory_channels: int
> >> >      hugepages: HugepageConfiguration | None
> >> > +    ports: list[PortConfig]
> >> >
> >> >      @staticmethod
> >> >      def from_dict(d: dict) -> "NodeConfiguration":
> >> > @@ -92,18 +108,46 @@ def from_dict(d: dict) -> "NodeConfiguration":
> >> >          if "force_first_numa" not in hugepage_config:
> >> >              hugepage_config["force_first_numa"] = False
> >> >          hugepage_config = HugepageConfiguration(**hugepage_config)
> >> > +        common_config = {
> >> > +            "name": d["name"],
> >> > +            "hostname": d["hostname"],
> >> > +            "user": d["user"],
> >> > +            "password": d.get("password"),
> >> > +            "arch": Architecture(d["arch"]),
> >> > +            "os": OS(d["os"]),
> >> > +            "lcores": d.get("lcores", "1"),
> >> > +            "use_first_core": d.get("use_first_core", False),
> >> > +            "memory_channels": d.get("memory_channels", 1),
> >> > +            "hugepages": hugepage_config,
> >> > +            "ports": [
> >> > +                PortConfig.from_dict(i, d["name"], port)
> >> > +                for i, port in enumerate(d["ports"])
> >> > +            ],
> >>
> >> This also needs to be modified when removing PortConfig.id:
> >> "ports": [PortConfig.from_dict(d["name"], port) for port in d["ports"]],
> >>
> >> > +        }
> >> > +
> >> > +        return NodeConfiguration(**common_config)
> >> > +
> >> > +
> >> > +@dataclass(slots=True)
> >> > +class NodeVersionInfo:
> >>
> >> This looks like we could use it for all sorts of things in the future,
> >> so I'd just call it NodeInfo.
> >
> > I agree that it could be used for a lot of things. I think I just added
> > the version so it would be more specific, but in hindsight I agree that
> > this could be shortened and used for multiple things.
> >
> >> > +    """Class to hold important versions within the node.
> >> > +
> >> > +    This class, unlike the NodeConfiguration class, cannot be generated at the start.
> >> > +    This is because we need to initialize a connection with the node before we can
> >> > +    collet the information needed in this class. Therefore, it cannot be a part of
> >>
> >> Typo: collect
> >
> > Oops, good catch.
> >
> >> > +    the configuration class above.
> >> > +    """
> >> >
> >> > -        return NodeConfiguration(
> >> > -            name=d["name"],
> >> > -            hostname=d["hostname"],
> >> > -            user=d["user"],
> >> > -            password=d.get("password"),
> >> > -            arch=Architecture(d["arch"]),
> >> > -            os=OS(d["os"]),
> >> > -            lcores=d.get("lcores", "1"),
> >> > -            use_first_core=d.get("use_first_core", False),
> >> > -            memory_channels=d.get("memory_channels", 1),
> >> > -            hugepages=hugepage_config,
> >> > +    os_name: str
> >> > +    os_version: str
> >> > +    kernel_version: str
> >> > +
> >> > +    @staticmethod
> >> > +    def from_dict(d: dict):
> >> > +        return NodeVersionInfo(
> >> > +            os_name=d["os_name"],
> >> > +            os_version=d["os_version"],
> >> > +            kernel_version=d["kernel_version"],
> >> >          )
> >> >
> >> >
> >> > @@ -128,6 +172,24 @@ def from_dict(d: dict) -> "BuildTargetConfiguration":
> >> >          )
> >> >
> >> >
> >> > +@dataclass(slots=True)
> >> > +class BuildTargetVersionInfo:
> >> > +    """Class to hold important versions within the build target.
> >> > +
> >> > +    This is very similar to the NodeVersionInfo class, it just instead holds information
> >> > +    for the build target.
> >> > +    """
> >> > +
> >> > +    dpdk_version: str
> >> > +    compiler_version: str
> >> > +
> >> > +    @staticmethod
> >> > +    def from_dict(d: dict):
> >> > +        return BuildTargetVersionInfo(
> >> > +            dpdk_version=d["dpdk_version"], compiler_version=d["compiler_version"]
> >> > +        )
> >> > +
> >>
> >> Do you expect we'll be storing more fields than just these two (if so,
> >> then we can again rename the class to BuildTargetInfo)?
> >> If it's just these
> >> two, the extra class may not help that much (we could pass around a tuple).
> >>
> >> On the other hand, there's not really a reason to not have this class.
> >> I could see us adding to it, and I like BuildTargetInfo because of that.
> >
> > I figured that it could be useful for storing any extra information in
> > the future and, like you said, it does no harm, so I made it a class. I
> > think keeping it a class just makes it easier to extend in the future,
> > but I'll rename it like the above info class.
> >
> >> > +
> >> >  class TestSuiteConfigDict(TypedDict):
> >> >      suite: str
> >> >      cases: list[str]
> >> > @@ -157,6 +219,8 @@ class ExecutionConfiguration:
> >> >      func: bool
> >> >      test_suites: list[TestSuiteConfig]
> >> >      system_under_test: NodeConfiguration
> >> > +    vdevs: list[str]
> >> > +    skip_smoke_tests: bool
> >>
> >> This should be added to conf.yaml to demonstrate the availability of
> >> the option.
> >
> > Very good point, it would be hard for people to know it was an option
> > otherwise.
> >
> >> >
> >> >      @staticmethod
> >> >      def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
> >> > @@ -167,14 +231,18 @@ def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
> >> >              map(TestSuiteConfig.from_dict, d["test_suites"])
> >> >          )
> >> >          sut_name = d["system_under_test"]
> >> > +        list_of_vdevs = d["vdevs"]
> >> > +        skip_smoke_tests = d["skip_smoke_tests"] if "skip_smoke_tests" in d else False
> >>
> >> Just as with hugepages, we could skip the else branch if we set the
> >> default of skip_smoke_tests to False. I'm not sure which is the better
> >> approach.
> >
> > I am unsure of which is better and it doesn't make a massive difference,
> > but I will change this to a .get() call where the default is False to
> > more closely align with the hugepages configuration.

> That seems to be the best option.
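The .get() variant being agreed on could look like this minimal sketch. The helper name parse_execution_flags is hypothetical, introduced only to make the fragment self-contained; the field names come from the patch.

```python
def parse_execution_flags(d: dict) -> tuple[bool, list[str]]:
    """Pull optional execution settings out of a raw config dict.

    skip_smoke_tests defaults to False when the key is absent, mirroring
    the hugepages handling, so no explicit if/else branch is needed.
    """
    skip_smoke_tests = d.get("skip_smoke_tests", False)
    vdevs = d.get("vdevs", [])
    return skip_smoke_tests, vdevs
```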
> >
> >> >
> >> >          assert sut_name in node_map, f"Unknown SUT {sut_name} in execution {d}"
> >> >
> >> >          return ExecutionConfiguration(
> >> >              build_targets=build_targets,
> >> >              perf=d["perf"],
> >> >              func=d["func"],
> >> > +            skip_smoke_tests=skip_smoke_tests,
> >> >              test_suites=test_suites,
> >> >              system_under_test=node_map[sut_name],
> >> > +            vdevs=list_of_vdevs,
> >> >          )
> >> >
> >> >
> >> > @@ -221,3 +289,27 @@ def load_config() -> Configuration:
> >> >
> >> >
> >> >  CONFIGURATION = load_config()
> >> > +
> >> > +
> >> > +@unique
> >> > +class InteractiveApp(Enum):
> >> > +    """An enum that represents different supported interactive applications
> >> > +
> >> > +    The values in this enum must all be set to objects that have a key called
> >> > +    "default_path" where "default_path" represents an array for the path to the
> >> > +    application. This default path will be passed into the handler class for the
> >> > +    application so that it can start the application. For every key other than
> >> > +    the default shell option, the path will be appended to the path to the DPDK
> >> > +    build directory for the current SUT node.
> >> > +    """
> >> > +
> >> > +    shell = {"default_path": [""]}
> >> > +    testpmd = {"default_path": ["app", "dpdk-testpmd"]}
> >>
> >> The stored path could be a PurePath object. It'll be instantiated as an
> >> OS-specific path, but main_session.join_remote_path will convert it
> >> properly.
> >
> > I was actually thinking about this because I saw it done in a different
> > snippet of DTS code in one of your patches, I believe, and I realized it
> > would be a better approach. I much prefer using PurePath here because it
> > is more clear and feels cleaner, so I will make this change.
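A sketch of what the enum might look like with PurePath values, under the assumption (from the comment above) that OS-specific conversion is handled later by main_session.join_remote_path:

```python
from enum import Enum, unique
from pathlib import PurePath


@unique
class InteractiveApp(Enum):
    """Supported interactive applications, keyed by their default path."""

    shell = PurePath("")
    testpmd = PurePath("app", "dpdk-testpmd")

    def get_path(self) -> PurePath:
        # Relative path of the application; for everything except the
        # plain shell this is later appended to the DPDK build directory
        # of the current SUT node.
        return self.value
```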
> >
> >> > +
> >> > +    def get_path(self):
> >> > +        """A method for getting the default paths of an application
> >> > +
> >> > +        Returns:
> >> > +            String array that represents an OS agnostic path to the application.
> >> > +        """
> >> > +        return self.value["default_path"]
> >> > diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
> >> > index ca2d4a1e..3f7c301a 100644
> >> > --- a/dts/framework/config/conf_yaml_schema.json
> >> > +++ b/dts/framework/config/conf_yaml_schema.json
> >> > @@ -6,6 +6,76 @@
> >> >        "type": "string",
> >> >        "description": "A unique identifier for a node"
> >> >      },
> >> > +    "NIC": {
> >> > +      "type": "string",
> >> > +      "enum": [
> >> > +        "ALL",
> >> > +        "ConnectX3_MT4103",
> >> > +        "ConnectX4_LX_MT4117",
> >> > +        "ConnectX4_MT4115",
> >> > +        "ConnectX5_MT4119",
> >> > +        "ConnectX5_MT4121",
> >> > +        "I40E_10G-10G_BASE_T_BC",
> >> > +        "I40E_10G-10G_BASE_T_X722",
> >> > +        "I40E_10G-SFP_X722",
> >> > +        "I40E_10G-SFP_XL710",
> >> > +        "I40E_10G-X722_A0",
> >> > +        "I40E_1G-1G_BASE_T_X722",
> >> > +        "I40E_25G-25G_SFP28",
> >> > +        "I40E_40G-QSFP_A",
> >> > +        "I40E_40G-QSFP_B",
> >> > +        "IAVF-ADAPTIVE_VF",
> >> > +        "IAVF-VF",
> >> > +        "IAVF_10G-X722_VF",
> >> > +        "ICE_100G-E810C_QSFP",
> >> > +        "ICE_25G-E810C_SFP",
> >> > +        "ICE_25G-E810_XXV_SFP",
> >> > +        "IGB-I350_VF",
> >> > +        "IGB_1G-82540EM",
> >> > +        "IGB_1G-82545EM_COPPER",
> >> > +        "IGB_1G-82571EB_COPPER",
> >> > +        "IGB_1G-82574L",
> >> > +        "IGB_1G-82576",
> >> > +        "IGB_1G-82576_QUAD_COPPER",
> >> > +        "IGB_1G-82576_QUAD_COPPER_ET2",
> >> > +        "IGB_1G-82580_COPPER",
> >> > +        "IGB_1G-I210_COPPER",
> >> > +        "IGB_1G-I350_COPPER",
> >> > +        "IGB_1G-I354_SGMII",
> >> > +        "IGB_1G-PCH_LPTLP_I218_LM",
> >> > +        "IGB_1G-PCH_LPTLP_I218_V",
> >> > +        "IGB_1G-PCH_LPT_I217_LM",
> >> > +        "IGB_1G-PCH_LPT_I217_V",
> >> > +        "IGB_2.5G-I354_BACKPLANE_2_5GBPS",
> >> > +        "IGC-I225_LM",
> >> > +        "IGC-I226_LM",
> >> > +        "IXGBE_10G-82599_SFP",
> >> > +        "IXGBE_10G-82599_SFP_SF_QP",
> >> > +        "IXGBE_10G-82599_T3_LOM",
> >> > +        "IXGBE_10G-82599_VF",
> >> > +        "IXGBE_10G-X540T",
> >> > +        "IXGBE_10G-X540_VF",
> >> > +        "IXGBE_10G-X550EM_A_SFP",
> >> > +        "IXGBE_10G-X550EM_X_10G_T",
> >> > +        "IXGBE_10G-X550EM_X_SFP",
> >> > +        "IXGBE_10G-X550EM_X_VF",
> >> > +        "IXGBE_10G-X550T",
> >> > +        "IXGBE_10G-X550_VF",
> >> > +        "brcm_57414",
> >> > +        "brcm_P2100G",
> >> > +        "cavium_0011",
> >> > +        "cavium_a034",
> >> > +        "cavium_a063",
> >> > +        "cavium_a064",
> >> > +        "fastlinq_ql41000",
> >> > +        "fastlinq_ql41000_vf",
> >> > +        "fastlinq_ql45000",
> >> > +        "fastlinq_ql45000_vf",
> >> > +        "hi1822",
> >> > +        "virtio"
> >> > +      ]
> >> > +    },
> >> > +
> >>
> >> All these NICs may be overkill, do we want to trim them?
> >
> > I think in general the more we have, the better, to make it more
> > universally usable. If a NIC isn't supported by DTS anymore we could pull
> > it out, but I don't see a problem with maintaining a list that has all
> > supported NICs, even if it does end up being long.

> The broader question is what it means for a NIC to be supported in DTS.
> That's a question we should address in the CI/DTS call; in the meantime,
> we could just leave the list as is.

I think this would be a very good thing to bring up and agree that there
should be more discussion on it. It probably is better to leave the list
longer in the meantime, like you were saying.
> >
> >> >
> >> >      "ARCH": {
> >> >        "type": "string",
> >> >        "enum": [
> >> > @@ -94,10 +164,24 @@
> >> >          "amount"
> >> >        ]
> >> >      },
> >> > +    "pci_address": {
> >> > +      "type": "string",
> >> > +      "pattern": "^[\\da-fA-F]{4}:[\\da-fA-F]{2}:[\\da-fA-F]{2}.\\d:?\\w*$"
> >> > +    },
> >> > +    "port_peer_address": {
> >> > +      "description": "Peer is a TRex port, and IXIA port or a PCI address",
> >> > +      "oneOf": [
> >> > +        {
> >> > +          "description": "PCI peer port",
> >> > +          "$ref": "#/definitions/pci_address"
> >> > +        }
> >> > +      ]
> >> > +    },
> >> >      "test_suite": {
> >> >        "type": "string",
> >> >        "enum": [
> >> > -        "hello_world"
> >> > +        "hello_world",
> >> > +        "smoke_tests"
> >>
> >> Is there a reason to provide both this and the execution.skip_smoke_tests
> >> option? If we provide both, we may need to think about what happens when
> >> both are present. I'd say it's best for everyone (users and devs) to have
> >> only one option, and I'd lean towards execution.skip_smoke_tests.
> >
> > Good catch, I meant to remove this. In theory, if you added it in both
> > places it wouldn't cause too much of an issue; you would just be running
> > smoke tests twice in your execution. I will remove it anyway, though.
> >
> >> >        ]
> >> >      },
> >> >      "test_target": {
> >> > @@ -165,6 +249,44 @@
> >> >        },
> >> >        "hugepages": {
> >> >          "$ref": "#/definitions/hugepages"
> >> > +      },
> >> > +      "ports": {
> >> > +        "type": "array",
> >> > +        "items": {
> >> > +          "type": "object",
> >> > +          "description": "Each port should be described on both sides of the connection. This makes configuration slightly more verbose but greatly simplifies implementation. If there are an inconsistencies, then DTS will not run until that issue is fixed. An example inconsistency would be port 1, node 1 says it is connected to port 1, node 2, but port 1, node 2 says it is connected to port 2, node 1.",
> >> > +          "properties": {
> >> > +            "pci": {
> >> > +              "$ref": "#/definitions/pci_address",
> >> > +              "description": "The local PCI address of the port"
> >> > +            },
> >> > +            "dpdk_os_driver": {
> >> > +              "type": "string",
> >> > +              "description": "The driver that the kernel should bind this device to for DPDK to use it. (ex: vfio-pci)"
> >> > +            },
> >> > +            "os_driver": {
> >> > +              "type": "string",
> >> > +              "description": "The driver normally used by this port (ex: i40e)"
> >> > +            },
> >> > +            "peer_node": {
> >> > +              "type": "string",
> >> > +              "description": "The name of the node the peer port is on"
> >> > +            },
> >> > +            "peer_pci": {
> >> > +              "$ref": "#/definitions/pci_address",
> >> > +              "description": "The PCI address of the peer port"
> >> > +            }
> >> > +          },
> >> > +          "additionalProperties": false,
> >> > +          "required": [
> >> > +            "pci",
> >> > +            "dpdk_os_driver",
> >> > +            "os_driver",
> >> > +            "peer_node",
> >> > +            "peer_pci"
> >> > +          ]
> >> > +        },
> >> > +        "minimum": 1
> >> > +      }
> >> >      },
> >> >      "additionalProperties": false,
> >> > @@ -211,6 +333,17 @@
> >> >            ]
> >> >          }
> >> >        },
> >> > +      "vdevs": {
> >> > +        "description": "Names of vdevs to be used in execution",
> >> > +        "type": "array",
> >> > +        "items": {
> >> > +          "type":"string"
> >>
> >> Missing space before "string".
> >>
> >> > +        }
> >> > +      },
> >> > +      "skip_smoke_tests": {
> >> > +        "description": "Optional field that allows you to skip smoke testing",
> >> > +        "type":"boolean"
> >>
> >> Also the missing space.
> >
> > Good call, I'll correct these.
> >
> >> > +      },
> >> >        "system_under_test": {
> >> >          "$ref": "#/definitions/node_name"
> >> >        }
> >> > diff --git a/dts/framework/dts.py b/dts/framework/dts.py
> >> > index 05022845..f76c9ceb 100644
> >> > --- a/dts/framework/dts.py
> >> > +++ b/dts/framework/dts.py
> >> > @@ -5,7 +5,13 @@
> >> >
> >> >  import sys
> >> >
> >> > -from .config import CONFIGURATION, BuildTargetConfiguration, ExecutionConfiguration
> >> > +from .config import (
> >> > +    CONFIGURATION,
> >> > +    BuildTargetConfiguration,
> >> > +    ExecutionConfiguration,
> >> > +    TestSuiteConfig,
> >> > +)
> >> > +from .exception import BlockingTestSuiteError
> >> >  from .logger import DTSLOG, getLogger
> >> >  from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result
> >> >  from .test_suite import get_test_suites
> >> > @@ -82,7 +88,9 @@ def _run_execution(
> >> >      running all build targets in the given execution.
> >> >      """
> >> >      dts_logger.info(f"Running execution with SUT '{execution.system_under_test.name}'.")
> >> > -    execution_result = result.add_execution(sut_node.config)
> >> > +    execution_result = result.add_execution(
> >> > +        sut_node.config, sut_node.get_node_versions()
> >> > +    )
> >> >
> >> >      try:
> >> >          sut_node.set_up_execution(execution)
> >> > @@ -118,14 +126,17 @@ def _run_build_target(
> >> >
> >> >      try:
> >> >          sut_node.set_up_build_target(build_target)
> >> > -        result.dpdk_version = sut_node.dpdk_version
> >> > +        # result.dpdk_version = sut_node.dpdk_version
> >> > +        build_target_result.add_build_target_versions(
> >> > +            sut_node.get_build_target_versions()
> >> > +        )
> >> >          build_target_result.update_setup(Result.PASS)
> >> >      except Exception as e:
> >> >          dts_logger.exception("Build target setup failed.")
> >> >          build_target_result.update_setup(Result.FAIL, e)
> >> >
> >> >      else:
> >> > -        _run_suites(sut_node, execution, build_target_result)
> >> > +        _run_all_suites(sut_node, execution, build_target_result)
> >> >
> >> >      finally:
> >> >          try:
> >> > @@ -136,7 +147,46 @@ def _run_build_target(
> >> >              build_target_result.update_teardown(Result.FAIL, e)
> >> >
> >> >
> >> > -def _run_suites(
> >> > +def _run_single_suite(
> >> > +    sut_node: SutNode,
> >> > +    execution: ExecutionConfiguration,
> >> > +    build_target_result: BuildTargetResult,
> >> > +    test_suite_config: TestSuiteConfig,
> >> > +) -> None:
> >> > +    """Runs a single test suite.
> >> > +
> >> > +    Args:
> >> > +        sut_node: Node to run tests on.
> >> > +        execution: Execution the test case belongs to.
> >> > +        build_target_result: Build target configuration test case is run on
> >> > +        test_suite_config: Test suite configuration
> >> > +
> >> > +    Raises:
> >> > +        BlockingTestSuiteError: If a test suite that was marked as blocking fails.
> >> > +    """
> >> > +    try:
> >> > +        full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}"
> >> > +        test_suite_classes = get_test_suites(full_suite_path)
> >> > +        suites_str = ", ".join((x.__name__ for x in test_suite_classes))
> >> > +        dts_logger.debug(f"Found test suites '{suites_str}' in '{full_suite_path}'.")
> >> > +    except Exception as e:
> >> > +        dts_logger.exception("An error occurred when searching for test suites.")
> >> > +        result.update_setup(Result.ERROR, e)
> >> > +
> >> > +    else:
> >> > +        for test_suite_class in test_suite_classes:
> >> > +            test_suite = test_suite_class(
> >> > +                sut_node,
> >> > +                test_suite_config.test_cases,
> >> > +                execution.func,
> >> > +                build_target_result,
> >> > +                sut_node._build_target_config,
> >> > +                result,
> >> > +            )
> >> > +            test_suite.run()
> >> > +
> >> > +
> >> > +def _run_all_suites(
> >> >      sut_node: SutNode,
> >> >      execution: ExecutionConfiguration,
> >> >      build_target_result: BuildTargetResult,
> >> > @@ -146,27 +196,32 @@ def _run_suites(
> >> >      with possibly only a subset of test cases.
> >> >      If no subset is specified, run all test cases.
> >> >      """
> >> > +    end_build_target = False
> >> > +    smoke_test_config = TestSuiteConfig.from_dict("smoke_tests")
> >> > +    if not execution.skip_smoke_tests:
> >> > +        try:
> >> > +            _run_single_suite(
> >> > +                sut_node, execution, build_target_result, smoke_test_config
> >> > +            )
> >> > +        except BlockingTestSuiteError as e:
> >> > +            dts_logger.exception("Smoke tests failed, stopping build target execution.")
> >> > +            result.add_error(e)
> >> > +            return
> >> >      for test_suite_config in execution.test_suites:
> >> >          try:
> >> > -            full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}"
> >> > -            test_suite_classes = get_test_suites(full_suite_path)
> >> > -            suites_str = ", ".join((x.__name__ for x in test_suite_classes))
> >> > -            dts_logger.debug(
> >> > -                f"Found test suites '{suites_str}' in '{full_suite_path}'."
> >> > +            _run_single_suite(
> >> > +                sut_node, execution, build_target_result, test_suite_config
> >> >              )
> >> > -        except Exception as e:
> >> > -            dts_logger.exception("An error occurred when searching for test suites.")
> >> > -            result.update_setup(Result.ERROR, e)
> >> > -
> >> > -        else:
> >> > -            for test_suite_class in test_suite_classes:
> >> > -                test_suite = test_suite_class(
> >> > -                    sut_node,
> >> > -                    test_suite_config.test_cases,
> >> > -                    execution.func,
> >> > -                    build_target_result,
> >> > -                )
> >> > -                test_suite.run()
> >> > +        except BlockingTestSuiteError as e:
> >> > +            dts_logger.exception(
> >> > +                f"An error occurred within {test_suite_config.test_suite}. "
> >> > +                "Skipping build target..."
> >> > +            )
> >> > +            result.add_error(e)
> >> > +            end_build_target = True
> >> > +        # if a blocking test failed and we need to bail out of suite executions
> >> > +        if end_build_target:
> >> > +            break
> >> >
> >>
> >> This doesn't look great to me. I think we could just add a
> >> list[TestSuiteConfig] parameter to _run_suites.
> >> We could then just pass a
> >> list with/without the smoke TestSuiteConfig, something like:
> >> test_suites = execution.test_suites
> >> if not execution.skip_smoke_tests:
> >>     test_suites[:0] = [TestSuiteConfig.from_dict("smoke_tests")]
> >> _run_suites(sut_node, execution, build_target_result, test_suites)
> >>
> >> A note on method ordering:
> >> _run_single_suite before _run_all_suites was confusing to me, as the
> >> methods are used in the reverse order - _run_build_target calls
> >> _run_all_suites, which only then calls _run_single_suite. I'd like the
> >> methods to be ordered in the order they're used in the code.
> >
> > I agree this was ugly; that's a good way to handle it, and I'll migrate
> > to just adding it to the front of the list. I think I can actually use
> > the execution.test_suites array directly that is already being used and
> > just prepend to the front of it. This way the list of test suites in
> > the execution configuration is accurate to what actually ran. I don't
> > recall why this happened with the method ordering either, but I'll get
> > that corrected as well. I'm thinking it'll look something more like this:

> The example is missing, but not needed - just prepend it to the list if
> possible. My snippet operated under the assumption of config being a
> frozen dataclass, but that may not matter for mutable types.
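The prepend idea from the snippet above, fleshed out into a runnable sketch. The stub TestSuiteConfig below stands in for the real framework class (which is not reproduced here), and suites_to_run is a hypothetical helper name; the copy avoids mutating the execution config in place, per the frozen-config caveat.

```python
from dataclasses import dataclass, field


@dataclass
class TestSuiteConfig:
    """Stub mirroring the framework class for illustration."""

    test_suite: str
    test_cases: list[str] = field(default_factory=list)

    @staticmethod
    def from_dict(entry) -> "TestSuiteConfig":
        # A bare string means "run every case in the suite".
        if isinstance(entry, str):
            return TestSuiteConfig(test_suite=entry)
        return TestSuiteConfig(test_suite=entry["suite"], test_cases=entry["cases"])


def suites_to_run(
    test_suites: list[TestSuiteConfig], skip_smoke_tests: bool
) -> list[TestSuiteConfig]:
    """Prepend smoke tests unless the execution opts out."""
    suites = list(test_suites)  # copy, in case the config should stay untouched
    if not skip_smoke_tests:
        suites[:0] = [TestSuiteConfig.from_dict("smoke_tests")]
    return suites
```

With this, _run_suites keeps a single loop and smoke tests are just the first entry when enabled.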
> >
> >
> >>
> >> >
> >> > def _exit_dts() -> None:
> >> > diff --git a/dts/framework/exception.py b/dts/framework/exception.py
> >> > index ca353d98..dfb12df4 100644
> >> > --- a/dts/framework/exception.py
> >> > +++ b/dts/framework/exception.py
> >> > @@ -25,6 +25,7 @@ class ErrorSeverity(IntEnum):
> >> > SSH_ERR = 4
> >> > DPDK_BUILD_ERR = 10
> >> > TESTCASE_VERIFY_ERR = 20
> >> > + BLOCKING_TESTSUITE_ERR = 25
> >> >
> >> >
> >> > class DTSError(Exception):
> >> > @@ -144,3 +145,14 @@ def __init__(self, value: str):
> >> >
> >> > def __str__(self) -> str:
> >> > return repr(self.value)
> >> > +
> >> > +
> >> > +class BlockingTestSuiteError(DTSError):
> >> > + suite_name: str
> >> > + severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR
> >> > +
> >> > + def __init__(self, suite_name: str) -> None:
> >> > + self.suite_name = suite_name
> >> > +
> >> > + def __str__(self) -> str:
> >> > + return f"Blocking suite {self.suite_name} failed."
> >> > diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
> >> > index ee221503..04342b8d 100644
> >> > --- a/dts/framework/remote_session/__init__.py
> >> > +++ b/dts/framework/remote_session/__init__.py
> >> > @@ -1,5 +1,6 @@
> >> > # SPDX-License-Identifier: BSD-3-Clause
> >> > # Copyright(c) 2023 PANTHEON.tech s.r.o.
> >> > +# Copyright(c) 2022-2023 University of New Hampshire
> >>
> >> Should the year be just 2023? I'm not sure about these copyright statements.
> >
> >
> > I think just 2023 makes sense. I wasn't completely sure either, but I was trying to align more with older copyright statements throughout the code and wasn't sure why some others had a range. In any case, just 2023 makes sense to me and it matches the one above, so I have no problem making it that.
>
> 2022-2023 is there in files which were applied in 2022 and modified in 2023 (by the same author/organization).
This would be the first modification by UNH, hence 2023 would make more sense to me.
>
> That's a very good distinction, thank you for the info!
>
> >>
> >> >
> >> > """
> >> > The package provides modules for managing remote connections to a remote host (node),
> >> > @@ -17,7 +18,14 @@
> >> >
> >> > from .linux_session import LinuxSession
> >> > from .os_session import OSSession
> >> > -from .remote import CommandResult, RemoteSession, SSHSession
> >> > +from .remote import (
> >> > + CommandResult,
> >> > + InteractiveRemoteSession,
> >> > + InteractiveShell,
> >> > + RemoteSession,
> >> > + SSHSession,
> >> > + TestPmdShell,
> >> > +)
> >> >
> >> >
> >> > def create_session(
> >> > diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
> >> > index 4c48ae25..f5f53923 100644
> >> > --- a/dts/framework/remote_session/os_session.py
> >> > +++ b/dts/framework/remote_session/os_session.py
> >> > @@ -12,7 +12,13 @@
> >> > from framework.testbed_model import LogicalCore
> >> > from framework.utils import EnvVarsDict, MesonArgs
> >> >
> >> > -from .remote import CommandResult, RemoteSession, create_remote_session
> >> > +from .remote import (
> >> > + CommandResult,
> >> > + InteractiveRemoteSession,
> >> > + RemoteSession,
> >> > + create_interactive_session,
> >> > + create_remote_session,
> >> > +)
> >> >
> >> >
> >> > class OSSession(ABC):
> >> > @@ -26,6 +32,7 @@ class OSSession(ABC):
> >> > name: str
> >> > _logger: DTSLOG
> >> > remote_session: RemoteSession
> >> > + interactive_session: InteractiveRemoteSession
> >> >
> >> > def __init__(
> >> > self,
> >> > @@ -37,6 +44,7 @@ def __init__(
> >> > self.name = name
> >> > self._logger = logger
> >> > self.remote_session = create_remote_session(node_config, name, logger)
> >> > + self.interactive_session = create_interactive_session(node_config, name, logger)
> >>
> >> We may not want to create the interactive session at this point.
This does create a connection to the node which we don't want (it is extra time consumed) when creating an extra session on top of the main_session (with Node.create_session). I think we could move this to OSSession.create_interactive_shell. More below.
> >
> >
> > I think the idea of initializing it here was what we had discussed before about having an open SSH session for interactive shells open in the background throughout the entire run. If I understand what you're saying, you're suggesting that we would only initialize this connection when we need it the first time and then leave it in the background afterwards. I can see how this would be more efficient if you had a run where the interactive session was never used, but it would add extra logic to make sure that the connection is only initialized once and that it doesn't happen every time you create an interactive shell. This is something that could be done, but considering that this will always be initialized with smoke tests, the only way you wouldn't have an interactive remote session created at the start is if you disable smoke tests. I think it is easier to just have it running in the background rather than spawn it when it's used and have to worry about whether a connection currently exists or not.
>
> Right, with smoke tests almost always running, there may not be that much of an advantage in initializing it only when needed. On the other hand, the check could be very simple - the same thing we do with properties such as SutNode.os_name.
>
I agree that it wouldn't be hard to check if it were defined; I was just thinking that if we were going to spend the time more often than not anyway, it would make sense to do it initially so that it doesn't cause a slowdown during the test suites and instead happens during initialization.
If you disagree, however, we could easily change this in the future and do it as needed, since I think you are right that in the rare case it would be more efficient; but otherwise it made more sense to me to run it during the initialization stages of the run.
> >>
> >>
> >> >
> >> > def close(self, force: bool = False) -> None:
> >> > """
> >> > @@ -173,3 +181,27 @@ def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
> >> > if needed and mount the hugepages if needed.
> >> > If force_first_numa is True, configure hugepages just on the first socket.
> >> > """
> >> > +
> >> > + @abstractmethod
> >> > + def get_compiler_version(self, compiler_name: str) -> str:
> >> > + """
> >> > + Get installed version of compiler used for DPDK
> >> > + """
> >> > +
> >> > + @abstractmethod
> >> > + def get_os_name(self) -> str:
> >> > + """
> >> > + Get name of OS for the node
> >> > + """
> >> > +
> >> > + @abstractmethod
> >> > + def get_os_version(self) -> str:
> >> > + """
> >> > + Get version of OS for the node
> >> > + """
> >> > +
> >> > + @abstractmethod
> >> > + def get_kernel_version(self) -> str:
> >> > + """
> >> > + Get kernel version for the node
> >> > + """
> >>
> >> These could be merged into one method (returning NodeVersionInfo), is there a reason to separate them? We could get some performance improvements with one method (e.g. getting /etc/os-release just once and then processing locally, or we could use a script we execute remotely to have just one remote call, but that's something to think about in the future).
> >
> >
> > That's a good point. Originally I left them separate so that during the run you could just access what you need, but you can just access the data you need from the object.
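As a rough illustration of the merged approach suggested above (one remote read of /etc/os-release parsed locally, instead of one remote call per field), something like the following could work. This is only a sketch under assumed names; the real `NodeVersionInfo` fields and the helper `parse_os_release` are hypothetical.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class NodeVersionInfo:
    """Hypothetical container for the version info gathered from the node."""

    os_name: str
    os_version: str
    kernel_version: str


def parse_os_release(os_release: str, kernel_version: str) -> NodeVersionInfo:
    """Parse the contents of /etc/os-release (fetched once) locally."""
    fields: dict[str, str] = {}
    for line in os_release.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            fields[key] = value.strip('"')
    return NodeVersionInfo(
        os_name=fields.get("NAME", ""),
        os_version=fields.get("VERSION", ""),
        kernel_version=kernel_version,
    )
```

The kernel version would still come from a separate `uname -r` call, but the OS name and version are extracted from a single remote read.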
> >
> >
> >>
> >> > diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/remote_session/posix_session.py
> >> > index d38062e8..dc6689f5 100644
> >> > --- a/dts/framework/remote_session/posix_session.py
> >> > +++ b/dts/framework/remote_session/posix_session.py
> >> > @@ -219,3 +219,33 @@ def _remove_dpdk_runtime_dirs(
> >> >
> >> > def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
> >> > return ""
> >> > +
> >> > + def get_compiler_version(self, compiler_name: str) -> str:
> >> > + match compiler_name:
> >> > + case "gcc":
> >> > + return self.send_command(f"{compiler_name} --version", 60).stdout.split(
> >> > + "\n"
> >> > + )[0]
> >> > + case "clang":
> >> > + return self.send_command(f"{compiler_name} --version", 60).stdout.split(
> >> > + "\n"
> >> > + )[0]
> >> > + case "msvc":
> >> > + return self.send_command("cl", 60).stdout
> >> > + case "icc":
> >> > + return self.send_command(f"{compiler_name} -V", 60).stdout
> >> > + case _:
> >> > + raise ValueError(f"Unknown compiler {compiler_name}")
> >> > +
> >> > + def get_os_name(self) -> str:
> >> > + return self.send_command(
> >> > + "awk -F= '$1==\"NAME\" {print $2}' /etc/os-release", 60
> >> > + ).stdout
> >> > +
> >> > + def get_os_version(self) -> str:
> >> > + return self.send_command(
> >> > + "awk -F= '$1==\"VERSION\" {print $2}' /etc/os-release", 60, True
> >> > + ).stdout
> >> > +
> >> > + def get_kernel_version(self) -> str:
> >> > + return self.send_command("uname -r", 60).stdout
> >>
> >> These 60 second timeouts seem high to me. All of these commands should execute quickly so the default timeout seems fine to me.
> >
> >
> > That makes sense to me, no sense in making it too long.
> >
> >
> >>
> >>
> >> > diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
> >> > index 8a151221..ef7b4851 100644
> >> > --- a/dts/framework/remote_session/remote/__init__.py
> >> > +++ b/dts/framework/remote_session/remote/__init__.py
> >> > @@ -1,16 +1,28 @@
> >> > # SPDX-License-Identifier: BSD-3-Clause
> >> > # Copyright(c) 2023 PANTHEON.tech s.r.o.
> >> > +# Copyright(c) 2022-2023 University of New Hampshire
> >> >
> >> > # pylama:ignore=W0611
> >> >
> >> > +from paramiko import AutoAddPolicy, SSHClient
> >> > +
> >> > from framework.config import NodeConfiguration
> >> > from framework.logger import DTSLOG
> >> >
> >> > +from .interactive_remote_session import InteractiveRemoteSession
> >> > +from .interactive_shell import InteractiveShell
> >> > from .remote_session import CommandResult, RemoteSession
> >> > from .ssh_session import SSHSession
> >> > +from .testpmd_shell import TestPmdShell
> >> >
> >> >
> >> > def create_remote_session(
> >> > node_config: NodeConfiguration, name: str, logger: DTSLOG
> >> > ) -> RemoteSession:
> >> > return SSHSession(node_config, name, logger)
> >> > +
> >> > +
> >> > +def create_interactive_session(
> >> > + node_config: NodeConfiguration, name: str, logger: DTSLOG
> >> > +) -> InteractiveRemoteSession:
> >> > + return InteractiveRemoteSession(node_config, logger)
> >> > diff --git a/dts/framework/remote_session/remote/interactive_remote_session.py b/dts/framework/remote_session/remote/interactive_remote_session.py
> >> > new file mode 100644
> >> > index 00000000..4e88308a
> >> > --- /dev/null
> >> > +++ b/dts/framework/remote_session/remote/interactive_remote_session.py
> >> > @@ -0,0 +1,113 @@
> >> > +# SPDX-License-Identifier: BSD-3-Clause
> >> > +# Copyright(c) 2023 University of New Hampshire
> >> > +
> >> > +import socket
> >> > +import traceback
> >> > +from pathlib import PurePath
> >> > +
> >> > +from paramiko import AutoAddPolicy, SSHClient,
Transport
> >> > +from paramiko.ssh_exception import (
> >> > + AuthenticationException,
> >> > + BadHostKeyException,
> >> > + NoValidConnectionsError,
> >> > + SSHException,
> >> > +)
> >> > +
> >> > +from framework.config import InteractiveApp, NodeConfiguration
> >> > +from framework.exception import SSHConnectionError
> >> > +from framework.logger import DTSLOG
> >> > +from framework.settings import SETTINGS
> >> > +
> >> > +from .interactive_shell import InteractiveShell
> >> > +from .testpmd_shell import TestPmdShell
> >> > +
> >> > +
> >> > +class InteractiveRemoteSession:
> >> > + hostname: str
> >> > + ip: str
> >> > + port: int
> >> > + username: str
> >> > + password: str
> >> > + _logger: DTSLOG
> >> > + _node_config: NodeConfiguration
> >> > + session: SSHClient
> >> > + _transport: Transport | None
> >> > +
> >> > + def __init__(self, node_config: NodeConfiguration, _logger: DTSLOG) -> None:
> >> > + self._node_config = node_config
> >> > + self._logger = _logger
> >> > + self.hostname = node_config.hostname
> >> > + self.username = node_config.user
> >> > + self.password = node_config.password if node_config.password else ""
> >> > + port = 22
> >>
> >> framework/remote_session/remote/interactive_remote_session.py:45: error: Incompatible types in assignment (expression has type "str", variable has type "int")
> >>
> >> We can address this by making 22 a string (port = "22").
> >
> >
> > I'll fix this as well.
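An alternative that keeps the port an int throughout (so the mypy assignment error never arises) could look like the sketch below. `parse_host_port` is a hypothetical helper name, not part of the patch.

```python
def parse_host_port(hostname: str, default_port: int = 22) -> tuple[str, int]:
    """Split an optional ':port' suffix off a hostname, keeping the port an int."""
    ip, port = hostname, default_port
    if ":" in hostname:
        ip, port_str = hostname.split(":")
        port = int(port_str)
    return ip, port
```

Either approach works; the string-typed `port = "22"` suggested above is the smaller diff, while this one avoids mixing types in the same variable.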
> >
> >
> >>
> >>
> >> > + self.ip = node_config.hostname
> >> > + if ":" in node_config.hostname:
> >> > + self.ip, port = node_config.hostname.split(":")
> >> > + self.port = int(port)
> >> > + self._logger.info(
> >> > + f"Initializing interactive connection for {self.username}@{self.hostname}"
> >> > + )
> >> > + self._connect()
> >> > + self._logger.info(
> >> > + f"Interactive connection successful for {self.username}@{self.hostname}"
> >> > + )
> >> > +
> >> > + def _connect(self) -> None:
> >> > + client = SSHClient()
> >> > + client.set_missing_host_key_policy(AutoAddPolicy)
> >> > + self.session = client
> >> > + retry_attempts = 10
> >> > + for retry_attempt in range(retry_attempts):
> >> > + try:
> >> > + client.connect(
> >> > + self.ip,
> >> > + username=self.username,
> >> > + port=self.port,
> >> > + password=self.password,
> >> > + timeout=20 if self.port else 10,
> >> > + )
> >> > + except (TypeError, BadHostKeyException, AuthenticationException) as e:
> >> > + self._logger.exception(e)
> >> > + raise SSHConnectionError(self.hostname) from e
> >> > + except (NoValidConnectionsError, socket.error, SSHException) as e:
> >> > + self._logger.debug(traceback.format_exc())
> >> > + self._logger.warning(e)
> >> > + self._logger.info(
> >> > + "Retrying interactive session connection: "
> >> > + f"retry number {retry_attempt +1}"
> >> > + )
> >> > + else:
> >> > + break
> >> > + else:
> >> > + raise SSHConnectionError(self.hostname)
> >> > + # Interactive sessions are used on an "as needed" basis so we have
> >> > + # to set a keepalive
> >> > + self._transport = self.session.get_transport()
> >> > + if self._transport is not None:
> >> > + self._transport.set_keepalive(30)
> >> > +
> >> > + def create_interactive_shell(
> >> > + self,
> >> > + shell_type: InteractiveApp,
> >> > + path_to_app: PurePath,
> >> > + timeout: float = SETTINGS.timeout,
> >> > + eal_parameters: str = "",
> >> > + app_parameters="",
> >> > + )
-> InteractiveShell:
> >> > + """
> >> > + See "create_interactive_shell" in SutNode
> >> > + """
> >> > + match (shell_type.name):
> >> > + case "shell":
> >>
> >> Comparing with the enum values seems better.
> >
> >
> > Good point, that does seem like it would be more consistent, I'll make the change.
> >
> >>
> >>
> >> > + return InteractiveShell(
> >> > + self.session, self._logger, path_to_app, timeout
> >> > + )
> >> > + case "testpmd":
> >> > + return TestPmdShell(
> >> > + self.session,
> >> > + self._logger,
> >> > + path_to_app,
> >> > + timeout=timeout,
> >> > + eal_flags=eal_parameters,
> >> > + cmd_line_options=app_parameters,
> >> > + )
> >>
> >> framework/remote_session/remote/interactive_remote_session.py:89: error: Missing return statement
> >>
> >> This is probably because we don't have a case for the rest of the values.
> >
> >
> > You're right, I didn't handle the default case. I'll change it to default to returning a base interactive shell, with a message explaining that the type wasn't handled.
> >
> >
> >>
> >>
> >> > diff --git a/dts/framework/remote_session/remote/interactive_shell.py b/dts/framework/remote_session/remote/interactive_shell.py
> >> > new file mode 100644
> >> > index 00000000..d98503be
> >> > --- /dev/null
> >> > +++ b/dts/framework/remote_session/remote/interactive_shell.py
> >> > @@ -0,0 +1,98 @@
> >> > +from pathlib import PurePath
> >> > +
> >> > +from paramiko import Channel, SSHClient, channel
> >> > +
> >> > +from framework.logger import DTSLOG
> >> > +from framework.settings import SETTINGS
> >> > +
> >> > +
> >> > +class InteractiveShell:
> >> > +
> >> > + _interactive_session: SSHClient
> >> > + _stdin: channel.ChannelStdinFile
> >> > + _stdout: channel.ChannelFile
> >> > + _ssh_channel: Channel
> >> > + _logger: DTSLOG
> >> > + _timeout: float
> >> > +
> >> > + def __init__(
> >> > + self,
> >> > + interactive_session: SSHClient,
> >> > + logger: DTSLOG,
> >> > + path_to_app: PurePath | None = None,
> >> > + timeout: float = SETTINGS.timeout,
> >> > + ) -> None:
> >> > + self._interactive_session = interactive_session
> >> > + self._ssh_channel = self._interactive_session.invoke_shell()
> >> > + self._stdin = self._ssh_channel.makefile_stdin("w")
> >> > + self._stdout = self._ssh_channel.makefile("r")
> >> > + self._ssh_channel.settimeout(timeout)
> >> > + self._ssh_channel.set_combine_stderr(True) # combines stdout and stderr streams
> >> > + self._logger = logger
> >> > + self._timeout = timeout
> >> > + if path_to_app:
> >> > + self.send_command_no_output(str(path_to_app)) # starts the application
> >>
> >> This should be part of the derived class - the code to start an app is going to be app-specific. Since this is an interactive session, the assumption is that we're always going to start an app and then work with it, so starting the app in the constructor should be safe.
> >> We actually have that for testpmd, so we just need to unify the above with what we have.
> >
> >
> > That makes sense. The only reason I didn't do this before was because I was starting the application differently in test_pmd (with EAL flags and command line options), so I would call it without this path when I called the super init(). However, I can change this by making a method called _start_application() that can be overridden in the subclasses, and that will clean this process up quite a bit.
> >
> Good, I had something like that in mind.
>
> >>
> >>
> >> > +
> >> > + def empty_stdout_buffer(self) -> None:
> >>
> >> Same comment on ordering as above.
> >>
> >> > + """Removes all data from the stdout buffer.
> >> > +
> >> > + Because of the way paramiko handles read buffers, there is no way to effectively
> >> > + remove data from, or "flush", read buffers. This method essentially moves our
> >> > + offset on the buffer to the end and thus "removes" the data from the buffer.
> >> > + Timeouts are thrown on read operations of paramiko pipes based on whether data
> >> > + had been received before timeout so we assume that if we reach the timeout then
> >> > + we are at the end of the buffer.
> >> > + """
> >> > + self._ssh_channel.settimeout(1)
> >>
> >> Waiting a whole second seems to be a lot. We actually may not need this method from the use in the code - that is, if we change how the app starts.
> >
> >
> > We will still need this code whenever you send a command and don't get its output. What it is doing is essentially moving the pointer for the output buffer to the end of the file, which is needed because if you send a command and don't want any output, and we don't still move past that output in the buffer, it will persist and bleed into the next command where you do want to collect output. Having this method allows you to know you are starting from an empty buffer when you are getting the output of the commands.
> >
> > Ah, I was commenting on send_command_no_output when I mentioned "this method", so I need to restate my point. We can do basically the same thing with "send_command" and the only difference I see is that we don't care about the prompt in send_command_no_output. Is there a scenario where we need that?
>
This method was to address the situation that I had brought up a while back when discussing how to handle interactive applications: the scenario where you want to run an application but you cannot consume a newline character because the line you are on requires input. In the case of testpmd and "bash-like" applications, we can consume a newline character safely, but you can't with every interactive environment. The example I used then was if you ran a script and it asked you to enter a password or a name for something. Consuming a newline in this case might not give you the prompt again but rather would end up taking in an unintended newline.
>
> > In the case of timing however, I could cut this down to half a second; it just raises the chance that a command that takes longer to produce its output will still contaminate the buffer. If this causes a problem we could always increase the time in the future.
> >
> The point is to not have any set time (since that's either too slow or unreliable), if we can avoid this.
>
I agree that it isn't super reliable, but I think it is good to have even if it isn't used as often. The reason for this is that if the case arose where you didn't want to collect output up until a point in the middle of the stdout string, or maybe passed in a prompt that didn't include all of the output provided, this offers some way to at least clear the buffer.
> >
> >>
> >>
> >> > + try:
> >> > + for line in self._stdout:
> >> > + pass
> >> > + except TimeoutError:
> >> > + pass
> >> > + self._ssh_channel.settimeout(self._timeout) # reset timeout
> >> > +
> >> > + def send_command_no_output(self, command: str) -> None:
> >> > + """Send command to channel without recording output.
> >> > +
> >> > + This method will not verify any input or output, it will simply assume the
> >> > + command succeeded. This method will also consume all output in the buffer
> >> > + after executing the command.
> >> > + """
> >> > + self._logger.info(
> >> > + f"Sending command {command.strip()} and not collecting output"
> >> > + )
> >> > + self._stdin.write(f"{command}\n")
> >> > + self._stdin.flush()
> >> > + self.empty_stdout_buffer()
> >> > +
> >> > + def send_command_get_output(self, command: str, prompt: str) -> str:
> >> > + """Send a command and get all output before the expected ending string.
> >> > +
> >> > + Lines that expect input are not included in the stdout buffer so they cannot be
> >> > + used for expect. For example, if you were prompted to log into something
> >> > + with a username and password, you cannot expect "username:" because it won't
> >> > + yet be in the stdout buffer. A workaround for this could be consuming an
> >> > + extra newline character to force the current prompt into the stdout buffer.
> >> > +
> >> > + Returns:
> >> > + All output in the buffer before expected string
> >> > + """
> >> > + self._logger.info(f"Sending command {command.strip()}...")
> >> > + self._stdin.write(f"{command}\n")
> >> > + self._stdin.flush()
> >> > + out: str = ""
> >> > + for line in self._stdout:
> >> > + out += line
> >> > + if prompt in line and not line.rstrip().endswith(
> >> > + command.rstrip()
> >> > + ): # ignore line that sent command
> >> > + break
> >> > + self._logger.debug(f"Got output: {out}")
> >> > + return out
> >> > +
> >> > + def close(self) -> None:
> >> > + self._stdin.close()
> >> > + self._ssh_channel.close()
> >> > +
> >> > + def __del__(self) -> None:
> >> > + self.close()
> >> > diff --git a/dts/framework/remote_session/remote/testpmd_shell.py b/dts/framework/remote_session/remote/testpmd_shell.py
> >> > new file mode 100644
> >> > index 00000000..873f3c82
> >> > --- /dev/null
> >> > +++ b/dts/framework/remote_session/remote/testpmd_shell.py
> >> > @@ -0,0 +1,58 @@
> >> > +# SPDX-License-Identifier: BSD-3-Clause
> >> > +# Copyright(c) 2023 University of New Hampshire
> >> > +
> >> > +
> >> > +from pathlib import PurePath
> >> > +
> >> > +from paramiko import SSHClient
> >> > +
> >> > +from framework.logger import DTSLOG
> >> > +from framework.settings import SETTINGS
> >> > +
> >> > +from .interactive_shell import InteractiveShell
> >> > +
> >> > +
> >> > +class TestPmdShell(InteractiveShell):
> >> > + expected_prompt: str = "testpmd>"
> >> > +
> >> > + def __init__(
> >> > + self,
> >> > + interactive_session: SSHClient,
> >> > + logger: DTSLOG,
> >> > + path_to_testpmd: PurePath,
> >> > + eal_flags: str = "",
> >> > + cmd_line_options: str = "",
> >> > + timeout: float = SETTINGS.timeout,
> >> > + ) -> None:
> >> > + """Initializes an interactive testpmd session using specified parameters."""
> >> > + super(TestPmdShell, self).__init__(
> >> > + interactive_session, logger=logger, timeout=timeout
> >> > + )
> >> > +
self.send_command_get_output(
> >> > + f"{path_to_testpmd} {eal_flags} -- -i {cmd_line_options}\n",
> >> > + self.expected_prompt,
> >> > + )
> >> > +
> >> > + def send_command(self, command: str, prompt: str = expected_prompt) -> str:
> >> > + """Specific way of handling the command for testpmd
> >> > +
> >> > + An extra newline character is consumed in order to force the current line into
> >> > + the stdout buffer.
> >> > + """
> >> > + return self.send_command_get_output(f"{command}\n", prompt)
> >> > +
> >> > + def get_devices(self) -> list[str]:
> >> > + """Get a list of device names that are known to testpmd
> >> > +
> >> > + Uses the device info listed in testpmd and then parses the output to
> >> > + return only the names of the devices.
> >> > +
> >> > + Returns:
> >> > + A list of strings representing device names (e.g. 0000:14:00.1)
> >> > + """
> >> > + dev_info: str = self.send_command("show device info all")
> >> > + dev_list: list[str] = []
> >> > + for line in dev_info.split("\n"):
> >> > + if "device name:" in line.lower():
> >> > + dev_list.append(line.strip().split(": ")[1].strip())
> >> > + return dev_list
> >>
> >> This could be an object (or a list of TestPmdDevice objects or something similar). We could start with just the name and add to it in the future.
> >
> >
> > I think for the scope of smoke tests it would make sense to leave this as just the names for now because all we are doing is making sure we can discover them and that they appear. I do agree that it could be useful to have more info, but it would likely make more sense to add to it in the future when we need that additional information.
> >
> The objects could contain just the name. I'd like to do this since we already do the same thing with VirtualDevice (in framework/testbed_model/hw/virtual_device.py).
>
I didn't consider this, but it's not a bad approach; I'll change the list to be a list of objects.
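The object-based return type agreed on above might be sketched as follows. The class starts with just the name, as suggested; the field name `pci_address` and the standalone `parse_device_list` helper are illustrative assumptions, not the final patch.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class TestPmdDevice:
    """Hypothetical device object, starting with just the name field."""

    pci_address: str


def parse_device_list(dev_info: str) -> list[TestPmdDevice]:
    """Parse 'show device info all' output into device objects."""
    devices = []
    for line in dev_info.split("\n"):
        if "device name:" in line.lower():
            # Keep the same extraction logic as the string version above.
            devices.append(TestPmdDevice(line.strip().split(": ")[1].strip()))
    return devices
```

Extending the dataclass later (driver, numa node, etc.) then doesn't break callers that only use the name, which is the advantage over returning bare strings.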
> >>
> >>
> >> > diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
> >> > index 74391982..029665fb 100644
> >> > --- a/dts/framework/test_result.py
> >> > +++ b/dts/framework/test_result.py
> >> > @@ -1,5 +1,6 @@
> >> > # SPDX-License-Identifier: BSD-3-Clause
> >> > # Copyright(c) 2023 PANTHEON.tech s.r.o.
> >> > +# Copyright(c) 2022-2023 University of New Hampshire
> >> >
> >> > """
> >> > Generic result container and reporters
> >> > @@ -8,14 +9,17 @@
> >> > import os.path
> >> > from collections.abc import MutableSequence
> >> > from enum import Enum, auto
> >> > +from typing import Dict
> >>
> >> No need, we can just use the built-in dict (since Python 3.9).
> >>
> >> >
> >> > from .config import (
> >> > OS,
> >> > Architecture,
> >> > BuildTargetConfiguration,
> >> > + BuildTargetVersionInfo,
> >> > Compiler,
> >> > CPUType,
> >> > NodeConfiguration,
> >> > + NodeVersionInfo,
> >> > )
> >> > from .exception import DTSError, ErrorSeverity
> >> > from .logger import DTSLOG
> >> > @@ -67,12 +71,14 @@ class Statistics(dict):
> >> > Using a dict provides a convenient way to format the data.
> >> > """
> >> >
> >> > - def __init__(self, dpdk_version):
> >> > + def __init__(self, output_info: Dict[str, str] | None):
> >> > super(Statistics, self).__init__()
> >> > for result in Result:
> >> > self[result.name] = 0
> >> > self["PASS RATE"] = 0.0
> >> > - self["DPDK VERSION"] = dpdk_version
> >> > + if output_info:
> >> > + for info_key, info_val in output_info.items():
> >> > + self[info_key] = info_val
> >> >
> >> > def __iadd__(self, other: Result) -> "Statistics":
> >> > """
> >> > @@ -206,6 +212,8 @@ class BuildTargetResult(BaseResult):
> >> > os: OS
> >> > cpu: CPUType
> >> > compiler: Compiler
> >> > + compiler_version: str | None
> >> > + dpdk_version: str | None
> >> >
> >> > def __init__(self, build_target: BuildTargetConfiguration):
> >> > super(BuildTargetResult, self).__init__()
> >> > @@ -213,6 +221,12 @@ def __init__(self, build_target: BuildTargetConfiguration):
> >> > self.os = build_target.os
> >> > self.cpu = build_target.cpu
> >> > self.compiler = build_target.compiler
> >> > + self.compiler_version = None
> >> > + self.dpdk_version = None
> >> > +
> >> > + def add_build_target_versions(self, versions: BuildTargetVersionInfo) -> None:
> >> > + self.compiler_version = versions.compiler_version
> >> > + self.dpdk_version = versions.dpdk_version
> >> >
> >> > def add_test_suite(self, test_suite_name: str) -> TestSuiteResult:
> >> > test_suite_result = TestSuiteResult(test_suite_name)
> >> > @@ -228,10 +242,17 @@ class ExecutionResult(BaseResult):
> >> > """
> >> >
> >> > sut_node: NodeConfiguration
> >> > + sut_os_name: str
> >> > + sut_os_version: str
> >> > + sut_kernel_version: str
> >> >
> >> > - def __init__(self, sut_node: NodeConfiguration):
> >> > + def __init__(self, sut_node: NodeConfiguration, sut_version_info: NodeVersionInfo):
> >> > super(ExecutionResult, self).__init__()
> >> > self.sut_node = sut_node
> >> > + self.sut_version_info = sut_version_info
> >> > + self.sut_os_name =
sut_version_info.os_name
> >> > + self.sut_os_version = sut_version_info.os_version
> >> > + self.sut_kernel_version = sut_version_info.kernel_version
> >> >
> >> > def add_build_target(
> >> > self, build_target: BuildTargetConfiguration
> >> > @@ -258,6 +279,7 @@ class DTSResult(BaseResult):
> >> > """
> >> >
> >> > dpdk_version: str | None
> >> > + output: dict | None
> >> > _logger: DTSLOG
> >> > _errors: list[Exception]
> >> > _return_code: ErrorSeverity
> >> > @@ -267,14 +289,17 @@ class DTSResult(BaseResult):
> >> > def __init__(self, logger: DTSLOG):
> >> > super(DTSResult, self).__init__()
> >> > self.dpdk_version = None
> >> > + self.output = None
> >> > self._logger = logger
> >> > self._errors = []
> >> > self._return_code = ErrorSeverity.NO_ERR
> >> > self._stats_result = None
> >> > self._stats_filename = os.path.join(SETTINGS.output_dir, "statistics.txt")
> >> >
> >> > - def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult:
> >> > - execution_result = ExecutionResult(sut_node)
> >> > + def add_execution(
> >> > + self, sut_node: NodeConfiguration, sut_version_info: NodeVersionInfo
> >> > + ) -> ExecutionResult:
> >> > + execution_result = ExecutionResult(sut_node, sut_version_info)
> >> > self._inner_results.append(execution_result)
> >> > return execution_result
> >> >
> >> > @@ -296,7 +321,8 @@ def process(self) -> None:
> >> > for error in self._errors:
> >> > self._logger.debug(repr(error))
> >> >
> >> > - self._stats_result = Statistics(self.dpdk_version)
> >> > + self._stats_result = Statistics(self.output)
> >> > + # add information gathered from the smoke tests to the statistics
> >> > self.add_stats(self._stats_result)
> >> > with open(self._stats_filename, "w+") as stats_file:
> >> > stats_file.write(str(self._stats_result))
> >> > diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
> >> > index 0705f38f..ad66014c 100644
> >> > --- a/dts/framework/test_suite.py
> >>
> +++ b/dts/framework/test_suite.py
> >> > @@ -10,11 +10,24 @@
> >> > import inspect
> >> > import re
> >> > from types import MethodType
> >> > +from typing import Dict
> >> >
> >> > -from .exception import ConfigurationError, SSHTimeoutError, TestCaseVerifyError
> >> > +from .config import BuildTargetConfiguration
> >> > +from .exception import (
> >> > + BlockingTestSuiteError,
> >> > + ConfigurationError,
> >> > + SSHTimeoutError,
> >> > + TestCaseVerifyError,
> >> > +)
> >> > from .logger import DTSLOG, getLogger
> >> > from .settings import SETTINGS
> >> > -from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult
> >> > +from .test_result import (
> >> > + BuildTargetResult,
> >> > + DTSResult,
> >> > + Result,
> >> > + TestCaseResult,
> >> > + TestSuiteResult,
> >> > +)
> >> > from .testbed_model import SutNode
> >> >
> >> >
> >> > @@ -37,10 +50,12 @@ class TestSuite(object):
> >> > """
> >> >
> >> > sut_node: SutNode
> >> > + is_blocking = False
> >> > _logger: DTSLOG
> >> > _test_cases_to_run: list[str]
> >> > _func: bool
> >> > _result: TestSuiteResult
> >> > + _dts_result: DTSResult
> >> >
> >> > def __init__(
> >> > self,
> >> > @@ -48,6 +63,8 @@ def __init__(
> >> > test_cases: list[str],
> >> > func: bool,
> >> > build_target_result: BuildTargetResult,
> >> > + build_target_conf: BuildTargetConfiguration,
> >> > + dts_result: DTSResult,
> >> > ):
> >> > self.sut_node = sut_node
> >> > self._logger = getLogger(self.__class__.__name__)
> >> > @@ -55,6 +72,8 @@ def __init__(
> >> > self._test_cases_to_run.extend(SETTINGS.test_cases)
> >> > self._func = func
> >> > self._result = build_target_result.add_test_suite(self.__class__.__name__)
> >> > + self.build_target_info = build_target_conf
> >> > + self._dts_result = dts_result
> >> >
> >> > def set_up_suite(self) -> None:
> >> > """
> >> > @@ -118,6 +137,8 @@ def run(self) -> None:
> >> > f"the next test suite may be affected."
> >> > ) > >> > self._result.update_setup(Result.ERROR, e) > >> > + if len(self._result.get_errors()) > 0 and > self.is_blocking: > >> > + raise BlockingTestSuiteError(test_suite_name) > >> > > >> > def _execute_test_suite(self) -> None: > >> > """ > >> > @@ -232,6 +253,12 @@ def _execute_test_case( > >> > test_case_result.update(Result.SKIP) > >> > raise KeyboardInterrupt("Stop DTS") > >> > > >> > + def write_to_statistics_file(self, output: Dict[str, str]): > >> > >> This isn't used anywhere as far as I can tell. > > > > > > It was previously but when we discussed removing the statistics output = I > left this in but you're right, thinking about it now it doesn't make sens= e > to leave it in until we reformat statistics.txt. > > > >> > >> > >> > + if self._dts_result.output is not None: > >> > + self._dts_result.output.update(output) > >> > + else: > >> > + self._dts_result.output =3D output > >> > + > >> > >> This could be simplified if we init output to an empty dict in the > constructor. 
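Agreed, that's simpler. For illustration, a minimal stand-alone sketch of what initializing output to an empty dict buys us (a simplified stand-in, not the real DTSResult class):

```python
class DTSResult:
    """Stripped-down stand-in for the framework's DTSResult."""

    def __init__(self) -> None:
        # initializing to an empty dict means callers never need a None check
        self.output: dict[str, str] = {}

    def update_output(self, output: dict[str, str]) -> None:
        # the previous None/update branching collapses to a single call
        self.output.update(output)
```

With this, the caller never has to branch on None before merging smoke test results into the output.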
> >> > >> > > >> > def get_test_suites(testsuite_module_path: str) -> > list[type[TestSuite]]: > >> > def is_test_suite(object) -> bool: > >> > diff --git a/dts/framework/testbed_model/node.py > b/dts/framework/testbed_model/node.py > >> > index d48fafe6..c5147e0e 100644 > >> > --- a/dts/framework/testbed_model/node.py > >> > +++ b/dts/framework/testbed_model/node.py > >> > @@ -40,6 +40,7 @@ class Node(object): > >> > lcores: list[LogicalCore] > >> > _logger: DTSLOG > >> > _other_sessions: list[OSSession] > >> > + _execution_config: ExecutionConfiguration > >> > > >> > def __init__(self, node_config: NodeConfiguration): > >> > self.config =3D node_config > >> > @@ -64,6 +65,7 @@ def set_up_execution(self, execution_config: > ExecutionConfiguration) -> None: > >> > """ > >> > self._setup_hugepages() > >> > self._set_up_execution(execution_config) > >> > + self._execution_config =3D execution_config > >> > > >> > def _set_up_execution(self, execution_config: > ExecutionConfiguration) -> None: > >> > """ > >> > diff --git a/dts/framework/testbed_model/sut_node.py > b/dts/framework/testbed_model/sut_node.py > >> > index 2b2b50d9..27849da7 100644 > >> > --- a/dts/framework/testbed_model/sut_node.py > >> > +++ b/dts/framework/testbed_model/sut_node.py > >> > @@ -1,14 +1,21 @@ > >> > # SPDX-License-Identifier: BSD-3-Clause > >> > # Copyright(c) 2010-2014 Intel Corporation > >> > # Copyright(c) 2023 PANTHEON.tech s.r.o. 
> >> > +# Copyright(c) 2022-2023 University of New Hampshire > >> > > >> > import os > >> > import tarfile > >> > import time > >> > from pathlib import PurePath > >> > > >> > -from framework.config import BuildTargetConfiguration, > NodeConfiguration > >> > -from framework.remote_session import CommandResult, OSSession > >> > +from framework.config import ( > >> > + BuildTargetConfiguration, > >> > + BuildTargetVersionInfo, > >> > + InteractiveApp, > >> > + NodeConfiguration, > >> > + NodeVersionInfo, > >> > +) > >> > +from framework.remote_session import CommandResult, > InteractiveShell, OSSession > >> > from framework.settings import SETTINGS > >> > from framework.utils import EnvVarsDict, MesonArgs > >> > > >> > @@ -26,13 +33,17 @@ class SutNode(Node): > >> > > >> > _dpdk_prefix_list: list[str] > >> > _dpdk_timestamp: str > >> > - _build_target_config: BuildTargetConfiguration | None > >> > + _build_target_config: BuildTargetConfiguration > >> > >> What's the reason for this change? My guess would be mypy, but the > change still results in mypy complaining: > >> framework/testbed_model/sut_node.py:51: error: Incompatible types in > assignment (expression has type "None", variable has type > "BuildTargetConfiguration") > > > > > > I would guess it was for mypy as well which I got to be passing before = I > submitted the patch, but if there is still an error with it then I will > revert. 
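For reference, the pattern mypy accepts when an attribute starts out unset is keeping the optional annotation. A minimal stand-alone sketch (placeholder classes, not the actual framework code):

```python
from __future__ import annotations


class BuildTargetConfiguration:
    """Placeholder for the framework's real config class."""


class SutNode:
    # mypy flags "_build_target_config: BuildTargetConfiguration" when
    # __init__ assigns None; the annotation has to admit None because the
    # attribute is only filled in later, when a build target is set up
    _build_target_config: BuildTargetConfiguration | None

    def __init__(self) -> None:
        self._build_target_config = None

    def set_build_target(self, config: BuildTargetConfiguration) -> None:
        self._build_target_config = config
```

Dropping the "| None" is what triggers the incompatible-assignment error, since __init__ assigns None.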
> > > >> > >> > >> > >> > _env_vars: EnvVarsDict > >> > _remote_tmp_dir: PurePath > >> > __remote_dpdk_dir: PurePath | None > >> > - _dpdk_version: str | None > >> > _app_compile_timeout: float > >> > _dpdk_kill_session: OSSession | None > >> > + _dpdk_version: str | None > >> > + _os_name: str | None > >> > + _os_version: str | None > >> > + _kernel_version: str | None > >> > + _compiler_version: str | None > >> > > >> > def __init__(self, node_config: NodeConfiguration): > >> > super(SutNode, self).__init__(node_config) > >> > @@ -41,12 +52,16 @@ def __init__(self, node_config: > NodeConfiguration): > >> > self._env_vars =3D EnvVarsDict() > >> > self._remote_tmp_dir =3D self.main_session.get_remote_tmp_d= ir() > >> > self.__remote_dpdk_dir =3D None > >> > - self._dpdk_version =3D None > >> > self._app_compile_timeout =3D 90 > >> > self._dpdk_kill_session =3D None > >> > self._dpdk_timestamp =3D ( > >> > f"{str(os.getpid())}_{time.strftime('%Y%m%d%H%M%S', > time.localtime())}" > >> > ) > >> > + self._dpdk_version =3D None > >> > + self._os_name =3D None > >> > + self._os_version =3D None > >> > + self._kernel_version =3D None > >> > + self._compiler_version =3D None > >> > > >> > @property > >> > def _remote_dpdk_dir(self) -> PurePath: > >> > @@ -75,6 +90,44 @@ def dpdk_version(self) -> str: > >> > ) > >> > return self._dpdk_version > >> > > >> > + @property > >> > + def os_name(self) -> str: > >> > + if self._os_name is None: > >> > + self._os_name =3D self.main_session.get_os_name() > >> > + return self._os_name > >> > + > >> > + @property > >> > + def os_version(self) -> str: > >> > + if self._os_version is None: > >> > + self._os_version =3D self.main_session.get_os_version() > >> > + return self._os_version > >> > + > >> > + @property > >> > + def kernel_version(self) -> str: > >> > + if self._kernel_version is None: > >> > + self._kernel_version =3D > self.main_session.get_kernel_version() > >> > + return self._kernel_version > >> > + > >> > + @property > >> 
> + def compiler_version(self) -> str: > >> > + if self._compiler_version is None: > >> > + self._compiler_version =3D > self.main_session.get_compiler_version( > >> > + self._build_target_config.compiler.name > >> > + ) > >> > + return self._compiler_version > >> > + > >> > + def get_node_versions(self) -> NodeVersionInfo: > >> > + return NodeVersionInfo( > >> > + os_name=3Dself.os_name, > >> > + os_version=3Dself.os_version, > >> > + kernel_version=3Dself.kernel_version, > >> > + ) > >> > + > >> > + def get_build_target_versions(self) -> BuildTargetVersionInfo: > >> > + return BuildTargetVersionInfo( > >> > + dpdk_version=3Dself.dpdk_version, > compiler_version=3Dself.compiler_version > >> > + ) > >> > + > >> > def _guess_dpdk_remote_dir(self) -> PurePath: > >> > return > self.main_session.guess_dpdk_remote_dir(self._remote_tmp_dir) > >> > > >> > @@ -84,6 +137,10 @@ def _set_up_build_target( > >> > """ > >> > Setup DPDK on the SUT node. > >> > """ > >> > + # we want to ensure that dpdk_version and compiler_version > is reset for new > >> > + # build targets > >> > + self._dpdk_version =3D None > >> > + self._compiler_version =3D None > >> > self._configure_build_target(build_target_config) > >> > self._copy_dpdk_tarball() > >> > self._build_dpdk() > >> > @@ -262,6 +319,49 @@ def run_dpdk_app( > >> > f"{app_path} {eal_args}", timeout, verify=3DTrue > >> > ) > >> > > >> > + def create_interactive_shell( > >> > + self, > >> > + shell_type: InteractiveApp, > >> > + path_to_app: PurePath | None =3D None, > >> > >> Can we join shell_type and path_to_app together? Both contain the same > type of information. > >> If we can't, let's drop path_to_app, as we don't provide (in this > patch) a way for users to modify this. We can add it later if needed. > > > > > > The idea behind it was allowing users to specify where their applicatio= n > is when they call this method. 
Users can pass a path into this application > if their application isn't at the default path which I think is a good > addition to have right now in the case of a user having their binary in a > different location. > > > > As I mentioned in my comment, there isn't a way for users to pass the > path (at least I don't see one). Users in this case are the people who run > DTS, not test case developers. Test cases shouldn't work with those > paths, which need to be framework-wide. > I agree it would be good to have an option that users could configure > and we can talk about how to do this, but it would be best to do in a > different patch. > > You're right, apologies, I was thinking more from the perspective of developers but that doesn't really make sense either. I'll remove this path as something that can be passed in and we can talk about potentially adding it back later on. > >> > >> > >> > + timeout: float =3D SETTINGS.timeout, > >> > + eal_parameters: str =3D "", > >> > >> Any reason we're not using the EalParameters class here? > > > > > > I didn't use it originally because it requires an lcore list and memory > channels if you provide them. Although, I could use EalParameters and just > make them optional, which would have the same effect but be more structured, > which makes the most sense to me, so I will make that change. > > The way to create EAL parameters is with the > SutNode.create_eal_parameters method, which addresses both of your > points - there's no need to make it optional, as you can use > self.create_eal_parameters() as the default value. > > > > >> > >> > >> > + app_parameters=3D"", > >> > >> This is also currently not used, so maybe we should drop it and add it > in the future, probably as a dedicated object representing the parameters. > > > > That makes sense, you're right that a more structured class for > providing parameters would work better. 
The difficulty with a dedicated > object, however, is that you have different parameters for different > applications so you couldn't get very granular. Regardless, it can be > removed from this patch for now and added back in the future. > > > > I imagined an object with methods for adding different kinds of > parameters - positional, without value, with value, long/short name, > etc. But that's not that relevant right now, we'll think about it when > we actually need it (and the use case will make everything > clearer). > > Thanks for clarifying your thoughts on it, I agree that something like that could be useful and is a good idea for when we need something like this. > >> > >> > >> > + ) -> InteractiveShell: > >> > >> This should return a Union of all the different sessions we support, > otherwise mypy complains: > >> tests/TestSuite_smoke_tests.py:68: error: Incompatible types in > assignment (expression has type "InteractiveShell", variable has type > "TestPmdShell") > >> > > > > I figured this would be allowed because they were subclasses and didn't > realize mypy would have a problem with it. I will fix this in the next > version. > > > >> > >> > + """Create a handler for an interactive session. > >> > + > >> > + This method is a factory that calls a method in OSSession to > create shells for > >> > + different DPDK applications. > >> > + > >> > + Args: > >> > + shell_type: Enum value representing the desired > application. > >> > + path_to_app: Represents a path to the application you > are attempting to > >> > + launch. This path will be executed at the start of > the app > >> > + initialization. If one isn't provided, the default > specified in the > >> > + enumeration will be used. > >> > + timeout: Timeout for the ssh channel > >> > >> We should specify what sort of timeout this is - for creating the > channel or for sent commands? > > > > > > Very good point, this is pretty vague if you weren't the one who wrote > it. 
I'll make sure to clear this up. > > > >> > >> > >> > + eal_parameters: List of EAL parameters to use to launch > the app. This is > >> > + ignored for base "shell" types. > >> > + app_parameters: Command-line flags to pass into the > application on launch. > >> > + Returns: > >> > + Instance of the desired interactive application. > >> > + """ > >> > + default_path: PurePath | None > >> > + # if we just want a default shell, there is no need to > append the DPDK build > >> > + # directory to the path > >> > + if shell_type.name =3D=3D "shell": > >> > + default_path =3D None > >> > + else: > >> > + default_path =3D self.main_session.join_remote_path( > >> > + self.remote_dpdk_build_dir, *shell_type.get_path() > >> > + ) > >> > >> All this path handling looks clunky, it should be part of the > InteractiveApp class. Would adding something like update_path to > InteractiveApp work? > > > > > > I agree this was a little messy, but after changing the default paths t= o > PurePaths, I could clean this up more by just defaulting to > shell_type.get_path() and then having a single if statement that checks i= f > it isn't a shell type and, if not, set the default path with appending th= e > build dir. it would look something more like this: > > > > default_path =3D shell_type.get_path() > > > > if shell_type !=3D InteractiveApp.shell: > > default_path =3D self.main_session.join_remote_path( > > self.remote_dpdk_build_dir, shell_type.get_path() > > ) > > > > My point was that we're still passing two arguments (the app enum and > the path) where we could be passing just one, I think. 
We could just > update the path of InteractiveApp and pass only that: > @unique > class InteractiveApp(Enum): > shell =3D [pathlib.PurePath("")] > testpmd =3D [pathlib.PurePath("app", "dpdk-testpmd")] > > @property > def path(self) -> pathlib.PurePath: > return self.value[0] > > @path.setter > def path(self, path: pathlib.PurePath): > self.value[0] =3D path > > The above stores the default path with the option of updating it. That > would be much more elegant than passing both the path and the Enum. > A note on the implementation: the value has to be a mutable type, > because we can't assign new values. With mutable types, we're > modifying the existing value. > > Another important question, which I didn't think of before, is do we > need the shell interactive app? It's not used anywhere and I don't > think we'll ever use it, so let's remove it. We can add it later if we > find an actual use for it. > I really only included it as a generic application that someone could use to pilot an app manually, which could be useful for simple scripts that need input but don't warrant a whole class. You're right, though: it makes sense for developers to just create a subclass and handle it properly. I'll remove this as an option and modify the pathing as you recommended. 
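For the archive, here's roughly what I'll end up with, minus the shell member since that's being dropped (an untested sketch):

```python
import pathlib
from enum import Enum, unique


@unique
class InteractiveApp(Enum):
    # the value is a single-element list because enum member values
    # cannot be reassigned, but a mutable value can be updated in place
    testpmd = [pathlib.PurePath("app", "dpdk-testpmd")]

    @property
    def path(self) -> pathlib.PurePath:
        return self.value[0]

    @path.setter
    def path(self, path: pathlib.PurePath) -> None:
        self.value[0] = path
```

The single-element list is what makes the in-place update possible; callers can prepend the build directory once and then pass only the enum around.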
> > >> > >> > + return > self.main_session.interactive_session.create_interactive_shell( > >> > + shell_type, > >> > + path_to_app or default_path, > >> > + timeout, > >> > + eal_parameters, > >> > + app_parameters, > >> > + ) > >> > + > >> > > >> > class EalParameters(object): > >> > def __init__( > >> > diff --git a/dts/tests/TestSuite_smoke_tests.py > b/dts/tests/TestSuite_smoke_tests.py > >> > new file mode 100644 > >> > index 00000000..a2fd9fae > >> > --- /dev/null > >> > +++ b/dts/tests/TestSuite_smoke_tests.py > >> > @@ -0,0 +1,101 @@ > >> > +# SPDX-License-Identifier: BSD-3-Clause > >> > +# Copyright(c) 2023 University of New Hampshire > >> > + > >> > +from framework.config import InteractiveApp > >> > +from framework.remote_session import TestPmdShell > >> > +from framework.test_suite import TestSuite > >> > + > >> > + > >> > +class SmokeTests(TestSuite): > >> > + is_blocking =3D True > >> > + # in this list, the first index is the address of the nic and > the second is > >> > + # the driver for that nic. > >> > + list_of_nics: list[tuple[str, str]] =3D [] > >> > >> Would using a dict make more sense? > > > > > > I thought a tuple was a good option because of the two pieces of data > but I think a dict would also be useful because it would be more clear wh= en > it is accessed later. I think turning this into a dict (and maybe migrati= ng > to an object later) is a good idea. > > > > Now that you mentioned objects, I'd say let's just store the ports. I > don't really see a reason for a dict or new object. > > I agree that now that I have moved over to the other way of using ports this abstraction out of the existing list doesn't make much sense. I'll change the list to just be the ports. > >> > >> > >> > + > >> > + def set_up_suite(self) -> None: > >> > + """ > >> > + Setup: > >> > + build all DPDK > >> > >> This needs to be updated. > > > > > > You're right, I guess I missed updating this. 
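Going back to storing the ports directly, a small sketch of the shape I have in mind (PortConfig here is a hypothetical stand-in for the framework's dataclass, with example values from conf.yaml):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class PortConfig:
    """Stand-in for the framework's PortConfig dataclass."""

    pci: str
    os_driver: str


# keep the PortConfig objects themselves instead of flattening each port
# into a (pci, driver) tuple; tests then read .pci and .os_driver directly
ports = [
    PortConfig(pci="0000:00:08.0", os_driver="i40e"),
    PortConfig(pci="0000:00:08.1", os_driver="i40e"),
]

configured_pcis = [port.pci for port in ports]
```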
> > > >> > >> > + """ > >> > + self.dpdk_build_dir_path =3D > self.sut_node.remote_dpdk_build_dir > >> > + for nic in self.sut_node.config.ports: > >> > + new_tuple =3D (nic.pci, nic.os_driver.strip()) > >> > + self.list_of_nics.append(new_tuple) > >> > >> It would be better to rename this based on what list of nics this is - > the one we expect to find or the one we're testing? > > > > > > Another spot that you're right, it is not nearly as clear if you weren'= t > the one who wrote it, I'll update this as well. > > > >> > >> > >> > + > >> > + def test_unit_tests(self) -> None: > >> > + """ > >> > + Test: > >> > + run the fast-test unit-test suite through meson > >> > + """ > >> > + self.sut_node.main_session.send_command( > >> > + f"meson test -C {self.dpdk_build_dir_path} --suite > fast-tests", > >> > + 300, > >> > + verify=3DTrue, > >> > + ) > >> > + > >> > + def test_driver_tests(self) -> None: > >> > + """ > >> > + Test: > >> > + run the driver-test unit-test suite through meson > >> > + """ > >> > + list_of_vdevs =3D "" > >> > + for dev in self.sut_node._execution_config.vdevs: > >> > + list_of_vdevs +=3D f"--vdev {dev} " > >> > + list_of_vdevs =3D list_of_vdevs[:-1] > >> > + if list_of_vdevs: > >> > + self._logger.info( > >> > + "Running driver tests with the following virtual " > >> > + f"devices: {list_of_vdevs}" > >> > + ) > >> > + self.sut_node.main_session.send_command( > >> > + f"meson test -C {self.dpdk_build_dir_path} --suite > driver-tests " > >> > + f'--test-args "{list_of_vdevs}"', > >> > + 300, > >> > + verify=3DTrue, > >> > + ) > >> > + else: > >> > + self.sut_node.main_session.send_command( > >> > + f"meson test -C {self.dpdk_build_dir_path} --suite > driver-tests", > >> > + 300, > >> > + verify=3DTrue, > >> > + ) > >> > + > >> > + def test_devices_listed_in_testpmd(self) -> None: > >> > + """ > >> > + Test: > >> > + Uses testpmd driver to verify that devices have been > found by testpmd > >> > + """ > >> > >> Every other test description is 
written in imperative and with lowercase > first letter. Let's unify everything to imperative with an uppercase first > letter, properly ending the sentence with a dot. > > > > Sounds good to me, I'll reformat them. > > > >> > >> > >> > + testpmd_driver: TestPmdShell =3D > self.sut_node.create_interactive_shell( > >> > + InteractiveApp.testpmd > >> > + ) > >> > + dev_list: list[str] =3D testpmd_driver.get_devices() > >> > + for nic in self.list_of_nics: > >> > >> Here's where the more explicit naming would be beneficial - I had to > think about what's what. > >> > >> > + self.verify( > >> > + nic[0] in dev_list, > >> > + f"Device {nic[0]} was not listed in testpmd's > available devices, " > >> > + "please check your configuration", > >> > + ) > >> > + > >> > + def test_device_bound_to_driver(self) -> None: > >> > + """ > >> > + Test: > >> > + ensure that all drivers listed in the config are bound > to the correct driver > >> > + """ > >> > + path_to_dev =3D self.sut_node.main_session.join_remote_path( > >> > + self.sut_node._remote_dpdk_dir, "usertools", > "dpdk-devbind.py" > >> > + ) > >> > + for nic in self.list_of_nics: > >> > + out =3D self.sut_node.main_session.send_command( > >> > + f"{path_to_dev} --status | grep {nic[0]}", 60 > >> > + ) > >> > >> This may be something we want to put into SutNode, something like > get_device_info. We'd only send the command once (not for every NIC as > we're doing here) and parse the output. > > > > > > I think similar to the above test for testpmd, it makes sense to isolate > the checking if the devices are bound to the right driver to smoke tests. > Just because after smoke tests verify that they are bound to the right > driver it would never be used in the SutNode again because you could then > trust the config file. 
Also, if we were to later change this to be isolat= ed > to only checking the NICs that are being used for this execution like we > had mentioned, you would want to isolate this to those nics that are > actually being used in the execution, not all of the NICs in the SutNode > again so I think keeping it in smoke tests makes more sense. > > > > In the test we only care about the port being bound to the right > driver. I was thinking more broadly - execute the command once and > store all of the information we can. With the dpdk-devbind script, > there's not much to store, but we don't have to use that. > > With that in mind, let's not bloat the patch any more. We can do the > more generic implementation later. > > > I do agree though that this can be done in only one call to dev-bind > which I wasn't thinking about when writing it and that I can refactor for > the next version of the patch. > > > >> > >> > >> > + self.verify( > >> > + len(out.stdout) !=3D 0, > >> > + f"Failed to find configured device ({nic[0]}) using > dpdk-devbind.py", > >> > + ) > >> > + for string in out.stdout.split(" "): > >> > + if "drv=3D" in string: > >> > + self.verify( > >> > + string.split("=3D")[1] =3D=3D nic[1], > >> > + f"Driver for device {nic[0]} does not match > driver listed in " > >> > + f'configuration (bound to > {string.split("=3D")[1]})', > >> > + ) > >> > -- > >> > 2.41.0 > >> > > > > > > > Thank you for the feedback, it's always very helpful. > > -Jeremy > Thank you again for the comments, Jeremy --000000000000439e3206003a00c2 Content-Type: text/html; charset="UTF-8" Content-Transfer-Encoding: quoted-printable
Just a couple of responses but anywhere I didn&#= 39;t respond I also agree.

On Tue, Jul 11, 2023 at 4:17=E2=80=AFAM= Juraj Linke=C5=A1 <juraj.linkes@pantheon.tech> wrote:
Some more comments below. I ag= ree where I didn't reply.

On Sat, Jul 8, 2023 at 1:07=E2=80=AFAM Jeremy Spewock <jspewock@iol.unh.edu> wrote= :
>
>
>
> On Thu, Jul 6, 2023 at 10:54=E2=80=AFAM Juraj Linke=C5=A1 <juraj.li= nkes@pantheon.tech> wrote:
>>
>> There are mypy errors related to paramiko:
>> framework/remote_session/remote/interactive_remote_session.py:8: e= rror: Library stubs not installed for "paramiko" (or incompatible= with Python 3.10)
>> framework/remote_session/remote/interactive_remote_session.py:9: e= rror: Library stubs not installed for "paramiko.ssh_exception" (o= r incompatible with Python 3.10)
>>
>> We do this for pexpect:
>> import pexpect=C2=A0 # type: ignore
>> from pexpect import pxssh=C2=A0 # type: ignore
>>
>> We should do the same for paramiko.
>
>
> Good to know, I'll be sure to change that to make sure it passes.<= br> >
>>
>>
>> There are also some changes which may require coordination. I'= m not sure how we'd do that, the best may be to align that off-list. >>
>> More inline.
>>
>> On Thu, Jun 15, 2023 at 10:13=E2=80=AFPM <jspewock@iol.unh.edu> wrote: >> >
>> > From: Jeremy Spewock <jspewock@iol.unh.edu>
>> >
>> > Adds a new test suite as well as configuration changes needed= for running
>> > smoke tests that verify general environment aspects of the sy= stem
>> > under test. If any of these tests fail, the DTS execution ter= minates
>> > as part of a "fail-fast" model.
>> >
>> > Signed-off-by: Jeremy Spewock <jspewock@iol.unh.edu>
>> > ---
>> >=C2=A0 dts/conf.yaml=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0|=C2=A0 13 ++
>> >=C2=A0 dts/framework/config/__init__.py=C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 | 114 +++++++++++++--
>> >=C2=A0 dts/framework/config/conf_yaml_schema.json=C2=A0 =C2=A0= | 135 +++++++++++++++++-
>> >=C2=A0 dts/framework/dts.py=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 | 101 ++++++++++---=
>> >=C2=A0 dts/framework/exception.py=C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 |=C2=A0 12 ++
>> >=C2=A0 dts/framework/remote_session/__init__.py=C2=A0 =C2=A0 = =C2=A0 |=C2=A0 10 +-
>> >=C2=A0 dts/framework/remote_session/os_session.py=C2=A0 =C2=A0= |=C2=A0 34 ++++-
>> >=C2=A0 dts/framework/remote_session/posix_session.py |=C2=A0 3= 0 ++++
>> >=C2=A0 .../remote_session/remote/__init__.py=C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0|=C2=A0 12 ++
>> >=C2=A0 .../remote/interactive_remote_session.py=C2=A0 =C2=A0 = =C2=A0 | 113 +++++++++++++++
>> >=C2=A0 .../remote/interactive_shell.py=C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0|=C2=A0 98 +++++++++++++
>> >=C2=A0 .../remote_session/remote/testpmd_shell.py=C2=A0 =C2=A0= |=C2=A0 58 ++++++++
>> >=C2=A0 dts/framework/test_result.py=C2=A0 =C2=A0 =C2=A0 =C2=A0= =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 |=C2=A0 38 ++++-
>> >=C2=A0 dts/framework/test_suite.py=C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0|=C2=A0 31 +++-
>> >=C2=A0 dts/framework/testbed_model/node.py=C2=A0 =C2=A0 =C2=A0= =C2=A0 =C2=A0 =C2=A0|=C2=A0 =C2=A02 +
>> >=C2=A0 dts/framework/testbed_model/sut_node.py=C2=A0 =C2=A0 = =C2=A0 =C2=A0| 110 +++++++++++++-
>> >=C2=A0 dts/tests/TestSuite_smoke_tests.py=C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 | 101 +++++++++++++
>> >=C2=A0 17 files changed, 962 insertions(+), 50 deletions(-) >> >=C2=A0 create mode 100644 dts/framework/remote_session/remote/= interactive_remote_session.py
>> >=C2=A0 create mode 100644 dts/framework/remote_session/remote/= interactive_shell.py
>> >=C2=A0 create mode 100644 dts/framework/remote_session/remote/= testpmd_shell.py
>> >=C2=A0 create mode 100644 dts/tests/TestSuite_smoke_tests.py >> >
>> > diff --git a/dts/conf.yaml b/dts/conf.yaml
>> > index a9bd8a3e..03fd57e1 100644
>> > --- a/dts/conf.yaml
>> > +++ b/dts/conf.yaml
>> > @@ -10,6 +10,8 @@ executions:
>> >=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 compiler_wrapper: ccache >> >=C2=A0 =C2=A0 =C2=A0 perf: false
>> >=C2=A0 =C2=A0 =C2=A0 func: true
>> > +=C2=A0 =C2=A0 vdevs: #names of virtual devices to be used fo= r testing
>>
>> Vdevs are optional, let's mention that in the comment as with = the hugepages config.
>
>
> That's a good point, I'll be sure to include that for clarity.=
>
>>
>>
>> >
>> > +=C2=A0 =C2=A0 =C2=A0 - "crypto_openssl"
>>
>> How (or where) is this configured in the original DTS? The vdevs a= re only going to be used on SUTs, so it may make sense to put them to node = config. On the other hand, each execution using the SUT may use different v= devs, so this is probably the best place. Maybe we could move it under syst= em_under_test in execution? That would make more sense I think - it would b= e an addition to the existing SUT configuration (as in here's where we = augment SUT config specific to the execution).
>>
>
>
> I added it under execution under that same line of thinking that you m= entioned: different executions might call for use of different vdevs. I thi= nk that is a good point however that these are only going to be used on the= SUT so it makes sense to add it under that section.
>
> I think this would potentially be good for other things we mentioned a= s well like how to handle ports on the SUT node and if those should be list= ed under SUT or execution. I think if we turn this system_under_test key in= to an object that we can add things to the SUT based on the execution. Then= , in the future, we could create a devices object that could be used for CP= U devices, NICs, or things of that nature and add them to the SUT based on = the execution.

It seems we have an agreement. Let's put vdevs under system_under_test<= br> and we'll put other execution specific SUT config there as we add new features.

>
>> >
>> >=C2=A0 =C2=A0 =C2=A0 test_suites:
>> >=C2=A0 =C2=A0 =C2=A0 =C2=A0 - hello_world
>> >=C2=A0 =C2=A0 =C2=A0 system_under_test: "SUT 1"
>> > @@ -20,6 +22,17 @@ nodes:
>> >=C2=A0 =C2=A0 =C2=A0 arch: x86_64
>> >=C2=A0 =C2=A0 =C2=A0 os: linux
>> >=C2=A0 =C2=A0 =C2=A0 lcores: ""
>> > +=C2=A0 =C2=A0 ports:
>> > +=C2=A0 =C2=A0 =C2=A0 - pci: "0000:00:08.0"
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 dpdk_os_driver: vfio-pci
>>
>> We could find a better name for this driver (and maybe even the re= gular os driver). It's the OS driver that dpdk uses and there could be = better names - os_dpdk_driver, os_driver_dpdk, os_driver_for_dpdk. Which on= e do you like the most? Or maybe some other name?
>> In either case we probably want to add a comment to both drivers t= hat explain what we mean here. The name doesn't need to be perfect then= . :-) I'll make sure to align the change with my patch if we change thi= s.
>
>
> I think out of those I like os_driver_for_dpdk which makes it a little= more clear that it is just an OS driver that DPDK is going to use, not som= ething specific to DPDK. I agree that this still isn't perfect but it i= s at least more clear and the comment will help clear it the rest of the wa= y.

Makes sense, I'll rename it to os_driver_for_dpdk in my patch.

>
>>
>> >
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 os_driver: i40e
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 peer_node: "TG 1"
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 peer_pci: "0000:00:08.0&quo= t;
>> > +=C2=A0 =C2=A0 =C2=A0 - pci: "0000:00:08.1"
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 dpdk_os_driver: vfio-pci
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 os_driver: i40e
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 peer_node: "TG 1"
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 peer_pci: "0000:00:08.1&quo= t;
>> >=C2=A0 =C2=A0 =C2=A0 use_first_core: false
>> >=C2=A0 =C2=A0 =C2=A0 memory_channels: 4
>> >=C2=A0 =C2=A0 =C2=A0 hugepages:=C2=A0 # optional; if removed, = will use system hugepage configuration
>> > diff --git a/dts/framework/config/__init__.py b/dts/framework= /config/__init__.py
>> > index ebb0823f..74e5158d 100644
>> > --- a/dts/framework/config/__init__.py
>> > +++ b/dts/framework/config/__init__.py
>> > @@ -72,6 +72,21 @@ class HugepageConfiguration:
>> >=C2=A0 =C2=A0 =C2=A0 force_first_numa: bool
>> >
>> >
>> > +@dataclass(slots=3DTrue, frozen=3DTrue)
>> > +class PortConfig:
>> > +=C2=A0 =C2=A0 id: int
>>
>> I removed id in my latest local version as I didn't see a real= use for it.
>>
>> >
>> > +=C2=A0 =C2=A0 node: str
>> > +=C2=A0 =C2=A0 pci: str
>> > +=C2=A0 =C2=A0 dpdk_os_driver: str
>> > +=C2=A0 =C2=A0 os_driver: str
>> > +=C2=A0 =C2=A0 peer_node: str
>> > +=C2=A0 =C2=A0 peer_pci: str
>> > +
>> > +=C2=A0 =C2=A0 @staticmethod
>> > +=C2=A0 =C2=A0 def from_dict(id: int, node: str, d: dict) -&g= t; "PortConfig":
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 return PortConfig(id=3Did, node= =3Dnode, **d)
>>
>> This would need to be changed if we remove the id.
>>
>
> That makes sense, I don't really see a need for it either so I wil= l also remove it.
>
>>
>> > +
>> > +
>> >  @dataclass(slots=True, frozen=True)
>> >  class NodeConfiguration:
>> >      name: str
>> > @@ -84,6 +99,7 @@ class NodeConfiguration:
>> >      use_first_core: bool
>> >      memory_channels: int
>> >      hugepages: HugepageConfiguration | None
>> > +    ports: list[PortConfig]
>> >
>> >      @staticmethod
>> >      def from_dict(d: dict) -> "NodeConfiguration":
>> > @@ -92,18 +108,46 @@ def from_dict(d: dict) -> "NodeConfiguration":
>> >          if "force_first_numa" not in hugepage_config:
>> >              hugepage_config["force_first_numa"] = False
>> >          hugepage_config = HugepageConfiguration(**hugepage_config)
>> > +        common_config = {
>> > +            "name": d["name"],
>> > +            "hostname": d["hostname"],
>> > +            "user": d["user"],
>> > +            "password": d.get("password"),
>> > +            "arch": Architecture(d["arch"]),
>> > +            "os": OS(d["os"]),
>> > +            "lcores": d.get("lcores", "1"),
>> > +            "use_first_core": d.get("use_first_core", False),
>> > +            "memory_channels": d.get("memory_channels", 1),
>> > +            "hugepages": hugepage_config,
>> > +            "ports": [
>> > +                PortConfig.from_dict(i, d["name"], port)
>> > +                for i, port in enumerate(d["ports"])
>> > +            ],
>>
>> This also needs to be modified when removing PortConfig.id:
>> "ports": [PortConfig.from_dict(d["name"], port) for port in d["ports"]],
>>
>> >
>> > +        }
>> > +
>> > +        return NodeConfiguration(**common_config)
>> > +
>> > +
>> > +@dataclass(slots=True)
>> > +class NodeVersionInfo:
>>
>> This looks like we could use it for all sorts of things in the future, so I'd just call it NodeInfo.
>>
>
>
> I agree that it could be used for a lot of things. I think I just added the version so it would be more specific, but in hindsight I agree that this could be shortened and used for multiple things.
>
>>
>> >
>> > +    """Class to hold important versions within the node.
>> > +
>> > +    This class, unlike the NodeConfiguration class, cannot be generated at the start.
>> > +    This is because we need to initialize a connection with the node before we can
>> > +    collet the information needed in this class. Therefore, it cannot be a part of
>>
>> Typo: collect
>>
>
> Oops, good catch.
>
>>
>> > +    the configuration class above.
>> > +    """
>> >
>> > -        return NodeConfiguration(
>> > -            name=d["name"],
>> > -            hostname=d["hostname"],
>> > -            user=d["user"],
>> > -            password=d.get("password"),
>> > -            arch=Architecture(d["arch"]),
>> > -            os=OS(d["os"]),
>> > -            lcores=d.get("lcores", "1"),
>> > -            use_first_core=d.get("use_first_core", False),
>> > -            memory_channels=d.get("memory_channels", 1),
>> > -            hugepages=hugepage_config,
>> > +    os_name: str
>> > +    os_version: str
>> > +    kernel_version: str
>> > +
>> > +    @staticmethod
>> > +    def from_dict(d: dict):
>> > +        return NodeVersionInfo(
>> > +            os_name=d["os_name"],
>> > +            os_version=d["os_version"],
>> > +            kernel_version=d["kernel_version"],
>> >          )
>> >
>> >
>> > @@ -128,6 +172,24 @@ def from_dict(d: dict) -> "BuildTargetConfiguration":
>> >          )
>> >
>> >
>> > +@dataclass(slots=True)
>> > +class BuildTargetVersionInfo:
>> > +    """Class to hold important versions within the build target.
>> > +
>> > +    This is very similar to the NodeVersionInfo class, it just instead holds information
>> > +    for the build target.
>> > +    """
>> > +
>> > +    dpdk_version: str
>> > +    compiler_version: str
>> > +
>> > +    @staticmethod
>> > +    def from_dict(d: dict):
>> > +        return BuildTargetVersionInfo(
>> > +            dpdk_version=d["dpdk_version"], compiler_version=d["compiler_version"]
>> > +        )
>> > +
>>
>> Do you expect we'll be storing more fields than just these two (if so, then we can again rename the class to BuildTargetInfo)? If it's just these two, the extra class may not help that much (we could pass around a tuple).
>>
>> On the other hand, there's not really a reason to not have this class. I could see us adding to it and I like BuildTargetInfo because of that.
>
>
> I figured that it could be useful for storing any extra information in the future and, like you said, it does no harm, so I made it a class. I think keeping it a class just makes it easier to extend in the future, but I'll rename it like the above info class.
>
>>
>>
>> >
>> > +
>> >  class TestSuiteConfigDict(TypedDict):
>> >      suite: str
>> >      cases: list[str]
>> > @@ -157,6 +219,8 @@ class ExecutionConfiguration:
>> >      func: bool
>> >      test_suites: list[TestSuiteConfig]
>> >      system_under_test: NodeConfiguration
>> > +    vdevs: list[str]
>> > +    skip_smoke_tests: bool
>>
>> This should be added to conf.yaml to demonstrate the availability of the option.
>
>
> Very good point, it would be hard for people to know it was an option otherwise.
>
>>
>> >
>> >
>> >      @staticmethod
>> >      def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
>> > @@ -167,14 +231,18 @@ def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
>> >              map(TestSuiteConfig.from_dict, d["test_suites"])
>> >          )
>> >          sut_name = d["system_under_test"]
>> > +        list_of_vdevs = d["vdevs"]
>> > +        skip_smoke_tests = d["skip_smoke_tests"] if "skip_smoke_tests" in d else False
>>
>> Just as with hugepages, we could skip the else branch if we set the default of skip_smoke_tests to False. I'm not sure which is the better approach.
>
>
> I am unsure of which is better and it doesn't make a massive difference, but I will change this to a .get() call where the default is False to more closely align with the hugepages configuration.

That seems to be the best option.
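For illustration, the two forms behave identically when the key is absent (a toy dict standing in for the parsed execution config):

```python
# No "skip_smoke_tests" key present, as in a config that omits the option.
d = {"perf": True, "func": True}

# Conditional-expression form from the patch.
skip_smoke_tests = d["skip_smoke_tests"] if "skip_smoke_tests" in d else False

# Shorter .get() form agreed on above, matching the hugepages handling.
skip_smoke_tests_get = d.get("skip_smoke_tests", False)

assert skip_smoke_tests == skip_smoke_tests_get
```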

>
>>
>>
>> >
>> >          assert sut_name in node_map, f"Unknown SUT {sut_name} in execution {d}"
>> >
>> >          return ExecutionConfiguration(
>> >              build_targets=build_targets,
>> >              perf=d["perf"],
>> >              func=d["func"],
>> > +            skip_smoke_tests=skip_smoke_tests,
>> >              test_suites=test_suites,
>> >              system_under_test=node_map[sut_name],
>> > +            vdevs=list_of_vdevs,
>> >          )
>> >
>> >
>> > @@ -221,3 +289,27 @@ def load_config() -> Configuration:
>> >
>> >
>> >  CONFIGURATION = load_config()
>> > +
>> > +
>> > +@unique
>> > +class InteractiveApp(Enum):
>> > +    """An enum that represents different supported interactive applications
>> > +
>> > +    The values in this enum must all be set to objects that have a key called
>> > +    "default_path" where "default_path" represents an array for the path to the
>> > +    application. This default path will be passed into the handler class for the
>> > +    application so that it can start the application. For every key other than
>> > +    the default shell option, the path will be appended to the path to the DPDK
>> > +    build directory for the current SUT node.
>> > +    """
>> > +
>> > +    shell = {"default_path": [""]}
>> > +    testpmd = {"default_path": ["app", "dpdk-testpmd"]}
>>
>> The stored path could be a PurePath object. It'll be instantiated as an OS-specific path, but main_session.join_remote_path will convert it properly.
>>
>
> I was actually thinking about this because I saw it done in a different snippet of DTS code in one of your patches, I believe, and realized it would be a better approach. I much prefer using PurePath here because it is clearer and feels cleaner, so I will make this change.
>
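A rough sketch of what storing `PurePath` values could look like (an assumption about the next revision, not the final implementation; the framework's `join_remote_path` would later combine the non-shell paths with the remote DPDK build directory):

```python
from enum import Enum, unique
from pathlib import PurePath


@unique
class InteractiveApp(Enum):
    """Hypothetical sketch: enum values as PurePath objects."""

    shell = PurePath("")
    testpmd = PurePath("app", "dpdk-testpmd")

    def get_path(self) -> PurePath:
        # No dict wrapper needed once the value is a path itself.
        return self.value


testpmd_path = InteractiveApp.testpmd.get_path()
```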
>>
>> > +
>> > +    def get_path(self):
>> > +        """A method for getting the default paths of an application
>> > +
>> > +        Returns:
>> > +            String array that represents an OS agnostic path to the application.
>> > +        """
>> > +        return self.value["default_path"]
>> > diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
>> > index ca2d4a1e..3f7c301a 100644
>> > --- a/dts/framework/config/conf_yaml_schema.json
>> > +++ b/dts/framework/config/conf_yaml_schema.json
>> > @@ -6,6 +6,76 @@
>> >        "type": "string",
>> >        "description": "A unique identifier for a node"
>> >      },
>> > +    "NIC": {
>> > +      "type": "string",
>> > +      "enum": [
>> > +        "ALL",
>> > +        "ConnectX3_MT4103",
>> > +        "ConnectX4_LX_MT4117",
>> > +        "ConnectX4_MT4115",
>> > +        "ConnectX5_MT4119",
>> > +        "ConnectX5_MT4121",
>> > +        "I40E_10G-10G_BASE_T_BC",
>> > +        "I40E_10G-10G_BASE_T_X722",
>> > +        "I40E_10G-SFP_X722",
>> > +        "I40E_10G-SFP_XL710",
>> > +        "I40E_10G-X722_A0",
>> > +        "I40E_1G-1G_BASE_T_X722",
>> > +        "I40E_25G-25G_SFP28",
>> > +        "I40E_40G-QSFP_A",
>> > +        "I40E_40G-QSFP_B",
>> > +        "IAVF-ADAPTIVE_VF",
>> > +        "IAVF-VF",
>> > +        "IAVF_10G-X722_VF",
>> > +        "ICE_100G-E810C_QSFP",
>> > +        "ICE_25G-E810C_SFP",
>> > +        "ICE_25G-E810_XXV_SFP",
>> > +        "IGB-I350_VF",
>> > +        "IGB_1G-82540EM",
>> > +        "IGB_1G-82545EM_COPPER",
>> > +        "IGB_1G-82571EB_COPPER",
>> > +        "IGB_1G-82574L",
>> > +        "IGB_1G-82576",
>> > +        "IGB_1G-82576_QUAD_COPPER",
>> > +        "IGB_1G-82576_QUAD_COPPER_ET2",
>> > +        "IGB_1G-82580_COPPER",
>> > +        "IGB_1G-I210_COPPER",
>> > +        "IGB_1G-I350_COPPER",
>> > +        "IGB_1G-I354_SGMII",
>> > +        "IGB_1G-PCH_LPTLP_I218_LM",
>> > +        "IGB_1G-PCH_LPTLP_I218_V",
>> > +        "IGB_1G-PCH_LPT_I217_LM",
>> > +        "IGB_1G-PCH_LPT_I217_V",
>> > +        "IGB_2.5G-I354_BACKPLANE_2_5GBPS",
>> > +        "IGC-I225_LM",
>> > +        "IGC-I226_LM",
>> > +        "IXGBE_10G-82599_SFP",
>> > +        "IXGBE_10G-82599_SFP_SF_QP",
>> > +        "IXGBE_10G-82599_T3_LOM",
>> > +        "IXGBE_10G-82599_VF",
>> > +        "IXGBE_10G-X540T",
>> > +        "IXGBE_10G-X540_VF",
>> > +        "IXGBE_10G-X550EM_A_SFP",
>> > +        "IXGBE_10G-X550EM_X_10G_T",
>> > +        "IXGBE_10G-X550EM_X_SFP",
>> > +        "IXGBE_10G-X550EM_X_VF",
>> > +        "IXGBE_10G-X550T",
>> > +        "IXGBE_10G-X550_VF",
>> > +        "brcm_57414",
>> > +        "brcm_P2100G",
>> > +        "cavium_0011",
>> > +        "cavium_a034",
>> > +        "cavium_a063",
>> > +        "cavium_a064",
>> > +        "fastlinq_ql41000",
>> > +        "fastlinq_ql41000_vf",
>> > +        "fastlinq_ql45000",
>> > +        "fastlinq_ql45000_vf",
>> > +        "hi1822",
>> > +        "virtio"
>> > +      ]
>> > +    },
>> > +
>>
>> All these NICs may be overkill, do we want to trim them?
>>
>
>
> I think in general that the more we have the better, to make it more universally usable. If a NIC isn't supported by DTS anymore we could pull it out, but I don't see a problem with maintaining a list that has all supported NICs even if it does end up being long.
>

The broader question is what does it mean that a NIC is supported in
DTS? That's a question we should address in the CI/DTS call; in the
meantime, we could just leave the list as is.


I think this would be a very good thing to bring up and agree that there should be more discussion on it. It probably is better to leave the list longer in the meantime, like you were saying as well.
>>
>> >
>> >      "ARCH": {
>> >        "type": "string",
>> >        "enum": [
>> > @@ -94,10 +164,24 @@
>> >          "amount"
>> >        ]
>> >      },
>> > +    "pci_address": {
>> > +      "type": "string",
>> > +      "pattern": "^[\\da-fA-F]{4}:[\\da-fA-F]{2}:[\\da-fA-F]{2}.\\d:?\\w*$"
>> > +    },
>> > +    "port_peer_address": {
>> > +      "description": "Peer is a TRex port, an IXIA port or a PCI address",
>> > +      "oneOf": [
>> > +        {
>> > +          "description": "PCI peer port",
>> > +          "$ref": "#/definitions/pci_address"
>> > +        }
>> > +      ]
>> > +    },
>> >      "test_suite": {
>> >        "type": "string",
>> >        "enum": [
>> > -        "hello_world"
>> > +        "hello_world",
>> > +        "smoke_tests"
>>
>> Is there a reason to provide both this and the execution.skip_smoke_tests option? If we provide both, we may need to think about what happens when both are present. I'd say it's best for everyone (users and devs) to have only one option, and I'd lean towards execution.skip_smoke_tests.
>
>
> Good catch, I meant to remove this. In theory, if you added it in both places it wouldn't cause too much of an issue; you would just be running smoke tests twice in your execution. I will remove it anyway though.
>
>>
>> >        ]
>> >      },
>> >      "test_target": {
>> > @@ -165,6 +249,44 @@
>> >          },
>> >          "hugepages": {
>> >            "$ref": "#/definitions/hugepages"
>> > +        },
>> > +        "ports": {
>> > +          "type": "array",
>> > +          "items": {
>> > +            "type": "object",
>> > +            "description": "Each port should be described on both sides of the connection. This makes configuration slightly more verbose but greatly simplifies implementation. If there are any inconsistencies, then DTS will not run until that issue is fixed. An example inconsistency would be port 1, node 1 says it is connected to port 1, node 2, but port 1, node 2 says it is connected to port 2, node 1.",
>> > +            "properties": {
>> > +              "pci": {
>> > +                "$ref": "#/definitions/pci_address",
>> > +                "description": "The local PCI address of the port"
>> > +              },
>> > +              "dpdk_os_driver": {
>> > +                "type": "string",
>> > +                "description": "The driver that the kernel should bind this device to for DPDK to use it. (ex: vfio-pci)"
>> > +              },
>> > +              "os_driver": {
>> > +                "type": "string",
>> > +                "description": "The driver normally used by this port (ex: i40e)"
>> > +              },
>> > +              "peer_node": {
>> > +                "type": "string",
>> > +                "description": "The name of the node the peer port is on"
>> > +              },
>> > +              "peer_pci": {
>> > +                "$ref": "#/definitions/pci_address",
>> > +                "description": "The PCI address of the peer port"
>> > +              }
>> > +            },
>> > +            "additionalProperties": false,
>> > +            "required": [
>> > +              "pci",
>> > +              "dpdk_os_driver",
>> > +              "os_driver",
>> > +              "peer_node",
>> > +              "peer_pci"
>> > +            ]
>> > +          },
>> > +          "minimum": 1
>> >          }
>> >        },
>> >        "additionalProperties": false,
>> > @@ -211,6 +333,17 @@
>> >          ]
>> >        }
>> >      },
>> > +      "vdevs": {
>> > +        "description": "Names of vdevs to be used in execution",
>> > +        "type": "array",
>> > +        "items": {
>> > +          "type":"string"
>>
>> Missing space before "string".
>>
>> > +        }
>> > +      },
>> > +      "skip_smoke_tests": {
>> > +        "description": "Optional field that allows you to skip smoke testing",
>> > +        "type":"boolean"
>>
>> Also the missing space.
>
>
> Good call, I'll correct these.
>
>>
>>
>> > +      },
>> >        "system_under_test": {
>> >          "$ref": "#/definitions/node_name"
>> >        }
>> > diff --git a/dts/framework/dts.py b/dts/framework/dts.py
>> > index 05022845..f76c9ceb 100644
>> > --- a/dts/framework/dts.py
>> > +++ b/dts/framework/dts.py
>> > @@ -5,7 +5,13 @@
>> >
>> >  import sys
>> >
>> > -from .config import CONFIGURATION, BuildTargetConfiguration, ExecutionConfiguration
>> > +from .config import (
>> > +    CONFIGURATION,
>> > +    BuildTargetConfiguration,
>> > +    ExecutionConfiguration,
>> > +    TestSuiteConfig,
>> > +)
>> > +from .exception import BlockingTestSuiteError
>> >  from .logger import DTSLOG, getLogger
>> >  from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result
>> >  from .test_suite import get_test_suites
>> > @@ -82,7 +88,9 @@ def _run_execution(
>> >      running all build targets in the given execution.
>> >      """
>> >      dts_logger.info(f"Running execution with SUT '{execution.system_under_test.name}'.")
>> > -    execution_result = result.add_execution(sut_node.config)
>> > +    execution_result = result.add_execution(
>> > +        sut_node.config, sut_node.get_node_versions()
>> > +    )
>> >
>> >      try:
>> >          sut_node.set_up_execution(execution)
>> > @@ -118,14 +126,17 @@ def _run_build_target(
>> >
>> >      try:
>> >          sut_node.set_up_build_target(build_target)
>> > -        result.dpdk_version = sut_node.dpdk_version
>> > +        # result.dpdk_version = sut_node.dpdk_version
>> > +        build_target_result.add_build_target_versions(
>> > +            sut_node.get_build_target_versions()
>> > +        )
>> >          build_target_result.update_setup(Result.PASS)
>> >      except Exception as e:
>> >          dts_logger.exception("Build target setup failed.")
>> >          build_target_result.update_setup(Result.FAIL, e)
>> >
>> >      else:
>> > -        _run_suites(sut_node, execution, build_target_result)
>> > +        _run_all_suites(sut_node, execution, build_target_result)
>> >
>> >      finally:
>> >          try:
>> > @@ -136,7 +147,46 @@ def _run_build_target(
>> >              build_target_result.update_teardown(Result.FAIL, e)
>> >
>> >
>> > -def _run_suites(
>> > +def _run_single_suite(
>> > +    sut_node: SutNode,
>> > +    execution: ExecutionConfiguration,
>> > +    build_target_result: BuildTargetResult,
>> > +    test_suite_config: TestSuiteConfig,
>> > +) -> None:
>> > +    """Runs a single test suite.
>> > +
>> > +    Args:
>> > +        sut_node: Node to run tests on.
>> > +        execution: Execution the test case belongs to.
>> > +        build_target_result: Build target configuration test case is run on
>> > +        test_suite_config: Test suite configuration
>> > +
>> > +    Raises:
>> > +        BlockingTestSuiteError: If a test suite that was marked as blocking fails.
>> > +    """
>> > +    try:
>> > +        full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}"
>> > +        test_suite_classes = get_test_suites(full_suite_path)
>> > +        suites_str = ", ".join((x.__name__ for x in test_suite_classes))
>> > +        dts_logger.debug(f"Found test suites '{suites_str}' in '{full_suite_path}'.")
>> > +    except Exception as e:
>> > +        dts_logger.exception("An error occurred when searching for test suites.")
>> > +        result.update_setup(Result.ERROR, e)
>> > +
>> > +    else:
>> > +        for test_suite_class in test_suite_classes:
>> > +            test_suite = test_suite_class(
>> > +                sut_node,
>> > +                test_suite_config.test_cases,
>> > +                execution.func,
>> > +                build_target_result,
>> > +                sut_node._build_target_config,
>> > +                result,
>> > +            )
>> > +            test_suite.run()
>> > +
>> > +
>> > +def _run_all_suites(
>> >      sut_node: SutNode,
>> >      execution: ExecutionConfiguration,
>> >      build_target_result: BuildTargetResult,
>> > @@ -146,27 +196,32 @@ def _run_suites(
>> >      with possibly only a subset of test cases.
>> >      If no subset is specified, run all test cases.
>> >      """
>> > +    end_build_target = False
>> > +    smoke_test_config = TestSuiteConfig.from_dict("smoke_tests")
>> > +    if not execution.skip_smoke_tests:
>> > +        try:
>> > +            _run_single_suite(
>> > +                sut_node, execution, build_target_result, smoke_test_config
>> > +            )
>> > +        except BlockingTestSuiteError as e:
>> > +            dts_logger.exception("Smoke tests failed, stopping build target execution.")
>> > +            result.add_error(e)
>> > +            return
>> >      for test_suite_config in execution.test_suites:
>> >          try:
>> > -            full_suite_path = f"tests.TestSuite_{test_suite_config.test_suite}"
>> > -            test_suite_classes = get_test_suites(full_suite_path)
>> > -            suites_str = ", ".join((x.__name__ for x in test_suite_classes))
>> > -            dts_logger.debug(
>> > -                f"Found test suites '{suites_str}' in '{full_suite_path}'."
>> > +            _run_single_suite(
>> > +                sut_node, execution, build_target_result, test_suite_config
>> >              )
>> > -        except Exception as e:
>> > -            dts_logger.exception("An error occurred when searching for test suites.")
>> > -            result.update_setup(Result.ERROR, e)
>> > -
>> > -        else:
>> > -            for test_suite_class in test_suite_classes:
>> > -                test_suite = test_suite_class(
>> > -                    sut_node,
>> > -                    test_suite_config.test_cases,
>> > -                    execution.func,
>> > -                    build_target_result,
>> > -                )
>> > -                test_suite.run()
>> > +        except BlockingTestSuiteError as e:
>> > +            dts_logger.exception(
>> > +                f"An error occurred within {test_suite_config.test_suite}. "
>> > +                "Skipping build target..."
>> > +            )
>> > +            result.add_error(e)
>> > +            end_build_target = True
>> > +        # if a blocking test failed and we need to bail out of suite executions
>> > +        if end_build_target:
>> > +            break
>> >
>>
>> This doesn't look great to me. I think we could just add a list[TestSuiteConfig] parameter to _run_suites. We could then just pass a list with/without smoke TestSuiteConfig, something like:
>> test_suites = execution.test_suites
>> if not execution.skip_smoke_tests:
>>     test_suites[:0] = [TestSuiteConfig.from_dict("smoke_tests")]
>> _run_suites(sut_node, execution, build_target_result, test_suites)
>>
>> A note on method ordering:
>> _run_single_suite before _run_all_suites was confusing to me, as the methods are used in the reverse order - _run_build_target calls _run_all_suites, which only then calls _run_single_suite. I'd like the methods to be ordered in the order they're used in the code.
>
>
> I agree this was ugly; that's a good way to handle it and I'll migrate to just adding it to the front of the list. I think I can actually use the execution.test_suites array directly that is already being used and just prepend to the front of that. This way the list of test suites in the execution configuration is accurate to what actually ran. I don't recall why this happened with the method ordering either, but I'll get that corrected as well. I'm thinking it'll look something more like this:

The example is missing, but not needed; just prepend it to the list if
possible. My snippet operated under the assumption of the config being a
frozen dataclass, but that may not matter for mutable types.
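The prepending approach discussed above can be sketched as follows. Note that `TestSuiteConfig` here is a minimal stand-in for the real DTS config class, and `build_suite_list` is a hypothetical helper name, not part of the patch:

```python
from dataclasses import dataclass


@dataclass
class TestSuiteConfig:
    """Minimal stand-in for the real DTS TestSuiteConfig."""

    test_suite: str


def build_suite_list(
    configured: list[TestSuiteConfig], skip_smoke_tests: bool
) -> list[TestSuiteConfig]:
    """Return the suites to run, prepending smoke tests unless skipped."""
    suites = list(configured)  # copy so the original config is untouched
    if not skip_smoke_tests:
        suites[:0] = [TestSuiteConfig("smoke_tests")]
    return suites
```

With this shape, the execution's configured suites stay intact while the runner sees smoke tests first whenever they are enabled.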

>
>>
>> >
>> >  def _exit_dts() -> None:
>> > diff --git a/dts/framework/exception.py b/dts/framework/exception.py
>> > index ca353d98..dfb12df4 100644
>> > --- a/dts/framework/exception.py
>> > +++ b/dts/framework/exception.py
>> > @@ -25,6 +25,7 @@ class ErrorSeverity(IntEnum):
>> >      SSH_ERR = 4
>> >      DPDK_BUILD_ERR = 10
>> >      TESTCASE_VERIFY_ERR = 20
>> > +    BLOCKING_TESTSUITE_ERR = 25
>> >
>> >
>> >  class DTSError(Exception):
>> > @@ -144,3 +145,14 @@ def __init__(self, value: str):
>> >
>> >      def __str__(self) -> str:
>> >          return repr(self.value)
>> > +
>> > +
>> > +class BlockingTestSuiteError(DTSError):
>> > +    suite_name: str
>> > +    severity: ClassVar[ErrorSeverity] = ErrorSeverity.BLOCKING_TESTSUITE_ERR
>> > +
>> > +    def __init__(self, suite_name: str) -> None:
>> > +        self.suite_name = suite_name
>> > +
>> > +    def __str__(self) -> str:
>> > +        return f"Blocking suite {self.suite_name} failed."
>> > diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
>> > index ee221503..04342b8d 100644
>> > --- a/dts/framework/remote_session/__init__.py
>> > +++ b/dts/framework/remote_session/__init__.py
>> > @@ -1,5 +1,6 @@
>> >  # SPDX-License-Identifier: BSD-3-Clause
>> >  # Copyright(c) 2023 PANTHEON.tech s.r.o.
>> > +# Copyright(c) 2022-2023 University of New Hampshire
>>
>> Should the year be just 2023? I'm not sure about these copyright statements.
>
>
> I think just 2023 makes sense. I wasn't completely sure either; I was trying to align with older copyright statements throughout the code and wasn't sure why some of them had a range. In any case, just 2023 makes sense to me and matches the one above, so I have no problem making it that.

2022-2023 is there in files which were applied in 2022 and modified in
2023 (by the same author/organization). This would be the first
modification by UNH, hence 2023 would make more sense to me.


That's a very good distinction, thank you for the info!

>
>>
>> >
>> >  """
>> >  The package provides modules for managing remote connections to a remote host (node),
>> > @@ -17,7 +18,14 @@
>> >
>> >  from .linux_session import LinuxSession
>> >  from .os_session import OSSession
>> > -from .remote import CommandResult, RemoteSession, SSHSession
>> > +from .remote import (
>> > +    CommandResult,
>> > +    InteractiveRemoteSession,
>> > +    InteractiveShell,
>> > +    RemoteSession,
>> > +    SSHSession,
>> > +    TestPmdShell,
>> > +)
>> >
>> >
>> >  def create_session(
>> > diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
>> > index 4c48ae25..f5f53923 100644
>> > --- a/dts/framework/remote_session/os_session.py
>> > +++ b/dts/framework/remote_session/os_session.py
>> > @@ -12,7 +12,13 @@
>> >  from framework.testbed_model import LogicalCore
>> >  from framework.utils import EnvVarsDict, MesonArgs
>> >
>> > -from .remote import CommandResult, RemoteSession, create_remote_session
>> > +from .remote import (
>> > +    CommandResult,
>> > +    InteractiveRemoteSession,
>> > +    RemoteSession,
>> > +    create_interactive_session,
>> > +    create_remote_session,
>> > +)
>> >
>> >
>> >  class OSSession(ABC):
>> > @@ -26,6 +32,7 @@ class OSSession(ABC):
>> >      name: str
>> >      _logger: DTSLOG
>> >      remote_session: RemoteSession
>> > +    interactive_session: InteractiveRemoteSession
>> >
>> >      def __init__(
>> >          self,
>> > @@ -37,6 +44,7 @@ def __init__(
>> >          self.name = name
>> >          self._logger = logger
>> >          self.remote_session = create_remote_session(node_config, name, logger)
>> > +        self.interactive_session = create_interactive_session(node_config, name, logger)
>>
>> We may not want to create the interactive session at this point. This does create a connection to the node, which we don't want (it is extra time consumed) when creating an extra session on top of the main_session (with Node.create_session). I think we could move this to OSSession.create_interactive_shell. More below.
>
>
> I think the idea of initializing it here was what we had discussed before: having an SSH session for interactive shells open in the background throughout the entire run. If I understand correctly, you're suggesting we only initialize this connection the first time we need it and then leave it in the background afterwards. I can see how this would be more efficient for a run where the interactive session is never used, but it adds extra logic to ensure the connection is initialized exactly once rather than every time an interactive shell is created. This could be done, but considering that this will always be initialized with smoke tests, the only way you wouldn't have an interactive remote session created at the start is if you disable smoke tests. I think it is easier to keep it running in the background rather than spawn it on first use and have to worry about whether a connection currently exists.

Right, with smoke tests almost always running, there may not be that
much of an advantage in initializing it only when needed. On the other
hand, the check could be very simple - the same thing we do with
properties such as SutNode.os_name.


I agree that it wouldn't be hard to check whether it's defined. I was just thinking that since we're going to spend the time more often than not anyway, it makes sense to connect up front so the cost falls during initialization rather than during a test suite. If you disagree, we could easily change this in the future and connect only as needed; in the rare case you may be right that it would be more efficient, but otherwise it made more sense to me to do it during the initialization stages of the run.
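For reference, the lazy-initialization pattern Juraj mentions (the same shape as properties like SutNode.os_name) could look roughly like the sketch below. The `Node` class and `_connect` placeholder are hypothetical stand-ins, not the real DTS code:

```python
class Node:
    """Sketch of a lazily created interactive session, assuming a
    paramiko-style connection hidden behind _connect()."""

    def __init__(self) -> None:
        self._interactive_session = None

    @property
    def interactive_session(self):
        # Connect only on first access, then reuse the same session.
        if self._interactive_session is None:
            self._interactive_session = self._connect()
        return self._interactive_session

    def _connect(self):
        # Placeholder for the real SSH connection setup.
        return object()
```

The property hides the "is it initialized yet" check from callers, so test code never has to care which access was first.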
>>
>>
>> >
>> >      def close(self, force: bool = False) -> None:
>> >          """
>> > @@ -173,3 +181,27 @@ def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
>> >          if needed and mount the hugepages if needed.
>> >          If force_first_numa is True, configure hugepages just on the first socket.
>> >          """
>> > +
>> > +    @abstractmethod
>> > +    def get_compiler_version(self, compiler_name: str) -> str:
>> > +        """
>> > +        Get installed version of compiler used for DPDK
>> > +        """
>> > +
>> > +    @abstractmethod
>> > +    def get_os_name(self) -> str:
>> > +        """
>> > +        Get name of OS for the node
>> > +        """
>> > +
>> > +    @abstractmethod
>> > +    def get_os_version(self) -> str:
>> > +        """
>> > +        Get version of OS for the node
>> > +        """
>> > +
>> > +    @abstractmethod
>> > +    def get_kernel_version(self) -> str:
>> > +        """
>> > +        Get kernel version for the node
>> > +        """
>>
>> These could be merged into one method (returning NodeVersionInfo) - is there a reason to separate them? We could get some performance improvement with one method (e.g. getting /etc/os-release just once and then processing it locally, or we could use a script executed remotely to have just one remote call, but that's something to think about in the future).
>
>
> That's a good point. Originally I left them separate so that during the run you could access just what you need, but the same data can be read from a single returned object.
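A sketch of the merged approach: one remote call fetches /etc/os-release and the result is parsed locally into a single object. `NodeVersionInfo` and `parse_os_release` are hypothetical names illustrating the suggestion above, not the actual patch:

```python
from dataclasses import dataclass


@dataclass
class NodeVersionInfo:
    """All node version data gathered in one pass."""

    os_name: str
    os_version: str
    kernel_version: str


def parse_os_release(os_release: str, kernel_version: str) -> NodeVersionInfo:
    """Parse the text of /etc/os-release fetched with a single remote call."""
    fields: dict[str, str] = {}
    for line in os_release.splitlines():
        key, sep, value = line.partition("=")
        if sep:
            fields[key] = value.strip('"')
    return NodeVersionInfo(
        os_name=fields.get("NAME", ""),
        os_version=fields.get("VERSION", ""),
        kernel_version=kernel_version,
    )
```

Parsing locally replaces two awk invocations with one `cat /etc/os-release`, which is where the remote-call saving would come from.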
>
>>
>> > diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/remote_session/posix_session.py
>> > index d38062e8..dc6689f5 100644
>> > --- a/dts/framework/remote_session/posix_session.py
>> > +++ b/dts/framework/remote_session/posix_session.py
>> > @@ -219,3 +219,33 @@ def _remove_dpdk_runtime_dirs(
>> >
>> >      def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
>> >          return ""
>> > +
>> > +    def get_compiler_version(self, compiler_name: str) -> str:
>> > +        match compiler_name:
>> > +            case "gcc":
>> > +                return self.send_command(f"{compiler_name} --version", 60).stdout.split(
>> > +                    "\n"
>> > +                )[0]
>> > +            case "clang":
>> > +                return self.send_command(f"{compiler_name} --version", 60).stdout.split(
>> > +                    "\n"
>> > +                )[0]
>> > +            case "msvc":
>> > +                return self.send_command("cl", 60).stdout
>> > +            case "icc":
>> > +                return self.send_command(f"{compiler_name} -V", 60).stdout
>> > +            case _:
>> > +                raise ValueError(f"Unknown compiler {compiler_name}")
>> > +
>> > +    def get_os_name(self) -> str:
>> > +        return self.send_command(
>> > +            "awk -F= '$1==\"NAME\" {print $2}' /etc/os-release", 60
>> > +        ).stdout
>> > +
>> > +    def get_os_version(self) -> str:
>> > +        return self.send_command(
>> > +            "awk -F= '$1==\"VERSION\" {print $2}' /etc/os-release", 60, True
>> > +        ).stdout
>> > +
>> > +    def get_kernel_version(self) -> str:
>> > +        return self.send_command("uname -r", 60).stdout
>>
>> These 60 second timeouts seem high to me. All of these commands should execute quickly, so the default timeout seems fine to me.
>
>
> That makes sense to me, no sense in making it too long.
>
>>
>>
>> > diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
>> > index 8a151221..ef7b4851 100644
>> > --- a/dts/framework/remote_session/remote/__init__.py
>> > +++ b/dts/framework/remote_session/remote/__init__.py
>> > @@ -1,16 +1,28 @@
>> >  # SPDX-License-Identifier: BSD-3-Clause
>> >  # Copyright(c) 2023 PANTHEON.tech s.r.o.
>> > +# Copyright(c) 2022-2023 University of New Hampshire
>> >
>> >  # pylama:ignore=W0611
>> >
>> > +from paramiko import AutoAddPolicy, SSHClient
>> > +
>> >  from framework.config import NodeConfiguration
>> >  from framework.logger import DTSLOG
>> >
>> > +from .interactive_remote_session import InteractiveRemoteSession
>> > +from .interactive_shell import InteractiveShell
>> >  from .remote_session import CommandResult, RemoteSession
>> >  from .ssh_session import SSHSession
>> > +from .testpmd_shell import TestPmdShell
>> >
>> >
>> >  def create_remote_session(
>> >      node_config: NodeConfiguration, name: str, logger: DTSLOG
>> >  ) -> RemoteSession:
>> >      return SSHSession(node_config, name, logger)
>> > +
>> > +
>> > +def create_interactive_session(
>> > +    node_config: NodeConfiguration, name: str, logger: DTSLOG
>> > +) -> InteractiveRemoteSession:
>> > +    return InteractiveRemoteSession(node_config, logger)
>> > diff --git a/dts/framework/remote_session/remote/interactive_remote_session.py b/dts/framework/remote_session/remote/interactive_remote_session.py
>> > new file mode 100644
>> > index 00000000..4e88308a
>> > --- /dev/null
>> > +++ b/dts/framework/remote_session/remote/interactive_remote_session.py
>> > @@ -0,0 +1,113 @@
>> > +# SPDX-License-Identifier: BSD-3-Clause
>> > +# Copyright(c) 2023 University of New Hampshire
>> > +
>> > +import socket
>> > +import traceback
>> > +from pathlib import PurePath
>> > +
>> > +from paramiko import AutoAddPolicy, SSHClient, Transport
>> > +from paramiko.ssh_exception import (
>> > +    AuthenticationException,
>> > +    BadHostKeyException,
>> > +    NoValidConnectionsError,
>> > +    SSHException,
>> > +)
>> > +
>> > +from framework.config import InteractiveApp, NodeConfiguration
>> > +from framework.exception import SSHConnectionError
>> > +from framework.logger import DTSLOG
>> > +from framework.settings import SETTINGS
>> > +
>> > +from .interactive_shell import InteractiveShell
>> > +from .testpmd_shell import TestPmdShell
>> > +
>> > +
>> > +class InteractiveRemoteSession:
>> > +    hostname: str
>> > +    ip: str
>> > +    port: int
>> > +    username: str
>> > +    password: str
>> > +    _logger: DTSLOG
>> > +    _node_config: NodeConfiguration
>> > +    session: SSHClient
>> > +    _transport: Transport | None
>> > +
>> > +    def __init__(self, node_config: NodeConfiguration, _logger: DTSLOG) -> None:
>> > +        self._node_config = node_config
>> > +        self._logger = _logger
>> > +        self.hostname = node_config.hostname
>> > +        self.username = node_config.user
>> > +        self.password = node_config.password if node_config.password else ""
>> > +        port = 22
>>
>> framework/remote_session/remote/interactive_remote_session.py:45: error: Incompatible types in assignment (expression has type "str", variable has type "int")
>>
>> We can address this by making 22 a string (port = "22").
>
>
> I'll fix this as well.
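The hostname parsing being discussed, with the string-typed default that satisfies mypy, could be sketched like this (`parse_host` is a hypothetical helper, not a name from the patch):

```python
def parse_host(hostname: str) -> tuple[str, int]:
    """Split an optional ':port' suffix off a hostname; default to port 22.

    Keeping the default as the string "22" means both assignments to the
    local variable are str, which is what satisfies mypy.
    """
    ip = hostname
    port = "22"  # str, so the split() branch below does not change the type
    if ":" in hostname:
        ip, port = hostname.split(":")
    return ip, int(port)
```

The single `int(port)` conversion at the end then produces the integer the rest of the session code needs.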
>
>>
>>
>> > +        self.ip = node_config.hostname
>> > +        if ":" in node_config.hostname:
>> > +            self.ip, port = node_config.hostname.split(":")
>> > +        self.port = int(port)
>> > +        self._logger.info(
>> > +            f"Initializing interactive connection for {self.username}@{self.hostname}"
>> > +        )
>> > +        self._connect()
>> > +        self._logger.info(
>> > +            f"Interactive connection successful for {self.username}@{self.hostname}"
>> > +        )
>> > +
>> > +    def _connect(self) -> None:
>> > +        client = SSHClient()
>> > +        client.set_missing_host_key_policy(AutoAddPolicy)
>> > +        self.session = client
>> > +        retry_attempts = 10
>> > +        for retry_attempt in range(retry_attempts):
>> > +            try:
>> > +                client.connect(
>> > +                    self.ip,
>> > +                    username=self.username,
>> > +                    port=self.port,
>> > +                    password=self.password,
>> > +                    timeout=20 if self.port else 10,
>> > +                )
>> > +            except (TypeError, BadHostKeyException, AuthenticationException) as e:
>> > +                self._logger.exception(e)
>> > +                raise SSHConnectionError(self.hostname) from e
>> > +            except (NoValidConnectionsError, socket.error, SSHException) as e:
>> > +                self._logger.debug(traceback.format_exc())
>> > +                self._logger.warning(e)
>> > +                self._logger.info(
>> > +                    "Retrying interactive session connection: "
>> > +                    f"retry number {retry_attempt + 1}"
>> > +                )
>> > +            else:
>> > +                break
>> > +        else:
>> > +            raise SSHConnectionError(self.hostname)
>> > +        # Interactive sessions are used on an "as needed" basis so we have
>> > +        # to set a keepalive
>> > +        self._transport = self.session.get_transport()
>> > +        if self._transport is not None:
>> > +            self._transport.set_keepalive(30)
>> > +
>> > +    def create_interactive_shell(
>> > +        self,
>> > +        shell_type: InteractiveApp,
>> > +        path_to_app: PurePath,
>> > +        timeout: float = SETTINGS.timeout,
>> > +        eal_parameters: str = "",
>> > +        app_parameters="",
>> > +    ) -> InteractiveShell:
>> > +        """
>> > +        See "create_interactive_shell" in SutNode
>> > +        """
>> > +        match (shell_type.name):
>> > +            case "shell":
>> Comparing with the enum values seems better.
>
>
> Good point, that does seem like it would be more consistent; I'll make the change.
>
>>
>>
>> > +                return InteractiveShell(
>> > +                    self.session, self._logger, path_to_app, timeout
>> > +                )
>> > +            case "testpmd":
>> > +                return TestPmdShell(
>> > +                    self.session,
>> > +                    self._logger,
>> > +                    path_to_app,
>> > +                    timeout=timeout,
>> > +                    eal_flags=eal_parameters,
>> > +                    cmd_line_options=app_parameters,
>> > +                )
>> framework/remote_session/remote/interactive_remote_session.py:89: error: Missing return statement
>>
>> This is probably because we don't have a case for the rest of the values.
>
>
> You're right, I didn't handle the default case. I'll change it to default to returning a base interactive shell, with a message explaining that the type wasn't handled.
>
>>
>>
>> > diff --git a/dts/framework/remote_session/remote/interactive_shell.py b/dts/framework/remote_session/remote/interactive_shell.py
>> > new file mode 100644
>> > index 00000000..d98503be
>> > --- /dev/null
>> > +++ b/dts/framework/remote_session/remote/interactive_shell.py
>> > @@ -0,0 +1,98 @@
>> > +from pathlib import PurePath
>> > +
>> > +from paramiko import Channel, SSHClient, channel
>> > +
>> > +from framework.logger import DTSLOG
>> > +from framework.settings import SETTINGS
>> > +
>> > +
>> > +class InteractiveShell:
>> > +
>> > +    _interactive_session: SSHClient
>> > +    _stdin: channel.ChannelStdinFile
>> > +    _stdout: channel.ChannelFile
>> > +    _ssh_channel: Channel
>> > +    _logger: DTSLOG
>> > +    _timeout: float
>> > +
>> > +    def __init__(
>> > +        self,
>> > +        interactive_session: SSHClient,
>> > +        logger: DTSLOG,
>> > +        path_to_app: PurePath | None = None,
>> > +        timeout: float = SETTINGS.timeout,
>> > +    ) -> None:
>> > +        self._interactive_session = interactive_session
>> > +        self._ssh_channel = self._interactive_session.invoke_shell()
>> > +        self._stdin = self._ssh_channel.makefile_stdin("w")
>> > +        self._stdout = self._ssh_channel.makefile("r")
>> > +        self._ssh_channel.settimeout(timeout)
>> > +        self._ssh_channel.set_combine_stderr(True)  # combines stdout and stderr streams
>> > +        self._logger = logger
>> > +        self._timeout = timeout
>> > +        if path_to_app:
>> > +            self.send_command_no_output(str(path_to_app))  # starts the application
>>
>> This should be part of the derived class - the code to start an app is going to be app-specific. Since this is an interactive session, the assumption is that we're always going to start an app and then work with it, so starting the app in the constructor should be safe.
>> We actually have that for testpmd, so we just need to unify the above with what we have.
>
>
> That makes sense. The only reason I didn't do this before was that I was starting the application differently in testpmd (with EAL flags and command line options), so I was calling the super init() without this path. However, I can change this by making a method called _start_application() that can be overridden in the subclasses; that will clean this process up quite a bit.
>

Good, I had something like that in mind.
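The template-method arrangement being agreed on here could be sketched as follows. The constructor signatures are simplified stand-ins for the real classes, and `started_with` exists only so the sketch is observable without an SSH session:

```python
class InteractiveShell:
    """Base shell: the constructor defers app startup to a hook."""

    def __init__(self, app_path: str) -> None:
        self._app_path = app_path
        self.started_with: str = ""
        self._start_application()  # subclasses override this hook

    def _start_application(self) -> None:
        # Base behavior: just run the binary.
        self.started_with = self._app_path


class TestPmdShell(InteractiveShell):
    """testpmd needs EAL flags and interactive mode (-i) on startup."""

    def __init__(
        self, app_path: str, eal_flags: str = "", cmd_line_options: str = ""
    ) -> None:
        self._eal_flags = eal_flags
        self._cmd_line_options = cmd_line_options
        super().__init__(app_path)

    def _start_application(self) -> None:
        self.started_with = (
            f"{self._app_path} {self._eal_flags} -- -i {self._cmd_line_options}"
        )
```

Each subclass decides how its app is launched while the base constructor keeps the common setup in one place.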
>>
>>
>> > +
>> > +    def empty_stdout_buffer(self) -> None:
>>
>> Same comment on ordering as above.
>>
>> > +        """Removes all data from the stdout buffer.
>> > +
>> > +        Because of the way paramiko handles read buffers, there is no way to effectively
>> > +        remove data from, or "flush", read buffers. This method essentially moves our
>> > +        offset on the buffer to the end and thus "removes" the data from the buffer.
>> > +        Timeouts are thrown on read operations of paramiko pipes based on whether data
>> > +        had been received before the timeout, so we assume that if we reach the timeout
>> > +        we are at the end of the buffer.
>> > +        """
>> > +        self._ssh_channel.settimeout(1)
>> Waiting a whole second seems like a lot. We actually may not need this method given how it's used in the code - that is, if we change how the app starts.
>
>
> We will still need this code whenever you send a command and don't collect its output. What it does is essentially move the pointer for the output buffer to the end of the file. This is needed because if you send a command and don't want any output, but we don't move past that output in the buffer, it will persist and bleed into the next command where you do want to collect output. Having this method lets you know you are starting from an empty buffer when you collect a command's output.
>

Ah, I was commenting on send_command_no_output when I mentioned "this
method", so I need to restate my point. We can do basically the same
thing with send_command, and the only difference I see is that we
don't care about the prompt in send_command_no_output. Is there a scenario
where we need that?

This method was to address the situation I brought up a while back when discussing how to handle interactive applications: the scenario where you want to run an application but cannot consume a newline character because the line you are on requires input. In the case of testpmd and "bash-like" applications we can consume a newline character safely, but you can't in every interactive environment. The example I used then was a script that asks you to enter a password or a name for something; consuming a newline in that case might not give you the prompt again but would instead feed in an unintended newline.

> In the case of timing, however, I could cut this down to half a second; it just increases the chance that a command that takes longer to produce its output will still contaminate the buffer. If this causes a problem we could always increase the time in the future.
>

The point is to not have any set time (since that's either too slow or
unreliable), if we can avoid it.

I agree that it isn't super reliable, but I think it is good to have even if it isn't used often. The reason is that if a case arose where you didn't want to collect output up to a point in the middle of the stdout string, or passed in a prompt that didn't include all of the output provided, this offers some way to at least clear the buffer.

>>
>>
>> > +        try:
>> > +            for line in self._stdout:
>> > +                pass
>> > +        except TimeoutError:
>> > +            pass
>> > +        self._ssh_channel.settimeout(self._timeout)  # reset timeout
>> > +
>> > +    def send_command_no_output(self, command: str) -> None:
>> > +        """Send command to channel without recording output.
>> > +
>> > +        This method will not verify any input or output, it will simply assume the
>> > +        command succeeded. This method will also consume all output in the buffer
>> > +        after executing the command.
>> > +        """
>> > +        self._logger.info(
>> > +            f"Sending command {command.strip()} and not collecting output"
>> > +        )
>> > +        self._stdin.write(f"{command}\n")
>> > +        self._stdin.flush()
>> > +        self.empty_stdout_buffer()
>> > +
>> > +    def send_command_get_output(self, command: str, prompt: str) -> str:
>> > +        """Send a command and get all output before the expected ending string.
>> > +
>> > +        Lines that expect input are not included in the stdout buffer, so they cannot
>> > +        be used for expect. For example, if you were prompted to log into something
>> > +        with a username and password, you cannot expect "username:" because it won't
>> > +        yet be in the stdout buffer. A workaround for this could be consuming an
>> > +        extra newline character to force the current prompt into the stdout buffer.
>> > +
>> > +        Returns:
>> > +            All output in the buffer before the expected string
>> > +        """
>> > +        self._logger.info(f"Sending command {command.strip()}...")
>> > +        self._stdin.write(f"{command}\n")
>> > +        self._stdin.flush()
>> > +        out: str = ""
>> > +        for line in self._stdout:
>> > +            out += line
>> > +            if prompt in line and not line.rstrip().endswith(
>> > +                command.rstrip()
>> > +            ):  # ignore line that sent command
>> > +                break
>> > +        self._logger.debug(f"Got output: {out}")
>> > +        return out
>> > +
>> > +    def close(self) -> None:
>> > +        self._stdin.close()
>> > +        self._ssh_channel.close()
>> > +
>> > +    def __del__(self) -> None:
>> > +        self.close()
>> > diff --git a/dts/framework/remote_session/remote/testpmd_shell.py b/dts/framework/remote_session/remote/testpmd_shell.py
>> > new file mode 100644
>> > index 00000000..873f3c82
>> > --- /dev/null
>> > +++ b/dts/framework/remote_session/remote/testpmd_shell.py
>> > @@ -0,0 +1,58 @@
>> > +# SPDX-License-Identifier: BSD-3-Clause
>> > +# Copyright(c) 2023 University of New Hampshire
>> > +
>> > +
>> > +from pathlib import PurePath
>> > +
>> > +from paramiko import SSHClient
>> > +
>> > +from framework.logger import DTSLOG
>> > +from framework.settings import SETTINGS
>> > +
>> > +from .interactive_shell import InteractiveShell
>> > +
>> > +
>> > +class TestPmdShell(InteractiveShell):
>> > +    expected_prompt: str = "testpmd>"
>> > +
>> > +    def __init__(
>> > +        self,
>> > +        interactive_session: SSHClient,
>> > +        logger: DTSLOG,
>> > +        path_to_testpmd: PurePath,
>> > +        eal_flags: str = "",
>> > +        cmd_line_options: str = "",
>> > +        timeout: float = SETTINGS.timeout,
>> > +    ) -> None:
>> > +        """Initializes an interactive testpmd session using specified parameters."""
>> > +        super(TestPmdShell, self).__init__(
>> > +            interactive_session, logger=logger, timeout=timeout
>> > +        )
>> > +        self.send_command_get_output(
>> > +            f"{path_to_testpmd} {eal_flags} -- -i {cmd_line_options}\n",
>> > +            self.expected_prompt,
>> > +        )
>> > +
>> > +    def send_command(self, command: str, prompt: str = expected_prompt) -> str:
>> > +        """Specific way of handling the command for testpmd
>> > +
>> > +        An extra newline character is consumed in order to force the current line into
>> > +        the stdout buffer.
>> > +        """
>> > +        return self.send_command_get_output(f"{command}\n", prompt)
>> > +
>> > +    def get_devices(self) -> list[str]:
>> > +        """Get a list of device names that are known to testpmd
>> > +
>> > +        Uses the device info listed in testpmd and then parses the output to
>> > +        return only the names of the devices.
>> > +
>> > +        Returns:
>> > +            A list of strings representing device names (e.g. 0000:14:00.1)
>> > +        """
>> > +        dev_info: str = self.send_command("show device info all")
>> > +        dev_list: list[str] = []
>> > +        for line in dev_info.split("\n"):
>> > +            if "device name:" in line.lower():
>> > +                dev_list.append(line.strip().split(": ")[1].strip())
>> > +        return dev_list
>>
>> This could be an object (or a list of TestPmdDevice objects or something similar). We could start with just the name and add to it in the future.
>
>
> I think for the scope of smoke tests it would make sense to leave this as just the names for now, because all we are doing is making sure we can discover them and that they appear. I do agree that it could be useful to have more info, but it would likely make more sense to add to it in the future when we need that additional information.
>

The objects could contain just the name. I'd like to do this since we already do the same thing with VirtualDevice (in framework/testbed_model/hw/virtual_device.py).


I didn't consider this, but it's not a bad approach. I'll change the list to be a list of objects.
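For concreteness, a minimal sketch of what such an object could look like (the name TestPmdDevice and the single field are assumptions, mirroring how VirtualDevice only wraps a name):

```python
from dataclasses import dataclass


@dataclass
class TestPmdDevice:
    """Hypothetical container for a device reported by testpmd.

    Starts with just the PCI address; more fields (driver, socket, etc.)
    can be added when tests need them.
    """

    pci_address: str

    def __str__(self) -> str:
        return self.pci_address
```

get_devices could then return `[TestPmdDevice(name) for name in parsed_names]` instead of the bare strings.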
>>
>>
>> > diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
>> > index 74391982..029665fb 100644
>> > --- a/dts/framework/test_result.py
>> > +++ b/dts/framework/test_result.py
>> > @@ -1,5 +1,6 @@
>> >  # SPDX-License-Identifier: BSD-3-Clause
>> >  # Copyright(c) 2023 PANTHEON.tech s.r.o.
>> > +# Copyright(c) 2022-2023 University of New Hampshire
>> >
>> >  """
>> >  Generic result container and reporters
>> > @@ -8,14 +9,17 @@
>> >  import os.path
>> >  from collections.abc import MutableSequence
>> >  from enum import Enum, auto
>> > +from typing import Dict
>>
>> No need, we can just use the built-in dict (since Python 3.9).
>>
>> >
>> >  from .config import (
>> >      OS,
>> >      Architecture,
>> >      BuildTargetConfiguration,
>> > +    BuildTargetVersionInfo,
>> >      Compiler,
>> >      CPUType,
>> >      NodeConfiguration,
>> > +    NodeVersionInfo,
>> >  )
>> >  from .exception import DTSError, ErrorSeverity
>> >  from .logger import DTSLOG
>> > @@ -67,12 +71,14 @@ class Statistics(dict):
>> >      Using a dict provides a convenient way to format the data.
>> >      """
>> >
>> > -    def __init__(self, dpdk_version):
>> > +    def __init__(self, output_info: Dict[str, str] | None):
>> >          super(Statistics, self).__init__()
>> >          for result in Result:
>> >              self[result.name] = 0
>> >          self["PASS RATE"] = 0.0
>> > -        self["DPDK VERSION"] = dpdk_version
>> > +        if output_info:
>> > +            for info_key, info_val in output_info.items():
>> > +                self[info_key] = info_val
>> >
>> >      def __iadd__(self, other: Result) -> "Statistics":
>> >          """
>> > @@ -206,6 +212,8 @@ class BuildTargetResult(BaseResult):
>> >      os: OS
>> >      cpu: CPUType
>> >      compiler: Compiler
>> > +    compiler_version: str | None
>> > +    dpdk_version: str | None
>> >
>> >      def __init__(self, build_target: BuildTargetConfiguration):
>> >          super(BuildTargetResult, self).__init__()
>> > @@ -213,6 +221,12 @@ def __init__(self, build_target: BuildTargetConfiguration):
>> >          self.os = build_target.os
>> >          self.cpu = build_target.cpu
>> >          self.compiler = build_target.compiler
>> > +        self.compiler_version = None
>> > +        self.dpdk_version = None
>> > +
>> > +    def add_build_target_versions(self, versions: BuildTargetVersionInfo) -> None:
>> > +        self.compiler_version = versions.compiler_version
>> > +        self.dpdk_version = versions.dpdk_version
>> >
>> >      def add_test_suite(self, test_suite_name: str) -> TestSuiteResult:
>> >          test_suite_result = TestSuiteResult(test_suite_name)
>> > @@ -228,10 +242,17 @@ class ExecutionResult(BaseResult):
>> >      """
>> >
>> >      sut_node: NodeConfiguration
>> > +    sut_os_name: str
>> > +    sut_os_version: str
>> > +    sut_kernel_version: str
>> >
>> > -    def __init__(self, sut_node: NodeConfiguration):
>> > +    def __init__(self, sut_node: NodeConfiguration, sut_version_info: NodeVersionInfo):
>> >          super(ExecutionResult, self).__init__()
>> >          self.sut_node = sut_node
>> > +        self.sut_version_info = sut_version_info
>> > +        self.sut_os_name = sut_version_info.os_name
>> > +        self.sut_os_version = sut_version_info.os_version
>> > +        self.sut_kernel_version = sut_version_info.kernel_version
>> >
>> >      def add_build_target(
>> >          self, build_target: BuildTargetConfiguration
>> > @@ -258,6 +279,7 @@ class DTSResult(BaseResult):
>> >      """
>> >
>> >      dpdk_version: str | None
>> > +    output: dict | None
>> >      _logger: DTSLOG
>> >      _errors: list[Exception]
>> >      _return_code: ErrorSeverity
>> > @@ -267,14 +289,17 @@ class DTSResult(BaseResult):
>> >      def __init__(self, logger: DTSLOG):
>> >          super(DTSResult, self).__init__()
>> >          self.dpdk_version = None
>> > +        self.output = None
>> >          self._logger = logger
>> >          self._errors = []
>> >          self._return_code = ErrorSeverity.NO_ERR
>> >          self._stats_result = None
>> >          self._stats_filename = os.path.join(SETTINGS.output_dir, "statistics.txt")
>> >
>> > -    def add_execution(self, sut_node: NodeConfiguration) -> ExecutionResult:
>> > -        execution_result = ExecutionResult(sut_node)
>> > +    def add_execution(
>> > +        self, sut_node: NodeConfiguration, sut_version_info: NodeVersionInfo
>> > +    ) -> ExecutionResult:
>> > +        execution_result = ExecutionResult(sut_node, sut_version_info)
>> >          self._inner_results.append(execution_result)
>> >          return execution_result
>> >
>> > @@ -296,7 +321,8 @@ def process(self) -> None:
>> >              for error in self._errors:
>> >                  self._logger.debug(repr(error))
>> >
>> > -        self._stats_result = Statistics(self.dpdk_version)
>> > +        self._stats_result = Statistics(self.output)
>> > +        # add information gathered from the smoke tests to the statistics
>> >          self.add_stats(self._stats_result)
>> >          with open(self._stats_filename, "w+") as stats_file:
>> >              stats_file.write(str(self._stats_result))
>> > diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
>> > index 0705f38f..ad66014c 100644
>> > --- a/dts/framework/test_suite.py
>> > +++ b/dts/framework/test_suite.py
>> > @@ -10,11 +10,24 @@
>> >  import inspect
>> >  import re
>> >  from types import MethodType
>> > +from typing import Dict
>> >
>> > -from .exception import ConfigurationError, SSHTimeoutError, TestCaseVerifyError
>> > +from .config import BuildTargetConfiguration
>> > +from .exception import (
>> > +    BlockingTestSuiteError,
>> > +    ConfigurationError,
>> > +    SSHTimeoutError,
>> > +    TestCaseVerifyError,
>> > +)
>> >  from .logger import DTSLOG, getLogger
>> >  from .settings import SETTINGS
>> > -from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult
>> > +from .test_result import (
>> > +    BuildTargetResult,
>> > +    DTSResult,
>> > +    Result,
>> > +    TestCaseResult,
>> > +    TestSuiteResult,
>> > +)
>> >  from .testbed_model import SutNode
>> >
>> >
>> > @@ -37,10 +50,12 @@ class TestSuite(object):
>> >      """
>> >
>> >      sut_node: SutNode
>> > +    is_blocking = False
>> >      _logger: DTSLOG
>> >      _test_cases_to_run: list[str]
>> >      _func: bool
>> >      _result: TestSuiteResult
>> > +    _dts_result: DTSResult
>> >
>> >      def __init__(
>> >          self,
>> > @@ -48,6 +63,8 @@ def __init__(
>> >          test_cases: list[str],
>> >          func: bool,
>> >          build_target_result: BuildTargetResult,
>> > +        build_target_conf: BuildTargetConfiguration,
>> > +        dts_result: DTSResult,
>> >      ):
>> >          self.sut_node = sut_node
>> >          self._logger = getLogger(self.__class__.__name__)
>> > @@ -55,6 +72,8 @@ def __init__(
>> >          self._test_cases_to_run.extend(SETTINGS.test_cases)
>> >          self._func = func
>> >          self._result = build_target_result.add_test_suite(self.__class__.__name__)
>> > +        self.build_target_info = build_target_conf
>> > +        self._dts_result = dts_result
>> >
>> >      def set_up_suite(self) -> None:
>> >          """
>> > @@ -118,6 +137,8 @@ def run(self) -> None:
>> >                      f"the next test suite may be affected."
>> >                  )
>> >                  self._result.update_setup(Result.ERROR, e)
>> > +            if len(self._result.get_errors()) > 0 and self.is_blocking:
>> > +                raise BlockingTestSuiteError(test_suite_name)
>> >
>> >      def _execute_test_suite(self) -> None:
>> >          """
>> > @@ -232,6 +253,12 @@ def _execute_test_case(
>> >              test_case_result.update(Result.SKIP)
>> >              raise KeyboardInterrupt("Stop DTS")
>> >
>> > +    def write_to_statistics_file(self, output: Dict[str, str]):
>>
>> This isn't used anywhere as far as I can tell.
>
>
> It was previously, but when we discussed removing the statistics output I left this in. You're right, though: thinking about it now, it doesn't make sense to leave it in until we reformat statistics.txt.
>
>>
>>
>> > +        if self._dts_result.output is not None:
>> > +            self._dts_result.output.update(output)
>> > +        else:
>> > +            self._dts_result.output = output
>> > +
>>
>> This could be simplified if we init output to an empty dict in the constructor.
>>
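A sketch of that simplification, with the classes stripped down to just the output handling (everything else omitted):

```python
class DTSResult:
    def __init__(self) -> None:
        # starting from an empty dict removes the None branch entirely
        self.output: dict[str, str] = {}


class TestSuite:
    def __init__(self, dts_result: DTSResult) -> None:
        self._dts_result = dts_result

    def write_to_statistics_file(self, output: dict[str, str]) -> None:
        # update() now works unconditionally; no "is not None" check needed
        self._dts_result.output.update(output)
```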
>> >
>> >  def get_test_suites(testsuite_module_path: str) -> list[type[TestSuite]]:
>> >      def is_test_suite(object) -> bool:
>> > diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
>> > index d48fafe6..c5147e0e 100644
>> > --- a/dts/framework/testbed_model/node.py
>> > +++ b/dts/framework/testbed_model/node.py
>> > @@ -40,6 +40,7 @@ class Node(object):
>> >      lcores: list[LogicalCore]
>> >      _logger: DTSLOG
>> >      _other_sessions: list[OSSession]
>> > +    _execution_config: ExecutionConfiguration
>> >
>> >      def __init__(self, node_config: NodeConfiguration):
>> >          self.config = node_config
>> > @@ -64,6 +65,7 @@ def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
>> >          """
>> >          self._setup_hugepages()
>> >          self._set_up_execution(execution_config)
>> > +        self._execution_config = execution_config
>> >
>> >      def _set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
>> >          """
>> > diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
>> > index 2b2b50d9..27849da7 100644
>> > --- a/dts/framework/testbed_model/sut_node.py
>> > +++ b/dts/framework/testbed_model/sut_node.py
>> > @@ -1,14 +1,21 @@
>> >  # SPDX-License-Identifier: BSD-3-Clause
>> >  # Copyright(c) 2010-2014 Intel Corporation
>> >  # Copyright(c) 2023 PANTHEON.tech s.r.o.
>> > +# Copyright(c) 2022-2023 University of New Hampshire
>> >
>> >  import os
>> >  import tarfile
>> >  import time
>> >  from pathlib import PurePath
>> >
>> > -from framework.config import BuildTargetConfiguration, NodeConfiguration
>> > -from framework.remote_session import CommandResult, OSSession
>> > +from framework.config import (
>> > +    BuildTargetConfiguration,
>> > +    BuildTargetVersionInfo,
>> > +    InteractiveApp,
>> > +    NodeConfiguration,
>> > +    NodeVersionInfo,
>> > +)
>> > +from framework.remote_session import CommandResult, InteractiveShell, OSSession
>> >  from framework.settings import SETTINGS
>> >  from framework.utils import EnvVarsDict, MesonArgs
>> >
>> > @@ -26,13 +33,17 @@ class SutNode(Node):
>> >
>> >      _dpdk_prefix_list: list[str]
>> >      _dpdk_timestamp: str
>> > -    _build_target_config: BuildTargetConfiguration | None
>> > +    _build_target_config: BuildTargetConfiguration
>>
>> What's the reason for this change? My guess would be mypy, but the change still results in mypy complaining:
>> framework/testbed_model/sut_node.py:51: error: Incompatible types in assignment (expression has type "None", variable has type "BuildTargetConfiguration")
>
>
> I would guess it was for mypy as well, which I got to be passing before I submitted the patch, but if there is still an error with it then I will revert.
>
>>
>>
>>
>> >      _env_vars: EnvVarsDict
>> >      _remote_tmp_dir: PurePath
>> >      __remote_dpdk_dir: PurePath | None
>> > -    _dpdk_version: str | None
>> >      _app_compile_timeout: float
>> >      _dpdk_kill_session: OSSession | None
>> > +    _dpdk_version: str | None
>> > +    _os_name: str | None
>> > +    _os_version: str | None
>> > +    _kernel_version: str | None
>> > +    _compiler_version: str | None
>> >
>> >      def __init__(self, node_config: NodeConfiguration):
>> >          super(SutNode, self).__init__(node_config)
>> > @@ -41,12 +52,16 @@ def __init__(self, node_config: NodeConfiguration):
>> >          self._env_vars = EnvVarsDict()
>> >          self._remote_tmp_dir = self.main_session.get_remote_tmp_dir()
>> >          self.__remote_dpdk_dir = None
>> > -        self._dpdk_version = None
>> >          self._app_compile_timeout = 90
>> >          self._dpdk_kill_session = None
>> >          self._dpdk_timestamp = (
>> >              f"{str(os.getpid())}_{time.strftime('%Y%m%d%H%M%S', time.localtime())}"
>> >          )
>> > +        self._dpdk_version = None
>> > +        self._os_name = None
>> > +        self._os_version = None
>> > +        self._kernel_version = None
>> > +        self._compiler_version = None
>> >
>> >      @property
>> >      def _remote_dpdk_dir(self) -> PurePath:
>> > @@ -75,6 +90,44 @@ def dpdk_version(self) -> str:
>> >          )
>> >          return self._dpdk_version
>> >
>> > +    @property
>> > +    def os_name(self) -> str:
>> > +        if self._os_name is None:
>> > +            self._os_name = self.main_session.get_os_name()
>> > +        return self._os_name
>> > +
>> > +    @property
>> > +    def os_version(self) -> str:
>> > +        if self._os_version is None:
>> > +            self._os_version = self.main_session.get_os_version()
>> > +        return self._os_version
>> > +
>> > +    @property
>> > +    def kernel_version(self) -> str:
>> > +        if self._kernel_version is None:
>> > +            self._kernel_version = self.main_session.get_kernel_version()
>> > +        return self._kernel_version
>> > +
>> > +    @property
>> > +    def compiler_version(self) -> str:
>> > +        if self._compiler_version is None:
>> > +            self._compiler_version = self.main_session.get_compiler_version(
>> > +                self._build_target_config.compiler.name
>> > +            )
>> > +        return self._compiler_version
>> > +
>> > +    def get_node_versions(self) -> NodeVersionInfo:
>> > +        return NodeVersionInfo(
>> > +            os_name=self.os_name,
>> > +            os_version=self.os_version,
>> > +            kernel_version=self.kernel_version,
>> > +        )
>> > +
>> > +    def get_build_target_versions(self) -> BuildTargetVersionInfo:
>> > +        return BuildTargetVersionInfo(
>> > +            dpdk_version=self.dpdk_version, compiler_version=self.compiler_version
>> > +        )
>> > +
>> >      def _guess_dpdk_remote_dir(self) -> PurePath:
>> >          return self.main_session.guess_dpdk_remote_dir(self._remote_tmp_dir)
>> >
>> > @@ -84,6 +137,10 @@ def _set_up_build_target(
>> >          """
>> >          Setup DPDK on the SUT node.
>> >          """
>> > +        # we want to ensure that dpdk_version and compiler_version is reset for new
>> > +        # build targets
>> > +        self._dpdk_version = None
>> > +        self._compiler_version = None
>> >          self._configure_build_target(build_target_config)
>> >          self._copy_dpdk_tarball()
>> >          self._build_dpdk()
>> > @@ -262,6 +319,49 @@ def run_dpdk_app(
>> >              f"{app_path} {eal_args}", timeout, verify=True
>> >          )
>> >
>> > +    def create_interactive_shell(
>> > +        self,
>> > +        shell_type: InteractiveApp,
>> > +        path_to_app: PurePath | None = None,
>>
>> Can we join shell_type and path_to_app together? Both contain the same type of information.
>> If we can't, let's drop path_to_app, as we don't provide (in this patch) a way for users to modify this. We can add it later if needed.
>
>
> The idea behind it was allowing users to specify where their application is when they call this method. Users can pass a path into this method if their application isn't at the default path, which I think is a good addition to have right now in the case of a user having their binary in a different location.
>

As I mentioned in my comment, there isn't a way for users to pass the path (at least I don't see one). Users in this case are people who run DTS, not test case developers. Test cases shouldn't work with those paths, which need to be framework-wide.
I agree it would be good to have an option that users could configure, and we can talk about how to do this, but it would be best to do in a different patch.


You're right, apologies. I was thinking more from the perspective of developers, but that doesn't really make sense either. I'll remove this path as something that can be passed in, and we can talk about potentially adding it back later on.
>>
>>
>> > +        timeout: float = SETTINGS.timeout,
>> > +        eal_parameters: str = "",
>>
>> Any reason we're not using the EalParameters class here?
>
>
> I didn't use it originally because it requires an lcore list and memory channels if you provide them. Although I could use EalParameters and just make those optional, which would have the same effect but be more structured; that makes the most sense to me, so I will make that change.

The way to create EAL parameters is with the SutNode.create_eal_parameters method, which addresses both of your points: there's no need to make them optional, as you can use self.create_eal_parameters() as the default value.

>
>>
>>
>> > +        app_parameters="",
>>
>> This is also currently not used, so maybe we should drop it and add it in the future, probably as a dedicated object representing the parameters.
>
>
> That makes sense; you're right that a more structured class for providing parameters would work better. The difficulty with a dedicated object, however, is that you have different parameters for different applications, so you couldn't get very granular. Regardless, it can be removed from this patch for now and added back in the future.
>

I imagined an object with methods for adding different kinds of parameters: positional, without value, with value, long/short name, etc. But that's not that relevant right now; we'll think about it when we actually need it (and the use case will make everything clearer).
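For concreteness, one possible shape of such a builder (all names are hypothetical, not part of the patch):

```python
class AppParams:
    """Hypothetical builder collecting different kinds of CLI parameters."""

    def __init__(self) -> None:
        self._params: list[str] = []

    def add_positional(self, value: str) -> "AppParams":
        self._params.append(value)
        return self

    def add_flag(self, name: str) -> "AppParams":
        # a parameter without a value, e.g. "-i" or "--enable-rss"
        self._params.append(name)
        return self

    def add_option(self, name: str, value: str) -> "AppParams":
        # a parameter with a value, long or short name alike
        self._params.extend([name, value])
        return self

    def __str__(self) -> str:
        return " ".join(self._params)
```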


Thanks for clarifying your thoughts on it. I agree that something like that could be useful and is a good idea for when we need something like this.
>>
>>
>> > +    ) -> InteractiveShell:
>>
>> This should return a Union of all the different sessions we support, otherwise mypy complains:
>> tests/TestSuite_smoke_tests.py:68: error: Incompatible types in assignment (expression has type "InteractiveShell", variable has type "TestPmdShell")
>>
>
> I figured this would be allowed because they were subclasses and didn't realize mypy would have a problem with it. I will fix this in the next version.
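As an aside, one pattern that also keeps mypy happy when callers annotate the variable with the subclass is a factory parameterized by a TypeVar bound to the base class; a reduced sketch with empty stand-in classes (not necessarily what the patch will use):

```python
from typing import TypeVar


class InteractiveShell:
    pass


class TestPmdShell(InteractiveShell):
    pass


# the bound restricts callers to InteractiveShell subclasses
InteractiveShellType = TypeVar("InteractiveShellType", bound=InteractiveShell)


def create_interactive_shell(shell_cls: type[InteractiveShellType]) -> InteractiveShellType:
    # the return type follows the class the caller asks for, so
    # `shell: TestPmdShell = create_interactive_shell(TestPmdShell)` type-checks
    return shell_cls()
```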
>
>>
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 """Create a handl= er for an interactive session.
>> > +
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 This method is a factory that ca= lls a method in OSSession to create shells for
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 different DPDK applications.
>> > +
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 Args:
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 shell_type: Enum v= alue representing the desired application.
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 path_to_app: Repre= sents a path to the application you are attempting to
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 laun= ch. This path will be executed at the start of the app
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 init= ialization. If one isn't provided, the default specified in the
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 enum= eration will be used.
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 timeout: Timeout f= or the ssh channel
>>
>> We should specify what sort of timeout this is - for creating the = channel or for sent commands?
>
>
> Very good point, this is pretty vague if you weren't the one who w= rote it. I'll make sure to clear this up.
>
>>
>>
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 eal_parameters: Li= st of EAL parameters to use to launch the app. This is
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 igno= red for base "shell" types.
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 app_parameters: Co= mmand-line flags to pass into the application on launch.
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 Returns:
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 Instance of the de= sired interactive application.
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 """
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 default_path: PurePath | None >> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 # if we just want a default shel= l, there is no need to append the DPDK build
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 # directory to the path
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 if shell_type.name =3D=3D "= ;shell":
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 default_path =3D N= one
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 else:
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 default_path =3D s= elf.main_session.join_remote_path(
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 self= .remote_dpdk_build_dir, *shell_type.get_path()
>> > +=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 )
>>
>> All this path handling looks clunky; it should be part of the InteractiveApp class. Would adding something like update_path to InteractiveApp work?
>
>
> I agree this was a little messy, but after changing the default paths to PurePaths, I could clean this up more by just defaulting to shell_type.get_path() and then having a single if statement that checks if it isn't a shell type and, if not, sets the default path by appending the build dir. It would look something more like this:
>
> default_path = shell_type.get_path()
>
> if shell_type != InteractiveApp.shell:
>     default_path = self.main_session.join_remote_path(
>         self.remote_dpdk_build_dir, shell_type.get_path()
>     )
>

My point was that we're still passing two arguments (the app enum and the path) where we could be passing just one, I think. We could just update the path of InteractiveApp and pass only that:

@unique
class InteractiveApp(Enum):
    shell = [pathlib.PurePath("")]
    testpmd = [pathlib.PurePath("app", "dpdk-testpmd")]

    @property
    def path(self) -> pathlib.PurePath:
        return self.value[0]

    @path.setter
    def path(self, path: pathlib.PurePath):
        self.value[0] = path

The above stores the default path with the option of updating it. That would be much more elegant than passing both the path and the Enum.
A note on the implementation: the value has to be a mutable type, because we can't assign new values. With mutable types, we're modifying the existing value.

Another important question, which I didn't think of before, is do we need the shell interactive app? It's not used anywhere and I don't think we'll ever use it, so let's remove it. We can add it later if we find an actual use for it.

I really only included it as a generic application that someone could use to pilot an app manually, which could be useful for things like simple scripts needing input where it wouldn't make sense to make a whole class. But you're right, I think it makes sense for developers to just create a subclass and handle it properly. I'll remove this as an option and modify the pathing as you recommended.
>>
>> > +        return self.main_session.interactive_session.create_interactive_shell(
>> > +            shell_type,
>> > +            path_to_app or default_path,
>> > +            timeout,
>> > +            eal_parameters,
>> > +            app_parameters,
>> > +        )
>> > +
>> >
>> >  class EalParameters(object):
>> >      def __init__(
>> > diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py
>> > new file mode 100644
>> > index 00000000..a2fd9fae
>> > --- /dev/null
>> > +++ b/dts/tests/TestSuite_smoke_tests.py
>> > @@ -0,0 +1,101 @@
>> > +# SPDX-License-Identifier: BSD-3-Clause
>> > +# Copyright(c) 2023 University of New Hampshire
>> > +
>> > +from framework.config import InteractiveApp
>> > +from framework.remote_session import TestPmdShell
>> > +from framework.test_suite import TestSuite
>> > +
>> > +
>> > +class SmokeTests(TestSuite):
>> > +    is_blocking = True
>> > +    # in this list, the first index is the address of the nic and the second is
>> > +    # the driver for that nic.
>> > +    list_of_nics: list[tuple[str, str]] = []
>>
>> Would using a dict make more sense?
>
>
> I thought a tuple was a good option because of the two pieces of data, but I think a dict would also be useful because it would be more clear when it is accessed later. I think turning this into a dict (and maybe migrating to an object later) is a good idea.
>

Now that you mentioned objects, I'd say let's just store the ports.= I
don't really see a reason for a dict or new object.


I agree that now that I have moved over to th= e other way of using ports this abstraction out of the existing list doesn&= #39;t make much sense. I'll change the list to just be the ports.
=C2=A0
>> > +            logger.info(
>> > +                "Running driver tests with the following virtual "
>> > +                f"devices: {list_of_vdevs}"
>> > +            )
>> > +            self.sut_node.main_session.send_command(
>> > +                f"meson test -C {self.dpdk_build_dir_path} --suite driver-tests "
>> > +                f'--test-args "{list_of_vdevs}"',
>> > +                300,
>> > +                verify=True,
>> > +            )
>> > +        else:
>> > +            self.sut_node.main_session.send_command(
>> > +                f"meson test -C {self.dpdk_build_dir_path} --suite driver-tests",
>> > +                300,
>> > +                verify=True,
>> > +            )
>> > +
>> > +    def test_devices_listed_in_testpmd(self) -> None:
>> > +        """
>> > +        Test:
>> > +            Uses testpmd driver to verify that devices have been found by testpmd
>> > +        """
>>
>> Every other test description is written in the imperative and with a lowercase first letter. Let's unite everything to the imperative with an uppercase first letter, properly ending each sentence with a dot.
>
>
> Sounds good to me, I'll reformat them.
>
>>
>>
>> > +        testpmd_driver: TestPmdShell = self.sut_node.create_interactive_shell(
>> > +            InteractiveApp.testpmd
>> > +        )
>> > +        dev_list: list[str] = testpmd_driver.get_devices()
>> > +        for nic in self.list_of_nics:
>>
>> Here's where the more explicit naming would be beneficial - I had to think about what's what.
>>
>> > +            self.verify(
>> > +                nic[0] in dev_list,
>> > +                f"Device {nic[0]} was not listed in testpmd's available devices, "
>> > +                "please check your configuration",
>> > +            )
>> > +
>> > +    def test_device_bound_to_driver(self) -> None:
>> > +        """
>> > +        Test:
>> > +            ensure that all drivers listed in the config are bound to the correct driver
>> > +        """
>> > +        path_to_dev = self.sut_node.main_session.join_remote_path(
>> > +            self.sut_node._remote_dpdk_dir, "usertools", "dpdk-devbind.py"
>> > +        )
>> > +        for nic in self.list_of_nics:
>> > +            out = self.sut_node.main_session.send_command(
>> > +                f"{path_to_dev} --status | grep {nic[0]}", 60
>> > +            )
>>
>> This may be something we want to put into SutNode, something like get_device_info. We'd only send the command once (not for every NIC as we're doing here) and parse the output.
>
>
> I think, similar to the above test for testpmd, it makes sense to isolate checking that the devices are bound to the right driver to smoke tests. Once smoke tests verify they are bound to the right driver, it would never be checked in the SutNode again because you could then trust the config file. Also, if we were to later change this to check only the NICs used in this execution, like we had mentioned, you would want to restrict it to those NICs rather than all of the NICs in the SutNode, so I think keeping it in smoke tests makes more sense.
>

In the test we only care about the port being bound to the right
driver. I was thinking more broadly - execute the command once and
store all of the information we can. With the dpdk-devbind script,
there's not much to store, but we don't have to use that.

With that in mind, let's not bloat the patch any more. We can do the more generic implementation later.

> I do agree, though, that this can be done in only one call to dev-bind, which I wasn't thinking about when writing it, and that I can refactor it for the next version of the patch.
>
>>
>>
>> > +            self.verify(
>> > +                len(out.stdout) != 0,
>> > +                f"Failed to find configured device ({nic[0]}) using dpdk-devbind.py",
>> > +            )
>> > +            for string in out.stdout.split(" "):
>> > +                if "drv=" in string:
>> > +                    self.verify(
>> > +                        string.split("=")[1] == nic[1],
>> > +                        f"Driver for device {nic[0]} does not match driver listed in "
>> > +                        f'configuration (bound to {string.split("=")[1]})',
>> > +                    )
>> > --
>> > 2.41.0
>> >
>
>
> Thank you for the feedback, it's always very helpful.
> -Jeremy

Thank you again for the comments,
Jeremy
--000000000000439e3206003a00c2--