* [RFC PATCH v1 0/5] dts: add tg abstractions and scapy
@ 2023-04-20 9:31 Juraj Linkeš
2023-04-20 9:31 ` [RFC PATCH v1 1/5] dts: add scapy dependency Juraj Linkeš
` (5 more replies)
0 siblings, 6 replies; 29+ messages in thread
From: Juraj Linkeš @ 2023-04-20 9:31 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, wathsala.vithanage,
jspewock, probb
Cc: dev, Juraj Linkeš
The implementation adds abstractions for all traffic generators, as well
as for those that can capture individual packets and inspect (not just
count) them.
The traffic generators reside on traffic generator nodes which are also
added, along with some related code.
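As a rough sketch of the class hierarchy this series introduces (the class
names mirror the patches, but the minimal bodies below are illustrative,
not the actual DTS code):

```python
from abc import ABC, abstractmethod


class TrafficGenerator(ABC):
    """Base interface implemented by every traffic generator."""

    @abstractmethod
    def send_packet(self, port: str, packet: bytes) -> None:
        """Send one packet out of the given port."""

    def is_capturing(self) -> bool:
        # Plain generators only count traffic; they cannot capture it.
        return False


class CapturingTrafficGenerator(TrafficGenerator):
    """Mixin for generators that capture and return individual packets."""

    def is_capturing(self) -> bool:
        return True


class FakeScapyGenerator(CapturingTrafficGenerator):
    """Illustrative stand-in for the real scapy implementation."""

    def __init__(self) -> None:
        self.sent: list[bytes] = []

    def send_packet(self, port: str, packet: bytes) -> None:
        self.sent.append(packet)
```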
Juraj Linkeš (5):
dts: add scapy dependency
dts: add traffic generator config
dts: traffic generator abstractions
dts: scapy traffic generator implementation
dts: add traffic generator node to dts runner
dts/conf.yaml | 25 ++
dts/framework/config/__init__.py | 107 +++++-
dts/framework/config/conf_yaml_schema.json | 172 ++++++++-
dts/framework/dts.py | 42 ++-
dts/framework/remote_session/linux_session.py | 55 +++
dts/framework/remote_session/os_session.py | 22 +-
dts/framework/remote_session/posix_session.py | 3 +
.../remote_session/remote/remote_session.py | 7 +
dts/framework/testbed_model/__init__.py | 1 +
.../capturing_traffic_generator.py | 155 ++++++++
dts/framework/testbed_model/hw/port.py | 55 +++
dts/framework/testbed_model/node.py | 4 +-
dts/framework/testbed_model/scapy.py | 348 ++++++++++++++++++
dts/framework/testbed_model/sut_node.py | 5 +-
dts/framework/testbed_model/tg_node.py | 62 ++++
.../testbed_model/traffic_generator.py | 59 +++
dts/poetry.lock | 18 +-
dts/pyproject.toml | 1 +
18 files changed, 1103 insertions(+), 38 deletions(-)
create mode 100644 dts/framework/testbed_model/capturing_traffic_generator.py
create mode 100644 dts/framework/testbed_model/hw/port.py
create mode 100644 dts/framework/testbed_model/scapy.py
create mode 100644 dts/framework/testbed_model/tg_node.py
create mode 100644 dts/framework/testbed_model/traffic_generator.py
--
2.30.2
^ permalink raw reply [flat|nested] 29+ messages in thread
* [RFC PATCH v1 1/5] dts: add scapy dependency
2023-04-20 9:31 [RFC PATCH v1 0/5] dts: add tg abstractions and scapy Juraj Linkeš
@ 2023-04-20 9:31 ` Juraj Linkeš
2023-04-20 9:31 ` [RFC PATCH v1 2/5] dts: add traffic generator config Juraj Linkeš
` (4 subsequent siblings)
5 siblings, 0 replies; 29+ messages in thread
From: Juraj Linkeš @ 2023-04-20 9:31 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, wathsala.vithanage,
jspewock, probb
Cc: dev, Juraj Linkeš
Required for the scapy traffic generator.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/poetry.lock | 18 +++++++++++++++++-
dts/pyproject.toml | 1 +
2 files changed, 18 insertions(+), 1 deletion(-)
diff --git a/dts/poetry.lock b/dts/poetry.lock
index 64d6c18f35..4b6c42e280 100644
--- a/dts/poetry.lock
+++ b/dts/poetry.lock
@@ -425,6 +425,22 @@ files = [
{file = "PyYAML-6.0.tar.gz", hash = "sha256:68fb519c14306fec9720a2a5b45bc9f0c8d1b9c72adf45c37baedfcd949c35a2"},
]
+[[package]]
+name = "scapy"
+version = "2.5.0"
+description = "Scapy: interactive packet manipulation tool"
+category = "main"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4"
+files = [
+ {file = "scapy-2.5.0.tar.gz", hash = "sha256:5b260c2b754fd8d409ba83ee7aee294ecdbb2c235f9f78fe90bc11cb6e5debc2"},
+]
+
+[package.extras]
+basic = ["ipython"]
+complete = ["cryptography (>=2.0)", "ipython", "matplotlib", "pyx"]
+docs = ["sphinx (>=3.0.0)", "sphinx_rtd_theme (>=0.4.3)", "tox (>=3.0.0)"]
+
[[package]]
name = "snowballstemmer"
version = "2.2.0"
@@ -504,4 +520,4 @@ jsonschema = ">=4,<5"
[metadata]
lock-version = "2.0"
python-versions = "^3.10"
-content-hash = "af71d1ffeb4372d870bd02a8bf101577254e03ddb8b12d02ab174f80069fd853"
+content-hash = "fba5dcbb12d55a9c6b3f59f062d3a40973ff8360edb023b6e4613522654ba7c1"
diff --git a/dts/pyproject.toml b/dts/pyproject.toml
index 72d5b0204d..fc9b6278bb 100644
--- a/dts/pyproject.toml
+++ b/dts/pyproject.toml
@@ -23,6 +23,7 @@ pexpect = "^4.8.0"
warlock = "^2.0.1"
PyYAML = "^6.0"
types-PyYAML = "^6.0.8"
+scapy = "^2.5.0"
[tool.poetry.group.dev.dependencies]
mypy = "^0.961"
--
2.30.2
* [RFC PATCH v1 2/5] dts: add traffic generator config
2023-04-20 9:31 [RFC PATCH v1 0/5] dts: add tg abstractions and scapy Juraj Linkeš
2023-04-20 9:31 ` [RFC PATCH v1 1/5] dts: add scapy dependency Juraj Linkeš
@ 2023-04-20 9:31 ` Juraj Linkeš
2023-04-20 9:31 ` [RFC PATCH v1 3/5] dts: traffic generator abstractions Juraj Linkeš
` (3 subsequent siblings)
5 siblings, 0 replies; 29+ messages in thread
From: Juraj Linkeš @ 2023-04-20 9:31 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, wathsala.vithanage,
jspewock, probb
Cc: dev, Juraj Linkeš
Node configuration: where to connect, which ports to use and which
traffic generator to use.
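Each entry in a node's ports list maps onto the new PortConfig dataclass,
with the dict splatted into the constructor. A self-contained sketch
(the dataclass follows the patch, minus slots=True; the sample dict
mirrors conf.yaml):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class PortConfig:
    id: int
    node: str
    pci: str
    dpdk_os_driver: str
    os_driver: str
    peer_node: str
    peer_pci: str

    @staticmethod
    def from_dict(id: int, node: str, d: dict) -> "PortConfig":
        # id and node come from the enclosing node config; the rest
        # comes straight from the YAML port entry.
        return PortConfig(id=id, node=node, **d)


yaml_port = {
    "pci": "0000:00:08.0",
    "dpdk_os_driver": "vfio-pci",
    "os_driver": "i40e",
    "peer_node": "TG 1",
    "peer_pci": "0000:00:08.0",
}
port = PortConfig.from_dict(0, "SUT 1", yaml_port)
```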
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/conf.yaml | 25 +++
dts/framework/config/__init__.py | 107 +++++++++++--
dts/framework/config/conf_yaml_schema.json | 172 ++++++++++++++++++++-
3 files changed, 287 insertions(+), 17 deletions(-)
diff --git a/dts/conf.yaml b/dts/conf.yaml
index a9bd8a3ecf..4e5fd3560f 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -13,6 +13,7 @@ executions:
test_suites:
- hello_world
system_under_test: "SUT 1"
+ traffic_generator_system: "TG 1"
nodes:
- name: "SUT 1"
hostname: sut1.change.me.localhost
@@ -25,3 +26,27 @@ nodes:
hugepages: # optional; if removed, will use system hugepage configuration
amount: 256
force_first_numa: false
+ ports:
+ - pci: "0000:00:08.0"
+ dpdk_os_driver: vfio-pci
+ os_driver: i40e
+ peer_node: "TG 1"
+ peer_pci: "0000:00:08.0"
+ - name: "TG 1"
+ hostname: tg1.change.me.localhost
+ user: root
+ arch: x86_64
+ os: linux
+ lcores: ""
+ use_first_core: false
+ hugepages: # optional; if removed, will use system hugepage configuration
+ amount: 256
+ force_first_numa: false
+ ports:
+ - pci: "0000:00:08.0"
+ dpdk_os_driver: rdma
+ os_driver: rdma
+ peer_node: "SUT 1"
+ peer_pci: "0000:00:08.0"
+ traffic_generator:
+ type: SCAPY
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index ebb0823ff5..6b1c3159f7 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -12,7 +12,7 @@
import pathlib
from dataclasses import dataclass
from enum import Enum, auto, unique
-from typing import Any, TypedDict
+from typing import Any, TypedDict, Union
import warlock # type: ignore
import yaml
@@ -61,6 +61,18 @@ class Compiler(StrEnum):
msvc = auto()
+@unique
+class NodeType(StrEnum):
+ physical = auto()
+ virtual = auto()
+
+
+@unique
+class TrafficGeneratorType(StrEnum):
+ NONE = auto()
+ SCAPY = auto()
+
+
# Slots enables some optimizations, by pre-allocating space for the defined
# attributes in the underlying data structure.
#
@@ -72,6 +84,41 @@ class HugepageConfiguration:
force_first_numa: bool
+@dataclass(slots=True, frozen=True)
+class PortConfig:
+ id: int
+ node: str
+ pci: str
+ dpdk_os_driver: str
+ os_driver: str
+ peer_node: str
+ peer_pci: str
+
+ @staticmethod
+ def from_dict(id: int, node: str, d: dict) -> "PortConfig":
+ return PortConfig(id=id, node=node, **d)
+
+
+@dataclass(slots=True, frozen=True)
+class TrafficGeneratorConfig:
+ traffic_generator_type: TrafficGeneratorType
+
+ @staticmethod
+ def from_dict(d: dict):
+ # This looks useless now, but is designed to allow expansion to traffic
+ # generators that require more configuration later.
+ match TrafficGeneratorType(d["type"]):
+ case TrafficGeneratorType.SCAPY:
+ return ScapyTrafficGeneratorConfig(
+ traffic_generator_type=TrafficGeneratorType.SCAPY
+ )
+
+
+@dataclass(slots=True, frozen=True)
+class ScapyTrafficGeneratorConfig(TrafficGeneratorConfig):
+ pass
+
+
@dataclass(slots=True, frozen=True)
class NodeConfiguration:
name: str
@@ -82,29 +129,52 @@ class NodeConfiguration:
os: OS
lcores: str
use_first_core: bool
- memory_channels: int
hugepages: HugepageConfiguration | None
+ ports: list[PortConfig]
@staticmethod
- def from_dict(d: dict) -> "NodeConfiguration":
+ def from_dict(d: dict) -> Union["SUTConfiguration", "TGConfiguration"]:
hugepage_config = d.get("hugepages")
if hugepage_config:
if "force_first_numa" not in hugepage_config:
hugepage_config["force_first_numa"] = False
hugepage_config = HugepageConfiguration(**hugepage_config)
- return NodeConfiguration(
- name=d["name"],
- hostname=d["hostname"],
- user=d["user"],
- password=d.get("password"),
- arch=Architecture(d["arch"]),
- os=OS(d["os"]),
- lcores=d.get("lcores", "1"),
- use_first_core=d.get("use_first_core", False),
- memory_channels=d.get("memory_channels", 1),
- hugepages=hugepage_config,
- )
+ common_config = {"name": d["name"],
+ "hostname": d["hostname"],
+ "user": d["user"],
+ "password": d.get("password"),
+ "arch": Architecture(d["arch"]),
+ "os": OS(d["os"]),
+ "lcores": d.get("lcores", "1"),
+ "use_first_core": d.get("use_first_core", False),
+ "hugepages": hugepage_config,
+ "ports": [
+ PortConfig.from_dict(i, d["name"], port)
+ for i, port in enumerate(d["ports"])
+ ]}
+
+ if "traffic_generator" in d:
+ return TGConfiguration(
+ traffic_generator=TrafficGeneratorConfig.from_dict(
+ d["traffic_generator"]),
+ **common_config
+ )
+ else:
+ return SUTConfiguration(
+ memory_channels=d.get("memory_channels", 1),
+ **common_config
+ )
+
+
+@dataclass(slots=True, frozen=True)
+class SUTConfiguration(NodeConfiguration):
+ memory_channels: int
+
+
+@dataclass(slots=True, frozen=True)
+class TGConfiguration(NodeConfiguration):
+ traffic_generator: TrafficGeneratorConfig
@dataclass(slots=True, frozen=True)
@@ -156,7 +226,8 @@ class ExecutionConfiguration:
perf: bool
func: bool
test_suites: list[TestSuiteConfig]
- system_under_test: NodeConfiguration
+ system_under_test: SUTConfiguration
+ traffic_generator_system: TGConfiguration
@staticmethod
def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
@@ -169,12 +240,16 @@ def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
sut_name = d["system_under_test"]
assert sut_name in node_map, f"Unknown SUT {sut_name} in execution {d}"
+ tg_name = d["traffic_generator_system"]
+ assert tg_name in node_map, f"Unknown TG {tg_name} in execution {d}"
+
return ExecutionConfiguration(
build_targets=build_targets,
perf=d["perf"],
func=d["func"],
test_suites=test_suites,
system_under_test=node_map[sut_name],
+ traffic_generator_system=node_map[tg_name],
)
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index ca2d4a1ef2..af1d071368 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -6,6 +6,76 @@
"type": "string",
"description": "A unique identifier for a node"
},
+ "NIC": {
+ "type": "string",
+ "enum": [
+ "ALL",
+ "ConnectX3_MT4103",
+ "ConnectX4_LX_MT4117",
+ "ConnectX4_MT4115",
+ "ConnectX5_MT4119",
+ "ConnectX5_MT4121",
+ "I40E_10G-10G_BASE_T_BC",
+ "I40E_10G-10G_BASE_T_X722",
+ "I40E_10G-SFP_X722",
+ "I40E_10G-SFP_XL710",
+ "I40E_10G-X722_A0",
+ "I40E_1G-1G_BASE_T_X722",
+ "I40E_25G-25G_SFP28",
+ "I40E_40G-QSFP_A",
+ "I40E_40G-QSFP_B",
+ "IAVF-ADAPTIVE_VF",
+ "IAVF-VF",
+ "IAVF_10G-X722_VF",
+ "ICE_100G-E810C_QSFP",
+ "ICE_25G-E810C_SFP",
+ "ICE_25G-E810_XXV_SFP",
+ "IGB-I350_VF",
+ "IGB_1G-82540EM",
+ "IGB_1G-82545EM_COPPER",
+ "IGB_1G-82571EB_COPPER",
+ "IGB_1G-82574L",
+ "IGB_1G-82576",
+ "IGB_1G-82576_QUAD_COPPER",
+ "IGB_1G-82576_QUAD_COPPER_ET2",
+ "IGB_1G-82580_COPPER",
+ "IGB_1G-I210_COPPER",
+ "IGB_1G-I350_COPPER",
+ "IGB_1G-I354_SGMII",
+ "IGB_1G-PCH_LPTLP_I218_LM",
+ "IGB_1G-PCH_LPTLP_I218_V",
+ "IGB_1G-PCH_LPT_I217_LM",
+ "IGB_1G-PCH_LPT_I217_V",
+ "IGB_2.5G-I354_BACKPLANE_2_5GBPS",
+ "IGC-I225_LM",
+ "IGC-I226_LM",
+ "IXGBE_10G-82599_SFP",
+ "IXGBE_10G-82599_SFP_SF_QP",
+ "IXGBE_10G-82599_T3_LOM",
+ "IXGBE_10G-82599_VF",
+ "IXGBE_10G-X540T",
+ "IXGBE_10G-X540_VF",
+ "IXGBE_10G-X550EM_A_SFP",
+ "IXGBE_10G-X550EM_X_10G_T",
+ "IXGBE_10G-X550EM_X_SFP",
+ "IXGBE_10G-X550EM_X_VF",
+ "IXGBE_10G-X550T",
+ "IXGBE_10G-X550_VF",
+ "brcm_57414",
+ "brcm_P2100G",
+ "cavium_0011",
+ "cavium_a034",
+ "cavium_a063",
+ "cavium_a064",
+ "fastlinq_ql41000",
+ "fastlinq_ql41000_vf",
+ "fastlinq_ql45000",
+ "fastlinq_ql45000_vf",
+ "hi1822",
+ "virtio"
+ ]
+ },
+
"ARCH": {
"type": "string",
"enum": [
@@ -20,6 +90,20 @@
"linux"
]
},
+ "OS_WITH_OPTIONS": {
+ "oneOf": [
+ {
+ "$ref": "#/definitions/OS"
+ },
+ {
+ "type": "string",
+ "enum": [
+ "ALL",
+ "OTHER"
+ ]
+ }
+ ]
+ },
"cpu": {
"type": "string",
"description": "Native should be the default on x86",
@@ -94,6 +178,34 @@
"amount"
]
},
+ "mac_address": {
+ "type": "string",
+ "description": "A MAC address",
+ "pattern": "^([0-9A-Fa-f]{2}[:-]){5}([0-9A-Fa-f]{2})$"
+ },
+ "pktgen_type": {
+ "type": "string",
+ "enum": [
+ "IXIA",
+ "IXIA_NETWORK",
+ "TREX",
+ "SCAPY",
+ "NONE"
+ ]
+ },
+ "pci_address": {
+ "type": "string",
+ "pattern": "^[\\da-fA-F]{4}:[\\da-fA-F]{2}:[\\da-fA-F]{2}.\\d:?\\w*$"
+ },
+ "port_peer_address": {
+ "description": "Peer is a TRex port, an IXIA port or a PCI address",
+ "oneOf": [
+ {
+ "description": "PCI peer port",
+ "$ref": "#/definitions/pci_address"
+ }
+ ]
+ },
"test_suite": {
"type": "string",
"enum": [
@@ -165,6 +277,60 @@
},
"hugepages": {
"$ref": "#/definitions/hugepages"
+ },
+ "ports": {
+ "type": "array",
+ "items": {
+ "type": "object",
+ "description": "Each port should be described on both sides of the connection. This makes configuration slightly more verbose but greatly simplifies implementation. If there are any inconsistencies, DTS will not run until the issue is fixed. An example inconsistency: port 1 on node 1 says it is connected to port 1 on node 2, but port 1 on node 2 says it is connected to port 2 on node 1.",
+ "properties": {
+ "pci": {
+ "$ref": "#/definitions/pci_address",
+ "description": "The local PCI address of the port"
+ },
+ "dpdk_os_driver": {
+ "type": "string",
+ "description": "The driver that the kernel should bind this device to for DPDK to use it. (ex: vfio-pci)"
+ },
+ "os_driver": {
+ "type": "string",
+ "description": "The driver normally used by this port (ex: i40e)"
+ },
+ "peer_node": {
+ "type": "string",
+ "description": "The name of the node the peer port is on"
+ },
+ "peer_pci": {
+ "$ref": "#/definitions/pci_address",
+ "description": "The PCI address of the peer port"
+ }
+ },
+ "additionalProperties": false,
+ "required": [
+ "pci",
+ "dpdk_os_driver",
+ "os_driver",
+ "peer_node",
+ "peer_pci"
+ ]
+ },
+ "minimum": 1
+ },
+ "traffic_generator": {
+ "oneOf": [
+ {
+ "type": "object",
+ "description": "Scapy traffic generator",
+ "properties": {
+ "type": {
+ "type": "string",
+ "enum": [
+ "SCAPY"
+ ]
+ }
+ }
+ }
+ ]
}
},
"additionalProperties": false,
@@ -213,6 +379,9 @@
},
"system_under_test": {
"$ref": "#/definitions/node_name"
+ },
+ "traffic_generator_system": {
+ "$ref": "#/definitions/node_name"
}
},
"additionalProperties": false,
@@ -221,7 +390,8 @@
"perf",
"func",
"test_suites",
- "system_under_test"
+ "system_under_test",
+ "traffic_generator_system"
]
},
"minimum": 1
--
2.30.2
* [RFC PATCH v1 3/5] dts: traffic generator abstractions
2023-04-20 9:31 [RFC PATCH v1 0/5] dts: add tg abstractions and scapy Juraj Linkeš
2023-04-20 9:31 ` [RFC PATCH v1 1/5] dts: add scapy dependency Juraj Linkeš
2023-04-20 9:31 ` [RFC PATCH v1 2/5] dts: add traffic generator config Juraj Linkeš
@ 2023-04-20 9:31 ` Juraj Linkeš
2023-04-20 9:31 ` [RFC PATCH v1 4/5] dts: scapy traffic generator implementation Juraj Linkeš
` (2 subsequent siblings)
5 siblings, 0 replies; 29+ messages in thread
From: Juraj Linkeš @ 2023-04-20 9:31 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, wathsala.vithanage,
jspewock, probb
Cc: dev, Juraj Linkeš
There are traffic abstractions for all traffic generators and for
traffic generators that can capture (not just count) packets.
There are also related abstractions, such as TGNode, where the traffic
generators reside, along with some related code.
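One piece of TGNode worth calling out is get_ports_with_loop_topology,
which scans the TG's ports for a send/receive pair wired to the same peer
node. A simplified, self-contained sketch of that search (the patch uses
nested loops, which can also pair a port with itself; this sketch just
takes the first two matches):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class PortIdentifier:
    node: str
    pci: str


@dataclass(frozen=True)
class Port:
    identifier: PortIdentifier
    peer: PortIdentifier


def get_ports_with_loop_topology(ports, peer_node):
    """Return two local port identifiers peered to peer_node, else None."""
    candidates = [p.identifier for p in ports if p.peer.node == peer_node]
    if len(candidates) >= 2:
        return candidates[0], candidates[1]
    return None


# Two TG ports, each cabled to a port on the SUT (addresses are examples).
tg_ports = [
    Port(PortIdentifier("TG 1", "0000:00:08.0"),
         PortIdentifier("SUT 1", "0000:00:08.0")),
    Port(PortIdentifier("TG 1", "0000:00:09.0"),
         PortIdentifier("SUT 1", "0000:00:09.0")),
]
loop = get_ports_with_loop_topology(tg_ports, "SUT 1")
```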
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/remote_session/os_session.py | 22 ++-
dts/framework/remote_session/posix_session.py | 3 +
.../capturing_traffic_generator.py | 155 ++++++++++++++++++
dts/framework/testbed_model/hw/port.py | 55 +++++++
dts/framework/testbed_model/node.py | 4 +-
dts/framework/testbed_model/sut_node.py | 5 +-
dts/framework/testbed_model/tg_node.py | 62 +++++++
.../testbed_model/traffic_generator.py | 59 +++++++
8 files changed, 360 insertions(+), 5 deletions(-)
create mode 100644 dts/framework/testbed_model/capturing_traffic_generator.py
create mode 100644 dts/framework/testbed_model/hw/port.py
create mode 100644 dts/framework/testbed_model/tg_node.py
create mode 100644 dts/framework/testbed_model/traffic_generator.py
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
index 4c48ae2567..56d7fef06c 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/remote_session/os_session.py
@@ -10,6 +10,7 @@
from framework.logger import DTSLOG
from framework.settings import SETTINGS
from framework.testbed_model import LogicalCore
+from framework.testbed_model.hw.port import PortIdentifier
from framework.utils import EnvVarsDict, MesonArgs
from .remote import CommandResult, RemoteSession, create_remote_session
@@ -37,6 +38,7 @@ def __init__(
self.name = name
self._logger = logger
self.remote_session = create_remote_session(node_config, name, logger)
+ self._disable_terminal_colors()
def close(self, force: bool = False) -> None:
"""
@@ -53,7 +55,7 @@ def is_alive(self) -> bool:
def send_command(
self,
command: str,
- timeout: float,
+ timeout: float = SETTINGS.timeout,
verify: bool = False,
env: EnvVarsDict | None = None,
) -> CommandResult:
@@ -64,6 +66,12 @@ def send_command(
"""
return self.remote_session.send_command(command, timeout, verify, env)
+ @abstractmethod
+ def _disable_terminal_colors(self) -> None:
+ """
+ Disable the colors in the ssh session.
+ """
+
@abstractmethod
def guess_dpdk_remote_dir(self, remote_dir) -> PurePath:
"""
@@ -173,3 +181,15 @@ def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> None:
if needed and mount the hugepages if needed.
If force_first_numa is True, configure hugepages just on the first socket.
"""
+
+ @abstractmethod
+ def get_logical_name_of_port(self, id: PortIdentifier) -> str | None:
+ """
+ Gets the logical name (eno1, ens5, etc) of a port by the port's identifier.
+ """
+
+ @abstractmethod
+ def check_link_is_up(self, id: PortIdentifier) -> bool:
+ """
+ Check that the link is up.
+ """
diff --git a/dts/framework/remote_session/posix_session.py b/dts/framework/remote_session/posix_session.py
index d38062e8d6..288fbabf1e 100644
--- a/dts/framework/remote_session/posix_session.py
+++ b/dts/framework/remote_session/posix_session.py
@@ -219,3 +219,6 @@ def _remove_dpdk_runtime_dirs(
def get_dpdk_file_prefix(self, dpdk_prefix) -> str:
return ""
+
+ def _disable_terminal_colors(self) -> None:
+ self.remote_session.send_command("export TERM=xterm-mono")
diff --git a/dts/framework/testbed_model/capturing_traffic_generator.py b/dts/framework/testbed_model/capturing_traffic_generator.py
new file mode 100644
index 0000000000..7beeb139c1
--- /dev/null
+++ b/dts/framework/testbed_model/capturing_traffic_generator.py
@@ -0,0 +1,155 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 University of New Hampshire
+#
+
+import itertools
+import uuid
+from abc import abstractmethod
+
+import scapy.utils
+from scapy.packet import Packet
+
+from framework.testbed_model.hw.port import PortIdentifier
+from framework.settings import SETTINGS
+
+from .traffic_generator import TrafficGenerator
+
+
+def _get_default_capture_name() -> str:
+ """
+ This is the function used for the default implementation of capture names.
+ """
+ return str(uuid.uuid4())
+
+
+class CapturingTrafficGenerator(TrafficGenerator):
+ """
+ A mixin interface which enables a packet generator to declare that it can capture
+ packets and return them to the user.
+
+ All packet functions added by this class should write out the captured packets
+ to a pcap file in output, allowing for easier analysis of failed tests.
+ """
+
+ def is_capturing(self) -> bool:
+ return True
+
+ @abstractmethod
+ def send_packet_and_capture(
+ self,
+ send_port_id: PortIdentifier,
+ packet: Packet,
+ receive_port_id: PortIdentifier,
+ duration_s: int,
+ capture_name: str = _get_default_capture_name(),
+ ) -> list[Packet]:
+ """
+ Send a packet on the send port and then capture all traffic on receive port
+ for the given duration.
+
+ Captures packets and adds them to output/<capture_name>.pcap.
+
+ This function must handle no packets even being received.
+ """
+ raise NotImplementedError()
+
+ def send_packets_and_capture(
+ self,
+ send_port_id: PortIdentifier,
+ packets: list[Packet],
+ receive_port_id: PortIdentifier,
+ duration_s: int,
+ capture_name: str = _get_default_capture_name(),
+ ) -> list[Packet]:
+ """
+ Send a group of packets on the send port and then capture all traffic on the
+ receive port for the given duration.
+
+ This function must handle no packets even being received.
+ """
+ self.logger.info(
+ f"Incremental captures will be created at output/{capture_name}-<n>.pcap"
+ )
+ received_packets: list[list[Packet]] = []
+ for i, packet in enumerate(packets):
+ received_packets.append(
+ self.send_packet_and_capture(
+ send_port_id,
+ packet,
+ receive_port_id,
+ duration_s,
+ capture_name=f"{capture_name}-{i}",
+ )
+ )
+
+ flattened_received_packets = list(
+ itertools.chain.from_iterable(received_packets)
+ )
+ scapy.utils.wrpcap(f"output/{capture_name}.pcap", flattened_received_packets)
+ return flattened_received_packets
+
+ def send_packet_and_expect_packet(
+ self,
+ send_port_id: PortIdentifier,
+ packet: Packet,
+ receive_port_id: PortIdentifier,
+ expected_packet: Packet,
+ timeout: int = SETTINGS.timeout,
+ capture_name: str = _get_default_capture_name(),
+ ) -> None:
+ """
+ Sends the provided packet and captures received packets. Then asserts that
+ only one packet was received and that it is equal to the expected packet.
+ """
+ packets: list[Packet] = self.send_packet_and_capture(
+ send_port_id, packet, receive_port_id, timeout
+ )
+
+ assert len(packets) != 0, "Expected a packet, but none were captured"
+ assert len(packets) == 1, (
+ "More packets than expected were received, "
+ f"capture written to output/{capture_name}.pcap"
+ )
+ assert packets[0] == expected_packet, (
+ f"Received packet differed from expected packet, capture written to "
+ f"output/{capture_name}.pcap"
+ )
+
+ def send_packets_and_expect_packets(
+ self,
+ send_port_id: PortIdentifier,
+ packets: list[Packet],
+ receive_port_id: PortIdentifier,
+ expected_packets: list[Packet],
+ timeout: int = SETTINGS.timeout,
+ capture_name: str = _get_default_capture_name(),
+ ) -> None:
+ """
+ Sends the provided packets, capturing received packets. Then asserts that the
+ correct number of packets was received, and that the packets that were received
+ are equal to the expected packet. This equality is done by comparing packets
+ at the same index.
+ """
+ packets: list[Packet] = self.send_packets_and_capture(
+ send_port_id, packets, receive_port_id, timeout
+ )
+
+ if len(expected_packets) > 0:
+ assert len(packets) != 0, "Expected packets, but none were captured"
+
+ assert len(packets) == len(expected_packets), (
+ "A different number of packets than expected were received, "
+ f"capture written to output/{capture_name}.pcap or split across "
+ f"output/{capture_name}-<n>.pcap"
+ )
+ for i, expected_packet in enumerate(expected_packets):
+ assert packets[i] == expected_packet, (
+ f"Received packet {i} differed from expected packet, capture written "
+ f"to output/{capture_name}.pcap or output/{capture_name}-{i}.pcap"
+ )
+
+ def _write_capture_from_packets(self, capture_name: str, packets: list[Packet]):
+ file_name = f"output/{capture_name}.pcap"
+ self.logger.debug(f"Writing packets to {file_name}")
+ scapy.utils.wrpcap(file_name, packets)
diff --git a/dts/framework/testbed_model/hw/port.py b/dts/framework/testbed_model/hw/port.py
new file mode 100644
index 0000000000..ebaad563f8
--- /dev/null
+++ b/dts/framework/testbed_model/hw/port.py
@@ -0,0 +1,55 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 University of New Hampshire
+#
+
+from dataclasses import dataclass
+
+from framework.config import PortConfig
+
+
+@dataclass(slots=True, frozen=True)
+class PortIdentifier:
+ node: str
+ pci: str
+
+
+@dataclass(slots=True, frozen=True)
+class Port:
+ """
+ identifier: The information that uniquely identifies this port.
+ pci: The PCI address of the port.
+
+ os_driver: The driver normally used by this port (ex: i40e)
+ dpdk_os_driver: The driver that the os should bind this device to for DPDK to use it. (ex: vfio-pci)
+
+ Note: os_driver and dpdk_os_driver may be the same thing, see mlx5_core
+
+ peer: The identifier for whatever this port is plugged into.
+ """
+
+ id: int
+ identifier: PortIdentifier
+ os_driver: str
+ dpdk_os_driver: str
+ peer: PortIdentifier
+
+ @property
+ def node(self) -> str:
+ return self.identifier.node
+
+ @property
+ def pci(self) -> str:
+ return self.identifier.pci
+
+ @staticmethod
+ def from_config(node_name: str, config: PortConfig) -> "Port":
+ return Port(
+ id=config.id,
+ identifier=PortIdentifier(
+ node=node_name,
+ pci=config.pci,
+ ),
+ os_driver=config.os_driver,
+ dpdk_os_driver=config.dpdk_os_driver,
+ peer=PortIdentifier(node=config.peer_node, pci=config.peer_pci),
+ )
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index d48fafe65d..5d2d1a0cf6 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -47,6 +47,8 @@ def __init__(self, node_config: NodeConfiguration):
self._logger = getLogger(self.name)
self.main_session = create_session(self.config, self.name, self._logger)
+ self._logger.info(f"Connected to node: {self.name}")
+
self._get_remote_cpus()
# filter the node lcores according to user config
self.lcores = LogicalCoreListFilter(
@@ -55,8 +57,6 @@ def __init__(self, node_config: NodeConfiguration):
self._other_sessions = []
- self._logger.info(f"Created node: {self.name}")
-
def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
"""
Perform the execution setup that will be done for each execution
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 2b2b50d982..ec9180a98b 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -7,7 +7,7 @@
import time
from pathlib import PurePath
-from framework.config import BuildTargetConfiguration, NodeConfiguration
+from framework.config import BuildTargetConfiguration, SUTConfiguration
from framework.remote_session import CommandResult, OSSession
from framework.settings import SETTINGS
from framework.utils import EnvVarsDict, MesonArgs
@@ -34,7 +34,7 @@ class SutNode(Node):
_app_compile_timeout: float
_dpdk_kill_session: OSSession | None
- def __init__(self, node_config: NodeConfiguration):
+ def __init__(self, node_config: SUTConfiguration):
super(SutNode, self).__init__(node_config)
self._dpdk_prefix_list = []
self._build_target_config = None
@@ -47,6 +47,7 @@ def __init__(self, node_config: NodeConfiguration):
self._dpdk_timestamp = (
f"{str(os.getpid())}_{time.strftime('%Y%m%d%H%M%S', time.localtime())}"
)
+ self._logger.info(f"Created node: {self.name}")
@property
def _remote_dpdk_dir(self) -> PurePath:
diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
new file mode 100644
index 0000000000..fdb7329020
--- /dev/null
+++ b/dts/framework/testbed_model/tg_node.py
@@ -0,0 +1,62 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+from scapy.layers.inet import IP, UDP
+from scapy.layers.l2 import Ether
+from scapy.packet import Raw
+
+from framework.config import TGConfiguration
+from framework.testbed_model.hw.port import Port, PortIdentifier
+from framework.testbed_model.traffic_generator import TrafficGenerator
+
+from .node import Node
+
+
+class TGNode(Node):
+ """
+ A class for managing connections to the Traffic Generator node and managing
+ traffic generators residing within.
+ """
+
+ ports: list[Port]
+ traffic_generator: TrafficGenerator
+
+ def __init__(self, node_config: TGConfiguration):
+ super(TGNode, self).__init__(node_config)
+ self.ports = [
+ Port.from_config(self.name, port_config)
+ for port_config in node_config.ports
+ ]
+ self.traffic_generator = TrafficGenerator.from_config(
+ self, node_config.traffic_generator
+ )
+ self._logger.info(f"Created node: {self.name}")
+
+ def get_ports_with_loop_topology(
+ self, peer_node: "Node"
+ ) -> tuple[PortIdentifier, PortIdentifier] | None:
+ for port1 in self.ports:
+ if port1.peer.node == peer_node.name:
+ for port2 in self.ports:
+ if port2.peer.node == peer_node.name:
+ return (port1.identifier, port2.identifier)
+ self._logger.warning(
+ f"Attempted to find loop topology between {self.name} and {peer_node.name}, but none could be found"
+ )
+ return None
+
+ def verify(self) -> None:
+ for port in self.ports:
+ self.traffic_generator.assert_port_is_connected(port.identifier)
+ port = self.ports[0]
+ # Check that the traffic generator is working by sending a packet.
+ # send_packet should throw an error if something goes wrong.
+ self.traffic_generator.send_packet(
+ port.identifier, Ether() / IP() / UDP() / Raw(b"Hello World")
+ )
+
+ def close(self) -> None:
+ self.traffic_generator.close()
+ super(TGNode, self).close()
diff --git a/dts/framework/testbed_model/traffic_generator.py b/dts/framework/testbed_model/traffic_generator.py
new file mode 100644
index 0000000000..ea8f361e8f
--- /dev/null
+++ b/dts/framework/testbed_model/traffic_generator.py
@@ -0,0 +1,59 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 University of New Hampshire
+#
+
+from abc import ABC, abstractmethod
+
+from scapy.packet import Packet
+
+from framework.config import TrafficGeneratorConfig, TrafficGeneratorType
+from framework.logger import DTSLOG
+from framework.testbed_model.hw.port import PortIdentifier
+
+
+class TrafficGenerator(ABC):
+ logger: DTSLOG
+
+ @abstractmethod
+ def send_packet(self, port: PortIdentifier, packet: Packet) -> None:
+ """
+ Sends a packet and blocks until it is fully sent.
+
+ What fully sent means is defined by the traffic generator.
+ """
+ raise NotImplementedError()
+
+ def send_packets(self, port: PortIdentifier, packets: list[Packet]) -> None:
+ """
+ Sends a list of packets and blocks until they are fully sent.
+
+ What "fully sent" means is defined by the traffic generator.
+ """
+ # default implementation, this should be overridden if there is a better
+ # way to do this on a specific packet generator
+ for packet in packets:
+ self.send_packet(port, packet)
+
+ def is_capturing(self) -> bool:
+ """
+ Whether this traffic generator can capture traffic
+ """
+ return False
+
+ @staticmethod
+ def from_config(
+ node: "Node", traffic_generator_config: TrafficGeneratorConfig
+ ) -> "TrafficGenerator":
+ from .scapy import ScapyTrafficGenerator
+
+ match traffic_generator_config.traffic_generator_type:
+ case TrafficGeneratorType.SCAPY:
+ return ScapyTrafficGenerator(node, node.ports)
+
+ @abstractmethod
+ def close(self):
+ pass
+
+ @abstractmethod
+ def assert_port_is_connected(self, id: PortIdentifier) -> None:
+ pass
--
2.30.2
* [RFC PATCH v1 4/5] dts: scapy traffic generator implementation
2023-04-20 9:31 [RFC PATCH v1 0/5] dts: add tg abstractions and scapy Juraj Linkeš
` (2 preceding siblings ...)
2023-04-20 9:31 ` [RFC PATCH v1 3/5] dts: traffic generator abstractions Juraj Linkeš
@ 2023-04-20 9:31 ` Juraj Linkeš
2023-04-20 9:31 ` [RFC PATCH v1 5/5] dts: add traffic generator node to dts runner Juraj Linkeš
2023-07-17 11:07 ` [PATCH v2 0/6] dts: tg abstractions and scapy tg Juraj Linkeš
5 siblings, 0 replies; 29+ messages in thread
From: Juraj Linkeš @ 2023-04-20 9:31 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, wathsala.vithanage,
jspewock, probb
Cc: dev, Juraj Linkeš
Scapy is a Python tool for sending and receiving packets. Since it is a
software traffic generator, it is not suitable for performance testing,
but it is well suited to functional testing.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/remote_session/linux_session.py | 55 +++
.../remote_session/remote/remote_session.py | 7 +
dts/framework/testbed_model/scapy.py | 348 ++++++++++++++++++
3 files changed, 410 insertions(+)
create mode 100644 dts/framework/testbed_model/scapy.py
diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/remote_session/linux_session.py
index a1e3bc3a92..b99a27bba4 100644
--- a/dts/framework/remote_session/linux_session.py
+++ b/dts/framework/remote_session/linux_session.py
@@ -2,13 +2,29 @@
# Copyright(c) 2023 PANTHEON.tech s.r.o.
# Copyright(c) 2023 University of New Hampshire
+import json
+from typing import TypedDict
+from typing_extensions import NotRequired
+
+
from framework.exception import RemoteCommandExecutionError
from framework.testbed_model import LogicalCore
+from framework.testbed_model.hw.port import PortIdentifier
from framework.utils import expand_range
from .posix_session import PosixSession
+class LshwOutputConfigurationDict(TypedDict):
+ link: str
+
+
+class LshwOutputDict(TypedDict):
+ businfo: str
+ logicalname: NotRequired[str]
+ configuration: LshwOutputConfigurationDict
+
+
class LinuxSession(PosixSession):
"""
The implementation of non-Posix compliant parts of Linux remote sessions.
@@ -105,3 +121,42 @@ def _configure_huge_pages(
self.remote_session.send_command(
f"echo {amount} | sudo tee {hugepage_config_path}"
)
+
+ def get_lshw_info(self) -> list[LshwOutputDict]:
+ output = self.remote_session.send_expect("lshw -quiet -json -C network", "#")
+ assert not isinstance(
+ output, int
+ ), "send_expect returned an int when it should have been a string"
+ return json.loads(output)
+
+ def get_logical_name_of_port(self, id: PortIdentifier) -> str | None:
+ self._logger.debug(f"Searching for logical name of {id.pci}")
+ assert (
+ id.node == self.name
+ ), "Attempted to get the logical port name on the wrong node"
+ port_info_list: list[LshwOutputDict] = self.get_lshw_info()
+ for port_info in port_info_list:
+ if f"pci@{id.pci}" == port_info.get("businfo"):
+ if "logicalname" in port_info:
+ self._logger.debug(
+ f"Found logical name for port {id.pci}, {port_info.get('logicalname')}"
+ )
+ return port_info.get("logicalname")
+ else:
+ self._logger.warning(
+ f"Attempted to get the logical name of {id.pci}, but none existed"
+ )
+ return None
+ self._logger.warning(f"No port at pci address {id.pci} found.")
+ return None
+
+ def check_link_is_up(self, id: PortIdentifier) -> bool | None:
+ self._logger.debug(f"Checking link status for {id.pci}")
+ port_info_list: list[LshwOutputDict] = self.get_lshw_info()
+ for port_info in port_info_list:
+ if f"pci@{id.pci}" == port_info.get("businfo"):
+ status = port_info["configuration"]["link"]
+ self._logger.debug(f"Found link status for port {id.pci}, {status}")
+ return status == "up"
+ self._logger.warning(f"No port at pci address {id.pci} found.")
+ return None
diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote/remote_session.py
index 91dee3cb4f..5b36e2d7d2 100644
--- a/dts/framework/remote_session/remote/remote_session.py
+++ b/dts/framework/remote_session/remote/remote_session.py
@@ -84,6 +84,13 @@ def _connect(self) -> None:
Create connection to assigned node.
"""
+ @abstractmethod
+ def send_expect(
+ self, command: str, prompt: str, timeout: float = 15,
+ verify: bool = False
+ ) -> str | int:
+ """Send the command and wait until the prompt is matched, returning the output."""
+
def send_command(
self,
command: str,
diff --git a/dts/framework/testbed_model/scapy.py b/dts/framework/testbed_model/scapy.py
new file mode 100644
index 0000000000..1e5caab897
--- /dev/null
+++ b/dts/framework/testbed_model/scapy.py
@@ -0,0 +1,348 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 University of New Hampshire
+#
+
+import inspect
+import json
+import marshal
+import types
+import xmlrpc.client
+from typing import TypedDict
+from xmlrpc.server import SimpleXMLRPCServer
+
+import scapy.all
+from scapy.packet import Packet
+from typing_extensions import NotRequired
+
+from framework.config import OS
+from framework.logger import getLogger
+from .tg_node import TGNode
+from .hw.port import Port, PortIdentifier
+from .capturing_traffic_generator import (
+ CapturingTrafficGenerator,
+ _get_default_capture_name,
+)
+from framework.settings import SETTINGS
+from framework.remote_session import OSSession
+
+"""
+========= BEGIN RPC FUNCTIONS =========
+
+All of the functions in this section are intended to be exported to a python
+shell which runs a scapy RPC server. These functions are made available via that
+RPC server to the packet generator. To add a new function to the RPC server,
+first write the function in this section. Then, if you need any imports, make sure to
+add them to SCAPY_RPC_SERVER_IMPORTS as well. After that, add the function to the list
+in EXPORTED_FUNCTIONS. Note that kwargs (keyword arguments) do not work via xmlrpc,
+so you may need to construct wrapper functions around many scapy types.
+"""
+
+"""
+Add the line needed to import something in a normal python environment
+as an entry to this array. It will be imported before any functions are
+sent to the server.
+"""
+SCAPY_RPC_SERVER_IMPORTS = [
+ "from scapy.all import *",
+ "import xmlrpc",
+ "import sys",
+ "from xmlrpc.server import SimpleXMLRPCServer",
+ "import marshal",
+ "import pickle",
+ "import types",
+]
+
+
+def scapy_sr1_different_interfaces(
+ packets: list[Packet], send_iface: str, recv_iface: str, timeout_s: int
+) -> bytes:
+ packets = [scapy.all.Packet(packet.data) for packet in packets]
+ sniffer = scapy.all.AsyncSniffer(
+ iface=recv_iface,
+ store=True,
+ timeout=timeout_s,
+ started_callback=lambda _: scapy.all.sendp(packets, iface=send_iface),
+ stop_filter=lambda _: True,
+ )
+ sniffer.start()
+ packets = sniffer.stop(join=True)
+ assert len(packets) != 0, "Not enough packets were sniffed"
+ assert len(packets) == 1, "More packets than expected were sniffed"
+ return packets[0].build()
+
+
+def scapy_send_packets_and_capture(
+ packets: list[Packet], send_iface: str, recv_iface: str, duration_s: int
+) -> list[bytes]:
+ packets = [scapy.all.Packet(packet.data) for packet in packets]
+ sniffer = scapy.all.AsyncSniffer(
+ iface=recv_iface,
+ store=True,
+ timeout=duration_s,
+ started_callback=lambda _: scapy.all.sendp(packets, iface=send_iface),
+ )
+ sniffer.start()
+ return [packet.build() for packet in sniffer.stop(join=True)]
+
+
+def scapy_send_packets(packets: list[xmlrpc.client.Binary], send_iface: str) -> None:
+ packets = [scapy.all.Packet(packet.data) for packet in packets]
+ scapy.all.sendp(packets, iface=send_iface, realtime=True, verbose=True)
+
+
+"""
+Functions to be exposed by the scapy RPC server.
+"""
+RPC_FUNCTIONS = [
+ scapy_send_packets,
+ scapy_send_packets_and_capture,
+ scapy_sr1_different_interfaces,
+]
+
+
+class QuittableXMLRPCServer(SimpleXMLRPCServer):
+ def __init__(self, *args, **kwargs):
+ kwargs["allow_none"] = True
+ super().__init__(*args, **kwargs)
+ self.register_introspection_functions()
+ self.register_function(self.quit)
+ self.register_function(self.add_rpc_function)
+
+ def quit(self) -> None:
+ self._BaseServer__shutdown_request = True
+ return None
+
+ def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary):
+ function_code = marshal.loads(function_bytes.data)
+ function = types.FunctionType(function_code, globals(), name)
+ self.register_function(function)
+
+ def serve_forever(self, poll_interval: float = 0.5) -> None:
+ print("XMLRPC OK")
+ super().serve_forever(poll_interval)
+
+
+"""
+========= END RPC FUNCTIONS =========
+"""
+
+
+class NetworkInfoDict(TypedDict):
+ businfo: str
+ logicalname: NotRequired[str]
+
+
+class ScapyTrafficGenerator(CapturingTrafficGenerator):
+ """
+ Provides access to scapy functions via an RPC interface
+ """
+
+ tg_node: TGNode
+ ports: list[Port]
+ session: OSSession
+ scapy: xmlrpc.client.ServerProxy
+ iface_names: dict[PortIdentifier, str]
+
+ def __init__(self, tg_node: TGNode, ports: list[Port]):
+ self.tg_node = tg_node
+
+ assert tg_node.config.os == OS.linux, (
+ "Linux is the only supported OS for scapy traffic generation"
+ )
+
+ self.session = tg_node.create_session("scapy")
+ self.logger = getLogger("scapy-pktgen-messages", node=tg_node.name)
+ self.ports = ports
+
+ # No fancy colors
+
+ prompt_str = "<PROMPT>"
+ self.session.remote_session.send_expect(f'export PS1="{prompt_str}"', prompt_str)
+
+ network_info_str: str = self.session.remote_session.send_expect(
+ "lshw -quiet -json -C network", prompt_str, timeout=10
+ )
+
+ network_info_list: list[NetworkInfoDict] = json.loads(network_info_str)
+ network_info_lookup: dict[str, str] = {
+ network_info["businfo"]: network_info.get("logicalname")
+ for network_info in network_info_list
+ }
+
+ self.iface_names = dict()
+ for port in self.ports:
+ businfo_str = f"pci@{port.pci}"
+ assert businfo_str in network_info_lookup, (
+ f"Expected '{businfo_str}' in lshw output for {self.tg_node.name}, but "
+ f"it was not present."
+ )
+
+ self.iface_names[port.identifier] = network_info_lookup[businfo_str]
+
+ assert (
+ self.iface_names[port.identifier] is not None
+ ), f"No interface was present for {port.pci} on {self.tg_node.name}"
+
+ self._run_command("python3")
+
+ self._add_helper_functions_to_scapy()
+ self.session.remote_session.send_expect(
+ 'server = QuittableXMLRPCServer(("0.0.0.0", 8000)); server.serve_forever()',
+ "XMLRPC OK",
+ timeout=5,
+ )
+
+ server_url: str = f"http://{self.tg_node.config.hostname}:8000"
+
+ self.scapy = xmlrpc.client.ServerProxy(
+ server_url, allow_none=True, verbose=SETTINGS.verbose
+ )
+
+ for function in RPC_FUNCTIONS:
+ # A slightly hacky way to move a function to the remote server.
+ # It is constructed from the name and code on the other side.
+ # Pickle cannot handle functions, nor can any of the other serialization
+ # frameworks aside from the libraries used to generate pyc files, which
+ # are even more messy to work with.
+ function_bytes = marshal.dumps(function.__code__)
+ self.scapy.add_rpc_function(function.__name__, function_bytes)
+
+ def _add_helper_functions_to_scapy(self):
+ for import_statement in SCAPY_RPC_SERVER_IMPORTS:
+ self._run_command(import_statement + "\r\n")
+
+ for helper_function in {QuittableXMLRPCServer}:
+ # load the source of the function
+ src = inspect.getsource(helper_function)
+ # Lines with only whitespace break the repl if in the middle of a function
+ # or class, so strip all lines containing only whitespace
+ src = "\n".join(
+ [line for line in src.splitlines() if not line.isspace() and line != ""]
+ )
+
+ spacing = "\n" * 4
+
+ # execute it in the python terminal
+ self._run_command(spacing + src + spacing)
+
+ def _run_command(self, command: str) -> str:
+ return self.session.remote_session.send_expect(command, ">>>")
+
+ def _get_port_interface_or_error(self, port: PortIdentifier) -> str:
+ match self.iface_names.get(port):
+ case None:
+ assert (
+ False
+ ), f"{port} is not a valid port on this packet generator on {self.tg_node.name}."
+ case iface:
+ return iface
+
+ def send_packet(self, port: PortIdentifier, packet: Packet) -> None:
+ iface = self._get_port_interface_or_error(port)
+ self.logger.info("Sending packet")
+ self.logger.debug("Packet contents: \n" + packet._do_summary()[1])
+ self.scapy.scapy_send_packets([packet.build()], iface)
+
+ def send_packets(self, port: PortIdentifier, packets: list[Packet]) -> None:
+ iface = self._get_port_interface_or_error(port)
+ self.logger.info("Sending packets")
+ packet_summaries = json.dumps(
+ list(map(lambda pkt: pkt._do_summary()[1], packets)), indent=4
+ )
+ packets = [packet.build() for packet in packets]
+ self.logger.debug("Packet contents: \n" + packet_summaries)
+ self.scapy.scapy_send_packets(packets, iface)
+
+ def send_packet_and_capture(
+ self,
+ send_port_id: PortIdentifier,
+ packet: Packet,
+ receive_port_id: PortIdentifier,
+ duration_s: int,
+ capture_name: str = _get_default_capture_name(),
+ ) -> list[Packet]:
+ packets = self.scapy.scapy_send_packets_and_capture(
+ [packet.build()], send_port_id, receive_port_id, duration_s
+ )
+ self._write_capture_from_packets(capture_name, packets)
+ return packets
+
+ def send_packets_and_capture(
+ self,
+ send_port_id: PortIdentifier,
+ packets: list[Packet],
+ receive_port_id: PortIdentifier,
+ duration_s: int,
+ capture_name: str = _get_default_capture_name(),
+ ) -> list[Packet]:
+ packets: list[bytes] = [packet.build() for packet in packets]
+ packets: list[bytes] = self.scapy.scapy_send_packets_and_capture(
+ packets, send_port_id, receive_port_id, duration_s
+ )
+ packets: list[Packet] = [scapy.all.Packet(packet) for packet in packets]
+ self._write_capture_from_packets(capture_name, packets)
+ return packets
+
+ def send_packet_and_expect_packet(
+ self,
+ send_port_id: PortIdentifier,
+ packet: Packet,
+ receive_port_id: PortIdentifier,
+ expected_packet: Packet,
+ timeout: int = SETTINGS.timeout,
+ capture_name: str = _get_default_capture_name(),
+ ) -> None:
+ self.send_packets_and_expect_packets(
+ send_port_id,
+ [packet],
+ receive_port_id,
+ [expected_packet],
+ timeout,
+ capture_name,
+ )
+
+ def send_packets_and_expect_packets(
+ self,
+ send_port_id: PortIdentifier,
+ packets: list[Packet],
+ receive_port_id: PortIdentifier,
+ expected_packets: list[Packet],
+ timeout: int = SETTINGS.timeout,
+ capture_name: str = _get_default_capture_name(),
+ ) -> None:
+ send_iface = self._get_port_interface_or_error(send_port_id)
+ recv_iface = self._get_port_interface_or_error(receive_port_id)
+
+ packets = [packet.build() for packet in packets]
+
+ received_packets = self.scapy.scapy_sr1_different_interfaces(
+ packets, send_iface, recv_iface, timeout
+ )
+
+ received_packets = [scapy.all.Packet(packet) for packet in received_packets]
+
+ self._write_capture_from_packets(capture_name, received_packets)
+
+ assert len(received_packets) == len(
+ expected_packets
+ ), "Incorrect number of packets received"
+ for i, expected_packet in enumerate(expected_packets):
+ assert (
+ received_packets[i] == expected_packet
+ ), f"Received packet {i} differed from expected packet"
+
+ def close(self):
+ try:
+ self.scapy.quit()
+ except ConnectionRefusedError:
+ # Because the python instance closes, we get no RPC response.
+ # Thus, this error is expected
+ pass
+ try:
+ self.session.close(force=True)
+ except TimeoutError:
+ # Pexpect does not like being in a python prompt when it closes
+ pass
+
+ def assert_port_is_connected(self, id: PortIdentifier) -> None:
+ self.tg_node.main_session.check_link_is_up(id)
--
2.30.2
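The patch ships its helper functions to the remote XML-RPC server by marshalling their code objects (the loop over RPC_FUNCTIONS paired with QuittableXMLRPCServer.add_rpc_function), because pickle cannot serialize functions. The round trip can be reproduced locally without any RPC server; greet is a made-up example function:

```python
import marshal
import types


def greet(name: str) -> str:
    return "hello " + name


# Sender side: serialize only the function's code object.
payload = marshal.dumps(greet.__code__)

# Receiver side: rebuild a callable from the name and the deserialized
# code object, as add_rpc_function does in the patch.
code = marshal.loads(payload)
rebuilt = types.FunctionType(code, globals(), "greet")

print(rebuilt("scapy"))  # hello scapy
```

Note that marshal's wire format is interpreter-version specific, so this technique requires the client and the remote Python shell to run the same Python version.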
* [RFC PATCH v1 5/5] dts: add traffic generator node to dts runner
2023-04-20 9:31 [RFC PATCH v1 0/5] dts: add tg abstractions and scapy Juraj Linkeš
` (3 preceding siblings ...)
2023-04-20 9:31 ` [RFC PATCH v1 4/5] dts: scapy traffic generator implementation Juraj Linkeš
@ 2023-04-20 9:31 ` Juraj Linkeš
2023-05-03 18:02 ` Jeremy Spewock
2023-07-17 11:07 ` [PATCH v2 0/6] dts: tg abstractions and scapy tg Juraj Linkeš
5 siblings, 1 reply; 29+ messages in thread
From: Juraj Linkeš @ 2023-04-20 9:31 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, wathsala.vithanage,
jspewock, probb
Cc: dev, Juraj Linkeš
Initialize the TG node and do basic verification.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/dts.py | 42 ++++++++++++++++---------
dts/framework/testbed_model/__init__.py | 1 +
2 files changed, 28 insertions(+), 15 deletions(-)
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index 0502284580..9c82bfe1f4 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -9,7 +9,7 @@
from .logger import DTSLOG, getLogger
from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result
from .test_suite import get_test_suites
-from .testbed_model import SutNode
+from .testbed_model import SutNode, TGNode, Node
from .utils import check_dts_python_version
dts_logger: DTSLOG = getLogger("DTSRunner")
@@ -27,28 +27,40 @@ def run_all() -> None:
# check the python version of the server that run dts
check_dts_python_version()
- nodes: dict[str, SutNode] = {}
+ nodes: dict[str, Node] = {}
try:
# for all Execution sections
for execution in CONFIGURATION.executions:
sut_node = None
+ tg_node = None
if execution.system_under_test.name in nodes:
# a Node with the same name already exists
sut_node = nodes[execution.system_under_test.name]
- else:
- # the SUT has not been initialized yet
- try:
+
+ if execution.traffic_generator_system.name in nodes:
+ # a Node with the same name already exists
+ tg_node = nodes[execution.traffic_generator_system.name]
+
+ try:
+ if not sut_node:
sut_node = SutNode(execution.system_under_test)
- result.update_setup(Result.PASS)
- except Exception as e:
- dts_logger.exception(
- f"Connection to node {execution.system_under_test} failed."
- )
- result.update_setup(Result.FAIL, e)
- else:
- nodes[sut_node.name] = sut_node
-
- if sut_node:
+ if not tg_node:
+ tg_node = TGNode(execution.traffic_generator_system)
+ tg_node.verify()
+ result.update_setup(Result.PASS)
+ except Exception as e:
+ failed_node = execution.system_under_test.name
+ if sut_node:
+ failed_node = execution.traffic_generator_system.name
+ dts_logger.exception(
+ f"Creation of node {failed_node} failed."
+ )
+ result.update_setup(Result.FAIL, e)
+ else:
+ nodes[sut_node.name] = sut_node
+ nodes[tg_node.name] = tg_node
+
+ if sut_node and tg_node:
_run_execution(sut_node, execution, result)
except Exception as e:
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index f54a947051..5cbb859e47 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -20,3 +20,4 @@
)
from .node import Node
from .sut_node import SutNode
+from .tg_node import TGNode
--
2.30.2
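The runner change above boils down to a lookup-or-create cache keyed by node name, so a node referenced by several executions is only initialized once. A minimal sketch with a hypothetical factory function:

```python
nodes: dict[str, str] = {}


def get_or_create(name: str, factory) -> str:
    """Reuse an already-initialized node; create and cache it otherwise."""
    if name in nodes:
        # A node with the same name already exists.
        return nodes[name]
    node = factory(name)
    nodes[name] = node
    return node


created: list[str] = []


def make_node(name: str) -> str:
    # Hypothetical stand-in for SutNode()/TGNode() construction.
    created.append(name)
    return f"node:{name}"


# Two executions referencing the same TG node initialize it only once.
get_or_create("TG 1", make_node)
get_or_create("TG 1", make_node)
print(created)  # ['TG 1']
```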
* Re: [RFC PATCH v1 5/5] dts: add traffic generator node to dts runner
2023-04-20 9:31 ` [RFC PATCH v1 5/5] dts: add traffic generator node to dts runner Juraj Linkeš
@ 2023-05-03 18:02 ` Jeremy Spewock
0 siblings, 0 replies; 29+ messages in thread
From: Jeremy Spewock @ 2023-05-03 18:02 UTC (permalink / raw)
To: Juraj Linkeš
Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, wathsala.vithanage, probb, dev
Acked-by: Jeremy Spewock <jspewock@iol.unh.edu>
* [PATCH v2 0/6] dts: tg abstractions and scapy tg
2023-04-20 9:31 [RFC PATCH v1 0/5] dts: add tg abstractions and scapy Juraj Linkeš
` (4 preceding siblings ...)
2023-04-20 9:31 ` [RFC PATCH v1 5/5] dts: add traffic generator node to dts runner Juraj Linkeš
@ 2023-07-17 11:07 ` Juraj Linkeš
2023-07-17 11:07 ` [PATCH v2 1/6] dts: add scapy dependency Juraj Linkeš
` (7 more replies)
5 siblings, 8 replies; 29+ messages in thread
From: Juraj Linkeš @ 2023-07-17 11:07 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, jspewock, probb
Cc: dev, Juraj Linkeš
Add traffic generator abstractions, split into those that can capture
traffic and those that cannot.
The Scapy implementation uses an XML-RPC server for remote control. This
requires an interactive session to add Scapy functions to the server. The
interactive session code is based on another patch [0].
The basic test case is there to showcase the Scapy implementation - it
sends just one UDP packet and verifies it on the other end.
[0]: http://patches.dpdk.org/project/dpdk/patch/20230713165347.21997-3-jspewock@iol.unh.edu/
Juraj Linkeš (6):
dts: add scapy dependency
dts: add traffic generator config
dts: traffic generator abstractions
dts: add python remote interactive shell
dts: scapy traffic generator implementation
dts: add basic UDP test case
doc/guides/tools/dts.rst | 31 ++
dts/conf.yaml | 27 +-
dts/framework/config/__init__.py | 115 ++++---
dts/framework/config/conf_yaml_schema.json | 32 +-
dts/framework/dts.py | 65 ++--
dts/framework/remote_session/__init__.py | 3 +-
dts/framework/remote_session/linux_session.py | 96 ++++++
dts/framework/remote_session/os_session.py | 75 +++--
.../remote_session/remote/__init__.py | 1 +
.../remote/interactive_shell.py | 18 +-
.../remote_session/remote/python_shell.py | 24 ++
.../remote_session/remote/testpmd_shell.py | 33 +-
dts/framework/test_suite.py | 221 ++++++++++++-
dts/framework/testbed_model/__init__.py | 1 +
.../capturing_traffic_generator.py | 135 ++++++++
dts/framework/testbed_model/hw/port.py | 60 ++++
dts/framework/testbed_model/node.py | 64 +++-
dts/framework/testbed_model/scapy.py | 290 ++++++++++++++++++
dts/framework/testbed_model/sut_node.py | 52 ++--
dts/framework/testbed_model/tg_node.py | 99 ++++++
.../testbed_model/traffic_generator.py | 72 +++++
dts/framework/utils.py | 13 +
dts/poetry.lock | 21 +-
dts/pyproject.toml | 1 +
dts/tests/TestSuite_os_udp.py | 45 +++
dts/tests/TestSuite_smoke_tests.py | 6 +-
26 files changed, 1434 insertions(+), 166 deletions(-)
create mode 100644 dts/framework/remote_session/remote/python_shell.py
create mode 100644 dts/framework/testbed_model/capturing_traffic_generator.py
create mode 100644 dts/framework/testbed_model/hw/port.py
create mode 100644 dts/framework/testbed_model/scapy.py
create mode 100644 dts/framework/testbed_model/tg_node.py
create mode 100644 dts/framework/testbed_model/traffic_generator.py
create mode 100644 dts/tests/TestSuite_os_udp.py
--
2.34.1
* [PATCH v2 1/6] dts: add scapy dependency
2023-07-17 11:07 ` [PATCH v2 0/6] dts: tg abstractions and scapy tg Juraj Linkeš
@ 2023-07-17 11:07 ` Juraj Linkeš
2023-07-17 11:07 ` [PATCH v2 2/6] dts: add traffic generator config Juraj Linkeš
` (6 subsequent siblings)
7 siblings, 0 replies; 29+ messages in thread
From: Juraj Linkeš @ 2023-07-17 11:07 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, jspewock, probb
Cc: dev, Juraj Linkeš
Required for scapy traffic generator.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/poetry.lock | 21 ++++++++++++++++-----
dts/pyproject.toml | 1 +
2 files changed, 17 insertions(+), 5 deletions(-)
diff --git a/dts/poetry.lock b/dts/poetry.lock
index 2438f337cd..8cb9920ec7 100644
--- a/dts/poetry.lock
+++ b/dts/poetry.lock
@@ -346,6 +346,19 @@ category = "main"
optional = false
python-versions = ">=3.6"
+[[package]]
+name = "scapy"
+version = "2.5.0"
+description = "Scapy: interactive packet manipulation tool"
+category = "main"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4"
+
+[package.extras]
+basic = ["ipython"]
+complete = ["ipython", "pyx", "cryptography (>=2.0)", "matplotlib"]
+docs = ["sphinx (>=3.0.0)", "sphinx_rtd_theme (>=0.4.3)", "tox (>=3.0.0)"]
+
[[package]]
name = "six"
version = "1.16.0"
@@ -409,7 +422,7 @@ jsonschema = ">=4,<5"
[metadata]
lock-version = "1.1"
python-versions = "^3.10"
-content-hash = "719c43bcaa5d181921debda884f8f714063df0b2336d61e9f64ecab034e8b139"
+content-hash = "907bf4ae92b05bbdb7cf2f37fc63e530702f1fff9990afa1f8e6c369b97ba592"
[metadata.files]
attrs = []
@@ -431,10 +444,7 @@ mypy-extensions = []
paramiko = []
pathlib2 = []
pathspec = []
-platformdirs = [
- {file = "platformdirs-2.5.2-py3-none-any.whl", hash = "sha256:027d8e83a2d7de06bbac4e5ef7e023c02b863d7ea5d079477e722bb41ab25788"},
- {file = "platformdirs-2.5.2.tar.gz", hash = "sha256:58c8abb07dcb441e6ee4b11d8df0ac856038f944ab98b7be6b27b2a3c7feef19"},
-]
+platformdirs = []
pycodestyle = []
pycparser = []
pydocstyle = []
@@ -443,6 +453,7 @@ pylama = []
pynacl = []
pyrsistent = []
pyyaml = []
+scapy = []
six = []
snowballstemmer = []
toml = []
diff --git a/dts/pyproject.toml b/dts/pyproject.toml
index 50bcdb327a..bd7591f7fb 100644
--- a/dts/pyproject.toml
+++ b/dts/pyproject.toml
@@ -13,6 +13,7 @@ warlock = "^2.0.1"
PyYAML = "^6.0"
types-PyYAML = "^6.0.8"
fabric = "^2.7.1"
+scapy = "^2.5.0"
[tool.poetry.dev-dependencies]
mypy = "^0.961"
--
2.34.1
* [PATCH v2 2/6] dts: add traffic generator config
2023-07-17 11:07 ` [PATCH v2 0/6] dts: tg abstractions and scapy tg Juraj Linkeš
2023-07-17 11:07 ` [PATCH v2 1/6] dts: add scapy dependency Juraj Linkeš
@ 2023-07-17 11:07 ` Juraj Linkeš
2023-07-18 15:55 ` Jeremy Spewock
2023-07-17 11:07 ` [PATCH v2 3/6] dts: traffic generator abstractions Juraj Linkeš
` (5 subsequent siblings)
7 siblings, 1 reply; 29+ messages in thread
From: Juraj Linkeš @ 2023-07-17 11:07 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, jspewock, probb
Cc: dev, Juraj Linkeš
Node configuration - where to connect, what ports to use and what TG to
use.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/conf.yaml | 26 ++++++-
dts/framework/config/__init__.py | 87 +++++++++++++++++++---
dts/framework/config/conf_yaml_schema.json | 29 +++++++-
dts/framework/dts.py | 12 +--
dts/framework/testbed_model/node.py | 4 +-
dts/framework/testbed_model/sut_node.py | 6 +-
6 files changed, 141 insertions(+), 23 deletions(-)
diff --git a/dts/conf.yaml b/dts/conf.yaml
index 3a5d87cb49..7f089022ba 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -13,10 +13,11 @@ executions:
skip_smoke_tests: false # optional flag that allow you to skip smoke tests
test_suites:
- hello_world
- system_under_test:
+ system_under_test_node:
node_name: "SUT 1"
vdevs: # optional; if removed, vdevs won't be used in the execution
- "crypto_openssl"
+ traffic_generator_node: "TG 1"
nodes:
- name: "SUT 1"
hostname: sut1.change.me.localhost
@@ -40,3 +41,26 @@ nodes:
os_driver: i40e
peer_node: "TG 1"
peer_pci: "0000:00:08.1"
+ - name: "TG 1"
+ hostname: tg1.change.me.localhost
+ user: dtsuser
+ arch: x86_64
+ os: linux
+ lcores: ""
+ ports:
+ - pci: "0000:00:08.0"
+ os_driver_for_dpdk: rdma
+ os_driver: rdma
+ peer_node: "SUT 1"
+ peer_pci: "0000:00:08.0"
+ - pci: "0000:00:08.1"
+ os_driver_for_dpdk: rdma
+ os_driver: rdma
+ peer_node: "SUT 1"
+ peer_pci: "0000:00:08.1"
+ use_first_core: false
+ hugepages: # optional; if removed, will use system hugepage configuration
+ amount: 256
+ force_first_numa: false
+ traffic_generator:
+ type: SCAPY
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index fad56cc520..72aa021b97 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -13,7 +13,7 @@
from dataclasses import dataclass
from enum import Enum, auto, unique
from pathlib import PurePath
-from typing import Any, TypedDict
+from typing import Any, TypedDict, Union
import warlock # type: ignore
import yaml
@@ -55,6 +55,11 @@ class Compiler(StrEnum):
msvc = auto()
+@unique
+class TrafficGeneratorType(StrEnum):
+ SCAPY = auto()
+
+
# Slots enables some optimizations, by pre-allocating space for the defined
# attributes in the underlying data structure.
#
@@ -79,6 +84,29 @@ class PortConfig:
def from_dict(node: str, d: dict) -> "PortConfig":
return PortConfig(node=node, **d)
+ def __str__(self) -> str:
+ return f"Port {self.pci} on node {self.node}"
+
+
+@dataclass(slots=True, frozen=True)
+class TrafficGeneratorConfig:
+ traffic_generator_type: TrafficGeneratorType
+
+ @staticmethod
+ def from_dict(d: dict):
+ # This looks useless now, but is designed to allow expansion to traffic
+ # generators that require more configuration later.
+ match TrafficGeneratorType(d["type"]):
+ case TrafficGeneratorType.SCAPY:
+ return ScapyTrafficGeneratorConfig(
+ traffic_generator_type=TrafficGeneratorType.SCAPY
+ )
+
+
+@dataclass(slots=True, frozen=True)
+class ScapyTrafficGeneratorConfig(TrafficGeneratorConfig):
+ pass
+
@dataclass(slots=True, frozen=True)
class NodeConfiguration:
@@ -90,17 +118,17 @@ class NodeConfiguration:
os: OS
lcores: str
use_first_core: bool
- memory_channels: int
hugepages: HugepageConfiguration | None
ports: list[PortConfig]
@staticmethod
- def from_dict(d: dict) -> "NodeConfiguration":
+ def from_dict(d: dict) -> Union["SutNodeConfiguration", "TGNodeConfiguration"]:
hugepage_config = d.get("hugepages")
if hugepage_config:
if "force_first_numa" not in hugepage_config:
hugepage_config["force_first_numa"] = False
hugepage_config = HugepageConfiguration(**hugepage_config)
+
common_config = {
"name": d["name"],
"hostname": d["hostname"],
@@ -110,12 +138,31 @@ def from_dict(d: dict) -> "NodeConfiguration":
"os": OS(d["os"]),
"lcores": d.get("lcores", "1"),
"use_first_core": d.get("use_first_core", False),
- "memory_channels": d.get("memory_channels", 1),
"hugepages": hugepage_config,
"ports": [PortConfig.from_dict(d["name"], port) for port in d["ports"]],
}
- return NodeConfiguration(**common_config)
+ if "traffic_generator" in d:
+ return TGNodeConfiguration(
+ traffic_generator=TrafficGeneratorConfig.from_dict(
+ d["traffic_generator"]
+ ),
+ **common_config,
+ )
+ else:
+ return SutNodeConfiguration(
+ memory_channels=d.get("memory_channels", 1), **common_config
+ )
+
+
+@dataclass(slots=True, frozen=True)
+class SutNodeConfiguration(NodeConfiguration):
+ memory_channels: int
+
+
+@dataclass(slots=True, frozen=True)
+class TGNodeConfiguration(NodeConfiguration):
+ traffic_generator: ScapyTrafficGeneratorConfig
@dataclass(slots=True, frozen=True)
@@ -194,23 +241,40 @@ class ExecutionConfiguration:
perf: bool
func: bool
test_suites: list[TestSuiteConfig]
- system_under_test: NodeConfiguration
+ system_under_test_node: SutNodeConfiguration
+ traffic_generator_node: TGNodeConfiguration
vdevs: list[str]
skip_smoke_tests: bool
@staticmethod
- def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
+ def from_dict(
+ d: dict, node_map: dict[str, Union[SutNodeConfiguration | TGNodeConfiguration]]
+ ) -> "ExecutionConfiguration":
build_targets: list[BuildTargetConfiguration] = list(
map(BuildTargetConfiguration.from_dict, d["build_targets"])
)
test_suites: list[TestSuiteConfig] = list(
map(TestSuiteConfig.from_dict, d["test_suites"])
)
- sut_name = d["system_under_test"]["node_name"]
+ sut_name = d["system_under_test_node"]["node_name"]
skip_smoke_tests = d.get("skip_smoke_tests", False)
assert sut_name in node_map, f"Unknown SUT {sut_name} in execution {d}"
+ system_under_test_node = node_map[sut_name]
+ assert isinstance(
+ system_under_test_node, SutNodeConfiguration
+ ), f"Invalid SUT configuration {system_under_test_node}"
+
+ tg_name = d["traffic_generator_node"]
+ assert tg_name in node_map, f"Unknown TG {tg_name} in execution {d}"
+ traffic_generator_node = node_map[tg_name]
+ assert isinstance(
+ traffic_generator_node, TGNodeConfiguration
+ ), f"Invalid TG configuration {traffic_generator_node}"
+
vdevs = (
- d["system_under_test"]["vdevs"] if "vdevs" in d["system_under_test"] else []
+ d["system_under_test_node"]["vdevs"]
+ if "vdevs" in d["system_under_test_node"]
+ else []
)
return ExecutionConfiguration(
build_targets=build_targets,
@@ -218,7 +282,8 @@ def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
func=d["func"],
skip_smoke_tests=skip_smoke_tests,
test_suites=test_suites,
- system_under_test=node_map[sut_name],
+ system_under_test_node=system_under_test_node,
+ traffic_generator_node=traffic_generator_node,
vdevs=vdevs,
)
@@ -229,7 +294,7 @@ class Configuration:
@staticmethod
def from_dict(d: dict) -> "Configuration":
- nodes: list[NodeConfiguration] = list(
+ nodes: list[Union[SutNodeConfiguration | TGNodeConfiguration]] = list(
map(NodeConfiguration.from_dict, d["nodes"])
)
assert len(nodes) > 0, "There must be a node to test"
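The `from_dict` dispatch above can be sketched standalone: a node dictionary containing a `traffic_generator` key becomes a TG configuration, anything else becomes a SUT configuration. The class names mirror the patch, but the fields are trimmed for illustration:

```python
from dataclasses import dataclass


# Minimal stand-ins for the config classes in dts/framework/config/__init__.py;
# fields are trimmed to the ones needed to show the dispatch.
@dataclass(frozen=True)
class NodeConfiguration:
    name: str
    hostname: str


@dataclass(frozen=True)
class SutNodeConfiguration(NodeConfiguration):
    memory_channels: int


@dataclass(frozen=True)
class TGNodeConfiguration(NodeConfiguration):
    traffic_generator_type: str


def node_config_from_dict(d: dict) -> NodeConfiguration:
    """Return a SUT or TG configuration based on the keys present."""
    common = {"name": d["name"], "hostname": d["hostname"]}
    if "traffic_generator" in d:
        return TGNodeConfiguration(
            traffic_generator_type=d["traffic_generator"]["type"], **common
        )
    return SutNodeConfiguration(memory_channels=d.get("memory_channels", 1), **common)


tg = node_config_from_dict(
    {"name": "tg1", "hostname": "tg.example.com", "traffic_generator": {"type": "SCAPY"}}
)
sut = node_config_from_dict({"name": "sut1", "hostname": "sut.example.com"})
assert isinstance(tg, TGNodeConfiguration)
assert isinstance(sut, SutNodeConfiguration) and sut.memory_channels == 1
```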
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 61f52b4365..76df84840a 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -164,6 +164,11 @@
"amount"
]
},
+ "mac_address": {
+ "type": "string",
+ "description": "A MAC address",
+ "pattern": "^([0-9A-Fa-f]{2}[:-]){5}([0-9A-Fa-f]{2})$"
+ },
"pci_address": {
"type": "string",
"pattern": "^[\\da-fA-F]{4}:[\\da-fA-F]{2}:[\\da-fA-F]{2}.\\d:?\\w*$"
@@ -286,6 +291,22 @@
]
},
"minimum": 1
+ },
+ "traffic_generator": {
+ "oneOf": [
+ {
+ "type": "object",
+ "description": "Scapy traffic generator. Used for functional testing.",
+ "properties": {
+ "type": {
+ "type": "string",
+ "enum": [
+ "SCAPY"
+ ]
+ }
+ }
+ }
+ ]
}
},
"additionalProperties": false,
@@ -336,7 +357,7 @@
"description": "Optional field that allows you to skip smoke testing",
"type": "boolean"
},
- "system_under_test": {
+ "system_under_test_node": {
"type":"object",
"properties": {
"node_name": {
@@ -353,6 +374,9 @@
"required": [
"node_name"
]
+ },
+ "traffic_generator_node": {
+ "$ref": "#/definitions/node_name"
}
},
"additionalProperties": false,
@@ -361,7 +385,8 @@
"perf",
"func",
"test_suites",
- "system_under_test"
+ "system_under_test_node",
+ "traffic_generator_node"
]
},
"minimum": 1
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index 7b09d8fba8..372bc72787 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -38,17 +38,17 @@ def run_all() -> None:
# for all Execution sections
for execution in CONFIGURATION.executions:
sut_node = None
- if execution.system_under_test.name in nodes:
+ if execution.system_under_test_node.name in nodes:
# a Node with the same name already exists
- sut_node = nodes[execution.system_under_test.name]
+ sut_node = nodes[execution.system_under_test_node.name]
else:
# the SUT has not been initialized yet
try:
- sut_node = SutNode(execution.system_under_test)
+ sut_node = SutNode(execution.system_under_test_node)
result.update_setup(Result.PASS)
except Exception as e:
dts_logger.exception(
- f"Connection to node {execution.system_under_test} failed."
+ f"Connection to node {execution.system_under_test_node} failed."
)
result.update_setup(Result.FAIL, e)
else:
@@ -87,7 +87,9 @@ def _run_execution(
Run the given execution. This involves running the execution setup as well as
running all build targets in the given execution.
"""
- dts_logger.info(f"Running execution with SUT '{execution.system_under_test.name}'.")
+ dts_logger.info(
+ f"Running execution with SUT '{execution.system_under_test_node.name}'."
+ )
execution_result = result.add_execution(sut_node.config, sut_node.node_info)
try:
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index c5147e0ee6..d2d55d904e 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -48,6 +48,8 @@ def __init__(self, node_config: NodeConfiguration):
self._logger = getLogger(self.name)
self.main_session = create_session(self.config, self.name, self._logger)
+ self._logger.info(f"Connected to node: {self.name}")
+
self._get_remote_cpus()
# filter the node lcores according to user config
self.lcores = LogicalCoreListFilter(
@@ -56,8 +58,6 @@ def __init__(self, node_config: NodeConfiguration):
self._other_sessions = []
- self._logger.info(f"Created node: {self.name}")
-
def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
"""
Perform the execution setup that will be done for each execution
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index 53953718a1..bcad364435 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -13,8 +13,8 @@
BuildTargetConfiguration,
BuildTargetInfo,
InteractiveApp,
- NodeConfiguration,
NodeInfo,
+ SutNodeConfiguration,
)
from framework.remote_session import (
CommandResult,
@@ -83,6 +83,7 @@ class SutNode(Node):
Another key capability is building DPDK according to given build target.
"""
+ config: SutNodeConfiguration
_dpdk_prefix_list: list[str]
_dpdk_timestamp: str
_build_target_config: BuildTargetConfiguration | None
@@ -95,7 +96,7 @@ class SutNode(Node):
_node_info: NodeInfo | None
_compiler_version: str | None
- def __init__(self, node_config: NodeConfiguration):
+ def __init__(self, node_config: SutNodeConfiguration):
super(SutNode, self).__init__(node_config)
self._dpdk_prefix_list = []
self._build_target_config = None
@@ -110,6 +111,7 @@ def __init__(self, node_config: NodeConfiguration):
self._dpdk_version = None
self._node_info = None
self._compiler_version = None
+ self._logger.info(f"Created node: {self.name}")
@property
def _remote_dpdk_dir(self) -> PurePath:
--
2.34.1
* [PATCH v2 3/6] dts: traffic generator abstractions
2023-07-17 11:07 ` [PATCH v2 0/6] dts: tg abstractions and scapy tg Juraj Linkeš
2023-07-17 11:07 ` [PATCH v2 1/6] dts: add scapy dependency Juraj Linkeš
2023-07-17 11:07 ` [PATCH v2 2/6] dts: add traffic generator config Juraj Linkeš
@ 2023-07-17 11:07 ` Juraj Linkeš
2023-07-18 19:56 ` Jeremy Spewock
2023-07-17 11:07 ` [PATCH v2 4/6] dts: add python remote interactive shell Juraj Linkeš
` (4 subsequent siblings)
7 siblings, 1 reply; 29+ messages in thread
From: Juraj Linkeš @ 2023-07-17 11:07 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, jspewock, probb
Cc: dev, Juraj Linkeš
This patch adds abstractions common to all traffic generators as well as
abstractions for traffic generators that can capture (not just count)
individual packets.
It also adds related abstractions, such as TGNode, the node where the
traffic generators reside, along with some related code.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
doc/guides/tools/dts.rst | 31 ++++
dts/framework/dts.py | 61 ++++----
dts/framework/remote_session/linux_session.py | 78 ++++++++++
dts/framework/remote_session/os_session.py | 15 ++
dts/framework/test_suite.py | 4 +-
dts/framework/testbed_model/__init__.py | 1 +
.../capturing_traffic_generator.py | 135 ++++++++++++++++++
dts/framework/testbed_model/hw/port.py | 60 ++++++++
dts/framework/testbed_model/node.py | 15 ++
dts/framework/testbed_model/scapy.py | 74 ++++++++++
dts/framework/testbed_model/tg_node.py | 99 +++++++++++++
.../testbed_model/traffic_generator.py | 72 ++++++++++
dts/framework/utils.py | 13 ++
13 files changed, 632 insertions(+), 26 deletions(-)
create mode 100644 dts/framework/testbed_model/capturing_traffic_generator.py
create mode 100644 dts/framework/testbed_model/hw/port.py
create mode 100644 dts/framework/testbed_model/scapy.py
create mode 100644 dts/framework/testbed_model/tg_node.py
create mode 100644 dts/framework/testbed_model/traffic_generator.py
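The abstraction chain the diffstat hints at (TrafficGenerator → CapturingTrafficGenerator → ScapyTrafficGenerator) can be sketched standalone; the class names mirror the patch, but the bodies are trimmed placeholders:

```python
from abc import ABC, abstractmethod


class TrafficGenerator(ABC):
    """Base interface: send traffic, report whether capturing is supported."""

    @property
    def is_capturing(self) -> bool:
        return False

    @abstractmethod
    def send_packets(self, packets: list) -> None:
        """Send packets without waiting for any response."""


class CapturingTrafficGenerator(TrafficGenerator):
    """Mixin interface for generators that can return captured packets."""

    @property
    def is_capturing(self) -> bool:
        return True


class ScapyTrafficGenerator(CapturingTrafficGenerator):
    """Concrete generator; the patch implements this with Scapy over XML-RPC."""

    def send_packets(self, packets: list) -> None:
        pass  # placeholder for the real Scapy-backed implementation


assert ScapyTrafficGenerator().is_capturing
```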
diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
index c7b31623e4..2f97d1df6e 100644
--- a/doc/guides/tools/dts.rst
+++ b/doc/guides/tools/dts.rst
@@ -153,6 +153,37 @@ There are two areas that need to be set up on a System Under Test:
sudo usermod -aG sudo <sut_user>
+
+Setting up Traffic Generator Node
+---------------------------------
+
+The following need to be set up on a Traffic Generator Node:
+
+#. **Traffic generator dependencies**
+
+ The traffic generator running on the traffic generator node must be installed beforehand.
+ For the Scapy traffic generator, only a few Python libraries need to be installed:
+
+ .. code-block:: console
+
+ sudo apt install python3-pip
+ sudo pip install --upgrade pip
+ sudo pip install scapy==2.5.0
+
+#. **Hardware dependencies**
+
+ The traffic generators, like DPDK, need a proper driver and firmware.
+ The Scapy traffic generator doesn't have strict requirements - the drivers that come
+ with most OS distributions will be satisfactory.
+
+
+#. **User with administrator privileges**
+
+ Similarly to the System Under Test, traffic generators need administrator privileges
+ to be able to use the devices.
+ Refer to the :ref:`System Under Test section <sut_admin_user>` for details.
+
+
Running DTS
-----------
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index 372bc72787..265ed7fd5b 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -15,7 +15,7 @@
from .logger import DTSLOG, getLogger
from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result
from .test_suite import get_test_suites
-from .testbed_model import SutNode
+from .testbed_model import SutNode, TGNode
from .utils import check_dts_python_version
dts_logger: DTSLOG = getLogger("DTSRunner")
@@ -33,29 +33,31 @@ def run_all() -> None:
# check the python version of the server that run dts
check_dts_python_version()
- nodes: dict[str, SutNode] = {}
+ sut_nodes: dict[str, SutNode] = {}
+ tg_nodes: dict[str, TGNode] = {}
try:
# for all Execution sections
for execution in CONFIGURATION.executions:
- sut_node = None
- if execution.system_under_test_node.name in nodes:
- # a Node with the same name already exists
- sut_node = nodes[execution.system_under_test_node.name]
- else:
- # the SUT has not been initialized yet
- try:
+ sut_node = sut_nodes.get(execution.system_under_test_node.name)
+ tg_node = tg_nodes.get(execution.traffic_generator_node.name)
+
+ try:
+ if not sut_node:
sut_node = SutNode(execution.system_under_test_node)
- result.update_setup(Result.PASS)
- except Exception as e:
- dts_logger.exception(
- f"Connection to node {execution.system_under_test_node} failed."
- )
- result.update_setup(Result.FAIL, e)
- else:
- nodes[sut_node.name] = sut_node
-
- if sut_node:
- _run_execution(sut_node, execution, result)
+ sut_nodes[sut_node.name] = sut_node
+ if not tg_node:
+ tg_node = TGNode(execution.traffic_generator_node)
+ tg_nodes[tg_node.name] = tg_node
+ result.update_setup(Result.PASS)
+ except Exception as e:
+ failed_node = execution.system_under_test_node.name
+ if sut_node:
+ failed_node = execution.traffic_generator_node.name
+ dts_logger.exception(f"Creation of node {failed_node} failed.")
+ result.update_setup(Result.FAIL, e)
+
+ else:
+ _run_execution(sut_node, tg_node, execution, result)
except Exception as e:
dts_logger.exception("An unexpected error has occurred.")
@@ -64,7 +66,7 @@ def run_all() -> None:
finally:
try:
- for node in nodes.values():
+ for node in (sut_nodes | tg_nodes).values():
node.close()
result.update_teardown(Result.PASS)
except Exception as e:
@@ -81,7 +83,10 @@ def run_all() -> None:
def _run_execution(
- sut_node: SutNode, execution: ExecutionConfiguration, result: DTSResult
+ sut_node: SutNode,
+ tg_node: TGNode,
+ execution: ExecutionConfiguration,
+ result: DTSResult,
) -> None:
"""
Run the given execution. This involves running the execution setup as well as
@@ -101,7 +106,9 @@ def _run_execution(
else:
for build_target in execution.build_targets:
- _run_build_target(sut_node, build_target, execution, execution_result)
+ _run_build_target(
+ sut_node, tg_node, build_target, execution, execution_result
+ )
finally:
try:
@@ -114,6 +121,7 @@ def _run_execution(
def _run_build_target(
sut_node: SutNode,
+ tg_node: TGNode,
build_target: BuildTargetConfiguration,
execution: ExecutionConfiguration,
execution_result: ExecutionResult,
@@ -134,7 +142,7 @@ def _run_build_target(
build_target_result.update_setup(Result.FAIL, e)
else:
- _run_all_suites(sut_node, execution, build_target_result)
+ _run_all_suites(sut_node, tg_node, execution, build_target_result)
finally:
try:
@@ -147,6 +155,7 @@ def _run_build_target(
def _run_all_suites(
sut_node: SutNode,
+ tg_node: TGNode,
execution: ExecutionConfiguration,
build_target_result: BuildTargetResult,
) -> None:
@@ -161,7 +170,7 @@ def _run_all_suites(
for test_suite_config in execution.test_suites:
try:
_run_single_suite(
- sut_node, execution, build_target_result, test_suite_config
+ sut_node, tg_node, execution, build_target_result, test_suite_config
)
except BlockingTestSuiteError as e:
dts_logger.exception(
@@ -177,6 +186,7 @@ def _run_all_suites(
def _run_single_suite(
sut_node: SutNode,
+ tg_node: TGNode,
execution: ExecutionConfiguration,
build_target_result: BuildTargetResult,
test_suite_config: TestSuiteConfig,
@@ -205,6 +215,7 @@ def _run_single_suite(
for test_suite_class in test_suite_classes:
test_suite = test_suite_class(
sut_node,
+ tg_node,
test_suite_config.test_cases,
execution.func,
build_target_result,
diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/remote_session/linux_session.py
index f13f399121..284c74795d 100644
--- a/dts/framework/remote_session/linux_session.py
+++ b/dts/framework/remote_session/linux_session.py
@@ -2,13 +2,47 @@
# Copyright(c) 2023 PANTHEON.tech s.r.o.
# Copyright(c) 2023 University of New Hampshire
+import json
+from typing import TypedDict
+
+from typing_extensions import NotRequired
+
from framework.exception import RemoteCommandExecutionError
from framework.testbed_model import LogicalCore
+from framework.testbed_model.hw.port import Port
from framework.utils import expand_range
from .posix_session import PosixSession
+class LshwConfigurationOutput(TypedDict):
+ link: str
+
+
+class LshwOutput(TypedDict):
+ """
+ A model of the relevant information from the JSON lshw output, e.g.:
+ {
+ ...
+ "businfo" : "pci@0000:08:00.0",
+ "logicalname" : "enp8s0",
+ "version" : "00",
+ "serial" : "52:54:00:59:e1:ac",
+ ...
+ "configuration" : {
+ ...
+ "link" : "yes",
+ ...
+ },
+ ...
+ """
+
+ businfo: str
+ logicalname: NotRequired[str]
+ serial: NotRequired[str]
+ configuration: LshwConfigurationOutput
+
+
class LinuxSession(PosixSession):
"""
The implementation of non-Posix compliant parts of Linux remote sessions.
@@ -102,3 +136,47 @@ def _configure_huge_pages(
self.send_command(
f"echo {amount} | tee {hugepage_config_path}", privileged=True
)
+
+ def update_ports(self, ports: list[Port]) -> None:
+ self._logger.debug("Gathering port info.")
+ for port in ports:
+ assert (
+ port.node == self.name
+ ), "Attempted to gather port info on the wrong node"
+
+ port_info_list = self._get_lshw_info()
+ for port in ports:
+ for port_info in port_info_list:
+ if f"pci@{port.pci}" == port_info.get("businfo"):
+ self._update_port_attr(
+ port, port_info.get("logicalname"), "logical_name"
+ )
+ self._update_port_attr(port, port_info.get("serial"), "mac_address")
+ port_info_list.remove(port_info)
+ break
+ else:
+ self._logger.warning(f"No port at pci address {port.pci} found.")
+
+ def _get_lshw_info(self) -> list[LshwOutput]:
+ output = self.send_command("lshw -quiet -json -C network", verify=True)
+ return json.loads(output.stdout)
+
+ def _update_port_attr(
+ self, port: Port, attr_value: str | None, attr_name: str
+ ) -> None:
+ if attr_value:
+ setattr(port, attr_name, attr_value)
+ self._logger.debug(
+ f"Found '{attr_name}' of port {port.pci}: '{attr_value}'."
+ )
+ else:
+ self._logger.warning(
+ f"Attempted to get '{attr_name}' of port {port.pci}, "
+ f"but it doesn't exist."
+ )
+
+ def configure_port_state(self, port: Port, enable: bool) -> None:
+ state = "up" if enable else "down"
+ self.send_command(
+ f"ip link set dev {port.logical_name} {state}", privileged=True
+ )
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
index cc13b02f16..633d06eb5d 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/remote_session/os_session.py
@@ -12,6 +12,7 @@
from framework.remote_session.remote import InteractiveShell, TestPmdShell
from framework.settings import SETTINGS
from framework.testbed_model import LogicalCore
+from framework.testbed_model.hw.port import Port
from framework.utils import MesonArgs
from .remote import (
@@ -255,3 +256,17 @@ def get_node_info(self) -> NodeInfo:
"""
Collect information about the node
"""
+
+ @abstractmethod
+ def update_ports(self, ports: list[Port]) -> None:
+ """
+ Get additional information about ports:
+ Logical name (e.g. enp7s0) if applicable
+ Mac address
+ """
+
+ @abstractmethod
+ def configure_port_state(self, port: Port, enable: bool) -> None:
+ """
+ Enable/disable port.
+ """
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index de94c9332d..056460dd05 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -20,7 +20,7 @@
from .logger import DTSLOG, getLogger
from .settings import SETTINGS
from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult
-from .testbed_model import SutNode
+from .testbed_model import SutNode, TGNode
class TestSuite(object):
@@ -51,11 +51,13 @@ class TestSuite(object):
def __init__(
self,
sut_node: SutNode,
+ tg_node: TGNode,
test_cases: list[str],
func: bool,
build_target_result: BuildTargetResult,
):
self.sut_node = sut_node
+ self.tg_node = tg_node
self._logger = getLogger(self.__class__.__name__)
self._test_cases_to_run = test_cases
self._test_cases_to_run.extend(SETTINGS.test_cases)
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index f54a947051..5cbb859e47 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -20,3 +20,4 @@
)
from .node import Node
from .sut_node import SutNode
+from .tg_node import TGNode
diff --git a/dts/framework/testbed_model/capturing_traffic_generator.py b/dts/framework/testbed_model/capturing_traffic_generator.py
new file mode 100644
index 0000000000..1130d87f1e
--- /dev/null
+++ b/dts/framework/testbed_model/capturing_traffic_generator.py
@@ -0,0 +1,135 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+"""Traffic generator that can capture packets.
+
+In functional testing, we need to interrogate received packets to check their validity.
+Here we define the interface common to all
+traffic generators capable of capturing traffic.
+"""
+
+import uuid
+from abc import abstractmethod
+
+import scapy.utils # type: ignore[import]
+from scapy.packet import Packet # type: ignore[import]
+
+from framework.settings import SETTINGS
+from framework.utils import get_packet_summaries
+
+from .hw.port import Port
+from .traffic_generator import TrafficGenerator
+
+
+def _get_default_capture_name() -> str:
+ """
+ This is the function used for the default implementation of capture names.
+ """
+ return str(uuid.uuid4())
+
+
+class CapturingTrafficGenerator(TrafficGenerator):
+ """
+ A mixin interface which enables a packet generator to declare that it can capture
+ packets and return them to the user.
+
+ The methods of capturing traffic generators obey the following workflow:
+ 1. send packets
+ 2. capture packets
+ 3. write the capture to a .pcap file
+ 4. return the received packets
+ """
+
+ @property
+ def is_capturing(self) -> bool:
+ return True
+
+ def send_packet_and_capture(
+ self,
+ packet: Packet,
+ send_port: Port,
+ receive_port: Port,
+ duration: float,
+ capture_name: str = _get_default_capture_name(),
+ ) -> list[Packet]:
+ """Send a packet, return received traffic.
+
+ Send a packet on the send_port and then return all traffic captured
+ on the receive_port for the given duration. Also record the captured traffic
+ in a pcap file.
+
+ Args:
+ packet: The packet to send.
+ send_port: The egress port on the TG node.
+ receive_port: The ingress port on the TG node.
+ duration: Capture traffic for this amount of time after sending the packet.
+ capture_name: The name of the .pcap file where to store the capture.
+
+ Returns:
+ A list of received packets. May be empty if no packets are captured.
+ """
+ return self.send_packets_and_capture(
+ [packet], send_port, receive_port, duration, capture_name
+ )
+
+ def send_packets_and_capture(
+ self,
+ packets: list[Packet],
+ send_port: Port,
+ receive_port: Port,
+ duration: float,
+ capture_name: str = _get_default_capture_name(),
+ ) -> list[Packet]:
+ """Send packets, return received traffic.
+
+ Send packets on the send_port and then return all traffic captured
+ on the receive_port for the given duration. Also record the captured traffic
+ in a pcap file.
+
+ Args:
+ packets: The packets to send.
+ send_port: The egress port on the TG node.
+ receive_port: The ingress port on the TG node.
+ duration: Capture traffic for this amount of time after sending the packets.
+ capture_name: The name of the .pcap file where to store the capture.
+
+ Returns:
+ A list of received packets. May be empty if no packets are captured.
+ """
+ self._logger.debug(get_packet_summaries(packets))
+ self._logger.debug(
+ f"Sending packet on {send_port.logical_name}, "
+ f"receiving on {receive_port.logical_name}."
+ )
+ received_packets = self._send_packets_and_capture(
+ packets,
+ send_port,
+ receive_port,
+ duration,
+ )
+
+ self._logger.debug(
+ f"Received packets: {get_packet_summaries(received_packets)}"
+ )
+ self._write_capture_from_packets(capture_name, received_packets)
+ return received_packets
+
+ @abstractmethod
+ def _send_packets_and_capture(
+ self,
+ packets: list[Packet],
+ send_port: Port,
+ receive_port: Port,
+ duration: float,
+ ) -> list[Packet]:
+ """
+ Subclasses must implement this method, which sends packets on the
+ send_port and receives packets on the receive_port for the specified
+ duration. It must handle the case where no packets are received.
+ """
+
+ def _write_capture_from_packets(self, capture_name: str, packets: list[Packet]):
+ file_name = f"{SETTINGS.output_dir}/{capture_name}.pcap"
+ self._logger.debug(f"Writing packets to {file_name}.")
+ scapy.utils.wrpcap(file_name, packets)
diff --git a/dts/framework/testbed_model/hw/port.py b/dts/framework/testbed_model/hw/port.py
new file mode 100644
index 0000000000..680c29bfe3
--- /dev/null
+++ b/dts/framework/testbed_model/hw/port.py
@@ -0,0 +1,60 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+from dataclasses import dataclass
+
+from framework.config import PortConfig
+
+
+@dataclass(slots=True, frozen=True)
+class PortIdentifier:
+ node: str
+ pci: str
+
+
+@dataclass(slots=True)
+class Port:
+ """
+ identifier: The PCI address of the port on a node.
+
+ os_driver: The driver used by this port when the OS is controlling it.
+ Example: i40e
+ os_driver_for_dpdk: The driver the device must be bound to for DPDK to use it.
+ Example: vfio-pci
+
+ Note: os_driver and os_driver_for_dpdk may be the same thing.
+ Example: mlx5_core
+
+ peer: The identifier of a port this port is connected with.
+ """
+
+ identifier: PortIdentifier
+ os_driver: str
+ os_driver_for_dpdk: str
+ peer: PortIdentifier
+ mac_address: str = ""
+ logical_name: str = ""
+
+ def __init__(self, node_name: str, config: PortConfig):
+ self.identifier = PortIdentifier(
+ node=node_name,
+ pci=config.pci,
+ )
+ self.os_driver = config.os_driver
+ self.os_driver_for_dpdk = config.os_driver_for_dpdk
+ self.peer = PortIdentifier(node=config.peer_node, pci=config.peer_pci)
+
+ @property
+ def node(self) -> str:
+ return self.identifier.node
+
+ @property
+ def pci(self) -> str:
+ return self.identifier.pci
+
+
+@dataclass(slots=True, frozen=True)
+class PortLink:
+ sut_port: Port
+ tg_port: Port
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index d2d55d904e..e09931cedf 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -25,6 +25,7 @@
LogicalCoreListFilter,
lcore_filter,
)
+from .hw.port import Port
class Node(object):
@@ -38,6 +39,7 @@ class Node(object):
config: NodeConfiguration
name: str
lcores: list[LogicalCore]
+ ports: list[Port]
_logger: DTSLOG
_other_sessions: list[OSSession]
_execution_config: ExecutionConfiguration
@@ -57,6 +59,13 @@ def __init__(self, node_config: NodeConfiguration):
).filter()
self._other_sessions = []
+ self._init_ports()
+
+ def _init_ports(self) -> None:
+ self.ports = [Port(self.name, port_config) for port_config in self.config.ports]
+ self.main_session.update_ports(self.ports)
+ for port in self.ports:
+ self.configure_port_state(port)
def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
"""
@@ -168,6 +177,12 @@ def _setup_hugepages(self):
self.config.hugepages.amount, self.config.hugepages.force_first_numa
)
+ def configure_port_state(self, port: Port, enable: bool = True) -> None:
+ """
+ Enable/disable port.
+ """
+ self.main_session.configure_port_state(port, enable)
+
def close(self) -> None:
"""
Close all connections and free other resources.
diff --git a/dts/framework/testbed_model/scapy.py b/dts/framework/testbed_model/scapy.py
new file mode 100644
index 0000000000..1a23dc9fa3
--- /dev/null
+++ b/dts/framework/testbed_model/scapy.py
@@ -0,0 +1,74 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+"""Scapy traffic generator.
+
+Traffic generator used for functional testing, implemented using the Scapy library.
+The traffic generator uses an XML-RPC server to run Scapy on the remote TG node.
+
+The XML-RPC server runs in an interactive remote SSH session running a Python console,
+where we start the server. The communication with the server is facilitated with
+a local server proxy.
+"""
+
+from scapy.packet import Packet # type: ignore[import]
+
+from framework.config import OS, ScapyTrafficGeneratorConfig
+from framework.logger import getLogger
+
+from .capturing_traffic_generator import (
+ CapturingTrafficGenerator,
+ _get_default_capture_name,
+)
+from .hw.port import Port
+from .tg_node import TGNode
+
+
+class ScapyTrafficGenerator(CapturingTrafficGenerator):
+ """Provides access to scapy functions via an RPC interface.
+
+ The traffic generator first starts an XML-RPC server on the remote TG node.
+ Then it populates the server with functions which use the Scapy library
+ to send/receive traffic.
+
+ Any packets sent to the remote server are first converted to bytes.
+ They are received as xmlrpc.client.Binary objects on the server side.
+ When the server sends the packets back, they are also received as
+ xmlrpc.client.Binary objects on the client side, converted back to Scapy
+ packets and only then returned from the methods.
+
+ Arguments:
+ tg_node: The node where the traffic generator resides.
+ config: The user configuration of the traffic generator.
+ """
+
+ _config: ScapyTrafficGeneratorConfig
+ _tg_node: TGNode
+
+ def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig):
+ self._config = config
+ self._tg_node = tg_node
+ self._logger = getLogger(
+ f"{self._tg_node.name} {self._config.traffic_generator_type}"
+ )
+
+ assert (
+ self._tg_node.config.os == OS.linux
+ ), "Linux is the only supported OS for scapy traffic generation"
+
+ def _send_packets(self, packets: list[Packet], port: Port) -> None:
+ raise NotImplementedError()
+
+ def _send_packets_and_capture(
+ self,
+ packets: list[Packet],
+ send_port: Port,
+ receive_port: Port,
+ duration: float,
+ capture_name: str = _get_default_capture_name(),
+ ) -> list[Packet]:
+ raise NotImplementedError()
+
+ def close(self):
+ pass
diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
new file mode 100644
index 0000000000..27025cfa31
--- /dev/null
+++ b/dts/framework/testbed_model/tg_node.py
@@ -0,0 +1,99 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+"""Traffic generator node.
+
+This is the node where the traffic generator resides.
+The distinction between a node and a traffic generator is as follows:
+A node is a host that DTS connects to. It could be a baremetal server,
+a VM or a container.
+A traffic generator is software running on the node.
+A traffic generator node is a node running a traffic generator.
+A node can be a traffic generator node as well as a system under test node.
+"""
+
+from scapy.packet import Packet # type: ignore[import]
+
+from framework.config import (
+ ScapyTrafficGeneratorConfig,
+ TGNodeConfiguration,
+ TrafficGeneratorType,
+)
+from framework.exception import ConfigurationError
+
+from .capturing_traffic_generator import CapturingTrafficGenerator
+from .hw.port import Port
+from .node import Node
+
+
+class TGNode(Node):
+ """Manage connections to a node with a traffic generator.
+
+ Apart from basic node management capabilities, the Traffic Generator node has
+ specialized methods for handling the traffic generator running on it.
+
+ Arguments:
+ node_config: The user configuration of the traffic generator node.
+
+ Attributes:
+ traffic_generator: The traffic generator running on the node.
+ """
+
+ traffic_generator: CapturingTrafficGenerator
+
+ def __init__(self, node_config: TGNodeConfiguration):
+ super(TGNode, self).__init__(node_config)
+ self.traffic_generator = create_traffic_generator(
+ self, node_config.traffic_generator
+ )
+ self._logger.info(f"Created node: {self.name}")
+
+ def send_packet_and_capture(
+ self,
+ packet: Packet,
+ send_port: Port,
+ receive_port: Port,
+ duration: float = 1,
+ ) -> list[Packet]:
+ """Send a packet, return received traffic.
+
+ Send a packet on the send_port and then return all traffic captured
+ on the receive_port for the given duration. Also record the captured traffic
+ in a pcap file.
+
+ Args:
+ packet: The packet to send.
+ send_port: The egress port on the TG node.
+            receive_port: The ingress port on the TG node.
+ duration: Capture traffic for this amount of time after sending the packet.
+
+ Returns:
+ A list of received packets. May be empty if no packets are captured.
+ """
+ return self.traffic_generator.send_packet_and_capture(
+ packet, send_port, receive_port, duration
+ )
+
+ def close(self) -> None:
+ """Free all resources used by the node"""
+ self.traffic_generator.close()
+ super(TGNode, self).close()
+
+
+def create_traffic_generator(
+ tg_node: TGNode, traffic_generator_config: ScapyTrafficGeneratorConfig
+) -> CapturingTrafficGenerator:
+ """A factory function for creating traffic generator object from user config."""
+
+ from .scapy import ScapyTrafficGenerator
+
+ match traffic_generator_config.traffic_generator_type:
+ case TrafficGeneratorType.SCAPY:
+ return ScapyTrafficGenerator(tg_node, traffic_generator_config)
+ case _:
+ raise ConfigurationError(
+ "Unknown traffic generator: "
+ f"{traffic_generator_config.traffic_generator_type}"
+ )
diff --git a/dts/framework/testbed_model/traffic_generator.py b/dts/framework/testbed_model/traffic_generator.py
new file mode 100644
index 0000000000..28c35d3ce4
--- /dev/null
+++ b/dts/framework/testbed_model/traffic_generator.py
@@ -0,0 +1,72 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+"""The base traffic generator.
+
+These traffic generators can't capture received traffic,
+only count the number of received packets.
+"""
+
+from abc import ABC, abstractmethod
+
+from scapy.packet import Packet # type: ignore[import]
+
+from framework.logger import DTSLOG
+from framework.utils import get_packet_summaries
+
+from .hw.port import Port
+
+
+class TrafficGenerator(ABC):
+ """The base traffic generator.
+
+ Defines the few basic methods that each traffic generator must implement.
+ """
+
+ _logger: DTSLOG
+
+ def send_packet(self, packet: Packet, port: Port) -> None:
+ """Send a packet and block until it is fully sent.
+
+        What "fully sent" means is defined by each traffic generator.
+
+ Args:
+ packet: The packet to send.
+ port: The egress port on the TG node.
+ """
+ self.send_packets([packet], port)
+
+ def send_packets(self, packets: list[Packet], port: Port) -> None:
+ """Send packets and block until they are fully sent.
+
+        What "fully sent" means is defined by each traffic generator.
+
+ Args:
+ packets: The packets to send.
+ port: The egress port on the TG node.
+ """
+ self._logger.info(f"Sending packet{'s' if len(packets) > 1 else ''}.")
+ self._logger.debug(get_packet_summaries(packets))
+ self._send_packets(packets, port)
+
+ @abstractmethod
+ def _send_packets(self, packets: list[Packet], port: Port) -> None:
+ """
+        Subclasses must implement this method, which sends the packets
+        on the given port. The method should block until all packets
+        are fully sent.
+ """
+
+ @property
+ def is_capturing(self) -> bool:
+ """Whether this traffic generator can capture traffic.
+
+ Returns:
+ True if the traffic generator can capture traffic, False otherwise.
+ """
+ return False
+
+ @abstractmethod
+ def close(self) -> None:
+ """Free all resources used by the traffic generator."""
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index 60abe46edf..d27c2c5b5f 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -4,6 +4,7 @@
# Copyright(c) 2022-2023 University of New Hampshire
import atexit
+import json
import os
import subprocess
import sys
@@ -11,6 +12,8 @@
from pathlib import Path
from subprocess import SubprocessError
+from scapy.packet import Packet # type: ignore[import]
+
from .exception import ConfigurationError
@@ -64,6 +67,16 @@ def expand_range(range_str: str) -> list[int]:
return expanded_range
+def get_packet_summaries(packets: list[Packet]):
+ if len(packets) == 1:
+ packet_summaries = packets[0].summary()
+ else:
+ packet_summaries = json.dumps(
+ list(map(lambda pkt: pkt.summary(), packets)), indent=4
+ )
+ return f"Packet contents: \n{packet_summaries}"
+
+
def RED(text: str) -> str:
return f"\u001B[31;1m{str(text)}\u001B[0m"
--
2.34.1
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PATCH v2 4/6] dts: add python remote interactive shell
2023-07-17 11:07 ` [PATCH v2 0/6] dts: tg abstractions and scapy tg Juraj Linkeš
` (2 preceding siblings ...)
2023-07-17 11:07 ` [PATCH v2 3/6] dts: traffic generator abstractions Juraj Linkeš
@ 2023-07-17 11:07 ` Juraj Linkeš
2023-07-17 11:07 ` [PATCH v2 5/6] dts: scapy traffic generator implementation Juraj Linkeš
` (3 subsequent siblings)
7 siblings, 0 replies; 29+ messages in thread
From: Juraj Linkeš @ 2023-07-17 11:07 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, jspewock, probb
Cc: dev, Juraj Linkeš
The shell can be used to remotely run any Python code interactively.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/config/__init__.py | 28 +-----------
dts/framework/remote_session/__init__.py | 2 +-
dts/framework/remote_session/os_session.py | 42 +++++++++---------
.../remote/interactive_shell.py | 18 +++++---
.../remote_session/remote/python_shell.py | 24 +++++++++++
.../remote_session/remote/testpmd_shell.py | 33 +++-----------
dts/framework/testbed_model/node.py | 35 ++++++++++++++-
dts/framework/testbed_model/sut_node.py | 43 ++++++++-----------
dts/tests/TestSuite_smoke_tests.py | 6 +--
9 files changed, 119 insertions(+), 112 deletions(-)
create mode 100644 dts/framework/remote_session/remote/python_shell.py
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 72aa021b97..b5830f6301 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -11,8 +11,7 @@
import os.path
import pathlib
from dataclasses import dataclass
-from enum import Enum, auto, unique
-from pathlib import PurePath
+from enum import auto, unique
from typing import Any, TypedDict, Union
import warlock # type: ignore
@@ -331,28 +330,3 @@ def load_config() -> Configuration:
CONFIGURATION = load_config()
-
-
-@unique
-class InteractiveApp(Enum):
- """An enum that represents different supported interactive applications.
-
- The values in this enum must all be set to objects that have a key called
- "default_path" where "default_path" represents a PurePath object for the path
- to the application. This default path will be passed into the handler class
- for the application so that it can start the application.
- """
-
- testpmd = {"default_path": PurePath("app", "dpdk-testpmd")}
-
- @property
- def path(self) -> PurePath:
- """Default path of the application.
-
- For DPDK apps, this will be appended to the DPDK build directory.
- """
- return self.value["default_path"]
-
- @path.setter
- def path(self, path: PurePath) -> None:
- self.value["default_path"] = path
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index 2c408c2557..1155dd8318 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -17,7 +17,7 @@
from framework.logger import DTSLOG
from .linux_session import LinuxSession
-from .os_session import OSSession
+from .os_session import InteractiveShellType, OSSession
from .remote import (
CommandResult,
InteractiveRemoteSession,
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
index 633d06eb5d..c17a17a267 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/remote_session/os_session.py
@@ -5,11 +5,11 @@
from abc import ABC, abstractmethod
from collections.abc import Iterable
from pathlib import PurePath
-from typing import Union
+from typing import Type, TypeVar
-from framework.config import Architecture, InteractiveApp, NodeConfiguration, NodeInfo
+from framework.config import Architecture, NodeConfiguration, NodeInfo
from framework.logger import DTSLOG
-from framework.remote_session.remote import InteractiveShell, TestPmdShell
+from framework.remote_session.remote import InteractiveShell
from framework.settings import SETTINGS
from framework.testbed_model import LogicalCore
from framework.testbed_model.hw.port import Port
@@ -23,6 +23,8 @@
create_remote_session,
)
+InteractiveShellType = TypeVar("InteractiveShellType", bound=InteractiveShell)
+
class OSSession(ABC):
"""
@@ -81,30 +83,26 @@ def send_command(
def create_interactive_shell(
self,
- shell_type: InteractiveApp,
- path_to_app: PurePath,
+ shell_cls: Type[InteractiveShellType],
eal_parameters: str,
timeout: float,
- ) -> Union[InteractiveShell, TestPmdShell]:
+ privileged: bool,
+ ) -> InteractiveShellType:
"""
See "create_interactive_shell" in SutNode
"""
- match (shell_type):
- case InteractiveApp.testpmd:
- return TestPmdShell(
- self.interactive_session.session,
- self._logger,
- path_to_app,
- timeout=timeout,
- eal_flags=eal_parameters,
- )
- case _:
- self._logger.info(
- f"Unhandled app type {shell_type.name}, defaulting to shell."
- )
- return InteractiveShell(
- self.interactive_session.session, self._logger, path_to_app, timeout
- )
+ app_command = (
+ self._get_privileged_command(str(shell_cls.path))
+ if privileged
+ else str(shell_cls.path)
+ )
+ return shell_cls(
+ self.interactive_session.session,
+ self._logger,
+ app_command,
+ eal_parameters,
+ timeout,
+ )
@abstractmethod
def _get_privileged_command(self, command: str) -> str:
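The `InteractiveShellType = TypeVar(..., bound=InteractiveShell)` change is what lets `create_interactive_shell` return the exact subclass the caller passed in, replacing the `Union` return type and the `isinstance` assertion at call sites. The pattern in isolation, with hypothetical `Shell` classes:

```python
from typing import Type, TypeVar


class Shell:
    path = "sh"

    def __init__(self, command: str) -> None:
        self.command = command


class PythonShell(Shell):
    path = "python3"


ShellType = TypeVar("ShellType", bound=Shell)


def create_shell(shell_cls: Type[ShellType], privileged: bool) -> ShellType:
    # Because the TypeVar is bound, mypy knows create_shell(PythonShell, ...)
    # returns a PythonShell, not just a Shell.
    command = f"sudo {shell_cls.path}" if privileged else shell_cls.path
    return shell_cls(command)
```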
diff --git a/dts/framework/remote_session/remote/interactive_shell.py b/dts/framework/remote_session/remote/interactive_shell.py
index 2cabe9edca..1211d91aa9 100644
--- a/dts/framework/remote_session/remote/interactive_shell.py
+++ b/dts/framework/remote_session/remote/interactive_shell.py
@@ -17,13 +17,18 @@ class InteractiveShell:
_ssh_channel: Channel
_logger: DTSLOG
_timeout: float
- _path_to_app: PurePath
+ _startup_command: str
+ _app_args: str
+ _default_prompt: str = ""
+ path: PurePath
+ dpdk_app: bool = False
def __init__(
self,
interactive_session: SSHClient,
logger: DTSLOG,
- path_to_app: PurePath,
+ startup_command: str,
+ app_args: str = "",
timeout: float = SETTINGS.timeout,
) -> None:
self._interactive_session = interactive_session
@@ -34,16 +39,19 @@ def __init__(
self._ssh_channel.set_combine_stderr(True) # combines stdout and stderr streams
self._logger = logger
self._timeout = timeout
- self._path_to_app = path_to_app
+ self._startup_command = startup_command
+ self._app_args = app_args
self._start_application()
def _start_application(self) -> None:
- """Starts a new interactive application based on _path_to_app.
+ """Starts a new interactive application based on _startup_command.
This method is often overridden by subclasses as their process for
starting may look different.
"""
- self.send_command_get_output(f"{self._path_to_app}", "")
+ self.send_command_get_output(
+ f"{self._startup_command} {self._app_args}", self._default_prompt
+ )
def send_command_get_output(self, command: str, prompt: str) -> str:
"""Send a command and get all output before the expected ending string.
diff --git a/dts/framework/remote_session/remote/python_shell.py b/dts/framework/remote_session/remote/python_shell.py
new file mode 100644
index 0000000000..66d5787c86
--- /dev/null
+++ b/dts/framework/remote_session/remote/python_shell.py
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+from pathlib import PurePath
+
+from .interactive_shell import InteractiveShell
+
+
+class PythonShell(InteractiveShell):
+ _startup_command: str
+ _default_prompt: str = ">>>"
+ path: PurePath = PurePath("python3")
+
+ def _start_application(self) -> None:
+ self._startup_command = f"{self._startup_command}\n"
+ super()._start_application()
+
+ def send_command(self, command: str, prompt: str = _default_prompt) -> str:
+ """Specific way of handling the command for python
+
+ An extra newline character is consumed in order to force the current line into
+ the stdout buffer.
+ """
+ return self.send_command_get_output(f"{command}\n", prompt)
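`send_command` relies on `send_command_get_output` reading the channel until the expected prompt (`>>>` here) appears; that read-until-prompt loop, sketched over a generic iterable of lines rather than a real SSH channel:

```python
from io import StringIO


def read_until_prompt(channel, prompt: str) -> str:
    """Collect output line by line until a line contains the prompt."""
    collected = []
    for line in channel:
        collected.append(line)
        if prompt in line:
            break
    return "".join(collected)


# A StringIO stands in for the SSH channel's line stream.
output = read_until_prompt(StringIO("1 + 1\n2\n>>> \n"), ">>>")
```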
diff --git a/dts/framework/remote_session/remote/testpmd_shell.py b/dts/framework/remote_session/remote/testpmd_shell.py
index c0261c00f6..1288cfd10c 100644
--- a/dts/framework/remote_session/remote/testpmd_shell.py
+++ b/dts/framework/remote_session/remote/testpmd_shell.py
@@ -3,11 +3,6 @@
from pathlib import PurePath
-from paramiko import SSHClient # type: ignore
-
-from framework.logger import DTSLOG
-from framework.settings import SETTINGS
-
from .interactive_shell import InteractiveShell
@@ -22,34 +17,18 @@ def __str__(self) -> str:
class TestPmdShell(InteractiveShell):
- expected_prompt: str = "testpmd>"
+ path: PurePath = PurePath("app", "dpdk-testpmd")
+ dpdk_app: bool = True
+ _default_prompt: str = "testpmd>"
_eal_flags: str
- def __init__(
- self,
- interactive_session: SSHClient,
- logger: DTSLOG,
- path_to_testpmd: PurePath,
- eal_flags: str,
- timeout: float = SETTINGS.timeout,
- ) -> None:
- """Initializes an interactive testpmd session using specified parameters."""
- self._eal_flags = eal_flags
-
- super(TestPmdShell, self).__init__(
- interactive_session,
- logger=logger,
- path_to_app=path_to_testpmd,
- timeout=timeout,
- )
-
def _start_application(self) -> None:
- """Starts a new interactive testpmd shell using _path_to_app."""
+ """Starts a new interactive testpmd shell using _startup_command."""
self.send_command(
- f"{self._path_to_app} {self._eal_flags} -- -i",
+ f"{self._startup_command} {self._app_args} -- -i",
)
- def send_command(self, command: str, prompt: str = expected_prompt) -> str:
+ def send_command(self, command: str, prompt: str = _default_prompt) -> str:
"""Specific way of handling the command for testpmd
An extra newline character is consumed in order to force the current line into
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index e09931cedf..f70e4d5ce6 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -7,7 +7,7 @@
A node is a generic host that DTS connects to and manages.
"""
-from typing import Any, Callable
+from typing import Any, Callable, Type
from framework.config import (
BuildTargetConfiguration,
@@ -15,7 +15,7 @@
NodeConfiguration,
)
from framework.logger import DTSLOG, getLogger
-from framework.remote_session import OSSession, create_session
+from framework.remote_session import InteractiveShellType, OSSession, create_session
from framework.settings import SETTINGS
from .hw import (
@@ -138,6 +138,37 @@ def create_session(self, name: str) -> OSSession:
self._other_sessions.append(connection)
return connection
+ def create_interactive_shell(
+ self,
+ shell_cls: Type[InteractiveShellType],
+ timeout: float = SETTINGS.timeout,
+ privileged: bool = False,
+ app_args: str = "",
+ ) -> InteractiveShellType:
+ """Create a handler for an interactive session.
+
+ Instantiate shell_cls according to the remote OS specifics.
+
+ Args:
+ shell_cls: The class of the shell.
+            timeout: Timeout for reading output from the SSH channel. If no
+                data is received from the buffer within the timeout, an
+                exception is raised.
+ privileged: Whether to run the shell with administrative privileges.
+ app_args: The arguments to be passed to the application.
+ Returns:
+ Instance of the desired interactive application.
+ """
+ if not shell_cls.dpdk_app:
+ shell_cls.path = self.main_session.join_remote_path(shell_cls.path)
+
+ return self.main_session.create_interactive_shell(
+ shell_cls,
+ app_args,
+ timeout,
+ privileged,
+ )
+
def filter_lcores(
self,
filter_specifier: LogicalCoreCount | LogicalCoreList,
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index bcad364435..f0b017a383 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -7,21 +7,15 @@
import tarfile
import time
from pathlib import PurePath
-from typing import Union
+from typing import Type
from framework.config import (
BuildTargetConfiguration,
BuildTargetInfo,
- InteractiveApp,
NodeInfo,
SutNodeConfiguration,
)
-from framework.remote_session import (
- CommandResult,
- InteractiveShell,
- OSSession,
- TestPmdShell,
-)
+from framework.remote_session import CommandResult, InteractiveShellType, OSSession
from framework.settings import SETTINGS
from framework.utils import MesonArgs
@@ -359,23 +353,24 @@ def run_dpdk_app(
def create_interactive_shell(
self,
- shell_type: InteractiveApp,
+ shell_cls: Type[InteractiveShellType],
timeout: float = SETTINGS.timeout,
- eal_parameters: EalParameters | None = None,
- ) -> Union[InteractiveShell, TestPmdShell]:
- """Create a handler for an interactive session.
+ privileged: bool = False,
+ eal_parameters: EalParameters | str | None = None,
+ ) -> InteractiveShellType:
+ """Factory method for creating a handler for an interactive session.
- This method is a factory that calls a method in OSSession to create shells for
- different DPDK applications.
+ Instantiate shell_cls according to the remote OS specifics.
Args:
- shell_type: Enum value representing the desired application.
+ shell_cls: The class of the shell.
timeout: Timeout for reading output from the SSH channel. If you are
reading from the buffer and don't receive any data within the timeout
it will throw an error.
+ privileged: Whether to run the shell with administrative privileges.
eal_parameters: List of EAL parameters to use to launch the app. If this
- isn't provided, it will default to calling create_eal_parameters().
- This is ignored for base "shell" types.
+ isn't provided or an empty string is passed, it will default to calling
+ create_eal_parameters().
Returns:
Instance of the desired interactive application.
"""
@@ -383,11 +378,11 @@ def create_interactive_shell(
eal_parameters = self.create_eal_parameters()
# We need to append the build directory for DPDK apps
- shell_type.path = self.remote_dpdk_build_dir.joinpath(shell_type.path)
- default_path = self.main_session.join_remote_path(shell_type.path)
- return self.main_session.create_interactive_shell(
- shell_type,
- default_path,
- str(eal_parameters),
- timeout,
+ if shell_cls.dpdk_app:
+ shell_cls.path = self.main_session.join_remote_path(
+ self.remote_dpdk_build_dir, shell_cls.path
+ )
+
+ return super().create_interactive_shell(
+ shell_cls, timeout, privileged, str(eal_parameters)
)
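The `dpdk_app` flag decides whether a shell's class-level `path` is joined onto the remote DPDK build directory; the joining itself is ordinary `PurePath` composition (the build directory below is hypothetical):

```python
from pathlib import PurePosixPath

# Hypothetical remote build dir; the relative path mirrors TestPmdShell.path.
remote_dpdk_build_dir = PurePosixPath("/tmp/dpdk-build")
testpmd_rel_path = PurePosixPath("app", "dpdk-testpmd")

# DPDK apps get the build dir prepended; plain binaries (e.g. python3) don't.
testpmd_path = remote_dpdk_build_dir / testpmd_rel_path
python_path = PurePosixPath("python3")
```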
diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py
index 9cf547205f..e73d015bc7 100644
--- a/dts/tests/TestSuite_smoke_tests.py
+++ b/dts/tests/TestSuite_smoke_tests.py
@@ -3,7 +3,7 @@
import re
-from framework.config import InteractiveApp, PortConfig
+from framework.config import PortConfig
from framework.remote_session import TestPmdDevice, TestPmdShell
from framework.settings import SETTINGS
from framework.test_suite import TestSuite
@@ -67,9 +67,7 @@ def test_devices_listed_in_testpmd(self) -> None:
Test:
Uses testpmd driver to verify that devices have been found by testpmd.
"""
- testpmd_driver = self.sut_node.create_interactive_shell(InteractiveApp.testpmd)
- # We know it should always be a TestPmdShell but mypy doesn't
- assert isinstance(testpmd_driver, TestPmdShell)
+ testpmd_driver = self.sut_node.create_interactive_shell(TestPmdShell)
dev_list: list[TestPmdDevice] = testpmd_driver.get_devices()
for nic in self.nics_in_node:
self.verify(
--
2.34.1
* [PATCH v2 5/6] dts: scapy traffic generator implementation
2023-07-17 11:07 ` [PATCH v2 0/6] dts: tg abstractions and scapy tg Juraj Linkeš
` (3 preceding siblings ...)
2023-07-17 11:07 ` [PATCH v2 4/6] dts: add python remote interactive shell Juraj Linkeš
@ 2023-07-17 11:07 ` Juraj Linkeš
2023-07-17 11:07 ` [PATCH v2 6/6] dts: add basic UDP test case Juraj Linkeš
` (2 subsequent siblings)
7 siblings, 0 replies; 29+ messages in thread
From: Juraj Linkeš @ 2023-07-17 11:07 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, jspewock, probb
Cc: dev, Juraj Linkeš
Scapy is a traffic generator capable of sending and receiving traffic.
Since it's a software traffic generator, it's not suitable for
performance testing, but it is suitable for functional testing.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/remote_session/__init__.py | 1 +
.../remote_session/remote/__init__.py | 1 +
dts/framework/testbed_model/scapy.py | 224 +++++++++++++++++-
3 files changed, 222 insertions(+), 4 deletions(-)
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index 1155dd8318..00b6d1f03a 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -22,6 +22,7 @@
CommandResult,
InteractiveRemoteSession,
InteractiveShell,
+ PythonShell,
RemoteSession,
SSHSession,
TestPmdDevice,
diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
index 03fd309f2b..075f52b646 100644
--- a/dts/framework/remote_session/remote/__init__.py
+++ b/dts/framework/remote_session/remote/__init__.py
@@ -9,6 +9,7 @@
from .interactive_remote_session import InteractiveRemoteSession
from .interactive_shell import InteractiveShell
+from .python_shell import PythonShell
from .remote_session import CommandResult, RemoteSession
from .ssh_session import SSHSession
from .testpmd_shell import TestPmdDevice, TestPmdShell
diff --git a/dts/framework/testbed_model/scapy.py b/dts/framework/testbed_model/scapy.py
index 1a23dc9fa3..af0d4dbb25 100644
--- a/dts/framework/testbed_model/scapy.py
+++ b/dts/framework/testbed_model/scapy.py
@@ -12,10 +12,21 @@
a local server proxy.
"""
+import inspect
+import marshal
+import time
+import types
+import xmlrpc.client
+from xmlrpc.server import SimpleXMLRPCServer
+
+import scapy.all # type: ignore[import]
+from scapy.layers.l2 import Ether # type: ignore[import]
from scapy.packet import Packet # type: ignore[import]
from framework.config import OS, ScapyTrafficGeneratorConfig
-from framework.logger import getLogger
+from framework.logger import DTSLOG, getLogger
+from framework.remote_session import PythonShell
+from framework.settings import SETTINGS
from .capturing_traffic_generator import (
CapturingTrafficGenerator,
@@ -24,6 +35,134 @@
from .hw.port import Port
from .tg_node import TGNode
+"""
+========= BEGIN RPC FUNCTIONS =========
+
+All of the functions in this section are intended to be exported to a python
+shell which runs a scapy RPC server. These functions are made available via that
+RPC server to the packet generator. To add a new function to the RPC server,
+first write the function in this section. Then, if you need any imports, make sure to
+add them to SCAPY_RPC_SERVER_IMPORTS as well. After that, add the function to the list
+in EXPORTED_FUNCTIONS. Note that kwargs (keyword arguments) do not work via xmlrpc,
+so you may need to construct wrapper functions around many scapy types.
+"""
+
+"""
+Add the line needed to import something in a normal python environment
+as an entry to this array. It will be imported before any functions are
+sent to the server.
+"""
+SCAPY_RPC_SERVER_IMPORTS = [
+ "from scapy.all import *",
+ "import xmlrpc",
+ "import sys",
+ "from xmlrpc.server import SimpleXMLRPCServer",
+ "import marshal",
+ "import pickle",
+ "import types",
+ "import time",
+]
+
+
+def scapy_send_packets_and_capture(
+ xmlrpc_packets: list[xmlrpc.client.Binary],
+ send_iface: str,
+ recv_iface: str,
+ duration: float,
+) -> list[bytes]:
+ """RPC function to send and capture packets.
+
+ The function is meant to be executed on the remote TG node.
+
+ Args:
+ xmlrpc_packets: The packets to send. These need to be converted to
+ xmlrpc.client.Binary before sending to the remote server.
+ send_iface: The logical name of the egress interface.
+ recv_iface: The logical name of the ingress interface.
+ duration: Capture for this amount of time, in seconds.
+
+ Returns:
+ A list of bytes. Each item in the list represents one packet, which needs
+ to be converted back upon transfer from the remote node.
+ """
+ scapy_packets = [scapy.all.Packet(packet.data) for packet in xmlrpc_packets]
+ sniffer = scapy.all.AsyncSniffer(
+ iface=recv_iface,
+ store=True,
+ started_callback=lambda *args: scapy.all.sendp(scapy_packets, iface=send_iface),
+ )
+ sniffer.start()
+ time.sleep(duration)
+ return [scapy_packet.build() for scapy_packet in sniffer.stop(join=True)]
+
+
+def scapy_send_packets(
+ xmlrpc_packets: list[xmlrpc.client.Binary], send_iface: str
+) -> None:
+ """RPC function to send packets.
+
+ The function is meant to be executed on the remote TG node.
+ It doesn't return anything, only sends packets.
+
+ Args:
+ xmlrpc_packets: The packets to send. These need to be converted to
+ xmlrpc.client.Binary before sending to the remote server.
+ send_iface: The logical name of the egress interface.
+ """
+ scapy_packets = [scapy.all.Packet(packet.data) for packet in xmlrpc_packets]
+ scapy.all.sendp(scapy_packets, iface=send_iface, realtime=True, verbose=True)
+
+
+"""
+Functions to be exposed by the scapy RPC server.
+"""
+RPC_FUNCTIONS = [
+ scapy_send_packets,
+ scapy_send_packets_and_capture,
+]
+
+"""
+========= END RPC FUNCTIONS =========
+"""
+
+
+class QuittableXMLRPCServer(SimpleXMLRPCServer):
+ """Basic XML-RPC server that may be extended
+ by functions serializable by the marshal module.
+ """
+
+ def __init__(self, *args, **kwargs):
+ kwargs["allow_none"] = True
+ super().__init__(*args, **kwargs)
+ self.register_introspection_functions()
+ self.register_function(self.quit)
+ self.register_function(self.add_rpc_function)
+
+ def quit(self) -> None:
+ self._BaseServer__shutdown_request = True
+ return None
+
+ def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary):
+ """Add a function to the server.
+
+ This is meant to be executed remotely.
+
+ Args:
+ name: The name of the function.
+ function_bytes: The code of the function.
+ """
+ function_code = marshal.loads(function_bytes.data)
+ function = types.FunctionType(function_code, globals(), name)
+ self.register_function(function)
+
+ def serve_forever(self, poll_interval: float = 0.5) -> None:
+ print("XMLRPC OK")
+ super().serve_forever(poll_interval)
+
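The class above can be exercised entirely locally: start a server on an ephemeral port, ship a function's code object over with `marshal`, call it through the proxy, then shut the server down. A self-contained sketch (the `greet` function and addresses are made up for the demo):

```python
import marshal
import threading
import types
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer


class QuittableXMLRPCServer(SimpleXMLRPCServer):
    """Local re-creation of the server class for demonstration."""

    def __init__(self, *args, **kwargs):
        kwargs["allow_none"] = True
        super().__init__(*args, **kwargs)
        self.register_function(self.quit)
        self.register_function(self.add_rpc_function)

    def quit(self) -> None:
        self._BaseServer__shutdown_request = True

    def add_rpc_function(self, name: str, function_bytes) -> None:
        # Rebuild the function from its marshalled code object.
        code = marshal.loads(function_bytes.data)
        self.register_function(types.FunctionType(code, globals(), name))


def greet(name):  # demo function to be shipped to the server
    return f"hello {name}"


server = QuittableXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

proxy = xmlrpc.client.ServerProxy(f"http://127.0.0.1:{port}", allow_none=True)
proxy.add_rpc_function("greet", marshal.dumps(greet.__code__))
result = proxy.greet("world")
try:
    proxy.quit()
except ConnectionRefusedError:
    pass  # expected if the server goes away before responding
server.server_close()
```

Note that `marshal` only transfers the code object, not closures or globals, which is why the RPC functions in the patch must be self-contained and rely on imports performed separately on the server side.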
class ScapyTrafficGenerator(CapturingTrafficGenerator):
"""Provides access to scapy functions via an RPC interface.
@@ -41,10 +180,19 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator):
Arguments:
tg_node: The node where the traffic generator resides.
config: The user configuration of the traffic generator.
+
+ Attributes:
+ session: The exclusive interactive remote session created by the Scapy
+ traffic generator where the XML-RPC server runs.
+ rpc_server_proxy: The object used by clients to execute functions
+ on the XML-RPC server.
"""
+ session: PythonShell
+ rpc_server_proxy: xmlrpc.client.ServerProxy
_config: ScapyTrafficGeneratorConfig
_tg_node: TGNode
+ _logger: DTSLOG
def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig):
self._config = config
@@ -57,8 +205,58 @@ def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig):
self._tg_node.config.os == OS.linux
), "Linux is the only supported OS for scapy traffic generation"
+ self.session = self._tg_node.create_interactive_shell(
+ PythonShell, timeout=5, privileged=True
+ )
+
+ # import libs in remote python console
+ for import_statement in SCAPY_RPC_SERVER_IMPORTS:
+ self.session.send_command(import_statement)
+
+ # start the server
+ xmlrpc_server_listen_port = 8000
+ self._start_xmlrpc_server_in_remote_python(xmlrpc_server_listen_port)
+
+ # connect to the server
+ server_url = (
+ f"http://{self._tg_node.config.hostname}:{xmlrpc_server_listen_port}"
+ )
+ self.rpc_server_proxy = xmlrpc.client.ServerProxy(
+ server_url, allow_none=True, verbose=SETTINGS.verbose
+ )
+
+ # add functions to the server
+ for function in RPC_FUNCTIONS:
+ # A slightly hacky way to move a function to the remote server.
+ # It is constructed from the name and code on the other side.
+ # Pickle cannot handle functions, nor can any of the other serialization
+ # frameworks aside from the libraries used to generate pyc files, which
+ # are even more messy to work with.
+ function_bytes = marshal.dumps(function.__code__)
+ self.rpc_server_proxy.add_rpc_function(function.__name__, function_bytes)
+
+ def _start_xmlrpc_server_in_remote_python(self, listen_port: int):
+ # load the source of the function
+ src = inspect.getsource(QuittableXMLRPCServer)
+ # Lines with only whitespace break the repl if in the middle of a function
+ # or class, so strip all lines containing only whitespace
+ src = "\n".join(
+ [line for line in src.splitlines() if not line.isspace() and line != ""]
+ )
+
+ spacing = "\n" * 4
+
+ # execute it in the python terminal
+ self.session.send_command(spacing + src + spacing)
+ self.session.send_command(
+ f"server = QuittableXMLRPCServer(('0.0.0.0', {listen_port}));"
+ f"server.serve_forever()",
+ "XMLRPC OK",
+ )
+
def _send_packets(self, packets: list[Packet], port: Port) -> None:
- raise NotImplementedError()
+ packets = [packet.build() for packet in packets]
+ self.rpc_server_proxy.scapy_send_packets(packets, port.logical_name)
def _send_packets_and_capture(
self,
@@ -68,7 +266,25 @@ def _send_packets_and_capture(
duration: float,
capture_name: str = _get_default_capture_name(),
) -> list[Packet]:
- raise NotImplementedError()
+ binary_packets = [packet.build() for packet in packets]
+
+ xmlrpc_packets: list[
+ xmlrpc.client.Binary
+ ] = self.rpc_server_proxy.scapy_send_packets_and_capture(
+ binary_packets,
+ send_port.logical_name,
+ receive_port.logical_name,
+ duration,
+ ) # type: ignore[assignment]
+
+ scapy_packets = [Ether(packet.data) for packet in xmlrpc_packets]
+ return scapy_packets
def close(self):
- pass
+ try:
+ self.rpc_server_proxy.quit()
+ except ConnectionRefusedError:
+ # Because the python instance closes, we get no RPC response.
+ # Thus, this error is expected
+ pass
+ self.session.close()
--
2.34.1
* [PATCH v2 6/6] dts: add basic UDP test case
2023-07-17 11:07 ` [PATCH v2 0/6] dts: tg abstractions and scapy tg Juraj Linkeš
` (4 preceding siblings ...)
2023-07-17 11:07 ` [PATCH v2 5/6] dts: scapy traffic generator implementation Juraj Linkeš
@ 2023-07-17 11:07 ` Juraj Linkeš
2023-07-18 21:04 ` [PATCH v2 0/6] dts: tg abstractions and scapy tg Jeremy Spewock
2023-07-19 14:12 ` [PATCH v3 " Juraj Linkeš
7 siblings, 0 replies; 29+ messages in thread
From: Juraj Linkeš @ 2023-07-17 11:07 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, jspewock, probb
Cc: dev, Juraj Linkeš
The test case showcases the scapy traffic generator code.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/conf.yaml | 1 +
dts/framework/config/conf_yaml_schema.json | 3 +-
dts/framework/remote_session/linux_session.py | 20 +-
dts/framework/remote_session/os_session.py | 20 +-
dts/framework/test_suite.py | 217 +++++++++++++++++-
dts/framework/testbed_model/node.py | 14 +-
dts/framework/testbed_model/sut_node.py | 3 +
dts/tests/TestSuite_os_udp.py | 45 ++++
8 files changed, 315 insertions(+), 8 deletions(-)
create mode 100644 dts/tests/TestSuite_os_udp.py
diff --git a/dts/conf.yaml b/dts/conf.yaml
index 7f089022ba..ba228c5ab2 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -13,6 +13,7 @@ executions:
skip_smoke_tests: false # optional flag that allow you to skip smoke tests
test_suites:
- hello_world
+ - os_udp
system_under_test_node:
node_name: "SUT 1"
vdevs: # optional; if removed, vdevs won't be used in the execution
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 76df84840a..a2f14f0e52 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -185,7 +185,8 @@
"test_suite": {
"type": "string",
"enum": [
- "hello_world"
+ "hello_world",
+ "os_udp"
]
},
"test_target": {
diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/remote_session/linux_session.py
index 284c74795d..94023920a7 100644
--- a/dts/framework/remote_session/linux_session.py
+++ b/dts/framework/remote_session/linux_session.py
@@ -3,7 +3,8 @@
# Copyright(c) 2023 University of New Hampshire
import json
-from typing import TypedDict
+from ipaddress import IPv4Interface, IPv6Interface
+from typing import TypedDict, Union
from typing_extensions import NotRequired
@@ -180,3 +181,20 @@ def configure_port_state(self, port: Port, enable: bool) -> None:
self.send_command(
f"ip link set dev {port.logical_name} {state}", privileged=True
)
+
+ def configure_port_ip_address(
+ self,
+ address: Union[IPv4Interface, IPv6Interface],
+ port: Port,
+ delete: bool,
+ ) -> None:
+ command = "del" if delete else "add"
+ self.send_command(
+ f"ip address {command} {address} dev {port.logical_name}",
+ privileged=True,
+ verify=True,
+ )
+
+ def configure_ipv4_forwarding(self, enable: bool) -> None:
+ state = 1 if enable else 0
+ self.send_command(f"sysctl -w net.ipv4.ip_forward={state}", privileged=True)
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
index c17a17a267..ad06c1dcad 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/remote_session/os_session.py
@@ -4,8 +4,9 @@
from abc import ABC, abstractmethod
from collections.abc import Iterable
+from ipaddress import IPv4Interface, IPv6Interface
from pathlib import PurePath
-from typing import Type, TypeVar
+from typing import Type, TypeVar, Union
from framework.config import Architecture, NodeConfiguration, NodeInfo
from framework.logger import DTSLOG
@@ -268,3 +269,20 @@ def configure_port_state(self, port: Port, enable: bool) -> None:
"""
Enable/disable port.
"""
+
+ @abstractmethod
+ def configure_port_ip_address(
+ self,
+ address: Union[IPv4Interface, IPv6Interface],
+ port: Port,
+ delete: bool,
+ ) -> None:
+ """
+ Configure (add or delete) an IP address of the input port.
+ """
+
+ @abstractmethod
+ def configure_ipv4_forwarding(self, enable: bool) -> None:
+ """
+ Enable IPv4 forwarding in the underlying OS.
+ """
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index 056460dd05..3b890c0451 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -9,7 +9,13 @@
import importlib
import inspect
import re
+from ipaddress import IPv4Interface, IPv6Interface, ip_interface
from types import MethodType
+from typing import Union
+
+from scapy.layers.inet import IP # type: ignore[import]
+from scapy.layers.l2 import Ether # type: ignore[import]
+from scapy.packet import Packet, Padding # type: ignore[import]
from .exception import (
BlockingTestSuiteError,
@@ -21,6 +27,8 @@
from .settings import SETTINGS
from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult
from .testbed_model import SutNode, TGNode
+from .testbed_model.hw.port import Port, PortLink
+from .utils import get_packet_summaries
class TestSuite(object):
@@ -47,6 +55,15 @@ class TestSuite(object):
_test_cases_to_run: list[str]
_func: bool
_result: TestSuiteResult
+ _port_links: list[PortLink]
+ _sut_port_ingress: Port
+ _sut_port_egress: Port
+ _sut_ip_address_ingress: Union[IPv4Interface, IPv6Interface]
+ _sut_ip_address_egress: Union[IPv4Interface, IPv6Interface]
+ _tg_port_ingress: Port
+ _tg_port_egress: Port
+ _tg_ip_address_ingress: Union[IPv4Interface, IPv6Interface]
+ _tg_ip_address_egress: Union[IPv4Interface, IPv6Interface]
def __init__(
self,
@@ -63,6 +80,31 @@ def __init__(
self._test_cases_to_run.extend(SETTINGS.test_cases)
self._func = func
self._result = build_target_result.add_test_suite(self.__class__.__name__)
+ self._port_links = []
+ self._process_links()
+ self._sut_port_ingress, self._tg_port_egress = (
+ self._port_links[0].sut_port,
+ self._port_links[0].tg_port,
+ )
+ self._sut_port_egress, self._tg_port_ingress = (
+ self._port_links[1].sut_port,
+ self._port_links[1].tg_port,
+ )
+ self._sut_ip_address_ingress = ip_interface("192.168.100.2/24")
+ self._sut_ip_address_egress = ip_interface("192.168.101.2/24")
+ self._tg_ip_address_egress = ip_interface("192.168.100.3/24")
+ self._tg_ip_address_ingress = ip_interface("192.168.101.3/24")
+
+ def _process_links(self) -> None:
+ for sut_port in self.sut_node.ports:
+ for tg_port in self.tg_node.ports:
+ if (sut_port.identifier, sut_port.peer) == (
+ tg_port.peer,
+ tg_port.identifier,
+ ):
+ self._port_links.append(
+ PortLink(sut_port=sut_port, tg_port=tg_port)
+ )
def set_up_suite(self) -> None:
"""
@@ -85,14 +127,181 @@ def tear_down_test_case(self) -> None:
Tear down the previously created test fixtures after each test case.
"""
+ def configure_testbed_ipv4(self, restore: bool = False) -> None:
+ delete = True if restore else False
+ enable = False if restore else True
+ self._configure_ipv4_forwarding(enable)
+ self.sut_node.configure_port_ip_address(
+ self._sut_ip_address_egress, self._sut_port_egress, delete
+ )
+ self.sut_node.configure_port_state(self._sut_port_egress, enable)
+ self.sut_node.configure_port_ip_address(
+ self._sut_ip_address_ingress, self._sut_port_ingress, delete
+ )
+ self.sut_node.configure_port_state(self._sut_port_ingress, enable)
+ self.tg_node.configure_port_ip_address(
+ self._tg_ip_address_ingress, self._tg_port_ingress, delete
+ )
+ self.tg_node.configure_port_state(self._tg_port_ingress, enable)
+ self.tg_node.configure_port_ip_address(
+ self._tg_ip_address_egress, self._tg_port_egress, delete
+ )
+ self.tg_node.configure_port_state(self._tg_port_egress, enable)
+
+ def _configure_ipv4_forwarding(self, enable: bool) -> None:
+ self.sut_node.configure_ipv4_forwarding(enable)
+
+ def send_packet_and_capture(
+ self, packet: Packet, duration: float = 1
+ ) -> list[Packet]:
+ """
+ Send a packet through the appropriate interface and
+ receive on the appropriate interface.
+ Modify the packet with l3/l2 addresses corresponding
+ to the testbed and desired traffic.
+ """
+ packet = self._adjust_addresses(packet)
+ return self.tg_node.send_packet_and_capture(
+ packet, self._tg_port_egress, self._tg_port_ingress, duration
+ )
+
+ def get_expected_packet(self, packet: Packet) -> Packet:
+ return self._adjust_addresses(packet, expected=True)
+
+ def _adjust_addresses(self, packet: Packet, expected: bool = False) -> Packet:
+ """
+ Assumptions:
+ Two links between SUT and TG, one link is TG -> SUT,
+ the other SUT -> TG.
+ """
+ if expected:
+ # The packet enters the TG from SUT
+ # update l2 addresses
+ packet.src = self._sut_port_egress.mac_address
+ packet.dst = self._tg_port_ingress.mac_address
+
+ # The packet is routed from TG egress to TG ingress
+ # update l3 addresses
+ packet.payload.src = self._tg_ip_address_egress.ip.exploded
+ packet.payload.dst = self._tg_ip_address_ingress.ip.exploded
+ else:
+ # The packet leaves TG towards SUT
+ # update l2 addresses
+ packet.src = self._tg_port_egress.mac_address
+ packet.dst = self._sut_port_ingress.mac_address
+
+ # The packet is routed from TG egress to TG ingress
+ # update l3 addresses
+ packet.payload.src = self._tg_ip_address_egress.ip.exploded
+ packet.payload.dst = self._tg_ip_address_ingress.ip.exploded
+
+ return Ether(packet.build())
+
def verify(self, condition: bool, failure_description: str) -> None:
if not condition:
+ self._fail_test_case_verify(failure_description)
+
+ def _fail_test_case_verify(self, failure_description: str) -> None:
+ self._logger.debug(
+ "A test case failed, showing the last 10 commands executed on SUT:"
+ )
+ for command_res in self.sut_node.main_session.remote_session.history[-10:]:
+ self._logger.debug(command_res.command)
+ self._logger.debug(
+ "A test case failed, showing the last 10 commands executed on TG:"
+ )
+ for command_res in self.tg_node.main_session.remote_session.history[-10:]:
+ self._logger.debug(command_res.command)
+ raise TestCaseVerifyError(failure_description)
+
+ def verify_packets(
+ self, expected_packet: Packet, received_packets: list[Packet]
+ ) -> None:
+ for received_packet in received_packets:
+ if self._compare_packets(expected_packet, received_packet):
+ break
+ else:
+ self._logger.debug(
+ f"The expected packet {get_packet_summaries(expected_packet)} "
+ f"not found among received {get_packet_summaries(received_packets)}"
+ )
+ self._fail_test_case_verify(
+ "An expected packet not found among received packets."
+ )
+
+ def _compare_packets(
+ self, expected_packet: Packet, received_packet: Packet
+ ) -> bool:
+ self._logger.debug(
+ "Comparing packets: \n"
+ f"{expected_packet.summary()}\n"
+ f"{received_packet.summary()}"
+ )
+
+ l3 = IP in expected_packet.layers()
+ self._logger.debug("Found l3 layer")
+
+ received_payload = received_packet
+ expected_payload = expected_packet
+ while received_payload and expected_payload:
+ self._logger.debug("Comparing payloads:")
+ self._logger.debug(f"Received: {received_payload}")
+ self._logger.debug(f"Expected: {expected_payload}")
+ if received_payload.__class__ == expected_payload.__class__:
+ self._logger.debug("The layers are the same.")
+ if received_payload.__class__ == Ether:
+ if not self._verify_l2_frame(received_payload, l3):
+ return False
+ elif received_payload.__class__ == IP:
+ if not self._verify_l3_packet(received_payload, expected_payload):
+ return False
+ else:
+ # Different layers => different packets
+ return False
+ received_payload = received_payload.payload
+ expected_payload = expected_payload.payload
+
+ if expected_payload:
self._logger.debug(
- "A test case failed, showing the last 10 commands executed on SUT:"
+ f"The expected packet did not contain {expected_payload}."
)
- for command_res in self.sut_node.main_session.remote_session.history[-10:]:
- self._logger.debug(command_res.command)
- raise TestCaseVerifyError(failure_description)
+ return False
+ if received_payload and received_payload.__class__ != Padding:
+ self._logger.debug(
+ "The received payload had extra layers which were not padding."
+ )
+ return False
+ return True
+
+ def _verify_l2_frame(self, received_packet: Ether, l3: bool) -> bool:
+ self._logger.debug("Looking at the Ether layer.")
+ self._logger.debug(
+ f"Comparing received dst mac '{received_packet.dst}' "
+ f"with expected '{self._tg_port_ingress.mac_address}'."
+ )
+ if received_packet.dst != self._tg_port_ingress.mac_address:
+ return False
+
+ expected_src_mac = self._tg_port_egress.mac_address
+ if l3:
+ expected_src_mac = self._sut_port_egress.mac_address
+ self._logger.debug(
+ f"Comparing received src mac '{received_packet.src}' "
+ f"with expected '{expected_src_mac}'."
+ )
+ if received_packet.src != expected_src_mac:
+ return False
+
+ return True
+
+ def _verify_l3_packet(self, received_packet: IP, expected_packet: IP) -> bool:
+ self._logger.debug("Looking at the IP layer.")
+ if (
+ received_packet.src != expected_packet.src
+ or received_packet.dst != expected_packet.dst
+ ):
+ return False
+ return True
def run(self) -> None:
"""
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index f70e4d5ce6..b45fea6bbf 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -7,7 +7,8 @@
A node is a generic host that DTS connects to and manages.
"""
-from typing import Any, Callable, Type
+from ipaddress import IPv4Interface, IPv6Interface
+from typing import Any, Callable, Type, Union
from framework.config import (
BuildTargetConfiguration,
@@ -214,6 +215,17 @@ def configure_port_state(self, port: Port, enable: bool = True) -> None:
"""
self.main_session.configure_port_state(port, enable)
+ def configure_port_ip_address(
+ self,
+ address: Union[IPv4Interface, IPv6Interface],
+ port: Port,
+ delete: bool = False,
+ ) -> None:
+ """
+ Configure the IP address of a port on this node.
+ """
+ self.main_session.configure_port_ip_address(address, port, delete)
+
def close(self) -> None:
"""
Close all connections and free other resources.
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index f0b017a383..202aebfd06 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -351,6 +351,9 @@ def run_dpdk_app(
f"{app_path} {eal_args}", timeout, privileged=True, verify=True
)
+ def configure_ipv4_forwarding(self, enable: bool) -> None:
+ self.main_session.configure_ipv4_forwarding(enable)
+
def create_interactive_shell(
self,
shell_cls: Type[InteractiveShellType],
diff --git a/dts/tests/TestSuite_os_udp.py b/dts/tests/TestSuite_os_udp.py
new file mode 100644
index 0000000000..9b5f39711d
--- /dev/null
+++ b/dts/tests/TestSuite_os_udp.py
@@ -0,0 +1,45 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+"""
+Configure SUT node to route traffic from if1 to if2.
+Send a packet to the SUT node, verify it comes back on the second port on the TG node.
+"""
+
+from scapy.layers.inet import IP, UDP # type: ignore[import]
+from scapy.layers.l2 import Ether # type: ignore[import]
+
+from framework.test_suite import TestSuite
+
+
+class TestOSUdp(TestSuite):
+ def set_up_suite(self) -> None:
+ """
+ Setup:
+ Configure SUT ports and SUT to route traffic from if1 to if2.
+ """
+
+ self.configure_testbed_ipv4()
+
+ def test_os_udp(self) -> None:
+ """
+ Steps:
+ Send a UDP packet.
+ Verify:
+ The packet with proper addresses arrives at the other TG port.
+ """
+
+ packet = Ether() / IP() / UDP()
+
+ received_packets = self.send_packet_and_capture(packet)
+
+ expected_packet = self.get_expected_packet(packet)
+
+ self.verify_packets(expected_packet, received_packets)
+
+ def tear_down_suite(self) -> None:
+ """
+ Teardown:
+ Remove the SUT port configuration configured in setup.
+ """
+ self.configure_testbed_ipv4(restore=True)
--
2.34.1
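The hardcoded testbed addresses in the test suite use the standard library's `ipaddress` module: `ip_interface` keeps the host address and the prefix length together, and the `.ip.exploded` accessor seen in `_adjust_addresses` yields just the address string for the scapy fields. A quick stdlib-only illustration:

```python
from ipaddress import IPv4Interface, ip_interface

# Same form as the testbed addresses assigned in TestSuite.__init__.
iface = ip_interface("192.168.100.2/24")

assert isinstance(iface, IPv4Interface)
# .ip is the bare host address, .network the enclosing subnet.
assert str(iface.ip) == "192.168.100.2"
assert str(iface.network) == "192.168.100.0/24"
# .exploded gives the fully written-out address string, which is what the
# suite assigns to the packet's l3 src/dst fields.
assert iface.ip.exploded == "192.168.100.2"
```

The same object also formats cleanly for the `ip address add/del` commands in `linux_session.py`, since `str(iface)` is `"192.168.100.2/24"`.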
* Re: [PATCH v2 2/6] dts: add traffic generator config
2023-07-17 11:07 ` [PATCH v2 2/6] dts: add traffic generator config Juraj Linkeš
@ 2023-07-18 15:55 ` Jeremy Spewock
2023-07-19 12:57 ` Juraj Linkeš
0 siblings, 1 reply; 29+ messages in thread
From: Jeremy Spewock @ 2023-07-18 15:55 UTC (permalink / raw)
To: Juraj Linkeš; +Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, probb, dev
Hey Juraj,
These changes look good to me; I just had one question. Is the plan to make
specifying a TG always required, or optional for cases where it isn't
needed? As written, it seems to be required for every execution. I don't
think that's necessarily a bad thing; I just wonder whether it would be
beneficial to make it optional, so that executions that don't utilize a TG
can simply omit it.
On Mon, Jul 17, 2023 at 7:07 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
wrote:
> Node configuration - where to connect, what ports to use and what TG to
> use.
>
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
> dts/conf.yaml | 26 ++++++-
> dts/framework/config/__init__.py | 87 +++++++++++++++++++---
> dts/framework/config/conf_yaml_schema.json | 29 +++++++-
> dts/framework/dts.py | 12 +--
> dts/framework/testbed_model/node.py | 4 +-
> dts/framework/testbed_model/sut_node.py | 6 +-
> 6 files changed, 141 insertions(+), 23 deletions(-)
>
> diff --git a/dts/conf.yaml b/dts/conf.yaml
> index 3a5d87cb49..7f089022ba 100644
> --- a/dts/conf.yaml
> +++ b/dts/conf.yaml
> @@ -13,10 +13,11 @@ executions:
> skip_smoke_tests: false # optional flag that allow you to skip smoke
> tests
> test_suites:
> - hello_world
> - system_under_test:
> + system_under_test_node:
> node_name: "SUT 1"
> vdevs: # optional; if removed, vdevs won't be used in the execution
> - "crypto_openssl"
> + traffic_generator_node: "TG 1"
> nodes:
> - name: "SUT 1"
> hostname: sut1.change.me.localhost
> @@ -40,3 +41,26 @@ nodes:
> os_driver: i40e
> peer_node: "TG 1"
> peer_pci: "0000:00:08.1"
> + - name: "TG 1"
> + hostname: tg1.change.me.localhost
> + user: dtsuser
> + arch: x86_64
> + os: linux
> + lcores: ""
> + ports:
> + - pci: "0000:00:08.0"
> + os_driver_for_dpdk: rdma
> + os_driver: rdma
> + peer_node: "SUT 1"
> + peer_pci: "0000:00:08.0"
> + - pci: "0000:00:08.1"
> + os_driver_for_dpdk: rdma
> + os_driver: rdma
> + peer_node: "SUT 1"
> + peer_pci: "0000:00:08.1"
> + use_first_core: false
> + hugepages: # optional; if removed, will use system hugepage
> configuration
> + amount: 256
> + force_first_numa: false
> + traffic_generator:
> + type: SCAPY
> diff --git a/dts/framework/config/__init__.py
> b/dts/framework/config/__init__.py
> index fad56cc520..72aa021b97 100644
> --- a/dts/framework/config/__init__.py
> +++ b/dts/framework/config/__init__.py
> @@ -13,7 +13,7 @@
> from dataclasses import dataclass
> from enum import Enum, auto, unique
> from pathlib import PurePath
> -from typing import Any, TypedDict
> +from typing import Any, TypedDict, Union
>
> import warlock # type: ignore
> import yaml
> @@ -55,6 +55,11 @@ class Compiler(StrEnum):
> msvc = auto()
>
>
> +@unique
> +class TrafficGeneratorType(StrEnum):
> + SCAPY = auto()
> +
> +
> # Slots enables some optimizations, by pre-allocating space for the
> defined
> # attributes in the underlying data structure.
> #
> @@ -79,6 +84,29 @@ class PortConfig:
> def from_dict(node: str, d: dict) -> "PortConfig":
> return PortConfig(node=node, **d)
>
> + def __str__(self) -> str:
> + return f"Port {self.pci} on node {self.node}"
> +
> +
> +@dataclass(slots=True, frozen=True)
> +class TrafficGeneratorConfig:
> + traffic_generator_type: TrafficGeneratorType
> +
> + @staticmethod
> + def from_dict(d: dict):
> + # This looks useless now, but is designed to allow expansion to
> traffic
> + # generators that require more configuration later.
> + match TrafficGeneratorType(d["type"]):
> + case TrafficGeneratorType.SCAPY:
> + return ScapyTrafficGeneratorConfig(
> + traffic_generator_type=TrafficGeneratorType.SCAPY
> + )
> +
> +
> +@dataclass(slots=True, frozen=True)
> +class ScapyTrafficGeneratorConfig(TrafficGeneratorConfig):
> + pass
> +
>
> @dataclass(slots=True, frozen=True)
> class NodeConfiguration:
> @@ -90,17 +118,17 @@ class NodeConfiguration:
> os: OS
> lcores: str
> use_first_core: bool
> - memory_channels: int
> hugepages: HugepageConfiguration | None
> ports: list[PortConfig]
>
> @staticmethod
> - def from_dict(d: dict) -> "NodeConfiguration":
> + def from_dict(d: dict) -> Union["SutNodeConfiguration",
> "TGNodeConfiguration"]:
> hugepage_config = d.get("hugepages")
> if hugepage_config:
> if "force_first_numa" not in hugepage_config:
> hugepage_config["force_first_numa"] = False
> hugepage_config = HugepageConfiguration(**hugepage_config)
> +
> common_config = {
> "name": d["name"],
> "hostname": d["hostname"],
> @@ -110,12 +138,31 @@ def from_dict(d: dict) -> "NodeConfiguration":
> "os": OS(d["os"]),
> "lcores": d.get("lcores", "1"),
> "use_first_core": d.get("use_first_core", False),
> - "memory_channels": d.get("memory_channels", 1),
> "hugepages": hugepage_config,
> "ports": [PortConfig.from_dict(d["name"], port) for port in
> d["ports"]],
> }
>
> - return NodeConfiguration(**common_config)
> + if "traffic_generator" in d:
> + return TGNodeConfiguration(
> + traffic_generator=TrafficGeneratorConfig.from_dict(
> + d["traffic_generator"]
> + ),
> + **common_config,
> + )
> + else:
> + return SutNodeConfiguration(
> + memory_channels=d.get("memory_channels", 1),
> **common_config
> + )
> +
> +
> +@dataclass(slots=True, frozen=True)
> +class SutNodeConfiguration(NodeConfiguration):
> + memory_channels: int
> +
> +
> +@dataclass(slots=True, frozen=True)
> +class TGNodeConfiguration(NodeConfiguration):
> + traffic_generator: ScapyTrafficGeneratorConfig
>
>
> @dataclass(slots=True, frozen=True)
> @@ -194,23 +241,40 @@ class ExecutionConfiguration:
> perf: bool
> func: bool
> test_suites: list[TestSuiteConfig]
> - system_under_test: NodeConfiguration
> + system_under_test_node: SutNodeConfiguration
> + traffic_generator_node: TGNodeConfiguration
> vdevs: list[str]
> skip_smoke_tests: bool
>
> @staticmethod
> - def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
> + def from_dict(
> + d: dict, node_map: dict[str, Union[SutNodeConfiguration |
> TGNodeConfiguration]]
> + ) -> "ExecutionConfiguration":
> build_targets: list[BuildTargetConfiguration] = list(
> map(BuildTargetConfiguration.from_dict, d["build_targets"])
> )
> test_suites: list[TestSuiteConfig] = list(
> map(TestSuiteConfig.from_dict, d["test_suites"])
> )
> - sut_name = d["system_under_test"]["node_name"]
> + sut_name = d["system_under_test_node"]["node_name"]
> skip_smoke_tests = d.get("skip_smoke_tests", False)
> assert sut_name in node_map, f"Unknown SUT {sut_name} in
> execution {d}"
> + system_under_test_node = node_map[sut_name]
> + assert isinstance(
> + system_under_test_node, SutNodeConfiguration
> + ), f"Invalid SUT configuration {system_under_test_node}"
> +
> + tg_name = d["traffic_generator_node"]
> + assert tg_name in node_map, f"Unknown TG {tg_name} in execution
> {d}"
> + traffic_generator_node = node_map[tg_name]
> + assert isinstance(
> + traffic_generator_node, TGNodeConfiguration
> + ), f"Invalid TG configuration {traffic_generator_node}"
> +
>
vdevs = (
> - d["system_under_test"]["vdevs"] if "vdevs" in
> d["system_under_test"] else []
> + d["system_under_test_node"]["vdevs"]
> + if "vdevs" in d["system_under_test_node"]
> + else []
> )
> return ExecutionConfiguration(
> build_targets=build_targets,
> @@ -218,7 +282,8 @@ def from_dict(d: dict, node_map: dict) ->
> "ExecutionConfiguration":
> func=d["func"],
> skip_smoke_tests=skip_smoke_tests,
> test_suites=test_suites,
> - system_under_test=node_map[sut_name],
> + system_under_test_node=system_under_test_node,
> + traffic_generator_node=traffic_generator_node,
> vdevs=vdevs,
> )
>
> @@ -229,7 +294,7 @@ class Configuration:
>
> @staticmethod
> def from_dict(d: dict) -> "Configuration":
> - nodes: list[NodeConfiguration] = list(
> + nodes: list[Union[SutNodeConfiguration | TGNodeConfiguration]] =
> list(
> map(NodeConfiguration.from_dict, d["nodes"])
> )
> assert len(nodes) > 0, "There must be a node to test"
> diff --git a/dts/framework/config/conf_yaml_schema.json
> b/dts/framework/config/conf_yaml_schema.json
> index 61f52b4365..76df84840a 100644
> --- a/dts/framework/config/conf_yaml_schema.json
> +++ b/dts/framework/config/conf_yaml_schema.json
> @@ -164,6 +164,11 @@
> "amount"
> ]
> },
> + "mac_address": {
> + "type": "string",
> + "description": "A MAC address",
> + "pattern": "^([0-9A-Fa-f]{2}[:-]){5}([0-9A-Fa-f]{2})$"
> + },
> "pci_address": {
> "type": "string",
> "pattern":
> "^[\\da-fA-F]{4}:[\\da-fA-F]{2}:[\\da-fA-F]{2}.\\d:?\\w*$"
> @@ -286,6 +291,22 @@
> ]
> },
> "minimum": 1
> + },
> + "traffic_generator": {
> + "oneOf": [
> + {
> + "type": "object",
> + "description": "Scapy traffic generator. Used for
> functional testing.",
> + "properties": {
> + "type": {
> + "type": "string",
> + "enum": [
> + "SCAPY"
> + ]
> + }
> + }
> + }
> + ]
> }
> },
> "additionalProperties": false,
> @@ -336,7 +357,7 @@
> "description": "Optional field that allows you to skip smoke
> testing",
> "type": "boolean"
> },
> - "system_under_test": {
> + "system_under_test_node": {
> "type":"object",
> "properties": {
> "node_name": {
> @@ -353,6 +374,9 @@
> "required": [
> "node_name"
> ]
> + },
> + "traffic_generator_node": {
> + "$ref": "#/definitions/node_name"
> }
> },
> "additionalProperties": false,
> @@ -361,7 +385,8 @@
> "perf",
> "func",
> "test_suites",
> - "system_under_test"
> + "system_under_test_node",
> + "traffic_generator_node"
> ]
> },
> "minimum": 1
> diff --git a/dts/framework/dts.py b/dts/framework/dts.py
> index 7b09d8fba8..372bc72787 100644
> --- a/dts/framework/dts.py
> +++ b/dts/framework/dts.py
> @@ -38,17 +38,17 @@ def run_all() -> None:
> # for all Execution sections
> for execution in CONFIGURATION.executions:
> sut_node = None
> - if execution.system_under_test.name in nodes:
> + if execution.system_under_test_node.name in nodes:
> # a Node with the same name already exists
> - sut_node = nodes[execution.system_under_test.name]
> + sut_node = nodes[execution.system_under_test_node.name]
> else:
> # the SUT has not been initialized yet
> try:
> - sut_node = SutNode(execution.system_under_test)
> + sut_node = SutNode(execution.system_under_test_node)
> result.update_setup(Result.PASS)
> except Exception as e:
> dts_logger.exception(
> - f"Connection to node
> {execution.system_under_test} failed."
> + f"Connection to node
> {execution.system_under_test_node} failed."
> )
> result.update_setup(Result.FAIL, e)
> else:
> @@ -87,7 +87,9 @@ def _run_execution(
> Run the given execution. This involves running the execution setup as
> well as
> running all build targets in the given execution.
> """
> - dts_logger.info(f"Running execution with SUT '{
> execution.system_under_test.name}'.")
> + dts_logger.info(
> + f"Running execution with SUT '{
> execution.system_under_test_node.name}'."
> + )
> execution_result = result.add_execution(sut_node.config,
> sut_node.node_info)
>
> try:
> diff --git a/dts/framework/testbed_model/node.py
> b/dts/framework/testbed_model/node.py
> index c5147e0ee6..d2d55d904e 100644
> --- a/dts/framework/testbed_model/node.py
> +++ b/dts/framework/testbed_model/node.py
> @@ -48,6 +48,8 @@ def __init__(self, node_config: NodeConfiguration):
> self._logger = getLogger(self.name)
> self.main_session = create_session(self.config, self.name,
> self._logger)
>
> + self._logger.info(f"Connected to node: {self.name}")
> +
> self._get_remote_cpus()
> # filter the node lcores according to user config
> self.lcores = LogicalCoreListFilter(
> @@ -56,8 +58,6 @@ def __init__(self, node_config: NodeConfiguration):
>
> self._other_sessions = []
>
> - self._logger.info(f"Created node: {self.name}")
> -
> def set_up_execution(self, execution_config: ExecutionConfiguration)
> -> None:
> """
> Perform the execution setup that will be done for each execution
> diff --git a/dts/framework/testbed_model/sut_node.py
> b/dts/framework/testbed_model/sut_node.py
> index 53953718a1..bcad364435 100644
> --- a/dts/framework/testbed_model/sut_node.py
> +++ b/dts/framework/testbed_model/sut_node.py
> @@ -13,8 +13,8 @@
> BuildTargetConfiguration,
> BuildTargetInfo,
> InteractiveApp,
> - NodeConfiguration,
> NodeInfo,
> + SutNodeConfiguration,
> )
> from framework.remote_session import (
> CommandResult,
> @@ -83,6 +83,7 @@ class SutNode(Node):
> Another key capability is building DPDK according to given build
> target.
> """
>
> + config: SutNodeConfiguration
> _dpdk_prefix_list: list[str]
> _dpdk_timestamp: str
> _build_target_config: BuildTargetConfiguration | None
> @@ -95,7 +96,7 @@ class SutNode(Node):
> _node_info: NodeInfo | None
> _compiler_version: str | None
>
> - def __init__(self, node_config: NodeConfiguration):
> + def __init__(self, node_config: SutNodeConfiguration):
> super(SutNode, self).__init__(node_config)
> self._dpdk_prefix_list = []
> self._build_target_config = None
> @@ -110,6 +111,7 @@ def __init__(self, node_config: NodeConfiguration):
> self._dpdk_version = None
> self._node_info = None
> self._compiler_version = None
> + self._logger.info(f"Created node: {self.name}")
>
> @property
> def _remote_dpdk_dir(self) -> PurePath:
> --
> 2.34.1
>
>
* Re: [PATCH v2 3/6] dts: traffic generator abstractions
2023-07-17 11:07 ` [PATCH v2 3/6] dts: traffic generator abstractions Juraj Linkeš
@ 2023-07-18 19:56 ` Jeremy Spewock
2023-07-19 13:23 ` Juraj Linkeš
0 siblings, 1 reply; 29+ messages in thread
From: Jeremy Spewock @ 2023-07-18 19:56 UTC (permalink / raw)
To: Juraj Linkeš; +Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, probb, dev
Hey Juraj,
Just a couple of comments below, but very minor stuff. Just a few docstring
that I commented on and one question about the factory for traffic
generators that I was wondering what you thought about. More below:
On Mon, Jul 17, 2023 at 7:07 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
wrote:
> There are traffic abstractions for all traffic generators and for
> traffic generators that can capture (not just count) packets.
>
> There are also related abstractions, such as TGNode, where the traffic
> generators reside, and some related code.
>
> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> ---
> doc/guides/tools/dts.rst | 31 ++++
> dts/framework/dts.py | 61 ++++----
> dts/framework/remote_session/linux_session.py | 78 ++++++++++
> dts/framework/remote_session/os_session.py | 15 ++
> dts/framework/test_suite.py | 4 +-
> dts/framework/testbed_model/__init__.py | 1 +
> .../capturing_traffic_generator.py | 135 ++++++++++++++++++
> dts/framework/testbed_model/hw/port.py | 60 ++++++++
> dts/framework/testbed_model/node.py | 15 ++
> dts/framework/testbed_model/scapy.py | 74 ++++++++++
> dts/framework/testbed_model/tg_node.py | 99 +++++++++++++
> .../testbed_model/traffic_generator.py | 72 ++++++++++
> dts/framework/utils.py | 13 ++
> 13 files changed, 632 insertions(+), 26 deletions(-)
> create mode 100644
> dts/framework/testbed_model/capturing_traffic_generator.py
> create mode 100644 dts/framework/testbed_model/hw/port.py
> create mode 100644 dts/framework/testbed_model/scapy.py
> create mode 100644 dts/framework/testbed_model/tg_node.py
> create mode 100644 dts/framework/testbed_model/traffic_generator.py
>
> diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
> index c7b31623e4..2f97d1df6e 100644
> --- a/doc/guides/tools/dts.rst
> +++ b/doc/guides/tools/dts.rst
> @@ -153,6 +153,37 @@ There are two areas that need to be set up on a
> System Under Test:
>
> sudo usermod -aG sudo <sut_user>
>
> +
> +Setting up Traffic Generator Node
> +---------------------------------
> +
> +These need to be set up on a Traffic Generator Node:
> +
> +#. **Traffic generator dependencies**
> +
> + The traffic generator running on the traffic generator node must be
> installed beforehand.
> + For Scapy traffic generator, only a few Python libraries need to be
> installed:
> +
> + .. code-block:: console
> +
> + sudo apt install python3-pip
> + sudo pip install --upgrade pip
> + sudo pip install scapy==2.5.0
> +
> +#. **Hardware dependencies**
> +
> + The traffic generators, like DPDK, need a proper driver and firmware.
> + The Scapy traffic generator doesn't have strict requirements - the
> drivers that come
> + with most OS distributions will be satisfactory.
> +
> +
> +#. **User with administrator privileges**
> +
> + Similarly to the System Under Test, traffic generators need
> administrator privileges
> + to be able to use the devices.
> + Refer to the `System Under Test section <sut_admin_user>` for details.
> +
> +
> Running DTS
> -----------
>
> diff --git a/dts/framework/dts.py b/dts/framework/dts.py
> index 372bc72787..265ed7fd5b 100644
> --- a/dts/framework/dts.py
> +++ b/dts/framework/dts.py
> @@ -15,7 +15,7 @@
> from .logger import DTSLOG, getLogger
> from .test_result import BuildTargetResult, DTSResult, ExecutionResult,
> Result
> from .test_suite import get_test_suites
> -from .testbed_model import SutNode
> +from .testbed_model import SutNode, TGNode
> from .utils import check_dts_python_version
>
> dts_logger: DTSLOG = getLogger("DTSRunner")
> @@ -33,29 +33,31 @@ def run_all() -> None:
> # check the python version of the server that run dts
> check_dts_python_version()
>
> - nodes: dict[str, SutNode] = {}
> + sut_nodes: dict[str, SutNode] = {}
> + tg_nodes: dict[str, TGNode] = {}
> try:
> # for all Execution sections
> for execution in CONFIGURATION.executions:
> - sut_node = None
> - if execution.system_under_test_node.name in nodes:
> - # a Node with the same name already exists
> - sut_node = nodes[execution.system_under_test_node.name]
> - else:
> - # the SUT has not been initialized yet
> - try:
> + sut_node = sut_nodes.get(
> execution.system_under_test_node.name)
> + tg_node = tg_nodes.get(execution.traffic_generator_node.name)
> +
> + try:
> + if not sut_node:
> sut_node = SutNode(execution.system_under_test_node)
> - result.update_setup(Result.PASS)
> - except Exception as e:
> - dts_logger.exception(
> - f"Connection to node
> {execution.system_under_test_node} failed."
> - )
> - result.update_setup(Result.FAIL, e)
> - else:
> - nodes[sut_node.name] = sut_node
> -
> - if sut_node:
> - _run_execution(sut_node, execution, result)
> + sut_nodes[sut_node.name] = sut_node
> + if not tg_node:
> + tg_node = TGNode(execution.traffic_generator_node)
> + tg_nodes[tg_node.name] = tg_node
> + result.update_setup(Result.PASS)
> + except Exception as e:
> + failed_node = execution.system_under_test_node.name
> + if sut_node:
> + failed_node = execution.traffic_generator_node.name
> + dts_logger.exception(f"Creation of node {failed_node}
> failed.")
> + result.update_setup(Result.FAIL, e)
> +
> + else:
> + _run_execution(sut_node, tg_node, execution, result)
>
> except Exception as e:
> dts_logger.exception("An unexpected error has occurred.")
> @@ -64,7 +66,7 @@ def run_all() -> None:
>
> finally:
> try:
> - for node in nodes.values():
> + for node in (sut_nodes | tg_nodes).values():
> node.close()
> result.update_teardown(Result.PASS)
> except Exception as e:
> @@ -81,7 +83,10 @@ def run_all() -> None:
>
>
> def _run_execution(
> - sut_node: SutNode, execution: ExecutionConfiguration, result:
> DTSResult
> + sut_node: SutNode,
> + tg_node: TGNode,
> + execution: ExecutionConfiguration,
> + result: DTSResult,
> ) -> None:
> """
> Run the given execution. This involves running the execution setup as
> well as
> @@ -101,7 +106,9 @@ def _run_execution(
>
> else:
> for build_target in execution.build_targets:
> - _run_build_target(sut_node, build_target, execution,
> execution_result)
> + _run_build_target(
> + sut_node, tg_node, build_target, execution,
> execution_result
> + )
>
> finally:
> try:
> @@ -114,6 +121,7 @@ def _run_execution(
>
> def _run_build_target(
> sut_node: SutNode,
> + tg_node: TGNode,
> build_target: BuildTargetConfiguration,
> execution: ExecutionConfiguration,
> execution_result: ExecutionResult,
> @@ -134,7 +142,7 @@ def _run_build_target(
> build_target_result.update_setup(Result.FAIL, e)
>
> else:
> - _run_all_suites(sut_node, execution, build_target_result)
> + _run_all_suites(sut_node, tg_node, execution, build_target_result)
>
> finally:
> try:
> @@ -147,6 +155,7 @@ def _run_build_target(
>
> def _run_all_suites(
> sut_node: SutNode,
> + tg_node: TGNode,
> execution: ExecutionConfiguration,
> build_target_result: BuildTargetResult,
> ) -> None:
> @@ -161,7 +170,7 @@ def _run_all_suites(
> for test_suite_config in execution.test_suites:
> try:
> _run_single_suite(
> - sut_node, execution, build_target_result,
> test_suite_config
> + sut_node, tg_node, execution, build_target_result,
> test_suite_config
> )
> except BlockingTestSuiteError as e:
> dts_logger.exception(
> @@ -177,6 +186,7 @@ def _run_all_suites(
>
> def _run_single_suite(
> sut_node: SutNode,
> + tg_node: TGNode,
> execution: ExecutionConfiguration,
> build_target_result: BuildTargetResult,
> test_suite_config: TestSuiteConfig,
> @@ -205,6 +215,7 @@ def _run_single_suite(
> for test_suite_class in test_suite_classes:
> test_suite = test_suite_class(
> sut_node,
> + tg_node,
> test_suite_config.test_cases,
> execution.func,
> build_target_result,
> diff --git a/dts/framework/remote_session/linux_session.py
> b/dts/framework/remote_session/linux_session.py
> index f13f399121..284c74795d 100644
> --- a/dts/framework/remote_session/linux_session.py
> +++ b/dts/framework/remote_session/linux_session.py
> @@ -2,13 +2,47 @@
> # Copyright(c) 2023 PANTHEON.tech s.r.o.
> # Copyright(c) 2023 University of New Hampshire
>
> +import json
> +from typing import TypedDict
> +
> +from typing_extensions import NotRequired
> +
> from framework.exception import RemoteCommandExecutionError
> from framework.testbed_model import LogicalCore
> +from framework.testbed_model.hw.port import Port
> from framework.utils import expand_range
>
> from .posix_session import PosixSession
>
>
> +class LshwConfigurationOutput(TypedDict):
> + link: str
> +
> +
> +class LshwOutput(TypedDict):
> + """
> + A model of the relevant information from json lshw output, e.g.:
> + {
> + ...
> + "businfo" : "pci@0000:08:00.0",
> + "logicalname" : "enp8s0",
> + "version" : "00",
> + "serial" : "52:54:00:59:e1:ac",
> + ...
> + "configuration" : {
> + ...
> + "link" : "yes",
> + ...
> + },
> + ...
> + """
> +
> + businfo: str
> + logicalname: NotRequired[str]
> + serial: NotRequired[str]
> + configuration: LshwConfigurationOutput
> +
> +
> class LinuxSession(PosixSession):
> """
> The implementation of non-Posix compliant parts of Linux remote
> sessions.
> @@ -102,3 +136,47 @@ def _configure_huge_pages(
> self.send_command(
> f"echo {amount} | tee {hugepage_config_path}", privileged=True
> )
> +
> + def update_ports(self, ports: list[Port]) -> None:
> + self._logger.debug("Gathering port info.")
> + for port in ports:
> + assert (
> + port.node == self.name
> + ), "Attempted to gather port info on the wrong node"
> +
> + port_info_list = self._get_lshw_info()
> + for port in ports:
> + for port_info in port_info_list:
> + if f"pci@{port.pci}" == port_info.get("businfo"):
> + self._update_port_attr(
> + port, port_info.get("logicalname"), "logical_name"
> + )
> + self._update_port_attr(port, port_info.get("serial"),
> "mac_address")
> + port_info_list.remove(port_info)
> + break
> + else:
> + self._logger.warning(f"No port at pci address {port.pci}
> found.")
> +
> + def _get_lshw_info(self) -> list[LshwOutput]:
> + output = self.send_command("lshw -quiet -json -C network",
> verify=True)
> + return json.loads(output.stdout)
> +
> + def _update_port_attr(
> + self, port: Port, attr_value: str | None, attr_name: str
> + ) -> None:
> + if attr_value:
> + setattr(port, attr_name, attr_value)
> + self._logger.debug(
> + f"Found '{attr_name}' of port {port.pci}: '{attr_value}'."
> + )
> + else:
> + self._logger.warning(
> + f"Attempted to get '{attr_name}' of port {port.pci}, "
> + f"but it doesn't exist."
> + )
> +
> + def configure_port_state(self, port: Port, enable: bool) -> None:
> + state = "up" if enable else "down"
> + self.send_command(
> + f"ip link set dev {port.logical_name} {state}",
> privileged=True
> + )
> diff --git a/dts/framework/remote_session/os_session.py
> b/dts/framework/remote_session/os_session.py
> index cc13b02f16..633d06eb5d 100644
> --- a/dts/framework/remote_session/os_session.py
> +++ b/dts/framework/remote_session/os_session.py
> @@ -12,6 +12,7 @@
> from framework.remote_session.remote import InteractiveShell, TestPmdShell
> from framework.settings import SETTINGS
> from framework.testbed_model import LogicalCore
> +from framework.testbed_model.hw.port import Port
> from framework.utils import MesonArgs
>
> from .remote import (
> @@ -255,3 +256,17 @@ def get_node_info(self) -> NodeInfo:
> """
> Collect information about the node
> """
> +
> + @abstractmethod
> + def update_ports(self, ports: list[Port]) -> None:
> + """
> + Get additional information about ports:
> + Logical name (e.g. enp7s0) if applicable
> + Mac address
> + """
> +
> + @abstractmethod
> + def configure_port_state(self, port: Port, enable: bool) -> None:
> + """
> + Enable/disable port.
> + """
> diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
> index de94c9332d..056460dd05 100644
> --- a/dts/framework/test_suite.py
> +++ b/dts/framework/test_suite.py
> @@ -20,7 +20,7 @@
> from .logger import DTSLOG, getLogger
> from .settings import SETTINGS
> from .test_result import BuildTargetResult, Result, TestCaseResult,
> TestSuiteResult
> -from .testbed_model import SutNode
> +from .testbed_model import SutNode, TGNode
>
>
> class TestSuite(object):
> @@ -51,11 +51,13 @@ class TestSuite(object):
> def __init__(
> self,
> sut_node: SutNode,
> + tg_node: TGNode,
> test_cases: list[str],
> func: bool,
> build_target_result: BuildTargetResult,
> ):
> self.sut_node = sut_node
> + self.tg_node = tg_node
> self._logger = getLogger(self.__class__.__name__)
> self._test_cases_to_run = test_cases
> self._test_cases_to_run.extend(SETTINGS.test_cases)
> diff --git a/dts/framework/testbed_model/__init__.py
> b/dts/framework/testbed_model/__init__.py
> index f54a947051..5cbb859e47 100644
> --- a/dts/framework/testbed_model/__init__.py
> +++ b/dts/framework/testbed_model/__init__.py
> @@ -20,3 +20,4 @@
> )
> from .node import Node
> from .sut_node import SutNode
> +from .tg_node import TGNode
> diff --git a/dts/framework/testbed_model/capturing_traffic_generator.py
> b/dts/framework/testbed_model/capturing_traffic_generator.py
> new file mode 100644
> index 0000000000..1130d87f1e
> --- /dev/null
> +++ b/dts/framework/testbed_model/capturing_traffic_generator.py
> @@ -0,0 +1,135 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2022 University of New Hampshire
> +# Copyright(c) 2023 PANTHEON.tech s.r.o.
> +
> +"""Traffic generator that can capture packets.
> +
> +In functional testing, we need to interrogate received packets to check
> their validity.
> +Here we define the interface common to all
> +traffic generators capable of capturing traffic.
>
Is there a reason for the line break here? Just to keep things consistent, I
think it might make sense to extend this line to the same length as the
one above.
> +"""
> +
> +import uuid
> +from abc import abstractmethod
> +
> +import scapy.utils # type: ignore[import]
> +from scapy.packet import Packet # type: ignore[import]
> +
> +from framework.settings import SETTINGS
> +from framework.utils import get_packet_summaries
> +
> +from .hw.port import Port
> +from .traffic_generator import TrafficGenerator
> +
> +
> +def _get_default_capture_name() -> str:
> + """
> + This is the function used for the default implementation of capture
> names.
> + """
> + return str(uuid.uuid4())
> +
> +
> +class CapturingTrafficGenerator(TrafficGenerator):
> + """
> + A mixin interface which enables a packet generator to declare that
> it can capture
> + packets and return them to the user.
>
This is missing the one-line summary at the top of the docstring. Obviously
this is not a big issue, but we likely would want this to be uniform with
the rest of the module, which does have the summary at the top.
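For reference, the convention the rest of the module follows (PEP 257 / Google
style) puts a one-line summary first, then a blank line, then the details —
a hypothetical sketch (the summary wording here is illustrative, not from the
patch):

```python
class CapturingTrafficGenerator:
    """Capture traffic sent through the traffic generator.

    A mixin interface which enables a packet generator to declare
    that it can capture packets and return them to the user.
    """
```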
> +
> + The methods of capturing traffic generators obey the following
> workflow:
> + 1. send packets
> + 2. capture packets
> + 3. write the capture to a .pcap file
> + 4. return the received packets
> + """
> +
> + @property
> + def is_capturing(self) -> bool:
> + return True
> +
> + def send_packet_and_capture(
> + self,
> + packet: Packet,
> + send_port: Port,
> + receive_port: Port,
> + duration: float,
> + capture_name: str = _get_default_capture_name(),
> + ) -> list[Packet]:
> + """Send a packet, return received traffic.
> +
> + Send a packet on the send_port and then return all traffic
> captured
> + on the receive_port for the given duration. Also record the
> captured traffic
> + in a pcap file.
> +
> + Args:
> + packet: The packet to send.
> + send_port: The egress port on the TG node.
> + receive_port: The ingress port in the TG node.
> + duration: Capture traffic for this amount of time after
> sending the packet.
> + capture_name: The name of the .pcap file where to store the
> capture.
> +
> + Returns:
> + A list of received packets. May be empty if no packets are
> captured.
> + """
> + return self.send_packets_and_capture(
> + [packet], send_port, receive_port, duration, capture_name
> + )
> +
> + def send_packets_and_capture(
> + self,
> + packets: list[Packet],
> + send_port: Port,
> + receive_port: Port,
> + duration: float,
> + capture_name: str = _get_default_capture_name(),
> + ) -> list[Packet]:
> + """Send packets, return received traffic.
> +
> + Send packets on the send_port and then return all traffic captured
> + on the receive_port for the given duration. Also record the
> captured traffic
> + in a pcap file.
> +
> + Args:
> + packets: The packets to send.
> + send_port: The egress port on the TG node.
> + receive_port: The ingress port in the TG node.
> + duration: Capture traffic for this amount of time after
> sending the packets.
> + capture_name: The name of the .pcap file where to store the
> capture.
> +
> + Returns:
> + A list of received packets. May be empty if no packets are
> captured.
> + """
> + self._logger.debug(get_packet_summaries(packets))
> + self._logger.debug(
> + f"Sending packet on {send_port.logical_name}, "
> + f"receiving on {receive_port.logical_name}."
> + )
> + received_packets = self._send_packets_and_capture(
> + packets,
> + send_port,
> + receive_port,
> + duration,
> + )
> +
> + self._logger.debug(
> + f"Received packets: {get_packet_summaries(received_packets)}"
> + )
> + self._write_capture_from_packets(capture_name, received_packets)
> + return received_packets
> +
> + @abstractmethod
> + def _send_packets_and_capture(
> + self,
> + packets: list[Packet],
> + send_port: Port,
> + receive_port: Port,
> + duration: float,
> + ) -> list[Packet]:
> + """
> + The extended classes must implement this method which
> + sends packets on send_port and receives packets on the
> receive_port
> + for the specified duration. It must be able to handle no received
> packets.
> + """
> +
> + def _write_capture_from_packets(self, capture_name: str, packets:
> list[Packet]):
> + file_name = f"{SETTINGS.output_dir}/{capture_name}.pcap"
> + self._logger.debug(f"Writing packets to {file_name}.")
> + scapy.utils.wrpcap(file_name, packets)
> diff --git a/dts/framework/testbed_model/hw/port.py
> b/dts/framework/testbed_model/hw/port.py
> new file mode 100644
> index 0000000000..680c29bfe3
> --- /dev/null
> +++ b/dts/framework/testbed_model/hw/port.py
> @@ -0,0 +1,60 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2022 University of New Hampshire
> +# Copyright(c) 2023 PANTHEON.tech s.r.o.
> +
> +from dataclasses import dataclass
> +
> +from framework.config import PortConfig
> +
> +
> +@dataclass(slots=True, frozen=True)
> +class PortIdentifier:
> + node: str
> + pci: str
> +
> +
> +@dataclass(slots=True)
> +class Port:
> + """
> + identifier: The PCI address of the port on a node.
> +
> + os_driver: The driver used by this port when the OS is controlling it.
> + Example: i40e
> + os_driver_for_dpdk: The driver the device must be bound to for DPDK
> to use it,
> + Example: vfio-pci.
> +
> + Note: os_driver and os_driver_for_dpdk may be the same thing.
> + Example: mlx5_core
> +
> + peer: The identifier of a port this port is connected with.
> + """
> +
> + identifier: PortIdentifier
> + os_driver: str
> + os_driver_for_dpdk: str
> + peer: PortIdentifier
> + mac_address: str = ""
> + logical_name: str = ""
> +
> + def __init__(self, node_name: str, config: PortConfig):
> + self.identifier = PortIdentifier(
> + node=node_name,
> + pci=config.pci,
> + )
> + self.os_driver = config.os_driver
> + self.os_driver_for_dpdk = config.os_driver_for_dpdk
> + self.peer = PortIdentifier(node=config.peer_node,
> pci=config.peer_pci)
> +
> + @property
> + def node(self) -> str:
> + return self.identifier.node
> +
> + @property
> + def pci(self) -> str:
> + return self.identifier.pci
> +
> +
> +@dataclass(slots=True, frozen=True)
> +class PortLink:
> + sut_port: Port
> + tg_port: Port
> diff --git a/dts/framework/testbed_model/node.py
> b/dts/framework/testbed_model/node.py
> index d2d55d904e..e09931cedf 100644
> --- a/dts/framework/testbed_model/node.py
> +++ b/dts/framework/testbed_model/node.py
> @@ -25,6 +25,7 @@
> LogicalCoreListFilter,
> lcore_filter,
> )
> +from .hw.port import Port
>
>
> class Node(object):
> @@ -38,6 +39,7 @@ class Node(object):
> config: NodeConfiguration
> name: str
> lcores: list[LogicalCore]
> + ports: list[Port]
> _logger: DTSLOG
> _other_sessions: list[OSSession]
> _execution_config: ExecutionConfiguration
> @@ -57,6 +59,13 @@ def __init__(self, node_config: NodeConfiguration):
> ).filter()
>
> self._other_sessions = []
> + self._init_ports()
> +
> + def _init_ports(self) -> None:
> + self.ports = [Port(self.name, port_config) for port_config in
> self.config.ports]
> + self.main_session.update_ports(self.ports)
> + for port in self.ports:
> + self.configure_port_state(port)
>
> def set_up_execution(self, execution_config: ExecutionConfiguration)
> -> None:
> """
> @@ -168,6 +177,12 @@ def _setup_hugepages(self):
> self.config.hugepages.amount,
> self.config.hugepages.force_first_numa
> )
>
> + def configure_port_state(self, port: Port, enable: bool = True) ->
> None:
> + """
> + Enable/disable port.
> + """
> + self.main_session.configure_port_state(port, enable)
> +
> def close(self) -> None:
> """
> Close all connections and free other resources.
> diff --git a/dts/framework/testbed_model/scapy.py
> b/dts/framework/testbed_model/scapy.py
> new file mode 100644
> index 0000000000..1a23dc9fa3
> --- /dev/null
> +++ b/dts/framework/testbed_model/scapy.py
> @@ -0,0 +1,74 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2022 University of New Hampshire
> +# Copyright(c) 2023 PANTHEON.tech s.r.o.
> +
> +"""Scapy traffic generator.
> +
> +Traffic generator used for functional testing, implemented using the
> Scapy library.
> +The traffic generator uses an XML-RPC server to run Scapy on the remote
> TG node.
> +
> +The XML-RPC server runs in an interactive remote SSH session running
> Python console,
> +where we start the server. The communication with the server is
> facilitated with
> +a local server proxy.
> +"""
> +
> +from scapy.packet import Packet # type: ignore[import]
> +
> +from framework.config import OS, ScapyTrafficGeneratorConfig
> +from framework.logger import getLogger
> +
> +from .capturing_traffic_generator import (
> + CapturingTrafficGenerator,
> + _get_default_capture_name,
> +)
> +from .hw.port import Port
> +from .tg_node import TGNode
> +
> +
> +class ScapyTrafficGenerator(CapturingTrafficGenerator):
> + """Provides access to scapy functions via an RPC interface.
> +
> + The traffic generator first starts an XML-RPC on the remote TG node.
> + Then it populates the server with functions which use the Scapy
> library
> + to send/receive traffic.
> +
> + Any packets sent to the remote server are first converted to bytes.
> + They are received as xmlrpc.client.Binary objects on the server side.
> + When the server sends the packets back, they are also received as
> + xmlrpc.client.Binary object on the client side, are converted back to
> Scapy
> + packets and only then returned from the methods.
> +
> + Arguments:
> + tg_node: The node where the traffic generator resides.
> + config: The user configuration of the traffic generator.
> + """
> +
> + _config: ScapyTrafficGeneratorConfig
> + _tg_node: TGNode
> +
> + def __init__(self, tg_node: TGNode, config:
> ScapyTrafficGeneratorConfig):
> + self._config = config
> + self._tg_node = tg_node
> + self._logger = getLogger(
> + f"{self._tg_node.name} {self._config.traffic_generator_type}"
> + )
> +
> + assert (
> + self._tg_node.config.os == OS.linux
> + ), "Linux is the only supported OS for scapy traffic generation"
> +
> + def _send_packets(self, packets: list[Packet], port: Port) -> None:
> + raise NotImplementedError()
> +
> + def _send_packets_and_capture(
> + self,
> + packets: list[Packet],
> + send_port: Port,
> + receive_port: Port,
> + duration: float,
> + capture_name: str = _get_default_capture_name(),
> + ) -> list[Packet]:
> + raise NotImplementedError()
> +
> + def close(self):
> + pass
> diff --git a/dts/framework/testbed_model/tg_node.py
> b/dts/framework/testbed_model/tg_node.py
> new file mode 100644
> index 0000000000..27025cfa31
> --- /dev/null
> +++ b/dts/framework/testbed_model/tg_node.py
> @@ -0,0 +1,99 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2010-2014 Intel Corporation
> +# Copyright(c) 2022 University of New Hampshire
> +# Copyright(c) 2023 PANTHEON.tech s.r.o.
> +
> +"""Traffic generator node.
> +
> +This is the node where the traffic generator resides.
> +The distinction between a node and a traffic generator is as follows:
> +A node is a host that DTS connects to. It could be a baremetal server,
> +a VM or a container.
> +A traffic generator is software running on the node.
> +A traffic generator node is a node running a traffic generator.
> +A node can be a traffic generator node as well as system under test node.
> +"""
> +
> +from scapy.packet import Packet # type: ignore[import]
> +
> +from framework.config import (
> + ScapyTrafficGeneratorConfig,
> + TGNodeConfiguration,
> + TrafficGeneratorType,
> +)
> +from framework.exception import ConfigurationError
> +
> +from .capturing_traffic_generator import CapturingTrafficGenerator
> +from .hw.port import Port
> +from .node import Node
> +
> +
> +class TGNode(Node):
> + """Manage connections to a node with a traffic generator.
> +
> + Apart from basic node management capabilities, the Traffic Generator
> node has
> + specialized methods for handling the traffic generator running on it.
> +
> + Arguments:
> + node_config: The user configuration of the traffic generator node.
> +
> + Attributes:
> + traffic_generator: The traffic generator running on the node.
> + """
> +
> + traffic_generator: CapturingTrafficGenerator
> +
> + def __init__(self, node_config: TGNodeConfiguration):
> + super(TGNode, self).__init__(node_config)
> + self.traffic_generator = create_traffic_generator(
> + self, node_config.traffic_generator
> + )
> + self._logger.info(f"Created node: {self.name}")
> +
> + def send_packet_and_capture(
> + self,
> + packet: Packet,
> + send_port: Port,
> + receive_port: Port,
> + duration: float = 1,
> + ) -> list[Packet]:
> + """Send a packet, return received traffic.
> +
> + Send a packet on the send_port and then return all traffic
> captured
> + on the receive_port for the given duration. Also record the
> captured traffic
> + in a pcap file.
> +
> + Args:
> + packet: The packet to send.
> + send_port: The egress port on the TG node.
> + receive_port: The ingress port in the TG node.
> + duration: Capture traffic for this amount of time after
> sending the packet.
> +
> + Returns:
> + A list of received packets. May be empty if no packets are
> captured.
> + """
> + return self.traffic_generator.send_packet_and_capture(
> + packet, send_port, receive_port, duration
> + )
> +
> + def close(self) -> None:
> + """Free all resources used by the node"""
> + self.traffic_generator.close()
> + super(TGNode, self).close()
> +
> +
> +def create_traffic_generator(
> + tg_node: TGNode, traffic_generator_config: ScapyTrafficGeneratorConfig
> +) -> CapturingTrafficGenerator:
> + """A factory function for creating traffic generator object from user
> config."""
> +
> + from .scapy import ScapyTrafficGenerator
> +
> + match traffic_generator_config.traffic_generator_type:
> + case TrafficGeneratorType.SCAPY:
> + return ScapyTrafficGenerator(tg_node,
> traffic_generator_config)
> + case _:
> + raise ConfigurationError(
> + "Unknown traffic generator: "
> + f"{traffic_generator_config.traffic_generator_type}"
> + )
Would it be possible here to do something like what we did in
create_interactive_shell with a TypeVar where we can initialize it
directly? It would change from using the enum to setting the
traffic_generator_config.traffic_generator_type to a specific class in the
config (in this case, ScapyTrafficGenerator), but I think it would be
possible to change in the from_dict method where we could set this type to
the class directly instead of the enum (or maybe have the enum relate its
values to the classes themselves).
I think this would make some things slightly more complicated (like how we
would map from conf.yaml to one of the classes and all of those needing to
be imported in config/__init__.py) but it would save developers in the
future from having to add to two different places (the enum in
config/__init__.py and this match statement) and save this list from being
arbitrarily long. I think this is fine for this patch but maybe when we
expand the traffic generator or the scapy generator it could be worth
thinking about.
Do you think it would make sense to change it in this way or would that be
somewhat unnecessary in your eyes?
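As a rough sketch of the idea (all names here are illustrative stand-ins, not
taken from the patch), the enum's values could be the generator classes
themselves, which lets the factory dispatch without a match statement:

```python
from enum import Enum


class TrafficGenerator:
    """Minimal stand-in for the base traffic generator."""

    def __init__(self, config: dict):
        self.config = config


class ScapyTrafficGenerator(TrafficGenerator):
    """Minimal stand-in for the Scapy implementation."""


class TrafficGeneratorType(Enum):
    # The enum value is the class itself, so adding a new traffic
    # generator only requires adding one enum member here.
    SCAPY = ScapyTrafficGenerator


def create_traffic_generator(
    tg_type: TrafficGeneratorType, config: dict
) -> TrafficGenerator:
    # Dispatch directly through the enum value; no match statement needed.
    return tg_type.value(config)


gen = create_traffic_generator(TrafficGeneratorType.SCAPY, {"os": "linux"})
```

The trade-off Jeremy mentions still applies: mapping conf.yaml strings to these
enum members would need handling in the config deserialization.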
> diff --git a/dts/framework/testbed_model/traffic_generator.py
> b/dts/framework/testbed_model/traffic_generator.py
> new file mode 100644
> index 0000000000..28c35d3ce4
> --- /dev/null
> +++ b/dts/framework/testbed_model/traffic_generator.py
> @@ -0,0 +1,72 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2022 University of New Hampshire
> +# Copyright(c) 2023 PANTHEON.tech s.r.o.
> +
> +"""The base traffic generator.
> +
> +These traffic generators can't capture received traffic,
> +only count the number of received packets.
> +"""
> +
> +from abc import ABC, abstractmethod
> +
> +from scapy.packet import Packet # type: ignore[import]
> +
> +from framework.logger import DTSLOG
> +from framework.utils import get_packet_summaries
> +
> +from .hw.port import Port
> +
> +
> +class TrafficGenerator(ABC):
> + """The base traffic generator.
> +
> + Defines the few basic methods that each traffic generator must
> implement.
> + """
> +
> + _logger: DTSLOG
> +
> + def send_packet(self, packet: Packet, port: Port) -> None:
> + """Send a packet and block until it is fully sent.
> +
> + What fully sent means is defined by the traffic generator.
> +
> + Args:
> + packet: The packet to send.
> + port: The egress port on the TG node.
> + """
> + self.send_packets([packet], port)
> +
> + def send_packets(self, packets: list[Packet], port: Port) -> None:
> + """Send packets and block until they are fully sent.
> +
> + What fully sent means is defined by the traffic generator.
> +
> + Args:
> + packets: The packets to send.
> + port: The egress port on the TG node.
> + """
> + self._logger.info(f"Sending packet{'s' if len(packets) > 1 else
> ''}.")
> + self._logger.debug(get_packet_summaries(packets))
> + self._send_packets(packets, port)
> +
> + @abstractmethod
> + def _send_packets(self, packets: list[Packet], port: Port) -> None:
> + """
> + The extended classes must implement this method which
> + sends packets on send_port. The method should block until all
> packets
> + are fully sent.
> + """
> +
> + @property
> + def is_capturing(self) -> bool:
> + """Whether this traffic generator can capture traffic.
> +
> + Returns:
> + True if the traffic generator can capture traffic, False
> otherwise.
> + """
> + return False
> +
> + @abstractmethod
> + def close(self) -> None:
> + """Free all resources used by the traffic generator."""
> diff --git a/dts/framework/utils.py b/dts/framework/utils.py
> index 60abe46edf..d27c2c5b5f 100644
> --- a/dts/framework/utils.py
> +++ b/dts/framework/utils.py
> @@ -4,6 +4,7 @@
> # Copyright(c) 2022-2023 University of New Hampshire
>
> import atexit
> +import json
> import os
> import subprocess
> import sys
> @@ -11,6 +12,8 @@
> from pathlib import Path
> from subprocess import SubprocessError
>
> +from scapy.packet import Packet # type: ignore[import]
> +
> from .exception import ConfigurationError
>
>
> @@ -64,6 +67,16 @@ def expand_range(range_str: str) -> list[int]:
> return expanded_range
>
>
> +def get_packet_summaries(packets: list[Packet]):
> + if len(packets) == 1:
> + packet_summaries = packets[0].summary()
> + else:
> + packet_summaries = json.dumps(
> + list(map(lambda pkt: pkt.summary(), packets)), indent=4
> + )
> + return f"Packet contents: \n{packet_summaries}"
> +
> +
> def RED(text: str) -> str:
> return f"\u001B[31;1m{str(text)}\u001B[0m"
>
> --
> 2.34.1
>
>
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v2 0/6] dts: tg abstractions and scapy tg
2023-07-17 11:07 ` [PATCH v2 0/6] dts: tg abstractions and scapy tg Juraj Linkeš
` (5 preceding siblings ...)
2023-07-17 11:07 ` [PATCH v2 6/6] dts: add basic UDP test case Juraj Linkeš
@ 2023-07-18 21:04 ` Jeremy Spewock
2023-07-19 14:12 ` [PATCH v3 " Juraj Linkeš
7 siblings, 0 replies; 29+ messages in thread
From: Jeremy Spewock @ 2023-07-18 21:04 UTC (permalink / raw)
To: Juraj Linkeš; +Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, probb, dev
This looked good to me; just a couple of minor comments (about docstrings
and their format) and one question that I left. Once this is rebased on the
newer version of my patch, it should be good.
On Mon, Jul 17, 2023 at 7:07 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
wrote:
> Add abstractions for traffic generator split into those that can and
> can't capture traffic.
>
> The Scapy implementation uses an XML-RPC server for remote control. This
> requires an interactive session to add Scapy functions to the server. The
> interactive session code is based on another patch [0].
>
> The basic test case is there to showcase the Scapy implementation - it
> sends just one UDP packet and verifies it on the other end.
>
> [0]:
> http://patches.dpdk.org/project/dpdk/patch/20230713165347.21997-3-jspewock@iol.unh.edu/
>
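A minimal, self-contained sketch of that XML-RPC round trip (the server here
runs in a local thread rather than in a remote SSH session, and the echo
function is a stand-in for the real Scapy send/receive helpers):

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer


def send_packets(packets_bytes):
    # Stand-in for the remote Scapy function: on the real TG node this
    # would rebuild scapy packets from the bytes and send them.
    return packets_bytes


# Bind to an ephemeral port and serve in a background thread.
server = SimpleXMLRPCServer(("127.0.0.1", 0), allow_none=True, logRequests=False)
server.register_function(send_packets)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The DTS side talks to the server through a local proxy. Packets are
# converted to bytes before the call; they arrive on the server (and come
# back to the client) as xmlrpc.client.Binary objects.
proxy = xmlrpc.client.ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.send_packets(xmlrpc.client.Binary(b"\xff\xff\xff\xff"))
server.shutdown()
```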
> Juraj Linkeš (6):
> dts: add scapy dependency
> dts: add traffic generator config
> dts: traffic generator abstractions
> dts: add python remote interactive shell
> dts: scapy traffic generator implementation
> dts: add basic UDP test case
>
> doc/guides/tools/dts.rst | 31 ++
> dts/conf.yaml | 27 +-
> dts/framework/config/__init__.py | 115 ++++---
> dts/framework/config/conf_yaml_schema.json | 32 +-
> dts/framework/dts.py | 65 ++--
> dts/framework/remote_session/__init__.py | 3 +-
> dts/framework/remote_session/linux_session.py | 96 ++++++
> dts/framework/remote_session/os_session.py | 75 +++--
> .../remote_session/remote/__init__.py | 1 +
> .../remote/interactive_shell.py | 18 +-
> .../remote_session/remote/python_shell.py | 24 ++
> .../remote_session/remote/testpmd_shell.py | 33 +-
> dts/framework/test_suite.py | 221 ++++++++++++-
> dts/framework/testbed_model/__init__.py | 1 +
> .../capturing_traffic_generator.py | 135 ++++++++
> dts/framework/testbed_model/hw/port.py | 60 ++++
> dts/framework/testbed_model/node.py | 64 +++-
> dts/framework/testbed_model/scapy.py | 290 ++++++++++++++++++
> dts/framework/testbed_model/sut_node.py | 52 ++--
> dts/framework/testbed_model/tg_node.py | 99 ++++++
> .../testbed_model/traffic_generator.py | 72 +++++
> dts/framework/utils.py | 13 +
> dts/poetry.lock | 21 +-
> dts/pyproject.toml | 1 +
> dts/tests/TestSuite_os_udp.py | 45 +++
> dts/tests/TestSuite_smoke_tests.py | 6 +-
> 26 files changed, 1434 insertions(+), 166 deletions(-)
> create mode 100644 dts/framework/remote_session/remote/python_shell.py
> create mode 100644
> dts/framework/testbed_model/capturing_traffic_generator.py
> create mode 100644 dts/framework/testbed_model/hw/port.py
> create mode 100644 dts/framework/testbed_model/scapy.py
> create mode 100644 dts/framework/testbed_model/tg_node.py
> create mode 100644 dts/framework/testbed_model/traffic_generator.py
> create mode 100644 dts/tests/TestSuite_os_udp.py
>
> --
> 2.34.1
>
>
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v2 2/6] dts: add traffic generator config
2023-07-18 15:55 ` Jeremy Spewock
@ 2023-07-19 12:57 ` Juraj Linkeš
2023-07-19 13:18 ` Jeremy Spewock
0 siblings, 1 reply; 29+ messages in thread
From: Juraj Linkeš @ 2023-07-19 12:57 UTC (permalink / raw)
To: Jeremy Spewock; +Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, probb, dev
On Tue, Jul 18, 2023 at 5:55 PM Jeremy Spewock <jspewock@iol.unh.edu> wrote:
>
> Hey Juraj,
>
> These changes look good to me; I just had one question. Is the plan to make specifying a TG always required, or optional for cases where it isn't needed? As written, it seems to be required for every execution. That isn't necessarily a bad thing; I just wonder whether it would be beneficial to make it optional, so that executions that don't utilize a TG can simply omit it.
>
If there's a legitimate use case (or we can imagine one) for this in
CI (or anywhere else), then we'd want to make the TG node optional.
Let's leave the implementation as is and revisit making the TG node
optional once we've discussed it further in the CI call. We talked
about a mechanism to skip test cases based on testbed capabilities,
and I think this would fit into that (as a TG could be viewed as a
testbed capability).
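A capability-based skip mechanism along those lines could look roughly like this — `TestbedCapability`, `requires`, and `should_skip` are hypothetical names used for illustration, not existing DTS framework API:

```python
from enum import Enum, auto
from typing import Callable


class TestbedCapability(Enum):
    """Hypothetical enumeration of what a testbed can provide."""

    TRAFFIC_GENERATOR = auto()


def requires(*capabilities: TestbedCapability) -> Callable:
    """Mark a test case with the testbed capabilities it needs."""

    def decorator(func: Callable) -> Callable:
        func.required_capabilities = set(capabilities)  # type: ignore[attr-defined]
        return func

    return decorator


def should_skip(test_case: Callable, available: set[TestbedCapability]) -> bool:
    """A test case is skipped when the testbed lacks any required capability."""
    required: set[TestbedCapability] = getattr(
        test_case, "required_capabilities", set()
    )
    return not required.issubset(available)


@requires(TestbedCapability.TRAFFIC_GENERATOR)
def test_os_udp() -> None:
    ...
```

With this shape, an execution configured without a TG node would simply report the UDP test as skipped instead of failing during setup.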
> On Mon, Jul 17, 2023 at 7:07 AM Juraj Linkeš <juraj.linkes@pantheon.tech> wrote:
>>
>> Node configuration - where to connect, what ports to use and what TG to
>> use.
>>
>> Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
>> ---
>> dts/conf.yaml | 26 ++++++-
>> dts/framework/config/__init__.py | 87 +++++++++++++++++++---
>> dts/framework/config/conf_yaml_schema.json | 29 +++++++-
>> dts/framework/dts.py | 12 +--
>> dts/framework/testbed_model/node.py | 4 +-
>> dts/framework/testbed_model/sut_node.py | 6 +-
>> 6 files changed, 141 insertions(+), 23 deletions(-)
>>
>> diff --git a/dts/conf.yaml b/dts/conf.yaml
>> index 3a5d87cb49..7f089022ba 100644
>> --- a/dts/conf.yaml
>> +++ b/dts/conf.yaml
>> @@ -13,10 +13,11 @@ executions:
>> skip_smoke_tests: false # optional flag that allow you to skip smoke tests
>> test_suites:
>> - hello_world
>> - system_under_test:
>> + system_under_test_node:
>> node_name: "SUT 1"
>> vdevs: # optional; if removed, vdevs won't be used in the execution
>> - "crypto_openssl"
>> + traffic_generator_node: "TG 1"
>> nodes:
>> - name: "SUT 1"
>> hostname: sut1.change.me.localhost
>> @@ -40,3 +41,26 @@ nodes:
>> os_driver: i40e
>> peer_node: "TG 1"
>> peer_pci: "0000:00:08.1"
>> + - name: "TG 1"
>> + hostname: tg1.change.me.localhost
>> + user: dtsuser
>> + arch: x86_64
>> + os: linux
>> + lcores: ""
>> + ports:
>> + - pci: "0000:00:08.0"
>> + os_driver_for_dpdk: rdma
>> + os_driver: rdma
>> + peer_node: "SUT 1"
>> + peer_pci: "0000:00:08.0"
>> + - pci: "0000:00:08.1"
>> + os_driver_for_dpdk: rdma
>> + os_driver: rdma
>> + peer_node: "SUT 1"
>> + peer_pci: "0000:00:08.1"
>> + use_first_core: false
>> + hugepages: # optional; if removed, will use system hugepage configuration
>> + amount: 256
>> + force_first_numa: false
>> + traffic_generator:
>> + type: SCAPY
>> diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
>> index fad56cc520..72aa021b97 100644
>> --- a/dts/framework/config/__init__.py
>> +++ b/dts/framework/config/__init__.py
>> @@ -13,7 +13,7 @@
>> from dataclasses import dataclass
>> from enum import Enum, auto, unique
>> from pathlib import PurePath
>> -from typing import Any, TypedDict
>> +from typing import Any, TypedDict, Union
>>
>> import warlock # type: ignore
>> import yaml
>> @@ -55,6 +55,11 @@ class Compiler(StrEnum):
>> msvc = auto()
>>
>>
>> +@unique
>> +class TrafficGeneratorType(StrEnum):
>> + SCAPY = auto()
>> +
>> +
>> # Slots enables some optimizations, by pre-allocating space for the defined
>> # attributes in the underlying data structure.
>> #
>> @@ -79,6 +84,29 @@ class PortConfig:
>> def from_dict(node: str, d: dict) -> "PortConfig":
>> return PortConfig(node=node, **d)
>>
>> + def __str__(self) -> str:
>> + return f"Port {self.pci} on node {self.node}"
>> +
>> +
>> +@dataclass(slots=True, frozen=True)
>> +class TrafficGeneratorConfig:
>> + traffic_generator_type: TrafficGeneratorType
>> +
>> + @staticmethod
>> + def from_dict(d: dict):
>> + # This looks useless now, but is designed to allow expansion to traffic
>> + # generators that require more configuration later.
>> + match TrafficGeneratorType(d["type"]):
>> + case TrafficGeneratorType.SCAPY:
>> + return ScapyTrafficGeneratorConfig(
>> + traffic_generator_type=TrafficGeneratorType.SCAPY
>> + )
>> +
>> +
>> +@dataclass(slots=True, frozen=True)
>> +class ScapyTrafficGeneratorConfig(TrafficGeneratorConfig):
>> + pass
>> +
>>
>> @dataclass(slots=True, frozen=True)
>> class NodeConfiguration:
>> @@ -90,17 +118,17 @@ class NodeConfiguration:
>> os: OS
>> lcores: str
>> use_first_core: bool
>> - memory_channels: int
>> hugepages: HugepageConfiguration | None
>> ports: list[PortConfig]
>>
>> @staticmethod
>> - def from_dict(d: dict) -> "NodeConfiguration":
>> + def from_dict(d: dict) -> Union["SutNodeConfiguration", "TGNodeConfiguration"]:
>> hugepage_config = d.get("hugepages")
>> if hugepage_config:
>> if "force_first_numa" not in hugepage_config:
>> hugepage_config["force_first_numa"] = False
>> hugepage_config = HugepageConfiguration(**hugepage_config)
>> +
>> common_config = {
>> "name": d["name"],
>> "hostname": d["hostname"],
>> @@ -110,12 +138,31 @@ def from_dict(d: dict) -> "NodeConfiguration":
>> "os": OS(d["os"]),
>> "lcores": d.get("lcores", "1"),
>> "use_first_core": d.get("use_first_core", False),
>> - "memory_channels": d.get("memory_channels", 1),
>> "hugepages": hugepage_config,
>> "ports": [PortConfig.from_dict(d["name"], port) for port in d["ports"]],
>> }
>>
>> - return NodeConfiguration(**common_config)
>> + if "traffic_generator" in d:
>> + return TGNodeConfiguration(
>> + traffic_generator=TrafficGeneratorConfig.from_dict(
>> + d["traffic_generator"]
>> + ),
>> + **common_config,
>> + )
>> + else:
>> + return SutNodeConfiguration(
>> + memory_channels=d.get("memory_channels", 1), **common_config
>> + )
>> +
>> +
>> +@dataclass(slots=True, frozen=True)
>> +class SutNodeConfiguration(NodeConfiguration):
>> + memory_channels: int
>> +
>> +
>> +@dataclass(slots=True, frozen=True)
>> +class TGNodeConfiguration(NodeConfiguration):
>> + traffic_generator: ScapyTrafficGeneratorConfig
>>
>>
>> @dataclass(slots=True, frozen=True)
>> @@ -194,23 +241,40 @@ class ExecutionConfiguration:
>> perf: bool
>> func: bool
>> test_suites: list[TestSuiteConfig]
>> - system_under_test: NodeConfiguration
>> + system_under_test_node: SutNodeConfiguration
>> + traffic_generator_node: TGNodeConfiguration
>> vdevs: list[str]
>> skip_smoke_tests: bool
>>
>> @staticmethod
>> - def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
>> + def from_dict(
>> + d: dict, node_map: dict[str, Union[SutNodeConfiguration | TGNodeConfiguration]]
>> + ) -> "ExecutionConfiguration":
>> build_targets: list[BuildTargetConfiguration] = list(
>> map(BuildTargetConfiguration.from_dict, d["build_targets"])
>> )
>> test_suites: list[TestSuiteConfig] = list(
>> map(TestSuiteConfig.from_dict, d["test_suites"])
>> )
>> - sut_name = d["system_under_test"]["node_name"]
>> + sut_name = d["system_under_test_node"]["node_name"]
>> skip_smoke_tests = d.get("skip_smoke_tests", False)
>> assert sut_name in node_map, f"Unknown SUT {sut_name} in execution {d}"
>> + system_under_test_node = node_map[sut_name]
>> + assert isinstance(
>> + system_under_test_node, SutNodeConfiguration
>> + ), f"Invalid SUT configuration {system_under_test_node}"
>> +
>> + tg_name = d["traffic_generator_node"]
>> + assert tg_name in node_map, f"Unknown TG {tg_name} in execution {d}"
>> + traffic_generator_node = node_map[tg_name]
>> + assert isinstance(
>> + traffic_generator_node, TGNodeConfiguration
>> + ), f"Invalid TG configuration {traffic_generator_node}"
>> +
>>
>> vdevs = (
>> - d["system_under_test"]["vdevs"] if "vdevs" in d["system_under_test"] else []
>> + d["system_under_test_node"]["vdevs"]
>> + if "vdevs" in d["system_under_test_node"]
>> + else []
>> )
>> return ExecutionConfiguration(
>> build_targets=build_targets,
>> @@ -218,7 +282,8 @@ def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
>> func=d["func"],
>> skip_smoke_tests=skip_smoke_tests,
>> test_suites=test_suites,
>> - system_under_test=node_map[sut_name],
>> + system_under_test_node=system_under_test_node,
>> + traffic_generator_node=traffic_generator_node,
>> vdevs=vdevs,
>> )
>>
>> @@ -229,7 +294,7 @@ class Configuration:
>>
>> @staticmethod
>> def from_dict(d: dict) -> "Configuration":
>> - nodes: list[NodeConfiguration] = list(
>> + nodes: list[Union[SutNodeConfiguration | TGNodeConfiguration]] = list(
>> map(NodeConfiguration.from_dict, d["nodes"])
>> )
>> assert len(nodes) > 0, "There must be a node to test"
>> diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
>> index 61f52b4365..76df84840a 100644
>> --- a/dts/framework/config/conf_yaml_schema.json
>> +++ b/dts/framework/config/conf_yaml_schema.json
>> @@ -164,6 +164,11 @@
>> "amount"
>> ]
>> },
>> + "mac_address": {
>> + "type": "string",
>> + "description": "A MAC address",
>> + "pattern": "^([0-9A-Fa-f]{2}[:-]){5}([0-9A-Fa-f]{2})$"
>> + },
>> "pci_address": {
>> "type": "string",
>> "pattern": "^[\\da-fA-F]{4}:[\\da-fA-F]{2}:[\\da-fA-F]{2}.\\d:?\\w*$"
>> @@ -286,6 +291,22 @@
>> ]
>> },
>> "minimum": 1
>> + },
>> + "traffic_generator": {
>> + "oneOf": [
>> + {
>> + "type": "object",
>> + "description": "Scapy traffic generator. Used for functional testing.",
>> + "properties": {
>> + "type": {
>> + "type": "string",
>> + "enum": [
>> + "SCAPY"
>> + ]
>> + }
>> + }
>> + }
>> + ]
>> }
>> },
>> "additionalProperties": false,
>> @@ -336,7 +357,7 @@
>> "description": "Optional field that allows you to skip smoke testing",
>> "type": "boolean"
>> },
>> - "system_under_test": {
>> + "system_under_test_node": {
>> "type":"object",
>> "properties": {
>> "node_name": {
>> @@ -353,6 +374,9 @@
>> "required": [
>> "node_name"
>> ]
>> + },
>> + "traffic_generator_node": {
>> + "$ref": "#/definitions/node_name"
>> }
>> },
>> "additionalProperties": false,
>> @@ -361,7 +385,8 @@
>> "perf",
>> "func",
>> "test_suites",
>> - "system_under_test"
>> + "system_under_test_node",
>> + "traffic_generator_node"
>> ]
>> },
>> "minimum": 1
>> diff --git a/dts/framework/dts.py b/dts/framework/dts.py
>> index 7b09d8fba8..372bc72787 100644
>> --- a/dts/framework/dts.py
>> +++ b/dts/framework/dts.py
>> @@ -38,17 +38,17 @@ def run_all() -> None:
>> # for all Execution sections
>> for execution in CONFIGURATION.executions:
>> sut_node = None
>> - if execution.system_under_test.name in nodes:
>> + if execution.system_under_test_node.name in nodes:
>> # a Node with the same name already exists
>> - sut_node = nodes[execution.system_under_test.name]
>> + sut_node = nodes[execution.system_under_test_node.name]
>> else:
>> # the SUT has not been initialized yet
>> try:
>> - sut_node = SutNode(execution.system_under_test)
>> + sut_node = SutNode(execution.system_under_test_node)
>> result.update_setup(Result.PASS)
>> except Exception as e:
>> dts_logger.exception(
>> - f"Connection to node {execution.system_under_test} failed."
>> + f"Connection to node {execution.system_under_test_node} failed."
>> )
>> result.update_setup(Result.FAIL, e)
>> else:
>> @@ -87,7 +87,9 @@ def _run_execution(
>> Run the given execution. This involves running the execution setup as well as
>> running all build targets in the given execution.
>> """
>> - dts_logger.info(f"Running execution with SUT '{execution.system_under_test.name}'.")
>> + dts_logger.info(
>> + f"Running execution with SUT '{execution.system_under_test_node.name}'."
>> + )
>> execution_result = result.add_execution(sut_node.config, sut_node.node_info)
>>
>> try:
>> diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
>> index c5147e0ee6..d2d55d904e 100644
>> --- a/dts/framework/testbed_model/node.py
>> +++ b/dts/framework/testbed_model/node.py
>> @@ -48,6 +48,8 @@ def __init__(self, node_config: NodeConfiguration):
>> self._logger = getLogger(self.name)
>> self.main_session = create_session(self.config, self.name, self._logger)
>>
>> + self._logger.info(f"Connected to node: {self.name}")
>> +
>> self._get_remote_cpus()
>> # filter the node lcores according to user config
>> self.lcores = LogicalCoreListFilter(
>> @@ -56,8 +58,6 @@ def __init__(self, node_config: NodeConfiguration):
>>
>> self._other_sessions = []
>>
>> - self._logger.info(f"Created node: {self.name}")
>> -
>> def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
>> """
>> Perform the execution setup that will be done for each execution
>> diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
>> index 53953718a1..bcad364435 100644
>> --- a/dts/framework/testbed_model/sut_node.py
>> +++ b/dts/framework/testbed_model/sut_node.py
>> @@ -13,8 +13,8 @@
>> BuildTargetConfiguration,
>> BuildTargetInfo,
>> InteractiveApp,
>> - NodeConfiguration,
>> NodeInfo,
>> + SutNodeConfiguration,
>> )
>> from framework.remote_session import (
>> CommandResult,
>> @@ -83,6 +83,7 @@ class SutNode(Node):
>> Another key capability is building DPDK according to given build target.
>> """
>>
>> + config: SutNodeConfiguration
>> _dpdk_prefix_list: list[str]
>> _dpdk_timestamp: str
>> _build_target_config: BuildTargetConfiguration | None
>> @@ -95,7 +96,7 @@ class SutNode(Node):
>> _node_info: NodeInfo | None
>> _compiler_version: str | None
>>
>> - def __init__(self, node_config: NodeConfiguration):
>> + def __init__(self, node_config: SutNodeConfiguration):
>> super(SutNode, self).__init__(node_config)
>> self._dpdk_prefix_list = []
>> self._build_target_config = None
>> @@ -110,6 +111,7 @@ def __init__(self, node_config: NodeConfiguration):
>> self._dpdk_version = None
>> self._node_info = None
>> self._compiler_version = None
>> + self._logger.info(f"Created node: {self.name}")
>>
>> @property
>> def _remote_dpdk_dir(self) -> PurePath:
>> --
>> 2.34.1
>>
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v2 2/6] dts: add traffic generator config
2023-07-19 12:57 ` Juraj Linkeš
@ 2023-07-19 13:18 ` Jeremy Spewock
0 siblings, 0 replies; 29+ messages in thread
From: Jeremy Spewock @ 2023-07-19 13:18 UTC (permalink / raw)
To: Juraj Linkeš
Cc: Thomas Monjalon, Honnappa Nagarahalli, Tu, Lijuan, Patrick Robb, dev
On Wed, Jul 19, 2023, 08:57 Juraj Linkeš <juraj.linkes@pantheon.tech> wrote:
> On Tue, Jul 18, 2023 at 5:55 PM Jeremy Spewock <jspewock@iol.unh.edu>
> wrote:
> >
> > Hey Juraj,
> >
> > These changes look good to me, I just had one question. Is the plan to
> make specifying a TG always required or optional for cases where it isn't
> required? It seems like it is written now as something that is required for
> every execution. I don't think this is necessarily a bad thing, I just
> wonder if it would be beneficial at all to make it optional to allow for
> executions that don't utilize a TG to just not specify it.
> >
>
> If there's a legitimate use case (or we can imagine one) for this in
> CI (or anywhere else), then we'd want to make a TG node optional.
> Let's leave the implementation as is and make the TG node optional
> once we discuss it further in the CI call. We talked about a mechanism
> to skip test cases based on the testbed capabilities and I think this
> would fit into that (as TG could be viewed as a testbed capability).
>
I like the idea of treating it as a testbed capability and skipping
based on that. I think that would address this nicely if a use case
arises. Thanks for the input!
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v2 3/6] dts: traffic generator abstractions
2023-07-18 19:56 ` Jeremy Spewock
@ 2023-07-19 13:23 ` Juraj Linkeš
0 siblings, 0 replies; 29+ messages in thread
From: Juraj Linkeš @ 2023-07-19 13:23 UTC (permalink / raw)
To: Jeremy Spewock; +Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, probb, dev
On Tue, Jul 18, 2023 at 9:56 PM Jeremy Spewock <jspewock@iol.unh.edu> wrote:
>
>
> Hey Juraj,
>
> Just a couple of comments below, all very minor: a few docstrings I commented on, and one question about the factory for traffic generators that I'd like your thoughts on. More below:
>
<snip>
>> diff --git a/dts/framework/testbed_model/capturing_traffic_generator.py b/dts/framework/testbed_model/capturing_traffic_generator.py
>> new file mode 100644
>> index 0000000000..1130d87f1e
>> --- /dev/null
>> +++ b/dts/framework/testbed_model/capturing_traffic_generator.py
>> @@ -0,0 +1,135 @@
>> +# SPDX-License-Identifier: BSD-3-Clause
>> +# Copyright(c) 2022 University of New Hampshire
>> +# Copyright(c) 2023 PANTHEON.tech s.r.o.
>> +
>> +"""Traffic generator that can capture packets.
>> +
>> +In functional testing, we need to interrogate received packets to check their validity.
>> +Here we define the interface common to all
>> +traffic generators capable of capturing traffic.
>
>
> Is there a reason for the line break here? Just to keep things consistent, I think it might make sense to extend this line to the same length as the one above.
>
The reason is that the full line would exceed the character limit. I
was using Thomas's heuristic of splitting lines where it makes
logical sense, but it does look somewhat wonky, so I'll try to
reformat it more sensibly.
>>
>> +"""
>> +
>> +import uuid
>> +from abc import abstractmethod
>> +
>> +import scapy.utils # type: ignore[import]
>> +from scapy.packet import Packet # type: ignore[import]
>> +
>> +from framework.settings import SETTINGS
>> +from framework.utils import get_packet_summaries
>> +
>> +from .hw.port import Port
>> +from .traffic_generator import TrafficGenerator
>> +
>> +
>> +def _get_default_capture_name() -> str:
>> + """
>> + This is the function used for the default implementation of capture names.
>> + """
>> + return str(uuid.uuid4())
>> +
>> +
>> +class CapturingTrafficGenerator(TrafficGenerator):
>> + """
>> + A mixin interface which enables a packet generator to declare that it can capture
>> + packets and return them to the user.
>
>
> This is missing the one line summary at the top of the comment. Obviously this is not a big issue, but we likely would want this to be uniform with the rest of the module which does have the summary at the top.
>
Good catch. I plan on reviewing all of the docstrings when updating
their format, so this means less work in the future. :-)
>>
>> +
>> + The methods of capturing traffic generators obey the following workflow:
>> + 1. send packets
>> + 2. capture packets
>> + 3. write the capture to a .pcap file
>> + 4. return the received packets
>> + """
>> +
>> + @property
>> + def is_capturing(self) -> bool:
>> + return True
>> +
>> + def send_packet_and_capture(
>> + self,
>> + packet: Packet,
>> + send_port: Port,
>> + receive_port: Port,
>> + duration: float,
>> + capture_name: str = _get_default_capture_name(),
>> + ) -> list[Packet]:
>> + """Send a packet, return received traffic.
>> +
>> + Send a packet on the send_port and then return all traffic captured
>> + on the receive_port for the given duration. Also record the captured traffic
>> + in a pcap file.
>> +
>>
>> + Args:
>> + packet: The packet to send.
>> + send_port: The egress port on the TG node.
>> + receive_port: The ingress port in the TG node.
>> + duration: Capture traffic for this amount of time after sending the packet.
>> + capture_name: The name of the .pcap file where to store the capture.
>> +
>> + Returns:
>> + A list of received packets. May be empty if no packets are captured.
>> + """
>> + return self.send_packets_and_capture(
>> + [packet], send_port, receive_port, duration, capture_name
>> + )
>> +
>> + def send_packets_and_capture(
>> + self,
>> + packets: list[Packet],
>> + send_port: Port,
>> + receive_port: Port,
>> + duration: float,
>> + capture_name: str = _get_default_capture_name(),
>> + ) -> list[Packet]:
>> + """Send packets, return received traffic.
>> +
>> + Send packets on the send_port and then return all traffic captured
>> + on the receive_port for the given duration. Also record the captured traffic
>> + in a pcap file.
>> +
>> + Args:
>> + packets: The packets to send.
>> + send_port: The egress port on the TG node.
>> + receive_port: The ingress port in the TG node.
>> + duration: Capture traffic for this amount of time after sending the packets.
>> + capture_name: The name of the .pcap file where to store the capture.
>> +
>> + Returns:
>> + A list of received packets. May be empty if no packets are captured.
>> + """
>> + self._logger.debug(get_packet_summaries(packets))
>> + self._logger.debug(
>> + f"Sending packet on {send_port.logical_name}, "
>> + f"receiving on {receive_port.logical_name}."
>> + )
>> + received_packets = self._send_packets_and_capture(
>> + packets,
>> + send_port,
>> + receive_port,
>> + duration,
>> + )
>> +
>> + self._logger.debug(
>> + f"Received packets: {get_packet_summaries(received_packets)}"
>> + )
>> + self._write_capture_from_packets(capture_name, received_packets)
>> + return received_packets
>> +
>> + @abstractmethod
>> + def _send_packets_and_capture(
>> + self,
>> + packets: list[Packet],
>> + send_port: Port,
>> + receive_port: Port,
>> + duration: float,
>> + ) -> list[Packet]:
>> + """
>> + The extended classes must implement this method which
>> + sends packets on send_port and receives packets on the receive_port
>> + for the specified duration. It must be able to handle no received packets.
>> + """
>> +
>> + def _write_capture_from_packets(self, capture_name: str, packets: list[Packet]):
>> + file_name = f"{SETTINGS.output_dir}/{capture_name}.pcap"
>> + self._logger.debug(f"Writing packets to {file_name}.")
>> + scapy.utils.wrpcap(file_name, packets)
<snip>
>> diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
>> new file mode 100644
>> index 0000000000..27025cfa31
>> --- /dev/null
>> +++ b/dts/framework/testbed_model/tg_node.py
>> @@ -0,0 +1,99 @@
>> +# SPDX-License-Identifier: BSD-3-Clause
>> +# Copyright(c) 2010-2014 Intel Corporation
>> +# Copyright(c) 2022 University of New Hampshire
>> +# Copyright(c) 2023 PANTHEON.tech s.r.o.
>> +
>> +"""Traffic generator node.
>> +
>> +This is the node where the traffic generator resides.
>> +The distinction between a node and a traffic generator is as follows:
>> +A node is a host that DTS connects to. It could be a baremetal server,
>> +a VM or a container.
>> +A traffic generator is software running on the node.
>> +A traffic generator node is a node running a traffic generator.
>> +A node can be a traffic generator node as well as system under test node.
>> +"""
>> +
>> +from scapy.packet import Packet # type: ignore[import]
>> +
>> +from framework.config import (
>> + ScapyTrafficGeneratorConfig,
>> + TGNodeConfiguration,
>> + TrafficGeneratorType,
>> +)
>> +from framework.exception import ConfigurationError
>> +
>> +from .capturing_traffic_generator import CapturingTrafficGenerator
>> +from .hw.port import Port
>> +from .node import Node
>> +
>> +
>> +class TGNode(Node):
>> + """Manage connections to a node with a traffic generator.
>> +
>> + Apart from basic node management capabilities, the Traffic Generator node has
>> + specialized methods for handling the traffic generator running on it.
>> +
>> + Arguments:
>> + node_config: The user configuration of the traffic generator node.
>> +
>> + Attributes:
>> + traffic_generator: The traffic generator running on the node.
>> + """
>> +
>> + traffic_generator: CapturingTrafficGenerator
>> +
>> + def __init__(self, node_config: TGNodeConfiguration):
>> + super(TGNode, self).__init__(node_config)
>> + self.traffic_generator = create_traffic_generator(
>> + self, node_config.traffic_generator
>> + )
>> + self._logger.info(f"Created node: {self.name}")
>> +
>> + def send_packet_and_capture(
>> + self,
>> + packet: Packet,
>> + send_port: Port,
>> + receive_port: Port,
>> + duration: float = 1,
>> + ) -> list[Packet]:
>> + """Send a packet, return received traffic.
>> +
>> + Send a packet on the send_port and then return all traffic captured
>> + on the receive_port for the given duration. Also record the captured traffic
>> + in a pcap file.
>> +
>> + Args:
>> + packet: The packet to send.
>> + send_port: The egress port on the TG node.
>> + receive_port: The ingress port in the TG node.
>> + duration: Capture traffic for this amount of time after sending the packet.
>> +
>> + Returns:
>> + A list of received packets. May be empty if no packets are captured.
>> + """
>> + return self.traffic_generator.send_packet_and_capture(
>> + packet, send_port, receive_port, duration
>> + )
>> +
>> + def close(self) -> None:
>> + """Free all resources used by the node"""
>> + self.traffic_generator.close()
>> + super(TGNode, self).close()
>> +
>> +
>> +def create_traffic_generator(
>> + tg_node: TGNode, traffic_generator_config: ScapyTrafficGeneratorConfig
>> +) -> CapturingTrafficGenerator:
>> + """A factory function for creating traffic generator object from user config."""
>> +
>> + from .scapy import ScapyTrafficGenerator
>> +
>> + match traffic_generator_config.traffic_generator_type:
>> + case TrafficGeneratorType.SCAPY:
>> + return ScapyTrafficGenerator(tg_node, traffic_generator_config)
>> + case _:
>> + raise ConfigurationError(
>> + "Unknown traffic generator: "
>> + f"{traffic_generator_config.traffic_generator_type}"
>> + )
>
>
> Would it be possible here to do something like what we did in create_interactive_shell with a TypeVar, where we can initialize it directly? It would change from using the enum to setting traffic_generator_config.traffic_generator_type to a specific class in the config (in this case, ScapyTrafficGenerator), but I think it would be possible to change the from_dict method so that we could set this type to the class directly instead of the enum (or maybe have the enum relate its values to the classes themselves).
>
> I think this would make some things slightly more complicated (like how we would map from conf.yaml to one of the classes, and all of those needing to be imported in config/__init__.py), but it would save future developers from having to add to two different places (the enum in config/__init__.py and this match statement) and save this list from growing arbitrarily long. I think this is fine for this patch, but maybe when we expand the traffic generator or the scapy generator it could be worth thinking about.
>
> Do you think it would make sense to change it in this way or would that be somewhat unnecessary in your eyes?
>
That would be great, but the problem with using any object from the
framework in config is cyclical imports. Almost everything imports
config, so config can't import anything from the framework, at least
on the top level.
But it looks like it can be done with imports in the from_dict method.
This is an interesting approach; I'll think about it. The best course
would probably be to leave this as is and use the approach for more
things than just the traffic generator.
There's one more thing to consider: whether the imports will work
within the constraints imposed by documentation generation. I'm
thinking let's leave this as is and try it when updating the code for
the docs' needs.
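For illustration, the enum-to-class mapping discussed above could be sketched roughly as below. All names are stand-ins for the real DTS code, and in an actual tree the registry (or the imports it needs) would be resolved lazily inside the factory to sidestep the config/framework import cycle:

```python
from dataclasses import dataclass
from enum import Enum, auto


class TrafficGeneratorType(Enum):
    SCAPY = auto()


class ScapyTrafficGenerator:  # stand-in for the real implementation class
    def __init__(self, node, config):
        self.node = node
        self.config = config


# Single registration point: a new generator needs one enum member and one
# dict entry here, instead of also extending a separate match statement.
_TG_IMPLEMENTATIONS: dict[TrafficGeneratorType, type] = {
    TrafficGeneratorType.SCAPY: ScapyTrafficGenerator,
}


@dataclass(frozen=True)
class TrafficGeneratorConfig:
    traffic_generator_type: TrafficGeneratorType


def create_traffic_generator(node, config: TrafficGeneratorConfig):
    try:
        cls = _TG_IMPLEMENTATIONS[config.traffic_generator_type]
    except KeyError:
        raise ValueError(
            f"Unknown traffic generator: {config.traffic_generator_type}"
        )
    return cls(node, config)


tg = create_traffic_generator(
    "tg-node", TrafficGeneratorConfig(TrafficGeneratorType.SCAPY)
)
print(type(tg).__name__)
```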
>>
>> diff --git a/dts/framework/testbed_model/traffic_generator.py b/dts/framework/testbed_model/traffic_generator.py
>> new file mode 100644
>> index 0000000000..28c35d3ce4
>> --- /dev/null
>> +++ b/dts/framework/testbed_model/traffic_generator.py
>> @@ -0,0 +1,72 @@
>> +# SPDX-License-Identifier: BSD-3-Clause
>> +# Copyright(c) 2022 University of New Hampshire
>> +# Copyright(c) 2023 PANTHEON.tech s.r.o.
>> +
>> +"""The base traffic generator.
>> +
>> +These traffic generators can't capture received traffic,
>> +only count the number of received packets.
>> +"""
>> +
>> +from abc import ABC, abstractmethod
>> +
>> +from scapy.packet import Packet # type: ignore[import]
>> +
>> +from framework.logger import DTSLOG
>> +from framework.utils import get_packet_summaries
>> +
>> +from .hw.port import Port
>> +
>> +
>> +class TrafficGenerator(ABC):
>> + """The base traffic generator.
>> +
>> + Defines the few basic methods that each traffic generator must implement.
>> + """
>> +
>> + _logger: DTSLOG
>> +
>> + def send_packet(self, packet: Packet, port: Port) -> None:
>> + """Send a packet and block until it is fully sent.
>> +
>> + What fully sent means is defined by the traffic generator.
>> +
>> + Args:
>> + packet: The packet to send.
>> + port: The egress port on the TG node.
>> + """
>> + self.send_packets([packet], port)
>> +
>> + def send_packets(self, packets: list[Packet], port: Port) -> None:
>> + """Send packets and block until they are fully sent.
>> +
>> + What fully sent means is defined by the traffic generator.
>> +
>> + Args:
>> + packets: The packets to send.
>> + port: The egress port on the TG node.
>> + """
>> + self._logger.info(f"Sending packet{'s' if len(packets) > 1 else ''}.")
>> + self._logger.debug(get_packet_summaries(packets))
>> + self._send_packets(packets, port)
>> +
>> + @abstractmethod
>> + def _send_packets(self, packets: list[Packet], port: Port) -> None:
>> + """
>> + The extended classes must implement this method which
>> + sends packets on send_port. The method should block until all packets
>> + are fully sent.
>> + """
>> +
>> + @property
>> + def is_capturing(self) -> bool:
>> + """Whether this traffic generator can capture traffic.
>> +
>> + Returns:
>> + True if the traffic generator can capture traffic, False otherwise.
>> + """
>> + return False
>> +
>> + @abstractmethod
>> + def close(self) -> None:
>> + """Free all resources used by the traffic generator."""
>> diff --git a/dts/framework/utils.py b/dts/framework/utils.py
>> index 60abe46edf..d27c2c5b5f 100644
>> --- a/dts/framework/utils.py
>> +++ b/dts/framework/utils.py
>> @@ -4,6 +4,7 @@
>> # Copyright(c) 2022-2023 University of New Hampshire
>>
>> import atexit
>> +import json
>> import os
>> import subprocess
>> import sys
>> @@ -11,6 +12,8 @@
>> from pathlib import Path
>> from subprocess import SubprocessError
>>
>> +from scapy.packet import Packet # type: ignore[import]
>> +
>> from .exception import ConfigurationError
>>
>>
>> @@ -64,6 +67,16 @@ def expand_range(range_str: str) -> list[int]:
>> return expanded_range
>>
>>
>> +def get_packet_summaries(packets: list[Packet]):
>> + if len(packets) == 1:
>> + packet_summaries = packets[0].summary()
>> + else:
>> + packet_summaries = json.dumps(
>> + list(map(lambda pkt: pkt.summary(), packets)), indent=4
>> + )
>> + return f"Packet contents: \n{packet_summaries}"
>> +
>> +
>> def RED(text: str) -> str:
>> return f"\u001B[31;1m{str(text)}\u001B[0m"
>>
>> --
>> 2.34.1
>>
* [PATCH v3 0/6] dts: tg abstractions and scapy tg
2023-07-17 11:07 ` [PATCH v2 0/6] dts: tg abstractions and scapy tg Juraj Linkeš
` (6 preceding siblings ...)
2023-07-18 21:04 ` [PATCH v2 0/6] dts: tg abstractions and scapy tg Jeremy Spewock
@ 2023-07-19 14:12 ` Juraj Linkeš
2023-07-19 14:12 ` [PATCH v3 1/6] dts: add scapy dependency Juraj Linkeš
` (6 more replies)
7 siblings, 7 replies; 29+ messages in thread
From: Juraj Linkeš @ 2023-07-19 14:12 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
jspewock, probb
Cc: dev, Juraj Linkeš
Add abstractions for traffic generators, split into those that can and
those that can't capture traffic.
The Scapy implementation uses an XML-RPC server for remote control. This
requires an interactive session to add Scapy functions to the server. The
interactive session code is based on another patch [0].
The basic test case is there to showcase the Scapy implementation - it
sends just one UDP packet and verifies it on the other end.
[0]: http://patches.dpdk.org/project/dpdk/patch/20230718214802.3214-3-jspewock@iol.unh.edu/
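The remote-control mechanism mentioned above can be sketched with the standard library alone; the registered function below is a hypothetical stand-in (the real patch registers actual Scapy send/sniff code on the TG node through the interactive session):

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer


def scapy_send(packet_summary: str, iface: str) -> str:
    # Stand-in: a real server would build and send a Scapy packet here.
    return f"sent {packet_summary} on {iface}"


# Bind to an ephemeral port and serve in the background.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False, allow_none=True)
server.register_function(scapy_send)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The framework side would talk to the TG node through a proxy like this.
proxy = xmlrpc.client.ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.scapy_send("Ether()/IP()/UDP()", "eth0")
print(result)  # sent Ether()/IP()/UDP() on eth0
server.shutdown()
```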
Juraj Linkeš (6):
dts: add scapy dependency
dts: add traffic generator config
dts: traffic generator abstractions
dts: add python remote interactive shell
dts: scapy traffic generator implementation
dts: add basic UDP test case
doc/guides/tools/dts.rst | 31 ++
dts/conf.yaml | 27 +-
dts/framework/config/__init__.py | 84 ++++-
dts/framework/config/conf_yaml_schema.json | 32 +-
dts/framework/dts.py | 65 ++--
dts/framework/remote_session/__init__.py | 1 +
dts/framework/remote_session/linux_session.py | 96 ++++++
dts/framework/remote_session/os_session.py | 35 ++-
.../remote_session/remote/__init__.py | 1 +
.../remote_session/remote/python_shell.py | 12 +
dts/framework/test_suite.py | 221 ++++++++++++-
dts/framework/testbed_model/__init__.py | 1 +
.../capturing_traffic_generator.py | 136 ++++++++
dts/framework/testbed_model/hw/port.py | 60 ++++
dts/framework/testbed_model/node.py | 34 +-
dts/framework/testbed_model/scapy.py | 290 ++++++++++++++++++
dts/framework/testbed_model/sut_node.py | 9 +-
dts/framework/testbed_model/tg_node.py | 99 ++++++
.../testbed_model/traffic_generator.py | 72 +++++
dts/framework/utils.py | 13 +
dts/poetry.lock | 21 +-
dts/pyproject.toml | 1 +
dts/tests/TestSuite_os_udp.py | 45 +++
23 files changed, 1329 insertions(+), 57 deletions(-)
create mode 100644 dts/framework/remote_session/remote/python_shell.py
create mode 100644 dts/framework/testbed_model/capturing_traffic_generator.py
create mode 100644 dts/framework/testbed_model/hw/port.py
create mode 100644 dts/framework/testbed_model/scapy.py
create mode 100644 dts/framework/testbed_model/tg_node.py
create mode 100644 dts/framework/testbed_model/traffic_generator.py
create mode 100644 dts/tests/TestSuite_os_udp.py
--
2.34.1
* [PATCH v3 1/6] dts: add scapy dependency
2023-07-19 14:12 ` [PATCH v3 " Juraj Linkeš
@ 2023-07-19 14:12 ` Juraj Linkeš
2023-07-19 14:12 ` [PATCH v3 2/6] dts: add traffic generator config Juraj Linkeš
` (5 subsequent siblings)
6 siblings, 0 replies; 29+ messages in thread
From: Juraj Linkeš @ 2023-07-19 14:12 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
jspewock, probb
Cc: dev, Juraj Linkeš
Required for the Scapy traffic generator.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/poetry.lock | 21 ++++++++++++++++-----
dts/pyproject.toml | 1 +
2 files changed, 17 insertions(+), 5 deletions(-)
diff --git a/dts/poetry.lock b/dts/poetry.lock
index 2438f337cd..8cb9920ec7 100644
--- a/dts/poetry.lock
+++ b/dts/poetry.lock
@@ -346,6 +346,19 @@ category = "main"
optional = false
python-versions = ">=3.6"
+[[package]]
+name = "scapy"
+version = "2.5.0"
+description = "Scapy: interactive packet manipulation tool"
+category = "main"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4"
+
+[package.extras]
+basic = ["ipython"]
+complete = ["ipython", "pyx", "cryptography (>=2.0)", "matplotlib"]
+docs = ["sphinx (>=3.0.0)", "sphinx_rtd_theme (>=0.4.3)", "tox (>=3.0.0)"]
+
[[package]]
name = "six"
version = "1.16.0"
@@ -409,7 +422,7 @@ jsonschema = ">=4,<5"
[metadata]
lock-version = "1.1"
python-versions = "^3.10"
-content-hash = "719c43bcaa5d181921debda884f8f714063df0b2336d61e9f64ecab034e8b139"
+content-hash = "907bf4ae92b05bbdb7cf2f37fc63e530702f1fff9990afa1f8e6c369b97ba592"
[metadata.files]
attrs = []
@@ -431,10 +444,7 @@ mypy-extensions = []
paramiko = []
pathlib2 = []
pathspec = []
-platformdirs = [
- {file = "platformdirs-2.5.2-py3-none-any.whl", hash = "sha256:027d8e83a2d7de06bbac4e5ef7e023c02b863d7ea5d079477e722bb41ab25788"},
- {file = "platformdirs-2.5.2.tar.gz", hash = "sha256:58c8abb07dcb441e6ee4b11d8df0ac856038f944ab98b7be6b27b2a3c7feef19"},
-]
+platformdirs = []
pycodestyle = []
pycparser = []
pydocstyle = []
@@ -443,6 +453,7 @@ pylama = []
pynacl = []
pyrsistent = []
pyyaml = []
+scapy = []
six = []
snowballstemmer = []
toml = []
diff --git a/dts/pyproject.toml b/dts/pyproject.toml
index 50bcdb327a..bd7591f7fb 100644
--- a/dts/pyproject.toml
+++ b/dts/pyproject.toml
@@ -13,6 +13,7 @@ warlock = "^2.0.1"
PyYAML = "^6.0"
types-PyYAML = "^6.0.8"
fabric = "^2.7.1"
+scapy = "^2.5.0"
[tool.poetry.dev-dependencies]
mypy = "^0.961"
--
2.34.1
* [PATCH v3 2/6] dts: add traffic generator config
2023-07-19 14:12 ` [PATCH v3 " Juraj Linkeš
2023-07-19 14:12 ` [PATCH v3 1/6] dts: add scapy dependency Juraj Linkeš
@ 2023-07-19 14:12 ` Juraj Linkeš
2023-07-19 14:13 ` [PATCH v3 3/6] dts: traffic generator abstractions Juraj Linkeš
` (4 subsequent siblings)
6 siblings, 0 replies; 29+ messages in thread
From: Juraj Linkeš @ 2023-07-19 14:12 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
jspewock, probb
Cc: dev, Juraj Linkeš
Node configuration - where to connect, what ports to use and what TG to
use.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/conf.yaml | 26 ++++++-
dts/framework/config/__init__.py | 84 +++++++++++++++++++---
dts/framework/config/conf_yaml_schema.json | 29 +++++++-
dts/framework/dts.py | 12 ++--
dts/framework/testbed_model/sut_node.py | 5 +-
5 files changed, 135 insertions(+), 21 deletions(-)
diff --git a/dts/conf.yaml b/dts/conf.yaml
index 0825d958a6..0440d1d20a 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -13,10 +13,11 @@ executions:
skip_smoke_tests: false # optional flag that allows you to skip smoke tests
test_suites:
- hello_world
- system_under_test:
+ system_under_test_node:
node_name: "SUT 1"
vdevs: # optional; if removed, vdevs won't be used in the execution
- "crypto_openssl"
+ traffic_generator_node: "TG 1"
nodes:
- name: "SUT 1"
hostname: sut1.change.me.localhost
@@ -40,3 +41,26 @@ nodes:
os_driver: i40e
peer_node: "TG 1"
peer_pci: "0000:00:08.1"
+ - name: "TG 1"
+ hostname: tg1.change.me.localhost
+ user: dtsuser
+ arch: x86_64
+ os: linux
+ lcores: ""
+ ports:
+ - pci: "0000:00:08.0"
+ os_driver_for_dpdk: rdma
+ os_driver: rdma
+ peer_node: "SUT 1"
+ peer_pci: "0000:00:08.0"
+ - pci: "0000:00:08.1"
+ os_driver_for_dpdk: rdma
+ os_driver: rdma
+ peer_node: "SUT 1"
+ peer_pci: "0000:00:08.1"
+ use_first_core: false
+ hugepages: # optional; if removed, will use system hugepage configuration
+ amount: 256
+ force_first_numa: false
+ traffic_generator:
+ type: SCAPY
diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index aa7ff358a2..9b5aff8509 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -12,7 +12,7 @@
import pathlib
from dataclasses import dataclass
from enum import auto, unique
-from typing import Any, TypedDict
+from typing import Any, TypedDict, Union
import warlock # type: ignore
import yaml
@@ -54,6 +54,11 @@ class Compiler(StrEnum):
msvc = auto()
+@unique
+class TrafficGeneratorType(StrEnum):
+ SCAPY = auto()
+
+
# Slots enables some optimizations, by pre-allocating space for the defined
# attributes in the underlying data structure.
#
@@ -79,6 +84,26 @@ def from_dict(node: str, d: dict) -> "PortConfig":
return PortConfig(node=node, **d)
+@dataclass(slots=True, frozen=True)
+class TrafficGeneratorConfig:
+ traffic_generator_type: TrafficGeneratorType
+
+ @staticmethod
+ def from_dict(d: dict):
+ # This looks useless now, but is designed to allow expansion to traffic
+ # generators that require more configuration later.
+ match TrafficGeneratorType(d["type"]):
+ case TrafficGeneratorType.SCAPY:
+ return ScapyTrafficGeneratorConfig(
+ traffic_generator_type=TrafficGeneratorType.SCAPY
+ )
+
+
+@dataclass(slots=True, frozen=True)
+class ScapyTrafficGeneratorConfig(TrafficGeneratorConfig):
+ pass
+
+
@dataclass(slots=True, frozen=True)
class NodeConfiguration:
name: str
@@ -89,17 +114,17 @@ class NodeConfiguration:
os: OS
lcores: str
use_first_core: bool
- memory_channels: int
hugepages: HugepageConfiguration | None
ports: list[PortConfig]
@staticmethod
- def from_dict(d: dict) -> "NodeConfiguration":
+ def from_dict(d: dict) -> Union["SutNodeConfiguration", "TGNodeConfiguration"]:
hugepage_config = d.get("hugepages")
if hugepage_config:
if "force_first_numa" not in hugepage_config:
hugepage_config["force_first_numa"] = False
hugepage_config = HugepageConfiguration(**hugepage_config)
+
common_config = {
"name": d["name"],
"hostname": d["hostname"],
@@ -109,12 +134,31 @@ def from_dict(d: dict) -> "NodeConfiguration":
"os": OS(d["os"]),
"lcores": d.get("lcores", "1"),
"use_first_core": d.get("use_first_core", False),
- "memory_channels": d.get("memory_channels", 1),
"hugepages": hugepage_config,
"ports": [PortConfig.from_dict(d["name"], port) for port in d["ports"]],
}
- return NodeConfiguration(**common_config)
+ if "traffic_generator" in d:
+ return TGNodeConfiguration(
+ traffic_generator=TrafficGeneratorConfig.from_dict(
+ d["traffic_generator"]
+ ),
+ **common_config,
+ )
+ else:
+ return SutNodeConfiguration(
+ memory_channels=d.get("memory_channels", 1), **common_config
+ )
+
+
+@dataclass(slots=True, frozen=True)
+class SutNodeConfiguration(NodeConfiguration):
+ memory_channels: int
+
+
+@dataclass(slots=True, frozen=True)
+class TGNodeConfiguration(NodeConfiguration):
+ traffic_generator: ScapyTrafficGeneratorConfig
@dataclass(slots=True, frozen=True)
@@ -193,23 +237,40 @@ class ExecutionConfiguration:
perf: bool
func: bool
test_suites: list[TestSuiteConfig]
- system_under_test: NodeConfiguration
+ system_under_test_node: SutNodeConfiguration
+ traffic_generator_node: TGNodeConfiguration
vdevs: list[str]
skip_smoke_tests: bool
@staticmethod
- def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
+ def from_dict(
+ d: dict, node_map: dict[str, Union[SutNodeConfiguration | TGNodeConfiguration]]
+ ) -> "ExecutionConfiguration":
build_targets: list[BuildTargetConfiguration] = list(
map(BuildTargetConfiguration.from_dict, d["build_targets"])
)
test_suites: list[TestSuiteConfig] = list(
map(TestSuiteConfig.from_dict, d["test_suites"])
)
- sut_name = d["system_under_test"]["node_name"]
+ sut_name = d["system_under_test_node"]["node_name"]
skip_smoke_tests = d.get("skip_smoke_tests", False)
assert sut_name in node_map, f"Unknown SUT {sut_name} in execution {d}"
+ system_under_test_node = node_map[sut_name]
+ assert isinstance(
+ system_under_test_node, SutNodeConfiguration
+ ), f"Invalid SUT configuration {system_under_test_node}"
+
+ tg_name = d["traffic_generator_node"]
+ assert tg_name in node_map, f"Unknown TG {tg_name} in execution {d}"
+ traffic_generator_node = node_map[tg_name]
+ assert isinstance(
+ traffic_generator_node, TGNodeConfiguration
+ ), f"Invalid TG configuration {traffic_generator_node}"
+
vdevs = (
- d["system_under_test"]["vdevs"] if "vdevs" in d["system_under_test"] else []
+ d["system_under_test_node"]["vdevs"]
+ if "vdevs" in d["system_under_test_node"]
+ else []
)
return ExecutionConfiguration(
build_targets=build_targets,
@@ -217,7 +278,8 @@ def from_dict(d: dict, node_map: dict) -> "ExecutionConfiguration":
func=d["func"],
skip_smoke_tests=skip_smoke_tests,
test_suites=test_suites,
- system_under_test=node_map[sut_name],
+ system_under_test_node=system_under_test_node,
+ traffic_generator_node=traffic_generator_node,
vdevs=vdevs,
)
@@ -228,7 +290,7 @@ class Configuration:
@staticmethod
def from_dict(d: dict) -> "Configuration":
- nodes: list[NodeConfiguration] = list(
+ nodes: list[Union[SutNodeConfiguration | TGNodeConfiguration]] = list(
map(NodeConfiguration.from_dict, d["nodes"])
)
assert len(nodes) > 0, "There must be a node to test"
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 67722213dd..936a4bac5b 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -164,6 +164,11 @@
"amount"
]
},
+ "mac_address": {
+ "type": "string",
+ "description": "A MAC address",
+ "pattern": "^([0-9A-Fa-f]{2}[:-]){5}([0-9A-Fa-f]{2})$"
+ },
"pci_address": {
"type": "string",
"pattern": "^[\\da-fA-F]{4}:[\\da-fA-F]{2}:[\\da-fA-F]{2}.\\d:?\\w*$"
@@ -286,6 +291,22 @@
]
},
"minimum": 1
+ },
+ "traffic_generator": {
+ "oneOf": [
+ {
+ "type": "object",
+ "description": "Scapy traffic generator. Used for functional testing.",
+ "properties": {
+ "type": {
+ "type": "string",
+ "enum": [
+ "SCAPY"
+ ]
+ }
+ }
+ }
+ ]
}
},
"additionalProperties": false,
@@ -336,7 +357,7 @@
"description": "Optional field that allows you to skip smoke testing",
"type": "boolean"
},
- "system_under_test": {
+ "system_under_test_node": {
"type":"object",
"properties": {
"node_name": {
@@ -353,6 +374,9 @@
"required": [
"node_name"
]
+ },
+ "traffic_generator_node": {
+ "$ref": "#/definitions/node_name"
}
},
"additionalProperties": false,
@@ -361,7 +385,8 @@
"perf",
"func",
"test_suites",
- "system_under_test"
+ "system_under_test_node",
+ "traffic_generator_node"
]
},
"minimum": 1
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index 3ba7fd2478..1c4a637fbd 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -38,17 +38,17 @@ def run_all() -> None:
# for all Execution sections
for execution in CONFIGURATION.executions:
sut_node = None
- if execution.system_under_test.name in nodes:
+ if execution.system_under_test_node.name in nodes:
# a Node with the same name already exists
- sut_node = nodes[execution.system_under_test.name]
+ sut_node = nodes[execution.system_under_test_node.name]
else:
# the SUT has not been initialized yet
try:
- sut_node = SutNode(execution.system_under_test)
+ sut_node = SutNode(execution.system_under_test_node)
result.update_setup(Result.PASS)
except Exception as e:
dts_logger.exception(
- f"Connection to node {execution.system_under_test} failed."
+ f"Connection to node {execution.system_under_test_node} failed."
)
result.update_setup(Result.FAIL, e)
else:
@@ -87,7 +87,9 @@ def _run_execution(
Run the given execution. This involves running the execution setup as well as
running all build targets in the given execution.
"""
- dts_logger.info(f"Running execution with SUT '{execution.system_under_test.name}'.")
+ dts_logger.info(
+ f"Running execution with SUT '{execution.system_under_test_node.name}'."
+ )
execution_result = result.add_execution(sut_node.config)
execution_result.add_sut_info(sut_node.node_info)
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index e3227d9bc2..ad3bffd9d3 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -12,8 +12,8 @@
from framework.config import (
BuildTargetConfiguration,
BuildTargetInfo,
- NodeConfiguration,
NodeInfo,
+ SutNodeConfiguration,
)
from framework.remote_session import CommandResult, InteractiveShellType, OSSession
from framework.settings import SETTINGS
@@ -77,6 +77,7 @@ class SutNode(Node):
Another key capability is building DPDK according to given build target.
"""
+ config: SutNodeConfiguration
_dpdk_prefix_list: list[str]
_dpdk_timestamp: str
_build_target_config: BuildTargetConfiguration | None
@@ -89,7 +90,7 @@ class SutNode(Node):
_node_info: NodeInfo | None
_compiler_version: str | None
- def __init__(self, node_config: NodeConfiguration):
+ def __init__(self, node_config: SutNodeConfiguration):
super(SutNode, self).__init__(node_config)
self._dpdk_prefix_list = []
self._build_target_config = None
--
2.34.1
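The SUT/TG dispatch in the from_dict change above boils down to one rule - a node dict becomes a TG node configuration iff it carries a "traffic_generator" key - which can be checked in isolation (class names taken from the patch, helper name illustrative):

```python
def classify_node(d: dict) -> str:
    # Mirrors the patched NodeConfiguration.from_dict: the presence of the
    # "traffic_generator" key decides which dataclass the node dict becomes.
    if "traffic_generator" in d:
        return "TGNodeConfiguration"
    return "SutNodeConfiguration"


print(classify_node({"name": "TG 1", "traffic_generator": {"type": "SCAPY"}}))
print(classify_node({"name": "SUT 1", "memory_channels": 4}))
```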
* [PATCH v3 3/6] dts: traffic generator abstractions
2023-07-19 14:12 ` [PATCH v3 " Juraj Linkeš
2023-07-19 14:12 ` [PATCH v3 1/6] dts: add scapy dependency Juraj Linkeš
2023-07-19 14:12 ` [PATCH v3 2/6] dts: add traffic generator config Juraj Linkeš
@ 2023-07-19 14:13 ` Juraj Linkeš
2023-07-19 14:13 ` [PATCH v3 4/6] dts: add python remote interactive shell Juraj Linkeš
` (3 subsequent siblings)
6 siblings, 0 replies; 29+ messages in thread
From: Juraj Linkeš @ 2023-07-19 14:13 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
jspewock, probb
Cc: dev, Juraj Linkeš
There are traffic abstractions for all traffic generators and for
traffic generators that can capture (not just count) packets.
There are also related abstractions, such as TGNode, where the traffic
generators reside, along with some related code.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
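The four-step workflow the capturing abstraction prescribes (send, capture, write a .pcap, return the packets) can be sketched with a trivial loopback generator standing in for Scapy; all names below are illustrative, not the actual patch code:

```python
from abc import ABC, abstractmethod


class CapturingTrafficGenerator(ABC):
    """Skeleton of the capturing workflow; the real class also names pcaps."""

    def send_packets_and_capture(self, packets: list, duration: float) -> list:
        received = self._send_packets_and_capture(packets, duration)  # 1 + 2
        self._write_capture(received)  # 3. real code calls scapy.utils.wrpcap
        return received  # 4. hand the packets back to the test suite

    @abstractmethod
    def _send_packets_and_capture(self, packets: list, duration: float) -> list:
        """Subclasses must handle the no-packets-received case gracefully."""

    def _write_capture(self, packets: list) -> None:
        pass  # illustrative no-op


class LoopbackGenerator(CapturingTrafficGenerator):
    def _send_packets_and_capture(self, packets: list, duration: float) -> list:
        return list(packets)  # echo everything back; [] is also valid


print(LoopbackGenerator().send_packets_and_capture(["udp-packet"], 1.0))
```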
doc/guides/tools/dts.rst | 31 ++++
dts/framework/dts.py | 61 ++++----
dts/framework/remote_session/linux_session.py | 78 ++++++++++
dts/framework/remote_session/os_session.py | 15 ++
dts/framework/test_suite.py | 4 +-
dts/framework/testbed_model/__init__.py | 1 +
.../capturing_traffic_generator.py | 136 ++++++++++++++++++
dts/framework/testbed_model/hw/port.py | 60 ++++++++
dts/framework/testbed_model/node.py | 20 ++-
dts/framework/testbed_model/scapy.py | 74 ++++++++++
dts/framework/testbed_model/sut_node.py | 1 +
dts/framework/testbed_model/tg_node.py | 99 +++++++++++++
.../testbed_model/traffic_generator.py | 72 ++++++++++
dts/framework/utils.py | 13 ++
14 files changed, 637 insertions(+), 28 deletions(-)
create mode 100644 dts/framework/testbed_model/capturing_traffic_generator.py
create mode 100644 dts/framework/testbed_model/hw/port.py
create mode 100644 dts/framework/testbed_model/scapy.py
create mode 100644 dts/framework/testbed_model/tg_node.py
create mode 100644 dts/framework/testbed_model/traffic_generator.py
diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
index c7b31623e4..2f97d1df6e 100644
--- a/doc/guides/tools/dts.rst
+++ b/doc/guides/tools/dts.rst
@@ -153,6 +153,37 @@ There are two areas that need to be set up on a System Under Test:
sudo usermod -aG sudo <sut_user>
+
+Setting up Traffic Generator Node
+---------------------------------
+
+The following need to be set up on a Traffic Generator Node:
+
+#. **Traffic generator dependencies**
+
+ The traffic generator running on the traffic generator node must be installed beforehand.
+ For the Scapy traffic generator, only a few Python libraries need to be installed:
+
+ .. code-block:: console
+
+ sudo apt install python3-pip
+ sudo pip install --upgrade pip
+ sudo pip install scapy==2.5.0
+
+#. **Hardware dependencies**
+
+ The traffic generators, like DPDK, need a proper driver and firmware.
+ The Scapy traffic generator doesn't have strict requirements - the drivers that come
+ with most OS distributions will be satisfactory.
+
+
+#. **User with administrator privileges**
+
+ Similarly to the System Under Test, traffic generators need administrator privileges
+ to be able to use the devices.
+ Refer to the `System Under Test section <sut_admin_user>` for details.
+
+
Running DTS
-----------
diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index 1c4a637fbd..f773f0c38d 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -15,7 +15,7 @@
from .logger import DTSLOG, getLogger
from .test_result import BuildTargetResult, DTSResult, ExecutionResult, Result
from .test_suite import get_test_suites
-from .testbed_model import SutNode
+from .testbed_model import SutNode, TGNode
from .utils import check_dts_python_version
dts_logger: DTSLOG = getLogger("DTSRunner")
@@ -33,29 +33,31 @@ def run_all() -> None:
# check the python version of the server that run dts
check_dts_python_version()
- nodes: dict[str, SutNode] = {}
+ sut_nodes: dict[str, SutNode] = {}
+ tg_nodes: dict[str, TGNode] = {}
try:
# for all Execution sections
for execution in CONFIGURATION.executions:
- sut_node = None
- if execution.system_under_test_node.name in nodes:
- # a Node with the same name already exists
- sut_node = nodes[execution.system_under_test_node.name]
- else:
- # the SUT has not been initialized yet
- try:
+ sut_node = sut_nodes.get(execution.system_under_test_node.name)
+ tg_node = tg_nodes.get(execution.traffic_generator_node.name)
+
+ try:
+ if not sut_node:
sut_node = SutNode(execution.system_under_test_node)
- result.update_setup(Result.PASS)
- except Exception as e:
- dts_logger.exception(
- f"Connection to node {execution.system_under_test_node} failed."
- )
- result.update_setup(Result.FAIL, e)
- else:
- nodes[sut_node.name] = sut_node
-
- if sut_node:
- _run_execution(sut_node, execution, result)
+ sut_nodes[sut_node.name] = sut_node
+ if not tg_node:
+ tg_node = TGNode(execution.traffic_generator_node)
+ tg_nodes[tg_node.name] = tg_node
+ result.update_setup(Result.PASS)
+ except Exception as e:
+ failed_node = execution.system_under_test_node.name
+ if sut_node:
+ failed_node = execution.traffic_generator_node.name
+ dts_logger.exception(f"Creation of node {failed_node} failed.")
+ result.update_setup(Result.FAIL, e)
+
+ else:
+ _run_execution(sut_node, tg_node, execution, result)
except Exception as e:
dts_logger.exception("An unexpected error has occurred.")
@@ -64,7 +66,7 @@ def run_all() -> None:
finally:
try:
- for node in nodes.values():
+ for node in (sut_nodes | tg_nodes).values():
node.close()
result.update_teardown(Result.PASS)
except Exception as e:
@@ -81,7 +83,10 @@ def run_all() -> None:
def _run_execution(
- sut_node: SutNode, execution: ExecutionConfiguration, result: DTSResult
+ sut_node: SutNode,
+ tg_node: TGNode,
+ execution: ExecutionConfiguration,
+ result: DTSResult,
) -> None:
"""
Run the given execution. This involves running the execution setup as well as
@@ -102,7 +107,9 @@ def _run_execution(
else:
for build_target in execution.build_targets:
- _run_build_target(sut_node, build_target, execution, execution_result)
+ _run_build_target(
+ sut_node, tg_node, build_target, execution, execution_result
+ )
finally:
try:
@@ -115,6 +122,7 @@ def _run_execution(
def _run_build_target(
sut_node: SutNode,
+ tg_node: TGNode,
build_target: BuildTargetConfiguration,
execution: ExecutionConfiguration,
execution_result: ExecutionResult,
@@ -135,7 +143,7 @@ def _run_build_target(
build_target_result.update_setup(Result.FAIL, e)
else:
- _run_all_suites(sut_node, execution, build_target_result)
+ _run_all_suites(sut_node, tg_node, execution, build_target_result)
finally:
try:
@@ -148,6 +156,7 @@ def _run_build_target(
def _run_all_suites(
sut_node: SutNode,
+ tg_node: TGNode,
execution: ExecutionConfiguration,
build_target_result: BuildTargetResult,
) -> None:
@@ -162,7 +171,7 @@ def _run_all_suites(
for test_suite_config in execution.test_suites:
try:
_run_single_suite(
- sut_node, execution, build_target_result, test_suite_config
+ sut_node, tg_node, execution, build_target_result, test_suite_config
)
except BlockingTestSuiteError as e:
dts_logger.exception(
@@ -178,6 +187,7 @@ def _run_all_suites(
def _run_single_suite(
sut_node: SutNode,
+ tg_node: TGNode,
execution: ExecutionConfiguration,
build_target_result: BuildTargetResult,
test_suite_config: TestSuiteConfig,
@@ -206,6 +216,7 @@ def _run_single_suite(
for test_suite_class in test_suite_classes:
test_suite = test_suite_class(
sut_node,
+ tg_node,
test_suite_config.test_cases,
execution.func,
build_target_result,
diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/remote_session/linux_session.py
index f64aa8efb0..decce4039c 100644
--- a/dts/framework/remote_session/linux_session.py
+++ b/dts/framework/remote_session/linux_session.py
@@ -2,13 +2,47 @@
# Copyright(c) 2023 PANTHEON.tech s.r.o.
# Copyright(c) 2023 University of New Hampshire
+import json
+from typing import TypedDict
+
+from typing_extensions import NotRequired
+
from framework.exception import RemoteCommandExecutionError
from framework.testbed_model import LogicalCore
+from framework.testbed_model.hw.port import Port
from framework.utils import expand_range
from .posix_session import PosixSession
+class LshwConfigurationOutput(TypedDict):
+ link: str
+
+
+class LshwOutput(TypedDict):
+ """
+ A model of the relevant information from json lshw output, e.g.:
+ {
+ ...
+ "businfo" : "pci@0000:08:00.0",
+ "logicalname" : "enp8s0",
+ "version" : "00",
+ "serial" : "52:54:00:59:e1:ac",
+ ...
+ "configuration" : {
+ ...
+ "link" : "yes",
+ ...
+ },
+ ...
+ """
+
+ businfo: str
+ logicalname: NotRequired[str]
+ serial: NotRequired[str]
+ configuration: LshwConfigurationOutput
+
+
class LinuxSession(PosixSession):
"""
The implementation of non-Posix compliant parts of Linux remote sessions.
@@ -103,3 +137,47 @@ def _configure_huge_pages(
self.send_command(
f"echo {amount} | tee {hugepage_config_path}", privileged=True
)
+
+ def update_ports(self, ports: list[Port]) -> None:
+ self._logger.debug("Gathering port info.")
+ for port in ports:
+ assert (
+ port.node == self.name
+ ), "Attempted to gather port info on the wrong node"
+
+ port_info_list = self._get_lshw_info()
+ for port in ports:
+ for port_info in port_info_list:
+ if f"pci@{port.pci}" == port_info.get("businfo"):
+ self._update_port_attr(
+ port, port_info.get("logicalname"), "logical_name"
+ )
+ self._update_port_attr(port, port_info.get("serial"), "mac_address")
+ port_info_list.remove(port_info)
+ break
+ else:
+ self._logger.warning(f"No port at pci address {port.pci} found.")
+
+ def _get_lshw_info(self) -> list[LshwOutput]:
+ output = self.send_command("lshw -quiet -json -C network", verify=True)
+ return json.loads(output.stdout)
+
+ def _update_port_attr(
+ self, port: Port, attr_value: str | None, attr_name: str
+ ) -> None:
+ if attr_value:
+ setattr(port, attr_name, attr_value)
+ self._logger.debug(
+ f"Found '{attr_name}' of port {port.pci}: '{attr_value}'."
+ )
+ else:
+ self._logger.warning(
+ f"Attempted to get '{attr_name}' of port {port.pci}, "
+ f"but it doesn't exist."
+ )
+
+ def configure_port_state(self, port: Port, enable: bool) -> None:
+ state = "up" if enable else "down"
+ self.send_command(
+ f"ip link set dev {port.logical_name} {state}", privileged=True
+ )
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
index f543ce3acc..ab4bfbfe4c 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/remote_session/os_session.py
@@ -12,6 +12,7 @@
from framework.remote_session.remote import InteractiveShell
from framework.settings import SETTINGS
from framework.testbed_model import LogicalCore
+from framework.testbed_model.hw.port import Port
from framework.utils import MesonArgs
from .remote import (
@@ -249,3 +250,17 @@ def get_node_info(self) -> NodeInfo:
"""
Collect information about the node
"""
+
+ @abstractmethod
+ def update_ports(self, ports: list[Port]) -> None:
+ """
+ Get additional information about ports:
+ Logical name (e.g. enp7s0) if applicable
+ Mac address
+ """
+
+ @abstractmethod
+ def configure_port_state(self, port: Port, enable: bool) -> None:
+ """
+ Enable/disable port.
+ """
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index de94c9332d..056460dd05 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -20,7 +20,7 @@
from .logger import DTSLOG, getLogger
from .settings import SETTINGS
from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult
-from .testbed_model import SutNode
+from .testbed_model import SutNode, TGNode
class TestSuite(object):
@@ -51,11 +51,13 @@ class TestSuite(object):
def __init__(
self,
sut_node: SutNode,
+ tg_node: TGNode,
test_cases: list[str],
func: bool,
build_target_result: BuildTargetResult,
):
self.sut_node = sut_node
+ self.tg_node = tg_node
self._logger = getLogger(self.__class__.__name__)
self._test_cases_to_run = test_cases
self._test_cases_to_run.extend(SETTINGS.test_cases)
diff --git a/dts/framework/testbed_model/__init__.py b/dts/framework/testbed_model/__init__.py
index f54a947051..5cbb859e47 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -20,3 +20,4 @@
)
from .node import Node
from .sut_node import SutNode
+from .tg_node import TGNode
diff --git a/dts/framework/testbed_model/capturing_traffic_generator.py b/dts/framework/testbed_model/capturing_traffic_generator.py
new file mode 100644
index 0000000000..ab98987f8e
--- /dev/null
+++ b/dts/framework/testbed_model/capturing_traffic_generator.py
@@ -0,0 +1,136 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+"""Traffic generator that can capture packets.
+
+In functional testing, we need to interrogate received packets to check their validity.
+The module defines the interface common to all traffic generators capable of capturing
+traffic.
+"""
+
+import uuid
+from abc import abstractmethod
+
+import scapy.utils # type: ignore[import]
+from scapy.packet import Packet # type: ignore[import]
+
+from framework.settings import SETTINGS
+from framework.utils import get_packet_summaries
+
+from .hw.port import Port
+from .traffic_generator import TrafficGenerator
+
+
+def _get_default_capture_name() -> str:
+ """
+ This is the function used for the default implementation of capture names.
+ """
+ return str(uuid.uuid4())
+
+
+class CapturingTrafficGenerator(TrafficGenerator):
+ """Capture packets after sending traffic.
+
+ A mixin interface which enables a packet generator to declare that it can capture
+ packets and return them to the user.
+
+ The methods of capturing traffic generators obey the following workflow:
+ 1. send packets
+ 2. capture packets
+ 3. write the capture to a .pcap file
+ 4. return the received packets
+ """
+
+ @property
+ def is_capturing(self) -> bool:
+ return True
+
+ def send_packet_and_capture(
+ self,
+ packet: Packet,
+ send_port: Port,
+ receive_port: Port,
+ duration: float,
+ capture_name: str = _get_default_capture_name(),
+ ) -> list[Packet]:
+ """Send a packet, return received traffic.
+
+ Send a packet on the send_port and then return all traffic captured
+ on the receive_port for the given duration. Also record the captured traffic
+ in a pcap file.
+
+ Args:
+ packet: The packet to send.
+ send_port: The egress port on the TG node.
+ receive_port: The ingress port in the TG node.
+ duration: Capture traffic for this amount of time after sending the packet.
+ capture_name: The name of the .pcap file where to store the capture.
+
+ Returns:
+ A list of received packets. May be empty if no packets are captured.
+ """
+ return self.send_packets_and_capture(
+ [packet], send_port, receive_port, duration, capture_name
+ )
+
+ def send_packets_and_capture(
+ self,
+ packets: list[Packet],
+ send_port: Port,
+ receive_port: Port,
+ duration: float,
+ capture_name: str = _get_default_capture_name(),
+ ) -> list[Packet]:
+ """Send packets, return received traffic.
+
+ Send packets on the send_port and then return all traffic captured
+ on the receive_port for the given duration. Also record the captured traffic
+ in a pcap file.
+
+ Args:
+ packets: The packets to send.
+ send_port: The egress port on the TG node.
+ receive_port: The ingress port in the TG node.
+ duration: Capture traffic for this amount of time after sending the packets.
+ capture_name: The name of the .pcap file where to store the capture.
+
+ Returns:
+ A list of received packets. May be empty if no packets are captured.
+ """
+ self._logger.debug(get_packet_summaries(packets))
+ self._logger.debug(
+ f"Sending packet on {send_port.logical_name}, "
+ f"receiving on {receive_port.logical_name}."
+ )
+ received_packets = self._send_packets_and_capture(
+ packets,
+ send_port,
+ receive_port,
+ duration,
+ )
+
+ self._logger.debug(
+ f"Received packets: {get_packet_summaries(received_packets)}"
+ )
+ self._write_capture_from_packets(capture_name, received_packets)
+ return received_packets
+
+ @abstractmethod
+ def _send_packets_and_capture(
+ self,
+ packets: list[Packet],
+ send_port: Port,
+ receive_port: Port,
+ duration: float,
+ ) -> list[Packet]:
+ """
+ The extended classes must implement this method which
+ sends packets on send_port and receives packets on the receive_port
+ for the specified duration. It must be able to handle no received packets.
+ """
+
+ def _write_capture_from_packets(self, capture_name: str, packets: list[Packet]):
+ file_name = f"{SETTINGS.output_dir}/{capture_name}.pcap"
+ self._logger.debug(f"Writing packets to {file_name}.")
+ scapy.utils.wrpcap(file_name, packets)
diff --git a/dts/framework/testbed_model/hw/port.py b/dts/framework/testbed_model/hw/port.py
new file mode 100644
index 0000000000..680c29bfe3
--- /dev/null
+++ b/dts/framework/testbed_model/hw/port.py
@@ -0,0 +1,60 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+from dataclasses import dataclass
+
+from framework.config import PortConfig
+
+
+@dataclass(slots=True, frozen=True)
+class PortIdentifier:
+ node: str
+ pci: str
+
+
+@dataclass(slots=True)
+class Port:
+ """
+ identifier: The PCI address of the port on a node.
+
+ os_driver: The driver used by this port when the OS is controlling it.
+ Example: i40e
+ os_driver_for_dpdk: The driver the device must be bound to for DPDK to use it,
+ Example: vfio-pci.
+
+ Note: os_driver and os_driver_for_dpdk may be the same thing.
+ Example: mlx5_core
+
+ peer: The identifier of a port this port is connected with.
+ """
+
+ identifier: PortIdentifier
+ os_driver: str
+ os_driver_for_dpdk: str
+ peer: PortIdentifier
+ mac_address: str = ""
+ logical_name: str = ""
+
+ def __init__(self, node_name: str, config: PortConfig):
+ self.identifier = PortIdentifier(
+ node=node_name,
+ pci=config.pci,
+ )
+ self.os_driver = config.os_driver
+ self.os_driver_for_dpdk = config.os_driver_for_dpdk
+ self.peer = PortIdentifier(node=config.peer_node, pci=config.peer_pci)
+
+ @property
+ def node(self) -> str:
+ return self.identifier.node
+
+ @property
+ def pci(self) -> str:
+ return self.identifier.pci
+
+
+@dataclass(slots=True, frozen=True)
+class PortLink:
+ sut_port: Port
+ tg_port: Port
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index d237b3f75b..c666dfbf4e 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -7,6 +7,7 @@
A node is a generic host that DTS connects to and manages.
"""
+from abc import ABC
from typing import Any, Callable, Type
from framework.config import (
@@ -26,9 +27,10 @@
VirtualDevice,
lcore_filter,
)
+from .hw.port import Port
-class Node(object):
+class Node(ABC):
"""
Basic class for node management. This class implements methods that
manage a node, such as information gathering (of CPU/PCI/NIC) and
@@ -39,6 +41,7 @@ class Node(object):
config: NodeConfiguration
name: str
lcores: list[LogicalCore]
+ ports: list[Port]
_logger: DTSLOG
_other_sessions: list[OSSession]
_execution_config: ExecutionConfiguration
@@ -50,6 +53,8 @@ def __init__(self, node_config: NodeConfiguration):
self._logger = getLogger(self.name)
self.main_session = create_session(self.config, self.name, self._logger)
+ self._logger.info(f"Connected to node: {self.name}")
+
self._get_remote_cpus()
# filter the node lcores according to user config
self.lcores = LogicalCoreListFilter(
@@ -58,8 +63,13 @@ def __init__(self, node_config: NodeConfiguration):
self._other_sessions = []
self.virtual_devices = []
+ self._init_ports()
- self._logger.info(f"Created node: {self.name}")
+ def _init_ports(self) -> None:
+ self.ports = [Port(self.name, port_config) for port_config in self.config.ports]
+ self.main_session.update_ports(self.ports)
+ for port in self.ports:
+ self.configure_port_state(port)
def set_up_execution(self, execution_config: ExecutionConfiguration) -> None:
"""
@@ -205,6 +215,12 @@ def _setup_hugepages(self):
self.config.hugepages.amount, self.config.hugepages.force_first_numa
)
+ def configure_port_state(self, port: Port, enable: bool = True) -> None:
+ """
+ Enable/disable port.
+ """
+ self.main_session.configure_port_state(port, enable)
+
def close(self) -> None:
"""
Close all connections and free other resources.
diff --git a/dts/framework/testbed_model/scapy.py b/dts/framework/testbed_model/scapy.py
new file mode 100644
index 0000000000..1a23dc9fa3
--- /dev/null
+++ b/dts/framework/testbed_model/scapy.py
@@ -0,0 +1,74 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+"""Scapy traffic generator.
+
+Traffic generator used for functional testing, implemented using the Scapy library.
+The traffic generator uses an XML-RPC server to run Scapy on the remote TG node.
+
+The XML-RPC server runs in an interactive remote SSH session running Python console,
+where we start the server. The communication with the server is facilitated with
+a local server proxy.
+"""
+
+from scapy.packet import Packet # type: ignore[import]
+
+from framework.config import OS, ScapyTrafficGeneratorConfig
+from framework.logger import getLogger
+
+from .capturing_traffic_generator import (
+ CapturingTrafficGenerator,
+ _get_default_capture_name,
+)
+from .hw.port import Port
+from .tg_node import TGNode
+
+
+class ScapyTrafficGenerator(CapturingTrafficGenerator):
+ """Provides access to scapy functions via an RPC interface.
+
+ The traffic generator first starts an XML-RPC server on the remote TG node.
+ Then it populates the server with functions which use the Scapy library
+ to send/receive traffic.
+
+ Any packets sent to the remote server are first converted to bytes.
+ They are received as xmlrpc.client.Binary objects on the server side.
+ When the server sends the packets back, they are also received as
+ xmlrpc.client.Binary objects on the client side, converted back to Scapy
+ packets, and only then returned from the methods.
+
+ Arguments:
+ tg_node: The node where the traffic generator resides.
+ config: The user configuration of the traffic generator.
+ """
+
+ _config: ScapyTrafficGeneratorConfig
+ _tg_node: TGNode
+
+ def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig):
+ self._config = config
+ self._tg_node = tg_node
+ self._logger = getLogger(
+ f"{self._tg_node.name} {self._config.traffic_generator_type}"
+ )
+
+ assert (
+ self._tg_node.config.os == OS.linux
+ ), "Linux is the only supported OS for scapy traffic generation"
+
+ def _send_packets(self, packets: list[Packet], port: Port) -> None:
+ raise NotImplementedError()
+
+ def _send_packets_and_capture(
+ self,
+ packets: list[Packet],
+ send_port: Port,
+ receive_port: Port,
+ duration: float,
+ capture_name: str = _get_default_capture_name(),
+ ) -> list[Packet]:
+ raise NotImplementedError()
+
+ def close(self):
+ pass
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index ad3bffd9d3..f0b017a383 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -105,6 +105,7 @@ def __init__(self, node_config: SutNodeConfiguration):
self._dpdk_version = None
self._node_info = None
self._compiler_version = None
+ self._logger.info(f"Created node: {self.name}")
@property
def _remote_dpdk_dir(self) -> PurePath:
diff --git a/dts/framework/testbed_model/tg_node.py b/dts/framework/testbed_model/tg_node.py
new file mode 100644
index 0000000000..27025cfa31
--- /dev/null
+++ b/dts/framework/testbed_model/tg_node.py
@@ -0,0 +1,99 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+"""Traffic generator node.
+
+This is the node where the traffic generator resides.
+The distinction between a node and a traffic generator is as follows:
+A node is a host that DTS connects to. It could be a baremetal server,
+a VM or a container.
+A traffic generator is software running on the node.
+A traffic generator node is a node running a traffic generator.
+A node can be a traffic generator node as well as a system under test node.
+"""
+
+from scapy.packet import Packet # type: ignore[import]
+
+from framework.config import (
+ ScapyTrafficGeneratorConfig,
+ TGNodeConfiguration,
+ TrafficGeneratorType,
+)
+from framework.exception import ConfigurationError
+
+from .capturing_traffic_generator import CapturingTrafficGenerator
+from .hw.port import Port
+from .node import Node
+
+
+class TGNode(Node):
+ """Manage connections to a node with a traffic generator.
+
+ Apart from basic node management capabilities, the Traffic Generator node has
+ specialized methods for handling the traffic generator running on it.
+
+ Arguments:
+ node_config: The user configuration of the traffic generator node.
+
+ Attributes:
+ traffic_generator: The traffic generator running on the node.
+ """
+
+ traffic_generator: CapturingTrafficGenerator
+
+ def __init__(self, node_config: TGNodeConfiguration):
+ super(TGNode, self).__init__(node_config)
+ self.traffic_generator = create_traffic_generator(
+ self, node_config.traffic_generator
+ )
+ self._logger.info(f"Created node: {self.name}")
+
+ def send_packet_and_capture(
+ self,
+ packet: Packet,
+ send_port: Port,
+ receive_port: Port,
+ duration: float = 1,
+ ) -> list[Packet]:
+ """Send a packet, return received traffic.
+
+ Send a packet on the send_port and then return all traffic captured
+ on the receive_port for the given duration. Also record the captured traffic
+ in a pcap file.
+
+ Args:
+ packet: The packet to send.
+ send_port: The egress port on the TG node.
+ receive_port: The ingress port in the TG node.
+ duration: Capture traffic for this amount of time after sending the packet.
+
+ Returns:
+ A list of received packets. May be empty if no packets are captured.
+ """
+ return self.traffic_generator.send_packet_and_capture(
+ packet, send_port, receive_port, duration
+ )
+
+ def close(self) -> None:
+ """Free all resources used by the node"""
+ self.traffic_generator.close()
+ super(TGNode, self).close()
+
+
+def create_traffic_generator(
+ tg_node: TGNode, traffic_generator_config: ScapyTrafficGeneratorConfig
+) -> CapturingTrafficGenerator:
+ """A factory function for creating traffic generator object from user config."""
+
+ from .scapy import ScapyTrafficGenerator
+
+ match traffic_generator_config.traffic_generator_type:
+ case TrafficGeneratorType.SCAPY:
+ return ScapyTrafficGenerator(tg_node, traffic_generator_config)
+ case _:
+ raise ConfigurationError(
+ "Unknown traffic generator: "
+ f"{traffic_generator_config.traffic_generator_type}"
+ )
diff --git a/dts/framework/testbed_model/traffic_generator.py b/dts/framework/testbed_model/traffic_generator.py
new file mode 100644
index 0000000000..28c35d3ce4
--- /dev/null
+++ b/dts/framework/testbed_model/traffic_generator.py
@@ -0,0 +1,72 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 University of New Hampshire
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+"""The base traffic generator.
+
+These traffic generators can't capture received traffic,
+only count the number of received packets.
+"""
+
+from abc import ABC, abstractmethod
+
+from scapy.packet import Packet # type: ignore[import]
+
+from framework.logger import DTSLOG
+from framework.utils import get_packet_summaries
+
+from .hw.port import Port
+
+
+class TrafficGenerator(ABC):
+ """The base traffic generator.
+
+ Defines the few basic methods that each traffic generator must implement.
+ """
+
+ _logger: DTSLOG
+
+ def send_packet(self, packet: Packet, port: Port) -> None:
+ """Send a packet and block until it is fully sent.
+
+ What fully sent means is defined by the traffic generator.
+
+ Args:
+ packet: The packet to send.
+ port: The egress port on the TG node.
+ """
+ self.send_packets([packet], port)
+
+ def send_packets(self, packets: list[Packet], port: Port) -> None:
+ """Send packets and block until they are fully sent.
+
+ What fully sent means is defined by the traffic generator.
+
+ Args:
+ packets: The packets to send.
+ port: The egress port on the TG node.
+ """
+ self._logger.info(f"Sending packet{'s' if len(packets) > 1 else ''}.")
+ self._logger.debug(get_packet_summaries(packets))
+ self._send_packets(packets, port)
+
+ @abstractmethod
+ def _send_packets(self, packets: list[Packet], port: Port) -> None:
+ """
+ The extended classes must implement this method which
+ sends packets on send_port. The method should block until all packets
+ are fully sent.
+ """
+
+ @property
+ def is_capturing(self) -> bool:
+ """Whether this traffic generator can capture traffic.
+
+ Returns:
+ True if the traffic generator can capture traffic, False otherwise.
+ """
+ return False
+
+ @abstractmethod
+ def close(self) -> None:
+ """Free all resources used by the traffic generator."""
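`TrafficGenerator.send_packets` is a template method: the public entry point does the shared work (logging in the real class), then delegates to the abstract `_send_packets` that subclasses implement. A minimal sketch of that split without scapy (illustrative):

```python
from abc import ABC, abstractmethod


class TrafficGenerator(ABC):
    def send_packets(self, packets: list[str]) -> None:
        # Shared behavior lives here (the real class logs packet summaries).
        self._send_packets(packets)

    @abstractmethod
    def _send_packets(self, packets: list[str]) -> None:
        """Subclasses implement the actual sending."""


class RecordingGenerator(TrafficGenerator):
    def __init__(self) -> None:
        self.sent: list[str] = []

    def _send_packets(self, packets: list[str]) -> None:
        self.sent.extend(packets)


gen = RecordingGenerator()
gen.send_packets(["pkt0", "pkt1"])

try:
    TrafficGenerator()  # abstract base: cannot be instantiated
    abstract_ok = False
except TypeError:
    abstract_ok = True
```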
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index 60abe46edf..d27c2c5b5f 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -4,6 +4,7 @@
# Copyright(c) 2022-2023 University of New Hampshire
import atexit
+import json
import os
import subprocess
import sys
@@ -11,6 +12,8 @@
from pathlib import Path
from subprocess import SubprocessError
+from scapy.packet import Packet # type: ignore[import]
+
from .exception import ConfigurationError
@@ -64,6 +67,16 @@ def expand_range(range_str: str) -> list[int]:
return expanded_range
+def get_packet_summaries(packets: list[Packet]):
+ if len(packets) == 1:
+ packet_summaries = packets[0].summary()
+ else:
+ packet_summaries = json.dumps(
+ list(map(lambda pkt: pkt.summary(), packets)), indent=4
+ )
+ return f"Packet contents: \n{packet_summaries}"
+
+
def RED(text: str) -> str:
return f"\u001B[31;1m{str(text)}\u001B[0m"
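`get_packet_summaries` returns a single summary string for one packet and a JSON array of summaries for several. The branching can be exercised without scapy by substituting objects that expose a `summary()` method (stand-ins are illustrative):

```python
import json


class FakePacket:
    """Stand-in for scapy.packet.Packet, exposing only summary()."""

    def __init__(self, summary: str) -> None:
        self._summary = summary

    def summary(self) -> str:
        return self._summary


def get_packet_summaries(packets) -> str:
    if len(packets) == 1:
        packet_summaries = packets[0].summary()
    else:
        packet_summaries = json.dumps(
            list(map(lambda pkt: pkt.summary(), packets)), indent=4
        )
    return f"Packet contents: \n{packet_summaries}"


one = get_packet_summaries([FakePacket("Ether / IP / UDP")])
many = get_packet_summaries([FakePacket("a"), FakePacket("b")])
```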
--
2.34.1
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PATCH v3 4/6] dts: add python remote interactive shell
2023-07-19 14:12 ` [PATCH v3 " Juraj Linkeš
` (2 preceding siblings ...)
2023-07-19 14:13 ` [PATCH v3 3/6] dts: traffic generator abstractions Juraj Linkeš
@ 2023-07-19 14:13 ` Juraj Linkeš
2023-07-19 14:13 ` [PATCH v3 5/6] dts: scapy traffic generator implementation Juraj Linkeš
` (2 subsequent siblings)
6 siblings, 0 replies; 29+ messages in thread
From: Juraj Linkeš @ 2023-07-19 14:13 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
jspewock, probb
Cc: dev, Juraj Linkeš
The shell can be used to remotely run any Python code interactively.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/remote_session/remote/python_shell.py | 12 ++++++++++++
1 file changed, 12 insertions(+)
create mode 100644 dts/framework/remote_session/remote/python_shell.py
diff --git a/dts/framework/remote_session/remote/python_shell.py b/dts/framework/remote_session/remote/python_shell.py
new file mode 100644
index 0000000000..cc3ad48a68
--- /dev/null
+++ b/dts/framework/remote_session/remote/python_shell.py
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+from pathlib import PurePath
+
+from .interactive_shell import InteractiveShell
+
+
+class PythonShell(InteractiveShell):
+ _default_prompt: str = ">>>"
+ _command_extra_chars: str = "\n"
+ path: PurePath = PurePath("python3")
--
2.34.1
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PATCH v3 5/6] dts: scapy traffic generator implementation
2023-07-19 14:12 ` [PATCH v3 " Juraj Linkeš
` (3 preceding siblings ...)
2023-07-19 14:13 ` [PATCH v3 4/6] dts: add python remote interactive shell Juraj Linkeš
@ 2023-07-19 14:13 ` Juraj Linkeš
2023-07-19 14:13 ` [PATCH v3 6/6] dts: add basic UDP test case Juraj Linkeš
2023-07-24 14:23 ` [PATCH v3 0/6] dts: tg abstractions and scapy tg Thomas Monjalon
6 siblings, 0 replies; 29+ messages in thread
From: Juraj Linkeš @ 2023-07-19 14:13 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
jspewock, probb
Cc: dev, Juraj Linkeš
Scapy is a traffic generator capable of sending and receiving traffic.
Since it's a software traffic generator, it's not suitable for
performance testing, but it is suitable for functional testing.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/framework/remote_session/__init__.py | 1 +
.../remote_session/remote/__init__.py | 1 +
dts/framework/testbed_model/scapy.py | 224 +++++++++++++++++-
3 files changed, 222 insertions(+), 4 deletions(-)
diff --git a/dts/framework/remote_session/__init__.py b/dts/framework/remote_session/__init__.py
index 1155dd8318..00b6d1f03a 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -22,6 +22,7 @@
CommandResult,
InteractiveRemoteSession,
InteractiveShell,
+ PythonShell,
RemoteSession,
SSHSession,
TestPmdDevice,
diff --git a/dts/framework/remote_session/remote/__init__.py b/dts/framework/remote_session/remote/__init__.py
index 1d29c3ea0d..06403691a5 100644
--- a/dts/framework/remote_session/remote/__init__.py
+++ b/dts/framework/remote_session/remote/__init__.py
@@ -9,6 +9,7 @@
from .interactive_remote_session import InteractiveRemoteSession
from .interactive_shell import InteractiveShell
+from .python_shell import PythonShell
from .remote_session import CommandResult, RemoteSession
from .ssh_session import SSHSession
from .testpmd_shell import TestPmdDevice, TestPmdShell
diff --git a/dts/framework/testbed_model/scapy.py b/dts/framework/testbed_model/scapy.py
index 1a23dc9fa3..af0d4dbb25 100644
--- a/dts/framework/testbed_model/scapy.py
+++ b/dts/framework/testbed_model/scapy.py
@@ -12,10 +12,21 @@
a local server proxy.
"""
+import inspect
+import marshal
+import time
+import types
+import xmlrpc.client
+from xmlrpc.server import SimpleXMLRPCServer
+
+import scapy.all # type: ignore[import]
+from scapy.layers.l2 import Ether # type: ignore[import]
from scapy.packet import Packet # type: ignore[import]
from framework.config import OS, ScapyTrafficGeneratorConfig
-from framework.logger import getLogger
+from framework.logger import DTSLOG, getLogger
+from framework.remote_session import PythonShell
+from framework.settings import SETTINGS
from .capturing_traffic_generator import (
CapturingTrafficGenerator,
@@ -24,6 +35,134 @@
from .hw.port import Port
from .tg_node import TGNode
+"""
+========= BEGIN RPC FUNCTIONS =========
+
+All of the functions in this section are intended to be exported to a python
+shell which runs a scapy RPC server. These functions are made available via that
+RPC server to the packet generator. To add a new function to the RPC server,
+first write the function in this section. Then, if you need any imports, make sure to
+add them to SCAPY_RPC_SERVER_IMPORTS as well. After that, add the function to the list
+in EXPORTED_FUNCTIONS. Note that kwargs (keyword arguments) do not work via xmlrpc,
+so you may need to construct wrapper functions around many scapy types.
+"""
+
+"""
+Add the line needed to import something in a normal python environment
+as an entry to this array. It will be imported before any functions are
+sent to the server.
+"""
+SCAPY_RPC_SERVER_IMPORTS = [
+ "from scapy.all import *",
+ "import xmlrpc",
+ "import sys",
+ "from xmlrpc.server import SimpleXMLRPCServer",
+ "import marshal",
+ "import pickle",
+ "import types",
+ "import time",
+]
+
+
+def scapy_send_packets_and_capture(
+ xmlrpc_packets: list[xmlrpc.client.Binary],
+ send_iface: str,
+ recv_iface: str,
+ duration: float,
+) -> list[bytes]:
+ """RPC function to send and capture packets.
+
+ The function is meant to be executed on the remote TG node.
+
+ Args:
+ xmlrpc_packets: The packets to send. These need to be converted to
+ xmlrpc.client.Binary before sending to the remote server.
+ send_iface: The logical name of the egress interface.
+ recv_iface: The logical name of the ingress interface.
+ duration: Capture for this amount of time, in seconds.
+
+ Returns:
+ A list of bytes. Each item in the list represents one packet, which needs
+ to be converted back upon transfer from the remote node.
+ """
+ scapy_packets = [scapy.all.Packet(packet.data) for packet in xmlrpc_packets]
+ sniffer = scapy.all.AsyncSniffer(
+ iface=recv_iface,
+ store=True,
+ started_callback=lambda *args: scapy.all.sendp(scapy_packets, iface=send_iface),
+ )
+ sniffer.start()
+ time.sleep(duration)
+ return [scapy_packet.build() for scapy_packet in sniffer.stop(join=True)]
+
+
+def scapy_send_packets(
+ xmlrpc_packets: list[xmlrpc.client.Binary], send_iface: str
+) -> None:
+ """RPC function to send packets.
+
+ The function is meant to be executed on the remote TG node.
+ It doesn't return anything, only sends packets.
+
+ Args:
+ xmlrpc_packets: The packets to send. These need to be converted to
+ xmlrpc.client.Binary before sending to the remote server.
+ send_iface: The logical name of the egress interface.
+
+ """
+ scapy_packets = [scapy.all.Packet(packet.data) for packet in xmlrpc_packets]
+ scapy.all.sendp(scapy_packets, iface=send_iface, realtime=True, verbose=True)
+
+
+"""
+Functions to be exposed by the scapy RPC server.
+"""
+RPC_FUNCTIONS = [
+ scapy_send_packets,
+ scapy_send_packets_and_capture,
+]
+
+"""
+========= END RPC FUNCTIONS =========
+"""
+
+
+class QuittableXMLRPCServer(SimpleXMLRPCServer):
+ """Basic XML-RPC server that may be extended
+ by functions serializable by the marshal module.
+ """
+
+ def __init__(self, *args, **kwargs):
+ kwargs["allow_none"] = True
+ super().__init__(*args, **kwargs)
+ self.register_introspection_functions()
+ self.register_function(self.quit)
+ self.register_function(self.add_rpc_function)
+
+ def quit(self) -> None:
+ self._BaseServer__shutdown_request = True
+ return None
+
+ def add_rpc_function(self, name: str, function_bytes: xmlrpc.client.Binary):
+ """Add a function to the server.
+
+ This is meant to be executed remotely.
+
+ Args:
+ name: The name of the function.
+ function_bytes: The code of the function.
+ """
+ function_code = marshal.loads(function_bytes.data)
+ function = types.FunctionType(function_code, globals(), name)
+ self.register_function(function)
+
+ def serve_forever(self, poll_interval: float = 0.5) -> None:
+ print("XMLRPC OK")
+ super().serve_forever(poll_interval)
+
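The name-plus-code transfer used by `add_rpc_function` can be sketched locally with just the stdlib. One caveat to keep in mind: `marshal` output is Python-version specific, so this mechanism only works when both ends run the same interpreter version.

```python
import marshal
import types


def greet(name):
    return f"hello, {name}"


# Serialize only the code object (pickle cannot handle functions),
# then rebuild a callable "on the other side" from name + code bytes.
code_bytes = marshal.dumps(greet.__code__)
rebuilt = types.FunctionType(marshal.loads(code_bytes), globals(), "greet")
```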
class ScapyTrafficGenerator(CapturingTrafficGenerator):
"""Provides access to scapy functions via an RPC interface.
@@ -41,10 +180,19 @@ class ScapyTrafficGenerator(CapturingTrafficGenerator):
Arguments:
tg_node: The node where the traffic generator resides.
config: The user configuration of the traffic generator.
+
+ Attributes:
+ session: The exclusive interactive remote session created by the Scapy
+ traffic generator where the XML-RPC server runs.
+ rpc_server_proxy: The object used by clients to execute functions
+ on the XML-RPC server.
"""
+ session: PythonShell
+ rpc_server_proxy: xmlrpc.client.ServerProxy
_config: ScapyTrafficGeneratorConfig
_tg_node: TGNode
+ _logger: DTSLOG
def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig):
self._config = config
@@ -57,8 +205,58 @@ def __init__(self, tg_node: TGNode, config: ScapyTrafficGeneratorConfig):
self._tg_node.config.os == OS.linux
), "Linux is the only supported OS for scapy traffic generation"
+ self.session = self._tg_node.create_interactive_shell(
+ PythonShell, timeout=5, privileged=True
+ )
+
+ # import libs in remote python console
+ for import_statement in SCAPY_RPC_SERVER_IMPORTS:
+ self.session.send_command(import_statement)
+
+ # start the server
+ xmlrpc_server_listen_port = 8000
+ self._start_xmlrpc_server_in_remote_python(xmlrpc_server_listen_port)
+
+ # connect to the server
+ server_url = (
+ f"http://{self._tg_node.config.hostname}:{xmlrpc_server_listen_port}"
+ )
+ self.rpc_server_proxy = xmlrpc.client.ServerProxy(
+ server_url, allow_none=True, verbose=SETTINGS.verbose
+ )
+
+ # add functions to the server
+ for function in RPC_FUNCTIONS:
+ # A slightly hacky way to move a function to the remote server.
+ # It is constructed from the name and code on the other side.
+ # Pickle cannot handle functions, nor can any of the other serialization
+ # frameworks aside from the libraries used to generate pyc files, which
+ # are even more messy to work with.
+ function_bytes = marshal.dumps(function.__code__)
+ self.rpc_server_proxy.add_rpc_function(function.__name__, function_bytes)
+
+ def _start_xmlrpc_server_in_remote_python(self, listen_port: int):
+ # load the source of the function
+ src = inspect.getsource(QuittableXMLRPCServer)
+ # Lines with only whitespace break the repl if in the middle of a function
+ # or class, so strip all lines containing only whitespace
+ src = "\n".join(
+ [line for line in src.splitlines() if not line.isspace() and line != ""]
+ )
+
+ spacing = "\n" * 4
+
+ # execute it in the python terminal
+ self.session.send_command(spacing + src + spacing)
+ self.session.send_command(
+ f"server = QuittableXMLRPCServer(('0.0.0.0', {listen_port}));"
+ f"server.serve_forever()",
+ "XMLRPC OK",
+ )
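The whitespace-stripping step above can be checked in isolation: a REPL treats a whitespace-only line as end-of-block, which would truncate a pasted class definition mid-way.

```python
# A line containing only spaces would end the pasted block in a REPL,
# truncating the class; strip such lines before sending the source.
src = "class Demo:\n    def ping(self):\n    \n        return 'pong'\n"
cleaned = "\n".join(
    line for line in src.splitlines() if not line.isspace() and line != ""
)
```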
+
def _send_packets(self, packets: list[Packet], port: Port) -> None:
- raise NotImplementedError()
+ packets = [packet.build() for packet in packets]
+ self.rpc_server_proxy.scapy_send_packets(packets, port.logical_name)
def _send_packets_and_capture(
self,
@@ -68,7 +266,25 @@ def _send_packets_and_capture(
duration: float,
capture_name: str = _get_default_capture_name(),
) -> list[Packet]:
- raise NotImplementedError()
+ binary_packets = [packet.build() for packet in packets]
+
+ xmlrpc_packets: list[
+ xmlrpc.client.Binary
+ ] = self.rpc_server_proxy.scapy_send_packets_and_capture(
+ binary_packets,
+ send_port.logical_name,
+ receive_port.logical_name,
+ duration,
+ ) # type: ignore[assignment]
+
+ scapy_packets = [Ether(packet.data) for packet in xmlrpc_packets]
+ return scapy_packets
def close(self):
- pass
+ try:
+ self.rpc_server_proxy.quit()
+ except ConnectionRefusedError:
+ # Because the python instance closes, we get no RPC response.
+ # Thus, this error is expected.
+ pass
+ self.session.close()
--
2.34.1
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PATCH v3 6/6] dts: add basic UDP test case
2023-07-19 14:12 ` [PATCH v3 " Juraj Linkeš
` (4 preceding siblings ...)
2023-07-19 14:13 ` [PATCH v3 5/6] dts: scapy traffic generator implementation Juraj Linkeš
@ 2023-07-19 14:13 ` Juraj Linkeš
2023-07-20 15:21 ` Jeremy Spewock
2023-07-24 14:23 ` [PATCH v3 0/6] dts: tg abstractions and scapy tg Thomas Monjalon
6 siblings, 1 reply; 29+ messages in thread
From: Juraj Linkeš @ 2023-07-19 14:13 UTC (permalink / raw)
To: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson,
jspewock, probb
Cc: dev, Juraj Linkeš
The test case showcases the scapy traffic generator code.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
dts/conf.yaml | 1 +
dts/framework/config/conf_yaml_schema.json | 3 +-
dts/framework/remote_session/linux_session.py | 20 +-
dts/framework/remote_session/os_session.py | 20 +-
dts/framework/test_suite.py | 217 +++++++++++++++++-
dts/framework/testbed_model/node.py | 14 +-
dts/framework/testbed_model/sut_node.py | 3 +
dts/tests/TestSuite_os_udp.py | 45 ++++
8 files changed, 315 insertions(+), 8 deletions(-)
create mode 100644 dts/tests/TestSuite_os_udp.py
diff --git a/dts/conf.yaml b/dts/conf.yaml
index 0440d1d20a..37967daea0 100644
--- a/dts/conf.yaml
+++ b/dts/conf.yaml
@@ -13,6 +13,7 @@ executions:
skip_smoke_tests: false # optional flag that allows you to skip smoke tests
test_suites:
- hello_world
+ - os_udp
system_under_test_node:
node_name: "SUT 1"
vdevs: # optional; if removed, vdevs won't be used in the execution
diff --git a/dts/framework/config/conf_yaml_schema.json b/dts/framework/config/conf_yaml_schema.json
index 936a4bac5b..84e45fe3c2 100644
--- a/dts/framework/config/conf_yaml_schema.json
+++ b/dts/framework/config/conf_yaml_schema.json
@@ -185,7 +185,8 @@
"test_suite": {
"type": "string",
"enum": [
- "hello_world"
+ "hello_world",
+ "os_udp"
]
},
"test_target": {
diff --git a/dts/framework/remote_session/linux_session.py b/dts/framework/remote_session/linux_session.py
index decce4039c..a3f1a6bf3b 100644
--- a/dts/framework/remote_session/linux_session.py
+++ b/dts/framework/remote_session/linux_session.py
@@ -3,7 +3,8 @@
# Copyright(c) 2023 University of New Hampshire
import json
-from typing import TypedDict
+from ipaddress import IPv4Interface, IPv6Interface
+from typing import TypedDict, Union
from typing_extensions import NotRequired
@@ -181,3 +182,20 @@ def configure_port_state(self, port: Port, enable: bool) -> None:
self.send_command(
f"ip link set dev {port.logical_name} {state}", privileged=True
)
+
+ def configure_port_ip_address(
+ self,
+ address: Union[IPv4Interface, IPv6Interface],
+ port: Port,
+ delete: bool,
+ ) -> None:
+ command = "del" if delete else "add"
+ self.send_command(
+ f"ip address {command} {address} dev {port.logical_name}",
+ privileged=True,
+ verify=True,
+ )
+
+ def configure_ipv4_forwarding(self, enable: bool) -> None:
+ state = 1 if enable else 0
+ self.send_command(f"sysctl -w net.ipv4.ip_forward={state}", privileged=True)
diff --git a/dts/framework/remote_session/os_session.py b/dts/framework/remote_session/os_session.py
index ab4bfbfe4c..8a709eac1c 100644
--- a/dts/framework/remote_session/os_session.py
+++ b/dts/framework/remote_session/os_session.py
@@ -4,8 +4,9 @@
from abc import ABC, abstractmethod
from collections.abc import Iterable
+from ipaddress import IPv4Interface, IPv6Interface
from pathlib import PurePath
-from typing import Type, TypeVar
+from typing import Type, TypeVar, Union
from framework.config import Architecture, NodeConfiguration, NodeInfo
from framework.logger import DTSLOG
@@ -264,3 +265,20 @@ def configure_port_state(self, port: Port, enable: bool) -> None:
"""
Enable/disable port.
"""
+
+ @abstractmethod
+ def configure_port_ip_address(
+ self,
+ address: Union[IPv4Interface, IPv6Interface],
+ port: Port,
+ delete: bool,
+ ) -> None:
+ """
+ Configure (add or delete) an IP address of the input port.
+ """
+
+ @abstractmethod
+ def configure_ipv4_forwarding(self, enable: bool) -> None:
+ """
+ Enable IPv4 forwarding in the underlying OS.
+ """
diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index 056460dd05..3b890c0451 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -9,7 +9,13 @@
import importlib
import inspect
import re
+from ipaddress import IPv4Interface, IPv6Interface, ip_interface
from types import MethodType
+from typing import Union
+
+from scapy.layers.inet import IP # type: ignore[import]
+from scapy.layers.l2 import Ether # type: ignore[import]
+from scapy.packet import Packet, Padding # type: ignore[import]
from .exception import (
BlockingTestSuiteError,
@@ -21,6 +27,8 @@
from .settings import SETTINGS
from .test_result import BuildTargetResult, Result, TestCaseResult, TestSuiteResult
from .testbed_model import SutNode, TGNode
+from .testbed_model.hw.port import Port, PortLink
+from .utils import get_packet_summaries
class TestSuite(object):
@@ -47,6 +55,15 @@ class TestSuite(object):
_test_cases_to_run: list[str]
_func: bool
_result: TestSuiteResult
+ _port_links: list[PortLink]
+ _sut_port_ingress: Port
+ _sut_port_egress: Port
+ _sut_ip_address_ingress: Union[IPv4Interface, IPv6Interface]
+ _sut_ip_address_egress: Union[IPv4Interface, IPv6Interface]
+ _tg_port_ingress: Port
+ _tg_port_egress: Port
+ _tg_ip_address_ingress: Union[IPv4Interface, IPv6Interface]
+ _tg_ip_address_egress: Union[IPv4Interface, IPv6Interface]
def __init__(
self,
@@ -63,6 +80,31 @@ def __init__(
self._test_cases_to_run.extend(SETTINGS.test_cases)
self._func = func
self._result = build_target_result.add_test_suite(self.__class__.__name__)
+ self._port_links = []
+ self._process_links()
+ self._sut_port_ingress, self._tg_port_egress = (
+ self._port_links[0].sut_port,
+ self._port_links[0].tg_port,
+ )
+ self._sut_port_egress, self._tg_port_ingress = (
+ self._port_links[1].sut_port,
+ self._port_links[1].tg_port,
+ )
+ self._sut_ip_address_ingress = ip_interface("192.168.100.2/24")
+ self._sut_ip_address_egress = ip_interface("192.168.101.2/24")
+ self._tg_ip_address_egress = ip_interface("192.168.100.3/24")
+ self._tg_ip_address_ingress = ip_interface("192.168.101.3/24")
+
+ def _process_links(self) -> None:
+ for sut_port in self.sut_node.ports:
+ for tg_port in self.tg_node.ports:
+ if (sut_port.identifier, sut_port.peer) == (
+ tg_port.peer,
+ tg_port.identifier,
+ ):
+ self._port_links.append(
+ PortLink(sut_port=sut_port, tg_port=tg_port)
+ )
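The peer matching in `_process_links` pairs a SUT port with a TG port when each side's `peer` field names the other side's `identifier`. A minimal sketch with stand-in ports (the PCI addresses are illustrative):

```python
from typing import NamedTuple


# Stand-in for Port: a link exists when each side's peer field
# names the other side's identifier.
class FakePort(NamedTuple):
    identifier: str
    peer: str


sut_ports = [FakePort("0000:00:08.0", "0000:00:08.1")]
tg_ports = [FakePort("0000:00:08.1", "0000:00:08.0")]

links = [
    (sut, tg)
    for sut in sut_ports
    for tg in tg_ports
    if (sut.identifier, sut.peer) == (tg.peer, tg.identifier)
]
```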
def set_up_suite(self) -> None:
"""
@@ -85,14 +127,181 @@ def tear_down_test_case(self) -> None:
Tear down the previously created test fixtures after each test case.
"""
+ def configure_testbed_ipv4(self, restore: bool = False) -> None:
+ delete = True if restore else False
+ enable = False if restore else True
+ self._configure_ipv4_forwarding(enable)
+ self.sut_node.configure_port_ip_address(
+ self._sut_ip_address_egress, self._sut_port_egress, delete
+ )
+ self.sut_node.configure_port_state(self._sut_port_egress, enable)
+ self.sut_node.configure_port_ip_address(
+ self._sut_ip_address_ingress, self._sut_port_ingress, delete
+ )
+ self.sut_node.configure_port_state(self._sut_port_ingress, enable)
+ self.tg_node.configure_port_ip_address(
+ self._tg_ip_address_ingress, self._tg_port_ingress, delete
+ )
+ self.tg_node.configure_port_state(self._tg_port_ingress, enable)
+ self.tg_node.configure_port_ip_address(
+ self._tg_ip_address_egress, self._tg_port_egress, delete
+ )
+ self.tg_node.configure_port_state(self._tg_port_egress, enable)
+
+ def _configure_ipv4_forwarding(self, enable: bool) -> None:
+ self.sut_node.configure_ipv4_forwarding(enable)
+
+ def send_packet_and_capture(
+ self, packet: Packet, duration: float = 1
+ ) -> list[Packet]:
+ """
+ Send a packet through the appropriate interface and
+ receive on the appropriate interface.
+ Modify the packet with l3/l2 addresses corresponding
+ to the testbed and desired traffic.
+ """
+ packet = self._adjust_addresses(packet)
+ return self.tg_node.send_packet_and_capture(
+ packet, self._tg_port_egress, self._tg_port_ingress, duration
+ )
+
+ def get_expected_packet(self, packet: Packet) -> Packet:
+ return self._adjust_addresses(packet, expected=True)
+
+ def _adjust_addresses(self, packet: Packet, expected: bool = False) -> Packet:
+ """
+ Assumptions:
+ Two links between SUT and TG, one link is TG -> SUT,
+ the other SUT -> TG.
+ """
+ if expected:
+ # The packet enters the TG from SUT
+ # update l2 addresses
+ packet.src = self._sut_port_egress.mac_address
+ packet.dst = self._tg_port_ingress.mac_address
+
+ # The packet is routed from TG egress to TG ingress
+ # update l3 addresses
+ packet.payload.src = self._tg_ip_address_egress.ip.exploded
+ packet.payload.dst = self._tg_ip_address_ingress.ip.exploded
+ else:
+ # The packet leaves TG towards SUT
+ # update l2 addresses
+ packet.src = self._tg_port_egress.mac_address
+ packet.dst = self._sut_port_ingress.mac_address
+
+ # The packet is routed from TG egress to TG ingress
+ # update l3 addresses
+ packet.payload.src = self._tg_ip_address_egress.ip.exploded
+ packet.payload.dst = self._tg_ip_address_ingress.ip.exploded
+
+ return Ether(packet.build())
+
def verify(self, condition: bool, failure_description: str) -> None:
if not condition:
+ self._fail_test_case_verify(failure_description)
+
+ def _fail_test_case_verify(self, failure_description: str) -> None:
+ self._logger.debug(
+ "A test case failed, showing the last 10 commands executed on SUT:"
+ )
+ for command_res in self.sut_node.main_session.remote_session.history[-10:]:
+ self._logger.debug(command_res.command)
+ self._logger.debug(
+ "A test case failed, showing the last 10 commands executed on TG:"
+ )
+ for command_res in self.tg_node.main_session.remote_session.history[-10:]:
+ self._logger.debug(command_res.command)
+ raise TestCaseVerifyError(failure_description)
+
+ def verify_packets(
+ self, expected_packet: Packet, received_packets: list[Packet]
+ ) -> None:
+ for received_packet in received_packets:
+ if self._compare_packets(expected_packet, received_packet):
+ break
+ else:
+ self._logger.debug(
+ f"The expected packet {get_packet_summaries(expected_packet)} "
+ f"not found among received {get_packet_summaries(received_packets)}"
+ )
+ self._fail_test_case_verify(
+ "An expected packet not found among received packets."
+ )
+
+ def _compare_packets(
+ self, expected_packet: Packet, received_packet: Packet
+ ) -> bool:
+ self._logger.debug(
+ "Comparing packets: \n"
+ f"{expected_packet.summary()}\n"
+ f"{received_packet.summary()}"
+ )
+
+ l3 = IP in expected_packet.layers()
+ self._logger.debug(f"Found l3 layer: {l3}")
+
+ received_payload = received_packet
+ expected_payload = expected_packet
+ while received_payload and expected_payload:
+ self._logger.debug("Comparing payloads:")
+ self._logger.debug(f"Received: {received_payload}")
+ self._logger.debug(f"Expected: {expected_payload}")
+ if received_payload.__class__ == expected_payload.__class__:
+ self._logger.debug("The layers are the same.")
+ if received_payload.__class__ == Ether:
+ if not self._verify_l2_frame(received_payload, l3):
+ return False
+ elif received_payload.__class__ == IP:
+ if not self._verify_l3_packet(received_payload, expected_payload):
+ return False
+ else:
+ # Different layers => different packets
+ return False
+ received_payload = received_payload.payload
+ expected_payload = expected_payload.payload
+
+ if expected_payload:
self._logger.debug(
- "A test case failed, showing the last 10 commands executed on SUT:"
+ f"The expected packet did not contain {expected_payload}."
)
- for command_res in self.sut_node.main_session.remote_session.history[-10:]:
- self._logger.debug(command_res.command)
- raise TestCaseVerifyError(failure_description)
+ return False
+ if received_payload and received_payload.__class__ != Padding:
+ self._logger.debug(
+ "The received payload had extra layers which were not padding."
+ )
+ return False
+ return True
+
+ def _verify_l2_frame(self, received_packet: Ether, l3: bool) -> bool:
+ self._logger.debug("Looking at the Ether layer.")
+ self._logger.debug(
+ f"Comparing received dst mac '{received_packet.dst}' "
+ f"with expected '{self._tg_port_ingress.mac_address}'."
+ )
+ if received_packet.dst != self._tg_port_ingress.mac_address:
+ return False
+
+ expected_src_mac = self._tg_port_egress.mac_address
+ if l3:
+ expected_src_mac = self._sut_port_egress.mac_address
+ self._logger.debug(
+ f"Comparing received src mac '{received_packet.src}' "
+ f"with expected '{expected_src_mac}'."
+ )
+ if received_packet.src != expected_src_mac:
+ return False
+
+ return True
+
+ def _verify_l3_packet(self, received_packet: IP, expected_packet: IP) -> bool:
+ self._logger.debug("Looking at the IP layer.")
+ if (
+ received_packet.src != expected_packet.src
+ or received_packet.dst != expected_packet.dst
+ ):
+ return False
+ return True
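The layer-by-layer walk in `_compare_packets` can be sketched without scapy by modeling a packet as nested `(layer_name, payload)` tuples; this is a simplification that omits the trailing-`Padding` allowance of the real code.

```python
# Simplified layer walk: packets as nested (layer_name, payload) tuples.
def layers_match(expected, received):
    while expected and received:
        if expected[0] != received[0]:
            return False  # different layers => different packets
        expected, received = expected[1], received[1]
    # leftover expected layers mean missing content; leftover received
    # layers would need the Padding check from the real implementation
    return expected is None and received is None


udp = ("Ether", ("IP", ("UDP", None)))
truncated = ("Ether", ("IP", None))
```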
def run(self) -> None:
"""
diff --git a/dts/framework/testbed_model/node.py b/dts/framework/testbed_model/node.py
index c666dfbf4e..fc01e0bf8e 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -8,7 +8,8 @@
"""
from abc import ABC
-from typing import Any, Callable, Type
+from ipaddress import IPv4Interface, IPv6Interface
+from typing import Any, Callable, Type, Union
from framework.config import (
BuildTargetConfiguration,
@@ -221,6 +222,17 @@ def configure_port_state(self, port: Port, enable: bool = True) -> None:
"""
self.main_session.configure_port_state(port, enable)
+ def configure_port_ip_address(
+ self,
+ address: Union[IPv4Interface, IPv6Interface],
+ port: Port,
+ delete: bool = False,
+ ) -> None:
+ """
+ Configure the IP address of a port on this node.
+ """
+ self.main_session.configure_port_ip_address(address, port, delete)
+
def close(self) -> None:
"""
Close all connections and free other resources.
diff --git a/dts/framework/testbed_model/sut_node.py b/dts/framework/testbed_model/sut_node.py
index f0b017a383..202aebfd06 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -351,6 +351,9 @@ def run_dpdk_app(
f"{app_path} {eal_args}", timeout, privileged=True, verify=True
)
+ def configure_ipv4_forwarding(self, enable: bool) -> None:
+ self.main_session.configure_ipv4_forwarding(enable)
+
def create_interactive_shell(
self,
shell_cls: Type[InteractiveShellType],
diff --git a/dts/tests/TestSuite_os_udp.py b/dts/tests/TestSuite_os_udp.py
new file mode 100644
index 0000000000..9b5f39711d
--- /dev/null
+++ b/dts/tests/TestSuite_os_udp.py
@@ -0,0 +1,45 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 PANTHEON.tech s.r.o.
+
+"""
+Configure SUT node to route traffic from if1 to if2.
+Send a packet to the SUT node, verify it comes back on the second port on the TG node.
+"""
+
+from scapy.layers.inet import IP, UDP # type: ignore[import]
+from scapy.layers.l2 import Ether # type: ignore[import]
+
+from framework.test_suite import TestSuite
+
+
+class TestOSUdp(TestSuite):
+ def set_up_suite(self) -> None:
+ """
+ Setup:
+ Configure SUT ports and SUT to route traffic from if1 to if2.
+ """
+
+ self.configure_testbed_ipv4()
+
+ def test_os_udp(self) -> None:
+ """
+ Steps:
+ Send a UDP packet.
+ Verify:
+ The packet with proper addresses arrives at the other TG port.
+ """
+
+ packet = Ether() / IP() / UDP()
+
+ received_packets = self.send_packet_and_capture(packet)
+
+ expected_packet = self.get_expected_packet(packet)
+
+ self.verify_packets(expected_packet, received_packets)
+
+ def tear_down_suite(self) -> None:
+ """
+ Teardown:
+ Remove the SUT port configuration configured in setup.
+ """
+ self.configure_testbed_ipv4(restore=True)
--
2.34.1
* Re: [PATCH v3 6/6] dts: add basic UDP test case
2023-07-19 14:13 ` [PATCH v3 6/6] dts: add basic UDP test case Juraj Linkeš
@ 2023-07-20 15:21 ` Jeremy Spewock
0 siblings, 0 replies; 29+ messages in thread
From: Jeremy Spewock @ 2023-07-20 15:21 UTC (permalink / raw)
To: Juraj Linkeš
Cc: thomas, Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, probb, dev
Acked-by: Jeremy Spewock <jspewock@iol.unh.edu>
* Re: [PATCH v3 0/6] dts: tg abstractions and scapy tg
2023-07-19 14:12 ` [PATCH v3 " Juraj Linkeš
` (5 preceding siblings ...)
2023-07-19 14:13 ` [PATCH v3 6/6] dts: add basic UDP test case Juraj Linkeš
@ 2023-07-24 14:23 ` Thomas Monjalon
6 siblings, 0 replies; 29+ messages in thread
From: Thomas Monjalon @ 2023-07-24 14:23 UTC (permalink / raw)
To: Juraj Linkeš
Cc: Honnappa.Nagarahalli, lijuan.tu, bruce.richardson, jspewock, probb, dev
19/07/2023 16:12, Juraj Linkeš:
> Add abstractions for traffic generator split into those that can and
> can't capture traffic.
>
> The Scapy implementation uses an XML-RPC server for remote control. This
> requires an interactive session to add Scapy functions to the server. The
> interactive session code is based on another patch [0].
>
> The basic test case is there to showcase the Scapy implementation - it
> sends just one UDP packet and verifies it on the other end.
>
> [0]: http://patches.dpdk.org/project/dpdk/patch/20230718214802.3214-3-jspewock@iol.unh.edu/
>
> Juraj Linkeš (6):
> dts: add scapy dependency
> dts: add traffic generator config
> dts: traffic generator abstractions
> dts: add python remote interactive shell
> dts: scapy traffic generator implementation
> dts: add basic UDP test case
Applied, thanks
end of thread, other threads:[~2023-07-24 14:23 UTC | newest]
Thread overview: 29+ messages
2023-04-20 9:31 [RFC PATCH v1 0/5] dts: add tg abstractions and scapy Juraj Linkeš
2023-04-20 9:31 ` [RFC PATCH v1 1/5] dts: add scapy dependency Juraj Linkeš
2023-04-20 9:31 ` [RFC PATCH v1 2/5] dts: add traffic generator config Juraj Linkeš
2023-04-20 9:31 ` [RFC PATCH v1 3/5] dts: traffic generator abstractions Juraj Linkeš
2023-04-20 9:31 ` [RFC PATCH v1 4/5] dts: scapy traffic generator implementation Juraj Linkeš
2023-04-20 9:31 ` [RFC PATCH v1 5/5] dts: add traffic generator node to dts runner Juraj Linkeš
2023-05-03 18:02 ` Jeremy Spewock
2023-07-17 11:07 ` [PATCH v2 0/6] dts: tg abstractions and scapy tg Juraj Linkeš
2023-07-17 11:07 ` [PATCH v2 1/6] dts: add scapy dependency Juraj Linkeš
2023-07-17 11:07 ` [PATCH v2 2/6] dts: add traffic generator config Juraj Linkeš
2023-07-18 15:55 ` Jeremy Spewock
2023-07-19 12:57 ` Juraj Linkeš
2023-07-19 13:18 ` Jeremy Spewock
2023-07-17 11:07 ` [PATCH v2 3/6] dts: traffic generator abstractions Juraj Linkeš
2023-07-18 19:56 ` Jeremy Spewock
2023-07-19 13:23 ` Juraj Linkeš
2023-07-17 11:07 ` [PATCH v2 4/6] dts: add python remote interactive shell Juraj Linkeš
2023-07-17 11:07 ` [PATCH v2 5/6] dts: scapy traffic generator implementation Juraj Linkeš
2023-07-17 11:07 ` [PATCH v2 6/6] dts: add basic UDP test case Juraj Linkeš
2023-07-18 21:04 ` [PATCH v2 0/6] dts: tg abstractions and scapy tg Jeremy Spewock
2023-07-19 14:12 ` [PATCH v3 " Juraj Linkeš
2023-07-19 14:12 ` [PATCH v3 1/6] dts: add scapy dependency Juraj Linkeš
2023-07-19 14:12 ` [PATCH v3 2/6] dts: add traffic generator config Juraj Linkeš
2023-07-19 14:13 ` [PATCH v3 3/6] dts: traffic generator abstractions Juraj Linkeš
2023-07-19 14:13 ` [PATCH v3 4/6] dts: add python remote interactive shell Juraj Linkeš
2023-07-19 14:13 ` [PATCH v3 5/6] dts: scapy traffic generator implementation Juraj Linkeš
2023-07-19 14:13 ` [PATCH v3 6/6] dts: add basic UDP test case Juraj Linkeš
2023-07-20 15:21 ` Jeremy Spewock
2023-07-24 14:23 ` [PATCH v3 0/6] dts: tg abstractions and scapy tg Thomas Monjalon