From: Andrew Bailey <abailey@iol.unh.edu>
Date: Thu, 6 Nov 2025 08:30:46 -0500
Subject: Re: [PATCH v6 3/3] dts: add performance test functions to test suite API
To: Patrick Robb <probb@iol.unh.edu>
Cc: Luca.Vizzarro@arm.com, dev@dpdk.org, Paul.Szczepanek@arm.com, dmarx@iol.unh.edu, Nicholas Pratte <npratte@iol.unh.edu>
In-Reply-To: <20251105223628.1659390-4-probb@iol.unh.edu>

+                params["measured_mpps"] = self._transmit(testpmd, frame_size)
+                params["performance_delta"] = (
+                    float(params["measured_mpps"]) - float(params["expected_mpps"])
+                ) / float(params["expected_mpps"])
+                params["pass"] = float(params["performance_delta"]) >= -self.delta_tolerance

This code seems like it can produce false positives. If we are checking whether
a measured mpps is within delta_tolerance of the expected value,
(pass = measured_mpps >= expected_mpps - delta_tolerance) may be more
effective. As an example of a false positive, use measured_mpps = 1.0,
expected_mpps = 2.0, and delta_tolerance = 0.51. This should be a fail, since
1.0 is not within 0.51 of 2.0, but the current check evaluates
(-0.50) >= (-0.51), which is true.
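
To make this concrete, here is a small self-contained sketch (plain floats in
place of the suite's params dict, and the passes_* names exist only for this
illustration; it is not part of the patch) showing how the two checks behave
on the example above:

    # Check as written in the patch: the tolerance is applied to the
    # fractional shortfall relative to the expected baseline.
    measured_mpps = 1.0
    expected_mpps = 2.0
    delta_tolerance = 0.51

    performance_delta = (measured_mpps - expected_mpps) / expected_mpps  # -0.50
    passes_relative = performance_delta >= -delta_tolerance  # -0.50 >= -0.51 -> True

    # Check suggested above: the tolerance is applied as an absolute mpps margin.
    passes_absolute = measured_mpps >= expected_mpps - delta_tolerance  # 1.0 >= 1.49 -> False

    print(passes_relative, passes_absolute)  # True False

The two conditions coincide when expected_mpps is 1.0 (as in the example
config) and diverge otherwise, since the patch treats delta_tolerance as a
fraction of the baseline while the check above treats it as an absolute mpps
margin.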
> """ > packets =3D adjust_addresses(packets) > - get_ctx().func_tg.send_packets(packets, > get_ctx().topology.tg_port_egress) > + tg =3D get_ctx().func_tg > + if tg: > + tg.send_packets(packets, get_ctx().topology.tg_port_egress) > > > def get_expected_packets( > @@ -317,3 +322,31 @@ def _verify_l3_packet(received_packet: IP, > expected_packet: IP) -> bool: > if received_packet.src !=3D expected_packet.src or received_packet.d= st > !=3D expected_packet.dst: > return False > return True > + > + > +def assess_performance_by_packet( > + packet: Packet, duration: float, send_mpps: int | None =3D None > +) -> PerformanceTrafficStats: > + """Send a given packet for a given duration and assess basic > performance statistics. > + > + Send `packet` and assess NIC performance for a given duration, > corresponding to the test > + suite's given topology. > + > + Args: > + packet: The packet to send. > + duration: Performance test duration (in seconds). > + send_mpps: The millions packets per second send rate. > + > + Returns: > + Performance statistics of the generated test. > + """ > + from > framework.testbed_model.traffic_generator.performance_traffic_generator > import ( > + PerformanceTrafficGenerator, > + ) > + > + assert isinstance( > + get_ctx().perf_tg, PerformanceTrafficGenerator > + ), "Cannot send performance traffic with non-performance traffic > generator" > + tg: PerformanceTrafficGenerator =3D cast(PerformanceTrafficGenerator= , > get_ctx().perf_tg) > + # TODO: implement @requires for types of traffic generator > + return tg.calculate_traffic_and_stats(packet, duration, send_mpps) > diff --git a/dts/api/test.py b/dts/api/test.py > index f58c82715d..11265ee2c1 100644 > --- a/dts/api/test.py > +++ b/dts/api/test.py > @@ -6,9 +6,13 @@ > This module provides utility functions for test cases, including logging= , > verification. > """ > > +import json > +from datetime import datetime > + > from framework.context import get_ctx > from framework.exception import InternalError, SkippedTestException, > TestCaseVerifyError > from framework.logger import DTSLogger > +from framework.testbed_model.artifact import Artifact > > > def get_current_test_case_name() -> str: > @@ -124,3 +128,31 @@ def get_logger() -> DTSLogger: > if current_test_suite is None: > raise InternalError("No current test suite") > return current_test_suite._logger > + > + > +def write_performance_json( > + performance_data: dict, filename: str =3D "performance_metrics.json" > +) -> None: > + """Write performance test results to a JSON file in the test suite's > output directory. > + > + This method creates a JSON file containing performance metrics in th= e > test suite's > + output directory. The data can be a dictionary of any structure. No > specific format > + is required. > + > + Args: > + performance_data: Dictionary containing performance metrics and > results. > + filename: Name of the JSON file to create. > + > + Raises: > + InternalError: If performance data is not provided. 
> + """ > + if not performance_data: > + raise InternalError("No performance data to write") > + > + perf_data =3D {"timestamp": datetime.now().isoformat(), > **performance_data} > + perf_json_artifact =3D Artifact("local", filename) > + > + with perf_json_artifact.open("w") as json_file: > + json.dump(perf_data, json_file, indent=3D2) > + > + get_logger().info(f"Performance results written to: > {perf_json_artifact.local_path}") > diff --git a/dts/configurations/tests_config.example.yaml > b/dts/configurations/tests_config.example.yaml > index c011ac0588..167bc91a35 100644 > --- a/dts/configurations/tests_config.example.yaml > +++ b/dts/configurations/tests_config.example.yaml > @@ -3,3 +3,15 @@ > # Define the custom test suite configurations > hello_world: > msg: A custom hello world to you! > +single_core_forward_perf: > + test_parameters: # Add frame size / descriptor count combinations as > needed > + - frame_size: 64 > + num_descriptors: 512 > + expected_mpps: 1.0 # Set millions of packets per second according > to your devices expected throughput for this given frame size / descripto= r > count > + - frame_size: 64 > + num_descriptors: 1024 > + expected_mpps: 1.0 > + - frame_size: 512 > + num_descriptors: 1024 > + expected_mpps: 1.0 > + delta_tolerance: 0.05 > \ No newline at end of file > diff --git a/dts/tests/TestSuite_single_core_forward_perf.py > b/dts/tests/TestSuite_single_core_forward_perf.py > new file mode 100644 > index 0000000000..8a92ba39b5 > --- /dev/null > +++ b/dts/tests/TestSuite_single_core_forward_perf.py > @@ -0,0 +1,149 @@ > +# SPDX-License-Identifier: BSD-3-Clause > +# Copyright(c) 2025 University of New Hampshire > + > +"""Single core forwarding performance test suite. > + > +This suite measures the amount of packets which can be forwarded by DPDK > using a single core. > +The testsuites takes in as parameters a set of parameters, each > consisting of a frame size, > +Tx/Rx descriptor count, and the expected MPPS to be forwarded by the DPD= K > application. The > +test leverages a performance traffic generator to send traffic at two > paired TestPMD interfaces > +on the SUT system, which forward to one another and then back to the > traffic generator's ports. > +The aggregate packets forwarded by the two TestPMD ports are compared > against the expected MPPS > +baseline which is given in the test config, in order to determine the > test result. 
> +""" > + > +from scapy.layers.inet import IP > +from scapy.layers.l2 import Ether > +from scapy.packet import Raw > + > +from api.capabilities import ( > + LinkTopology, > + requires_link_topology, > +) > +from api.packet import assess_performance_by_packet > +from api.test import verify, write_performance_json > +from api.testpmd import TestPmd > +from api.testpmd.config import RXRingParams, TXRingParams > +from framework.params.types import TestPmdParamsDict > +from framework.test_suite import BaseConfig, TestSuite, perf_test > + > + > +class Config(BaseConfig): > + """Performance test metrics.""" > + > + test_parameters: list[dict[str, int | float]] =3D [ > + {"frame_size": 64, "num_descriptors": 1024, "expected_mpps": > 1.00}, > + {"frame_size": 128, "num_descriptors": 1024, "expected_mpps": > 1.00}, > + {"frame_size": 256, "num_descriptors": 1024, "expected_mpps": > 1.00}, > + {"frame_size": 512, "num_descriptors": 1024, "expected_mpps": > 1.00}, > + {"frame_size": 1024, "num_descriptors": 1024, "expected_mpps": > 1.00}, > + {"frame_size": 1518, "num_descriptors": 1024, "expected_mpps": > 1.00}, > + ] > + delta_tolerance: float =3D 0.05 > + > + > +@requires_link_topology(LinkTopology.TWO_LINKS) > +class TestSingleCoreForwardPerf(TestSuite): > + """Single core forwarding performance test suite.""" > + > + config: Config > + > + def set_up_suite(self): > + """Set up the test suite.""" > + self.test_parameters =3D self.config.test_parameters > + self.delta_tolerance =3D self.config.delta_tolerance > + > + def _transmit(self, testpmd: TestPmd, frame_size: int) -> float: > + """Create a testpmd session with every rule in the given list, > verify jump behavior. > + > + Args: > + testpmd: The testpmd shell to use for forwarding packets. > + frame_size: The size of the frame to transmit. > + > + Returns: > + The MPPS (millions of packets per second) forwarded by the > SUT. > + """ > + # Build packet with dummy values, and account for the 14B and 20= B > Ether and IP headers > + packet =3D ( > + Ether(src=3D"52:00:00:00:00:00") > + / IP(src=3D"1.2.3.4", dst=3D"192.18.1.0") > + / Raw(load=3D"x" * (frame_size - 14 - 20)) > + ) > + > + testpmd.start() > + > + # Transmit for 30 seconds. > + stats =3D assess_performance_by_packet(packet=3Dpacket, duration= =3D30) > + > + rx_mpps =3D stats.rx_pps / 1_000_000 > + > + return rx_mpps > + > + def _produce_stats_table(self, test_parameters: list[dict[str, int | > float]]) -> None: > + """Display performance results in table format and write to > structured JSON file. > + > + Args: > + test_parameters: The expected and real stats per set of test > parameters. > + """ > + header =3D f"{'Frame Size':>12} | {'TXD/RXD':>12} | {'Real > MPPS':>12} | {'Expected MPPS':>14}" > + print("-" * len(header)) > + print(header) > + print("-" * len(header)) > + for params in test_parameters: > + print(f"{params['frame_size']:>12} | > {params['num_descriptors']:>12} | ", end=3D"") > + print(f"{params['measured_mpps']:>12.2f} | > {params['expected_mpps']:>14.2f}") > + print("-" * len(header)) > + > + write_performance_json({"results": test_parameters}) > + > + @perf_test > + def single_core_forward_perf(self) -> None: > + """Validate expected single core forwarding performance. > + > + Steps: > + * Create a packet according to the frame size specified in > the test config. > + * Transmit from the traffic generator's ports 0 and 1 at > above the expect. > + * Forward on TestPMD's interfaces 0 and 1 with 1 core. 
> + > + Verify: > + * The resulting MPPS forwarded is greater than > expected_mpps*(1-delta_tolerance). > + """ > + # Find SUT DPDK driver to determine driver specific performance > optimization flags > + sut_dpdk_driver =3D > self._ctx.sut_node.config.ports[0].os_driver_for_dpdk > + > + for params in self.test_parameters: > + frame_size =3D params["frame_size"] > + num_descriptors =3D params["num_descriptors"] > + > + driver_specific_testpmd_args: TestPmdParamsDict =3D { > + "tx_ring": TXRingParams(descriptors=3Dnum_descriptors), > + "rx_ring": RXRingParams(descriptors=3Dnum_descriptors), > + "nb_cores": 1, > + } > + > + if sut_dpdk_driver =3D=3D "mlx5_core": > + driver_specific_testpmd_args["burst"] =3D 64 > + driver_specific_testpmd_args["mbcache"] =3D 512 > + elif sut_dpdk_driver =3D=3D "i40e": > + driver_specific_testpmd_args["rx_queues"] =3D 2 > + driver_specific_testpmd_args["tx_queues"] =3D 2 > + > + with TestPmd( > + **driver_specific_testpmd_args, > + ) as testpmd: > + params["measured_mpps"] =3D self._transmit(testpmd, > frame_size) > + params["performance_delta"] =3D ( > + float(params["measured_mpps"]) - > float(params["expected_mpps"]) > + ) / float(params["expected_mpps"]) > + params["pass"] =3D float(params["performance_delta"]) >= =3D > -self.delta_tolerance > + > + self._produce_stats_table(self.test_parameters) > + > + for params in self.test_parameters: > + verify( > + params["pass"] is True, > + f"""Packets forwarded is less than {(1 > -self.delta_tolerance)*100}% > + of the expected baseline. > + Measured MPPS =3D {params["measured_mpps"]} > + Expected MPPS =3D {params["expected_mpps"]}""", > + ) > -- > 2.49.0 > > --0000000000003e0a620642ed13f7 Content-Type: text/html; charset="UTF-8" Content-Transfer-Encoding: quoted-printable