* [dts][PATCH V1 2/7] tests/l2fwd: Separated performance cases
2023-01-09 18:46 [dts][PATCH V1 1/7] tests/efd: Separated performance cases Hongbo Li
@ 2023-01-09 18:46 ` Hongbo Li
2023-01-09 18:46 ` [dts][PATCH V1 3/7] tests/tso: " Hongbo Li
` (4 subsequent siblings)
5 siblings, 0 replies; 8+ messages in thread
From: Hongbo Li @ 2023-01-09 18:46 UTC
To: dts; +Cc: Hongbo Li
Separated the l2fwd performance test cases into a dedicated perf_l2fwd suite and test plan.
Signed-off-by: Hongbo Li <hongbox.li@intel.com>
---
test_plans/l2fwd_test_plan.rst | 36 -----
test_plans/perf_l2fwd_test_plan.rst | 90 +++++++++++
tests/TestSuite_l2fwd.py | 106 -------------
tests/TestSuite_perf_l2fwd.py | 233 ++++++++++++++++++++++++++++
4 files changed, 323 insertions(+), 142 deletions(-)
create mode 100644 test_plans/perf_l2fwd_test_plan.rst
create mode 100644 tests/TestSuite_perf_l2fwd.py
diff --git a/test_plans/l2fwd_test_plan.rst b/test_plans/l2fwd_test_plan.rst
index d8803cb5..1bf9ad45 100644
--- a/test_plans/l2fwd_test_plan.rst
+++ b/test_plans/l2fwd_test_plan.rst
@@ -69,39 +69,3 @@ Trigger the packet generator of bursting packets from ``port A``, then check if
``port 0`` could receive them and ``port 1`` could forward them back. Stop it
and then trigger the packet generator of bursting packets from ``port B``, then
check if ``port 1`` could receive them and ``port 0`` could forward them back.
-
-Test Case: ``64/128/256/512/1024/1500`` bytes packet forwarding test
-====================================================================
-
-Set the packet stream to be sent out from packet generator before testing as below.
-
-+-------+---------+---------+---------+-----------+
-| Frame | 1q | 2q | 4q | 8 q |
-| Size | | | | |
-+-------+---------+---------+---------+-----------+
-| 64 | | | | |
-+-------+---------+---------+---------+-----------+
-| 65 | | | | |
-+-------+---------+---------+---------+-----------+
-| 128 | | | | |
-+-------+---------+---------+---------+-----------+
-| 256 | | | | |
-+-------+---------+---------+---------+-----------+
-| 512 | | | | |
-+-------+---------+---------+---------+-----------+
-| 1024 | | | | |
-+-------+---------+---------+---------+-----------+
-| 1280 | | | | |
-+-------+---------+---------+---------+-----------+
-| 1518 | | | | |
-+-------+---------+---------+---------+-----------+
-
-Then run the test application as below::
-
- $ ./x86_64-native-linuxapp-gcc/examples/dpdk-l2fwd -n 2 -c f -- -q 1 -p 0x3
-
-The -n command is used to select the number of memory channels. It should match the number of memory channels on that setup.
-
-Trigger the packet generator of bursting packets to the port 0 and 1 on the onboard
-NIC to be tested. Then measure the forwarding throughput for different packet sizes
-and different number of queues.
diff --git a/test_plans/perf_l2fwd_test_plan.rst b/test_plans/perf_l2fwd_test_plan.rst
new file mode 100644
index 00000000..e5fbaf30
--- /dev/null
+++ b/test_plans/perf_l2fwd_test_plan.rst
@@ -0,0 +1,90 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2010-2017 Intel Corporation
+
+===================
+L2 Forwarding Tests
+===================
+
+This test application is a basic packet processing application using Intel®
+DPDK. It is a layer-2 (L2) forwarding application which takes traffic from
+a single RX port and transmits it with very little modification on a single TX port.
+
+A packet received on an RX port (RX_PORT) is transmitted from TX port
+TX_PORT=RX_PORT+1 if RX_PORT is even, or from TX port TX_PORT=RX_PORT-1 if
+RX_PORT is odd. Before transmission, the source MAC address of the packet is
+replaced by the MAC address of the TX port, while the destination MAC address
+is replaced by 00:09:c0:00:00:TX_PORT_ID.
+
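+A minimal illustrative sketch of this pairing rule (plain Python; the
+``l2fwd_pair`` helper is hypothetical, not part of the application)::
+
+    def l2fwd_pair(rx_port):
+        # even RX ports forward to rx_port + 1, odd ones to rx_port - 1
+        tx_port = rx_port + 1 if rx_port % 2 == 0 else rx_port - 1
+        # the destination MAC encodes the TX port id in its last byte
+        dst_mac = "00:09:c0:00:00:%02x" % tx_port
+        return tx_port, dst_mac
+
+    assert l2fwd_pair(0) == (1, "00:09:c0:00:00:01")
+    assert l2fwd_pair(1) == (0, "00:09:c0:00:00:00")
+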
+The test application should be run with the wanted paired ports configured
+using the portmask parameter on the command line; e.g. ports 0 and 1 form a
+valid pair, while ports 1 and 2 do not. The test is performed by running the
+test application together with a traffic generator: packets of various sizes
+are generated, forwarded by the application, and received back by the traffic
+generator. Packet loss and throughput are the metrics to be measured.
+
+The ``l2fwd`` application is run with EAL parameters plus parameters for
+the application itself. For details about the EAL parameters, see the
+relevant DPDK **Getting Started Guide**. The application itself supports
+two parameters:
+
+- ``-p PORTMASK``: hexadecimal bitmask of ports to configure
+- ``-q NQ``: number of queues per lcore (default is 1)
+
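+For example, ``-p 0x3`` selects ports 0 and 1. A small sketch of the bitmask
+arithmetic (plain Python; the ``create_portmask`` name is illustrative,
+mirroring what the suite's ``utils.create_mask`` helper produces)::
+
+    def create_portmask(ports):
+        # set one bit per selected port index
+        mask = 0
+        for p in ports:
+            mask |= 1 << p
+        return hex(mask)
+
+    assert create_portmask([0, 1]) == "0x3"
+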
+Prerequisites
+=============
+
+If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the
+BIOS. When using vfio, use the following commands to load the vfio driver
+and bind it to the device under test::
+
+ modprobe vfio
+ modprobe vfio-pci
+ usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
+
+Assuming ports 0 and 1 are connected to the traffic generator, run the test
+application in the linuxapp environment with 4 lcores, 2 ports and 8 RX
+queues per lcore::
+
+ $ ./x86_64-native-linuxapp-gcc/examples/dpdk-l2fwd -n 1 -c f -- -q 8 -p 0x3
+
+If different ports are under test, change the port mask accordingly. The
+lcores used to run the test application and the number of queues per lcore
+may also be changed. For benchmarking, the EAL parameters and the application
+parameters should be kept identical across all test cases.
+
+Test Case: ``64/65/128/256/512/1024/1280/1518`` bytes packet forwarding test
+=============================================================================
+
+Set up the packet streams to be sent from the packet generator before testing, as below.
+
++-------+---------+---------+---------+-----------+
+| Frame | 1q | 2q | 4q | 8q |
+| Size | | | | |
++-------+---------+---------+---------+-----------+
+| 64 | | | | |
++-------+---------+---------+---------+-----------+
+| 65 | | | | |
++-------+---------+---------+---------+-----------+
+| 128 | | | | |
++-------+---------+---------+---------+-----------+
+| 256 | | | | |
++-------+---------+---------+---------+-----------+
+| 512 | | | | |
++-------+---------+---------+---------+-----------+
+| 1024 | | | | |
++-------+---------+---------+---------+-----------+
+| 1280 | | | | |
++-------+---------+---------+---------+-----------+
+| 1518 | | | | |
++-------+---------+---------+---------+-----------+
+
+Then run the test application as below::
+
+ $ ./x86_64-native-linuxapp-gcc/examples/dpdk-l2fwd -n 2 -c f -- -q 1 -p 0x3
+
+The ``-n`` option selects the number of memory channels; it should match the number of memory channels on the setup.
+
+Trigger the packet generator to burst packets to ports 0 and 1 of the NIC
+under test, then measure the forwarding throughput for the different packet
+sizes and queue counts.
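+
+For reference, a sketch of the standard linerate arithmetic behind the
+"% linerate" results (the suite's ``wirespeed()`` helper is assumed to
+compute something equivalent)::
+
+    def wirespeed_mpps(link_gbps, frame_size, nb_ports=1):
+        # each frame carries 20 extra bytes on the wire (preamble + IFG)
+        return link_gbps * 1000.0 / ((frame_size + 20) * 8) * nb_ports
+
+    # 10GbE at 64-byte frames is ~14.88 Mpps per port
+    assert round(wirespeed_mpps(10, 64), 2) == 14.88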
diff --git a/tests/TestSuite_l2fwd.py b/tests/TestSuite_l2fwd.py
index 9e8fd2d5..7594958b 100644
--- a/tests/TestSuite_l2fwd.py
+++ b/tests/TestSuite_l2fwd.py
@@ -150,112 +150,6 @@ class TestL2fwd(TestCase):
self.quit_l2fwd()
- def test_perf_l2fwd_performance(self):
- """
- Benchmark performance for frame_sizes.
- """
- ports = []
- for port in range(self.number_of_ports):
- ports.append(self.dut_ports[port])
-
- port_mask = utils.create_mask(ports)
- cores = self.dut.get_core_list(self.core_config, socket=self.ports_socket)
-
- eal_params = self.dut.create_eal_parameters(cores=cores)
- eal_param = ""
- for i in ports:
- eal_param += " -a %s" % self.dut.ports_info[i]["pci"]
-
- for frame_size in self.frame_sizes:
-
- payload_size = frame_size - self.headers_size
-
- tgen_input = []
- cnt = 1
- for port in range(self.number_of_ports):
- rx_port = self.tester.get_local_port(
- self.dut_ports[port % self.number_of_ports]
- )
- tx_port = self.tester.get_local_port(
- self.dut_ports[(port + 1) % self.number_of_ports]
- )
- destination_mac = self.dut.get_mac_address(
- self.dut_ports[(port + 1) % self.number_of_ports]
- )
- pcap = os.sep.join(
- [self.output_path, "l2fwd_{0}_{1}.pcap".format(port, cnt)]
- )
- self.tester.scapy_append(
- 'wrpcap("%s", [Ether(dst="%s")/IP()/UDP()/("X"*%d)])'
- % (pcap, destination_mac, payload_size)
- )
- tgen_input.append((tx_port, rx_port, pcap))
- time.sleep(3)
- self.tester.scapy_execute()
- cnt += 1
-
- for queues in self.test_queues:
-
- command_line = "./%s %s %s -- -q %s -p %s &" % (
- self.app_l2fwd_path,
- eal_params,
- eal_param,
- str(queues["queues"]),
- port_mask,
- )
-
- # self.dut.send_expect(command_line, "memory mapped", 60)
- self.dut.send_expect(command_line, "L2FWD: entering main loop", 60)
- # wait 5 second after l2fwd boot up.
- # It is aimed to make sure trex detect link up status.
- if self.tester.is_pktgen:
- time.sleep(5)
- info = (
- "Executing l2fwd using %s queues, frame size %d and %s setup.\n"
- % (queues["queues"], frame_size, self.core_config)
- )
-
- self.logger.info(info)
- self.rst_report(info, annex=True)
- self.rst_report(command_line + "\n\n", frame=True, annex=True)
-
- # clear streams before add new streams
- self.tester.pktgen.clear_streams()
- # run packet generator
- streams = self.pktgen_helper.prepare_stream_from_tginput(
- tgen_input, 100, None, self.tester.pktgen
- )
- _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
-
- Mpps = pps / 1000000.0
- queues["Mpps"][frame_size] = Mpps
- queues["pct"][frame_size] = (
- Mpps
- * 100
- / float(self.wirespeed(self.nic, frame_size, self.number_of_ports))
- )
-
- self.quit_l2fwd()
-
- # Look for transmission error in the results
- for frame_size in self.frame_sizes:
- for n in range(len(self.test_queues)):
- self.verify(
- self.test_queues[n]["Mpps"][frame_size] > 0, "No traffic detected"
- )
-
- # Prepare the results for table
- for frame_size in self.frame_sizes:
- results_row = []
- results_row.append(frame_size)
- for queue in self.test_queues:
- results_row.append(queue["Mpps"][frame_size])
- results_row.append(queue["pct"][frame_size])
-
- self.result_table_add(results_row)
-
- self.result_table_print()
-
def tear_down(self):
"""
Run after each test case.
diff --git a/tests/TestSuite_perf_l2fwd.py b/tests/TestSuite_perf_l2fwd.py
new file mode 100644
index 00000000..2cbf8899
--- /dev/null
+++ b/tests/TestSuite_perf_l2fwd.py
@@ -0,0 +1,233 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2019 Intel Corporation
+#
+
+"""
+DPDK Test suite.
+Test Layer-2 Forwarding support
+"""
+import os
+import time
+
+import framework.utils as utils
+from framework.pktgen import PacketGeneratorHelper
+from framework.settings import HEADER_SIZE
+from framework.test_case import TestCase
+
+
+class TestL2fwd(TestCase):
+ def set_up_all(self):
+ """
+ Run at the start of each test suite.
+
+ L2fwd prerequisites.
+ """
+ self.frame_sizes = [64, 65, 128, 256, 512, 1024, 1280, 1518]
+
+ self.test_queues = [
+ {"queues": 1, "Mpps": {}, "pct": {}},
+ {"queues": 2, "Mpps": {}, "pct": {}},
+ {"queues": 4, "Mpps": {}, "pct": {}},
+ {"queues": 8, "Mpps": {}, "pct": {}},
+ ]
+
+ self.core_config = "1S/4C/1T"
+ self.number_of_ports = 2
+ self.headers_size = HEADER_SIZE["eth"] + HEADER_SIZE["ip"] + HEADER_SIZE["udp"]
+
+ self.dut_ports = self.dut.get_ports_performance(force_different_nic=False)
+
+ self.verify(
+ len(self.dut_ports) >= self.number_of_ports,
+ "Not enough ports for " + self.nic,
+ )
+
+ self.ports_socket = self.dut.get_numa_id(self.dut_ports[0])
+
+ # compile
+ out = self.dut.build_dpdk_apps("./examples/l2fwd")
+ self.app_l2fwd_path = self.dut.apps_name["l2fwd"]
+ self.verify("Error" not in out, "Compilation error")
+ self.verify("No such" not in out, "Compilation error")
+
+ self.table_header = ["Frame"]
+ for queue in self.test_queues:
+ self.table_header.append("%d queues Mpps" % queue["queues"])
+ self.table_header.append("% linerate")
+
+ self.result_table_create(self.table_header)
+
+ # get dts output path
+ if self.logger.log_path.startswith(os.sep):
+ self.output_path = self.logger.log_path
+ else:
+ cur_path = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
+ self.output_path = os.sep.join([cur_path, self.logger.log_path])
+ # create an instance to set stream field setting
+ self.pktgen_helper = PacketGeneratorHelper()
+
+ def set_up(self):
+ """
+ Run before each test case.
+ """
+ pass
+
+ def quit_l2fwd(self):
+ self.dut.send_expect("fg", "l2fwd ", 5)
+ self.dut.send_expect("^C", "# ", 5)
+
+ def notest_port_testing(self):
+ """
+ Check port forwarding.
+ """
+ # the cases use the first two ports
+ port_mask = utils.create_mask([self.dut_ports[0], self.dut_ports[1]])
+ eal_params = self.dut.create_eal_parameters()
+
+ self.dut.send_expect(
+ "./%s %s -- -q 8 -p %s &" % (self.app_l2fwd_path, eal_params, port_mask),
+ "L2FWD: entering main loop",
+ 60,
+ )
+
+ for i in [0, 1]:
+ tx_port = self.tester.get_local_port(self.dut_ports[i])
+ rx_port = self.tester.get_local_port(self.dut_ports[1 - i])
+
+ tx_interface = self.tester.get_interface(tx_port)
+ rx_interface = self.tester.get_interface(rx_port)
+
+ self.tester.scapy_background()
+ self.tester.scapy_append('p = sniff(iface="%s", count=1)' % rx_interface)
+ self.tester.scapy_append("number_packets=len(p)")
+ self.tester.scapy_append("RESULT = str(number_packets)")
+
+ self.tester.scapy_foreground()
+ self.tester.scapy_append(
+ 'sendp([Ether()/IP()/UDP()/("X"*46)], iface="%s")' % tx_interface
+ )
+
+ self.tester.scapy_execute()
+ number_packets = self.tester.scapy_get_result()
+ self.verify(number_packets == "1", "Failed to switch L2 frame")
+
+ self.quit_l2fwd()
+
+ def test_perf_l2fwd_performance(self):
+ """
+ Benchmark performance for frame_sizes.
+ """
+ ports = []
+ for port in range(self.number_of_ports):
+ ports.append(self.dut_ports[port])
+
+ port_mask = utils.create_mask(ports)
+ cores = self.dut.get_core_list(self.core_config, socket=self.ports_socket)
+
+ eal_params = self.dut.create_eal_parameters(cores=cores)
+ eal_param = ""
+ for i in ports:
+ eal_param += " -a %s" % self.dut.ports_info[i]["pci"]
+
+ for frame_size in self.frame_sizes:
+
+ payload_size = frame_size - self.headers_size
+
+ tgen_input = []
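+ # entries are (tester tx port, tester rx port, pcap file) tuples for the packet generator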
+ cnt = 1
+ for port in range(self.number_of_ports):
+ rx_port = self.tester.get_local_port(
+ self.dut_ports[port % self.number_of_ports]
+ )
+ tx_port = self.tester.get_local_port(
+ self.dut_ports[(port + 1) % self.number_of_ports]
+ )
+ destination_mac = self.dut.get_mac_address(
+ self.dut_ports[(port + 1) % self.number_of_ports]
+ )
+ pcap = os.sep.join(
+ [self.output_path, "l2fwd_{0}_{1}.pcap".format(port, cnt)]
+ )
+ self.tester.scapy_append(
+ 'wrpcap("%s", [Ether(dst="%s")/IP()/UDP()/("X"*%d)])'
+ % (pcap, destination_mac, payload_size)
+ )
+ tgen_input.append((tx_port, rx_port, pcap))
+ time.sleep(3)
+ self.tester.scapy_execute()
+ cnt += 1
+
+ for queues in self.test_queues:
+
+ command_line = "./%s %s %s -- -q %s -p %s &" % (
+ self.app_l2fwd_path,
+ eal_params,
+ eal_param,
+ str(queues["queues"]),
+ port_mask,
+ )
+
+ # self.dut.send_expect(command_line, "memory mapped", 60)
+ self.dut.send_expect(command_line, "L2FWD: entering main loop", 60)
+ # wait 5 seconds after l2fwd boots up,
+ # to make sure trex detects link-up status.
+ if self.tester.is_pktgen:
+ time.sleep(5)
+ info = (
+ "Executing l2fwd using %s queues, frame size %d and %s setup.\n"
+ % (queues["queues"], frame_size, self.core_config)
+ )
+
+ self.logger.info(info)
+ self.rst_report(info, annex=True)
+ self.rst_report(command_line + "\n\n", frame=True, annex=True)
+
+ # clear streams before adding new streams
+ self.tester.pktgen.clear_streams()
+ # run packet generator
+ streams = self.pktgen_helper.prepare_stream_from_tginput(
+ tgen_input, 100, None, self.tester.pktgen
+ )
+ _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
+
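+ # convert pps to Mpps and normalize against the theoretical wirespeed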
+ Mpps = pps / 1000000.0
+ queues["Mpps"][frame_size] = Mpps
+ queues["pct"][frame_size] = (
+ Mpps
+ * 100
+ / float(self.wirespeed(self.nic, frame_size, self.number_of_ports))
+ )
+
+ self.quit_l2fwd()
+
+ # Look for transmission error in the results
+ for frame_size in self.frame_sizes:
+ for n in range(len(self.test_queues)):
+ self.verify(
+ self.test_queues[n]["Mpps"][frame_size] > 0, "No traffic detected"
+ )
+
+ # Prepare the results for table
+ for frame_size in self.frame_sizes:
+ results_row = []
+ results_row.append(frame_size)
+ for queue in self.test_queues:
+ results_row.append(queue["Mpps"][frame_size])
+ results_row.append(queue["pct"][frame_size])
+
+ self.result_table_add(results_row)
+
+ self.result_table_print()
+
+ def tear_down(self):
+ """
+ Run after each test case.
+ """
+ self.dut.send_expect("fg", "l2fwd|# ", 5)
+ self.dut.send_expect("^C", "# ", 5)
+
+ def tear_down_all(self):
+ """
+ Run after each test suite.
+ """
+ pass
--
2.17.1
* [dts][PATCH V1 3/7] tests/tso: Separated performance cases
2023-01-09 18:46 [dts][PATCH V1 1/7] tests/efd: Separated performance cases Hongbo Li
2023-01-09 18:46 ` [dts][PATCH V1 2/7] tests/l2fwd: " Hongbo Li
@ 2023-01-09 18:46 ` Hongbo Li
2023-01-09 18:46 ` [dts][PATCH V1 4/7] tests/vxlan: " Hongbo Li
` (3 subsequent siblings)
5 siblings, 0 replies; 8+ messages in thread
From: Hongbo Li @ 2023-01-09 18:46 UTC
To: dts; +Cc: Hongbo Li
Separated the tso performance test cases into a dedicated perf_tso suite and test plan.
Signed-off-by: Hongbo Li <hongbox.li@intel.com>
---
test_plans/perf_tso_test_plan.rst | 87 +++++++++
test_plans/tso_test_plan.rst | 34 ----
tests/TestSuite_perf_tso.py | 302 ++++++++++++++++++++++++++++++
tests/TestSuite_tso.py | 113 -----------
4 files changed, 389 insertions(+), 147 deletions(-)
create mode 100644 test_plans/perf_tso_test_plan.rst
create mode 100644 tests/TestSuite_perf_tso.py
diff --git a/test_plans/perf_tso_test_plan.rst b/test_plans/perf_tso_test_plan.rst
new file mode 100644
index 00000000..026291ca
--- /dev/null
+++ b/test_plans/perf_tso_test_plan.rst
@@ -0,0 +1,87 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2015-2017 Intel Corporation
+
+=========================================
+Transmit Segmentation Offload (TSO) Tests
+=========================================
+
+Description
+===========
+
+This document provides the plan for testing the TSO (Transmit Segmentation
+Offload, also called Large Send Offload - LSO) feature of Intel Ethernet
+Controllers, including the Intel 82599 10GbE Ethernet Controller and the
+Intel® Ethernet Converged Network Adapter XL710-QDA2. TSO enables the TCP/IP
+stack to pass the network device a ULP datagram larger than the Maximum
+Transmission Unit (MTU); the NIC divides the large ULP datagram into multiple
+segments according to the MTU size.
+
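+A rough sketch of the segmentation arithmetic (illustrative Python; the MSS
+corresponds to the value later passed to ``tso set``)::
+
+    import math
+
+    def tso_segments(payload_len, mss):
+        # the NIC emits ceil(payload / MSS) frames for one large ULP datagram
+        return math.ceil(payload_len / mss)
+
+    # a 2500-byte payload with "tso set 800" yields 4 frames on the wire
+    assert tso_segments(2500, 800) == 4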
+
+Prerequisites
+=============
+
+Hardware:
+ Intel® Ethernet 700 Series, Intel® Ethernet 800 Series and 82599/500 Series
+
+The DUT must have one of its Ethernet controller ports connected to a port on
+another device that is controlled by the Scapy packet generator.
+
+The Ethernet interface identifier of the port that Scapy will use must be known.
+On the tester, all offload features should be disabled on the TX port, and
+capture should be started on the RX port::
+
+ ifconfig <tx port> mtu 9000
+ ethtool -K <tx port> rx off tx off tso off gso off gro off lro off
+ ip l set <tx port> up
+ tcpdump -n -e -i <rx port> -s 0 -w /tmp/cap
+
+
+On the DUT, run testpmd with the parameter ``--enable-rx-cksum``. Then enable
+TSO on the TX port and checksum offload on the RX port. The test commands are
+below::
+
+ #enable hw checksum on rx port
+ csum set ip hw 0
+ csum set udp hw 0
+ csum set tcp hw 0
+ csum set sctp hw 0
+ set fwd csum
+
+ # enable TSO on tx port
+ tso set 800 1
+
+
+
+Test case: TSO performance
+==========================
+
+Set up the packet streams to be sent from the packet generator before
+testing, as below.
+
++-------+---------+---------+---------+----------+----------+
+| Frame | 1S/1C/1T| 1S/1C/1T| 1S/2C/1T| 1S/2C/2T | 1S/2C/2T |
+| Size | | | | | |
++-------+---------+---------+---------+----------+----------+
+| 64 | | | | | |
++-------+---------+---------+---------+----------+----------+
+| 65 | | | | | |
++-------+---------+---------+---------+----------+----------+
+| 128 | | | | | |
++-------+---------+---------+---------+----------+----------+
+| 256 | | | | | |
++-------+---------+---------+---------+----------+----------+
+| 512 | | | | | |
++-------+---------+---------+---------+----------+----------+
+| 1024 | | | | | |
++-------+---------+---------+---------+----------+----------+
+| 1280 | | | | | |
++-------+---------+---------+---------+----------+----------+
+| 1518 | | | | | |
++-------+---------+---------+---------+----------+----------+
+
+Then run the test application as below::
+
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xffffffff -n 2 -- -i --rxd=512 --txd=512
+ --burst=32 --rxfreet=64 --mbcache=128 --portmask=0x3 --txpt=36 --txht=0 --txwt=0
+ --txfreet=32 --txrst=32 --enable-rx-cksum
+
+The ``-n`` option selects the number of memory channels; it should match the
+number of memory channels on the setup.
diff --git a/test_plans/tso_test_plan.rst b/test_plans/tso_test_plan.rst
index e5122bb9..a44d6eba 100644
--- a/test_plans/tso_test_plan.rst
+++ b/test_plans/tso_test_plan.rst
@@ -161,38 +161,4 @@ Test nvgre() in scapy::
sendp([Ether(dst="%s",src="52:00:00:00:00:00")/IP(src="192.168.1.1",dst="192.168.1.2",proto=47)/GRE(key_present=1,proto=0x6558,key=0x00001000)/Ether(dst="%s",src="52:00:00:00:00:00")/IP(src="192.168.1.1",dst="192.168.1.2")/TCP(sport=1021,dport=1021)/("X"*%s)], iface="%s")
-Test case: TSO performance
-==========================
-
-Set the packet stream to be sent out from packet generator before testing as
-below.
-
-+-------+---------+---------+---------+----------+----------+
-| Frame | 1S/1C/1T| 1S/1C/1T| 1S/2C/1T| 1S/2C/2T | 1S/2C/2T |
-| Size | | | | | |
-+-------+---------+---------+---------+----------+----------+
-| 64 | | | | | |
-+-------+---------+---------+---------+----------+----------+
-| 65 | | | | | |
-+-------+---------+---------+---------+----------+----------+
-| 128 | | | | | |
-+-------+---------+---------+---------+----------+----------+
-| 256 | | | | | |
-+-------+---------+---------+---------+----------+----------+
-| 512 | | | | | |
-+-------+---------+---------+---------+----------+----------+
-| 1024 | | | | | |
-+-------+---------+---------+---------+----------+----------+
-| 1280 | | | | | |
-+-------+---------+---------+---------+----------+----------+
-| 1518 | | | | | |
-+-------+---------+---------+---------+----------+----------+
-
-Then run the test application as below::
- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xffffffff -n 2 -- -i --rxd=512 --txd=512
- --burst=32 --rxfreet=64 --mbcache=128 --portmask=0x3 --txpt=36 --txht=0 --txwt=0
- --txfreet=32 --txrst=32 --enable-rx-cksum
-
-The -n command is used to select the number of memory channels. It should match the
-number of memory channels on that setup.
diff --git a/tests/TestSuite_perf_tso.py b/tests/TestSuite_perf_tso.py
new file mode 100644
index 00000000..95933307
--- /dev/null
+++ b/tests/TestSuite_perf_tso.py
@@ -0,0 +1,302 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+#
+
+"""
+DPDK Test suite.
+
+Tests for TSO.
+
+"""
+import os
+import re
+import time
+
+import framework.utils as utils
+from framework.packet import Packet
+from framework.pktgen import PacketGeneratorHelper
+from framework.settings import HEADER_SIZE
+from framework.test_case import TestCase
+
+DEFAULT_MTU = 1500
+TSO_MTU = 9000
+
+
+class TestTSO(TestCase):
+ dut_ports = []
+
+ def set_up_all(self):
+ """
+ Run at the start of each test suite.
+ """
+ # Based on h/w type, choose how many ports to use
+ self.dut_ports = self.dut.get_ports(self.nic)
+
+ # Verify that enough ports are available
+ self.verify(len(self.dut_ports) >= 2, "Insufficient ports for testing")
+
+ # Verify that enough threads are available
+ self.portMask = utils.create_mask([self.dut_ports[0], self.dut_ports[1]])
+ self.ports_socket = self.dut.get_numa_id(self.dut_ports[0])
+ core_config = "1S/2C/1T"
+ cores = self.dut.get_core_list(core_config, socket=self.ports_socket)
+ self.verify(cores is not None, "Insufficient cores for speed testing")
+
+ self.loading_sizes = [128, 800, 801, 1700, 2500]
+
+ self.test_result = {"header": [], "data": []}
+
+ self.eal_param = self.dut.create_eal_parameters(
+ cores=core_config, socket=self.ports_socket, ports=self.dut_ports
+ )
+ self.headers_size = HEADER_SIZE["eth"] + HEADER_SIZE["ip"] + HEADER_SIZE["tcp"]
+
+ self.tester.send_expect(
+ "ifconfig %s mtu %s"
+ % (
+ self.tester.get_interface(
+ self.tester.get_local_port(self.dut_ports[0])
+ ),
+ TSO_MTU,
+ ),
+ "# ",
+ )
+ # get dts output path
+ if self.logger.log_path.startswith(os.sep):
+ self.output_path = self.logger.log_path
+ else:
+ cur_path = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
+ self.output_path = os.sep.join([cur_path, self.logger.log_path])
+ # create an instance to set stream field setting
+ self.pktgen_helper = PacketGeneratorHelper()
+ self.path = self.dut.apps_name["test-pmd"]
+
+ def set_up(self):
+ """
+ Run before each test case.
+ """
+ pass
+
+ def tcpdump_start_sniffing(self, ifaces=[]):
+ """
+ Starts tcpdump in the background to sniff the tester interface where
+ the packets are transmitted to and from the self.dut.
+ All the captured packets are going to be stored in a file for a
+ post-analysis.
+ """
+
+ for iface in ifaces:
+ command = (
+ "tcpdump -w /tmp/tcpdump_{0}.pcap -i {0} 2>tcpdump_{0}.out &"
+ ).format(iface)
+ del_cmd = ("rm -f /tmp/tcpdump_{0}.pcap").format(iface)
+ self.tester.send_expect(del_cmd, "#")
+ self.tester.send_expect(command, "#")
+
+ def tcpdump_stop_sniff(self):
+ """
+ Stops the tcpdump process running in the background.
+ """
+ self.tester.send_expect("killall tcpdump", "#")
+ time.sleep(1)
+ self.tester.send_expect('echo "Cleaning buffer"', "#")
+ time.sleep(1)
+
+ def tcpdump_command(self, command):
+ """
+ Sends a tcpdump related command and returns an integer from the output
+ """
+
+ result = self.tester.send_expect(command, "#")
+ return int(result.strip())
+
+ def number_of_packets(self, iface):
+ """
+ By reading the file generated by tcpdump, count how many packets were
+ forwarded by the sample app and received on the tester. The sample app
+ adds a known MAC address for the test to look for.
+ """
+
+ command = (
+ "tcpdump -A -nn -e -v -r /tmp/tcpdump_{iface}.pcap 2>/dev/null | "
+ + 'grep -c "seq"'
+ )
+ return self.tcpdump_command(command.format(**locals()))
+
+ def tcpdump_scanner(self, scanner):
+ """
+ Execute scanner to return results
+ """
+ scanner_result = self.tester.send_expect(scanner, "#", 60)
+ final_result = re.findall(r"length( \d+)", scanner_result)
+ return list(final_result)
+
+ def number_of_bytes(self, iface):
+ """
+ Get the lengths of the captured TCP segments
+ """
+ scanner = 'tcpdump -vv -r /tmp/tcpdump_{iface}.pcap 2>/dev/null | grep "seq" | grep "length"'
+ return self.tcpdump_scanner(scanner.format(**locals()))
+
+ def get_chksum_value_and_verify(self, dump_pcap, Nic_list):
+ packet = Packet()
+ pkts = packet.read_pcapfile(dump_pcap, self.tester)
+ for pkt in pkts:
+ chksum_list_rx = re.findall(r"chksum\s*=\s*(0x\w+)", pkt.show(dump=True))
+ pkt["IP"].chksum = None
+ if "VXLAN" in pkt:
+ pkt["UDP"].chksum = None
+ pkt["VXLAN"]["IP"].chksum = None
+ pkt["VXLAN"]["TCP"].chksum = None
+ elif "GRE" in pkt:
+ pkt["GRE"]["IP"].chksum = None
+ pkt["GRE"]["TCP"].chksum = None
+ chksum_list_good = re.findall(r"chksum\s*=\s*(0x\w+)", pkt.show2(dump=True))
+ if self.nic in Nic_list and "VXLAN" in pkt:
+ self.verify(
+ chksum_list_rx[0] == chksum_list_good[0]
+ and chksum_list_rx[2] == chksum_list_good[2]
+ and chksum_list_rx[3] == chksum_list_good[3],
+ "The obtained chksum value is incorrect.",
+ )
+ else:
+ self.verify(
+ chksum_list_rx == chksum_list_good,
+ "The obtained chksum value is incorrect.",
+ )
+
+ def test_perf_TSO_2ports(self):
+ """
+ TSO Performance Benchmarking with 2 ports.
+ """
+
+ # set header table
+ header_row = ["Fwd Core", "Frame Size", "Throughput", "Rate"]
+ self.test_result["header"] = header_row
+ self.result_table_create(header_row)
+ self.test_result["data"] = []
+
+ test_configs = ["1S/1C/1T", "1S/1C/2T", "1S/2C/2T"]
+ core_offset = 3
+ # prepare traffic generator input
+ tgen_input = []
+
+ # run testpmd for each core config
+ for configs in test_configs:
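+ # derive the forwarding core list for this config: request
+ # core_offset extra cores, then slice off the leading ones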
+ cores = configs.split("/")[1]
+ thread = configs.split("/")[-1]
+ thread_num = int(int(thread[:-1]) // int(cores[:-1]))
+ _cores = str(core_offset + int(cores[0])) + "C"
+ core_config = "/".join(["1S", _cores, str(thread_num) + "T"])
+ corelist = self.dut.get_core_list(core_config, self.ports_socket)
+ core_list = corelist[(core_offset - 1) * thread_num :]
+ if "2T" in core_config:
+ core_list = core_list[1:2] + core_list[0::2] + core_list[1::2][1:]
+ _core_list = core_list[thread_num - 1 :]
+ self.eal_param = self.dut.create_eal_parameters(
+ cores=_core_list, socket=self.ports_socket, ports=self.dut_ports
+ )
+ command_line = (
+ "%s %s -- -i --rxd=512 --txd=512 --burst=32 --rxfreet=64 --mbcache=128 --portmask=%s --max-pkt-len=%s --txpt=36 --txht=0 --txwt=0 --txfreet=32 --txrst=32 "
+ % (self.path, self.eal_param, self.portMask, TSO_MTU)
+ )
+ info = "Executing PMD using cores: {0} of config {1}".format(
+ _core_list, configs
+ )
+ self.logger.info(info)
+ self.dut.send_expect(command_line, "testpmd> ", 120)
+ self.dut.send_expect("port stop all", "testpmd> ", 120)
+ for i in range(2):
+ self.dut.send_expect(
+ "csum set ip hw %d" % self.dut_ports[i], "testpmd> ", 120
+ )
+ self.dut.send_expect(
+ "csum set udp hw %d" % self.dut_ports[i], "testpmd> ", 120
+ )
+ self.dut.send_expect(
+ "csum set tcp hw %d" % self.dut_ports[i], "testpmd> ", 120
+ )
+ self.dut.send_expect(
+ "csum set sctp hw %d" % self.dut_ports[i], "testpmd> ", 120
+ )
+ self.dut.send_expect(
+ "csum set outer-ip hw %d" % self.dut_ports[i], "testpmd> ", 120
+ )
+ self.dut.send_expect(
+ "csum parse-tunnel on %d" % self.dut_ports[i], "testpmd> ", 120
+ )
+ self.dut.send_expect("tso set 800 %d" % self.dut_ports[1], "testpmd> ", 120)
+ self.dut.send_expect("set fwd csum", "testpmd> ", 120)
+ self.dut.send_expect("port start all", "testpmd> ", 120)
+ self.dut.send_expect("set promisc all off", "testpmd> ", 120)
+ self.dut.send_expect("start", "testpmd> ")
+ for loading_size in self.loading_sizes:
+ frame_size = loading_size + self.headers_size
+ wirespeed = self.wirespeed(self.nic, frame_size, 2)
+
+ # create pcap file
+ self.logger.info("Running with frame size %d " % frame_size)
+ payload_size = frame_size - self.headers_size
+ for _port in range(2):
+ mac = self.dut.get_mac_address(self.dut_ports[_port])
+
+ pcap = os.sep.join([self.output_path, "dts{0}.pcap".format(_port)])
+ self.tester.scapy_append(
+ 'wrpcap("%s", [Ether(dst="%s",src="52:00:00:00:00:01")/IP(src="192.168.1.1",dst="192.168.1.2")/TCP(sport=1021,dport=1021)/("X"*%d)])'
+ % (pcap, mac, payload_size)
+ )
+ tgen_input.append(
+ (
+ self.tester.get_local_port(self.dut_ports[_port]),
+ self.tester.get_local_port(self.dut_ports[1 - _port]),
+ "%s" % pcap,
+ )
+ )
+ self.tester.scapy_execute()
+
+ # clear streams before adding new streams
+ self.tester.pktgen.clear_streams()
+ # run packet generator
+ streams = self.pktgen_helper.prepare_stream_from_tginput(
+ tgen_input, 100, None, self.tester.pktgen
+ )
+ _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
+ self.verify(pps > 0, "No traffic detected")
+ pps /= 1000000.0
+ percentage = pps * 100 / wirespeed
+ data_row = [
+ configs,
+ frame_size,
+ "{:.3f} Mpps".format(pps),
+ "{:.3f}%".format(percentage),
+ ]
+ self.result_table_add(data_row)
+ self.dut.send_expect("stop", "testpmd> ")
+ self.dut.send_expect("quit", "# ", 30)
+ time.sleep(5)
+
+ # Print results
+ self.result_table_print()
+
+ def tear_down(self):
+ """
+ Run after each test case.
+ """
+ self.dut.send_expect("quit", "# ")
+ self.dut.kill_all()
+ time.sleep(2)
+
+ def tear_down_all(self):
+ """
+ Run after each test suite.
+ """
+ self.tester.send_expect(
+ "ifconfig %s mtu %s"
+ % (
+ self.tester.get_interface(
+ self.tester.get_local_port(self.dut_ports[0])
+ ),
+ DEFAULT_MTU,
+ ),
+ "# ",
+ )
diff --git a/tests/TestSuite_tso.py b/tests/TestSuite_tso.py
index 8cbd7a70..acefbec1 100755
--- a/tests/TestSuite_tso.py
+++ b/tests/TestSuite_tso.py
@@ -454,119 +454,6 @@ class TestTSO(TestCase):
)
self.get_chksum_value_and_verify(dump_pcap, Nic_list)
- def test_perf_TSO_2ports(self):
- """
- TSO Performance Benchmarking with 2 ports.
- """
-
- # set header table
- header_row = ["Fwd Core", "Frame Size", "Throughput", "Rate"]
- self.test_result["header"] = header_row
- self.result_table_create(header_row)
- self.test_result["data"] = []
-
- test_configs = ["1S/1C/1T", "1S/1C/2T", "1S/2C/2T"]
- core_offset = 3
- # prepare traffic generator input
- tgen_input = []
-
- # run testpmd for each core config
- for configs in test_configs:
- cores = configs.split("/")[1]
- thread = configs.split("/")[-1]
- thread_num = int(int(thread[:-1]) // int(cores[:-1]))
- _cores = str(core_offset + int(cores[0])) + "C"
- core_config = "/".join(["1S", _cores, str(thread_num) + "T"])
- corelist = self.dut.get_core_list(core_config, self.ports_socket)
- core_list = corelist[(core_offset - 1) * thread_num :]
- if "2T" in core_config:
- core_list = core_list[1:2] + core_list[0::2] + core_list[1::2][1:]
- _core_list = core_list[thread_num - 1 :]
- self.eal_param = self.dut.create_eal_parameters(
- cores=_core_list, socket=self.ports_socket, ports=self.dut_ports
- )
- command_line = (
- "%s %s -- -i --rxd=512 --txd=512 --burst=32 --rxfreet=64 --mbcache=128 --portmask=%s --max-pkt-len=%s --txpt=36 --txht=0 --txwt=0 --txfreet=32 --txrst=32 "
- % (self.path, self.eal_param, self.portMask, TSO_MTU)
- )
- info = "Executing PMD using cores: {0} of config {1}".format(
- _core_list, configs
- )
- self.logger.info(info)
- self.dut.send_expect(command_line, "testpmd> ", 120)
- self.dut.send_expect("port stop all", "testpmd> ", 120)
- for i in range(2):
- self.dut.send_expect(
- "csum set ip hw %d" % self.dut_ports[i], "testpmd> ", 120
- )
- self.dut.send_expect(
- "csum set udp hw %d" % self.dut_ports[i], "testpmd> ", 120
- )
- self.dut.send_expect(
- "csum set tcp hw %d" % self.dut_ports[i], "testpmd> ", 120
- )
- self.dut.send_expect(
- "csum set sctp hw %d" % self.dut_ports[i], "testpmd> ", 120
- )
- self.dut.send_expect(
- "csum set outer-ip hw %d" % self.dut_ports[i], "testpmd> ", 120
- )
- self.dut.send_expect(
- "csum parse-tunnel on %d" % self.dut_ports[i], "testpmd> ", 120
- )
- self.dut.send_expect("tso set 800 %d" % self.dut_ports[1], "testpmd> ", 120)
- self.dut.send_expect("set fwd csum", "testpmd> ", 120)
- self.dut.send_expect("port start all", "testpmd> ", 120)
- self.dut.send_expect("set promisc all off", "testpmd> ", 120)
- self.dut.send_expect("start", "testpmd> ")
- for loading_size in self.loading_sizes:
- frame_size = loading_size + self.headers_size
- wirespeed = self.wirespeed(self.nic, frame_size, 2)
-
- # create pcap file
- self.logger.info("Running with frame size %d " % frame_size)
- payload_size = frame_size - self.headers_size
- for _port in range(2):
- mac = self.dut.get_mac_address(self.dut_ports[_port])
-
- pcap = os.sep.join([self.output_path, "dts{0}.pcap".format(_port)])
- self.tester.scapy_append(
- 'wrpcap("%s", [Ether(dst="%s",src="52:00:00:00:00:01")/IP(src="192.168.1.1",dst="192.168.1.2")/TCP(sport=1021,dport=1021)/("X"*%d)])'
- % (pcap, mac, payload_size)
- )
- tgen_input.append(
- (
- self.tester.get_local_port(self.dut_ports[_port]),
- self.tester.get_local_port(self.dut_ports[1 - _port]),
- "%s" % pcap,
- )
- )
- self.tester.scapy_execute()
-
- # clear streams before add new streams
- self.tester.pktgen.clear_streams()
- # run packet generator
- streams = self.pktgen_helper.prepare_stream_from_tginput(
- tgen_input, 100, None, self.tester.pktgen
- )
- _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
- self.verify(pps > 0, "No traffic detected")
- pps /= 1000000.0
- percentage = pps * 100 / wirespeed
- data_row = [
- configs,
- frame_size,
- "{:.3f} Mpps".format(pps),
- "{:.3f}%".format(percentage),
- ]
- self.result_table_add(data_row)
- self.dut.send_expect("stop", "testpmd> ")
- self.dut.send_expect("quit", "# ", 30)
- time.sleep(5)
-
- # Print results
- self.result_table_print()
-
def tear_down(self):
"""
Run after each test case.
--
2.17.1
* [dts][PATCH V1 4/7] tests/vxlan: Separated performance cases
2023-01-09 18:46 [dts][PATCH V1 1/7] tests/efd: Separated performance cases Hongbo Li
2023-01-09 18:46 ` [dts][PATCH V1 2/7] tests/l2fwd: " Hongbo Li
2023-01-09 18:46 ` [dts][PATCH V1 3/7] tests/tso: " Hongbo Li
@ 2023-01-09 18:46 ` Hongbo Li
2023-01-09 18:46 ` [dts][PATCH V1 5/7] tests/ipfrag: " Hongbo Li
` (2 subsequent siblings)
5 siblings, 0 replies; 8+ messages in thread
From: Hongbo Li @ 2023-01-09 18:46 UTC
To: dts; +Cc: Hongbo Li
Separated the vxlan performance test cases into a dedicated perf_vxlan suite and test plan.
Signed-off-by: Hongbo Li <hongbox.li@intel.com>
---
test_plans/perf_vxlan_test_plan.rst | 85 +++
test_plans/vxlan_test_plan.rst | 60 +-
tests/TestSuite_perf_vxlan.py | 1022 +++++++++++++++++++++++++++
tests/TestSuite_vxlan.py | 213 ------
4 files changed, 1108 insertions(+), 272 deletions(-)
create mode 100644 test_plans/perf_vxlan_test_plan.rst
create mode 100644 tests/TestSuite_perf_vxlan.py
diff --git a/test_plans/perf_vxlan_test_plan.rst b/test_plans/perf_vxlan_test_plan.rst
new file mode 100644
index 00000000..a330657a
--- /dev/null
+++ b/test_plans/perf_vxlan_test_plan.rst
@@ -0,0 +1,85 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2014-2017 Intel Corporation
+
+==========================================
+Intel® Ethernet 700/800 Series Vxlan Tests
+==========================================
+Cloud providers build virtual network overlays over existing network
+infrastructure that provide tenant isolation and scaling. Tunneling
+layers added to the packets carry the virtual networking frames over
+existing Layer 2 and IP networks. Conceptually, this is similar to
+creating virtual private networks over the Internet. Intel® Ethernet
+700 Series NICs process these tunneling layers in hardware.
+
+This suite supports the following NICs: Intel® Ethernet 700 Series and Intel® Ethernet 800 Series.
+
+This document provides the test plan for Intel® Ethernet 700 Series VXLAN
+packet detection, checksum computation and filtering.
+
+Prerequisites
+=============
+1x Intel® X710 (Intel® Ethernet 700 Series) NIC (2x 40GbE full duplex
+optical ports per NIC) plugged into the available PCIe Gen3 8-lane slot.
+
+1x Intel® XL710-DA4 (Eagle Fountain) NIC (1x 10GbE full duplex optical port
+per NIC) plugged into the available PCIe Gen3 8-lane slot.
+
+The DUT board must be a two-socket system, and each CPU must have more than 8 lcores.
+
+Test Case: Vxlan Checksum Offload Performance Benchmarking
+==========================================================
+The throughput is measured for each of these cases of VXLAN TX checksum
+offload: all in software, L3 offload in hardware, L4 offload in hardware,
+and L3&L4 offload in hardware.
+
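+How these cases are assumed to map to testpmd checksum commands, based on
+the ``cal_type`` configuration in the companion suite::
+
+    # SOFTWARE ALL: no "csum set ... hw" command is issued
+    # HW L4:
+    csum set udp hw 0
+    # HW L3&L4:
+    csum set ip hw 0
+    csum set udp hw 0
+    csum set outer-ip hw 0
+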
+The results are printed in the following table:
+
++----------------+--------+--------+------------+
+| Calculate Type | Queues | Mpps | % linerate |
++================+========+========+============+
+| SOFTWARE ALL | Single | | |
++----------------+--------+--------+------------+
+| HW L4 | Single | | |
++----------------+--------+--------+------------+
+| HW L3&L4 | Single | | |
++----------------+--------+--------+------------+
+| SOFTWARE ALL | Multi | | |
++----------------+--------+--------+------------+
+| HW L4 | Multi | | |
++----------------+--------+--------+------------+
+| HW L3&L4 | Multi | | |
++----------------+--------+--------+------------+
+
+Test Case: Vxlan Tunnel filter Performance Benchmarking
+=======================================================
+The throughput is measured for different VXLAN tunnel filter types.
+Queue single means there is only one flow, forwarded to the first queue.
+Queue multi means there are two flows, configured to different queues.
+
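+For reference, the ``imac-tenid`` filter is assumed to be programmed with a
+flow rule of this shape (mirroring ``perf_tunnel_filter_set_rule`` in the
+companion suite; the MAC, VNI and queue values are examples)::
+
+    flow create 0 ingress pattern eth / ipv4 / udp / vxlan vni is 1 /
+        eth dst is 00:00:20:00:00:01 / end actions pf / queue index 1 / end
+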
++--------+------------------+--------+--------+------------+
+| Packet | Filter | Queue | Mpps | % linerate |
++========+==================+========+========+============+
+| Normal | None | Single | | |
++--------+------------------+--------+--------+------------+
+| Vxlan | None | Single | | |
++--------+------------------+--------+--------+------------+
+| Vxlan | imac-ivlan | Single | | |
++--------+------------------+--------+--------+------------+
+| Vxlan | imac-ivlan-tenid | Single | | |
++--------+------------------+--------+--------+------------+
+| Vxlan | imac-tenid | Single | | |
++--------+------------------+--------+--------+------------+
+| Vxlan | imac | Single | | |
++--------+------------------+--------+--------+------------+
+| Vxlan | omac-imac-tenid | Single | | |
++--------+------------------+--------+--------+------------+
+| Vxlan | imac-ivlan | Multi | | |
++--------+------------------+--------+--------+------------+
+| Vxlan | imac-ivlan-tenid | Multi | | |
++--------+------------------+--------+--------+------------+
+| Vxlan | imac-tenid | Multi | | |
++--------+------------------+--------+--------+------------+
+| Vxlan | imac | Multi | | |
++--------+------------------+--------+--------+------------+
+| Vxlan | omac-imac-tenid | Multi | | |
++--------+------------------+--------+--------+------------+
diff --git a/test_plans/vxlan_test_plan.rst b/test_plans/vxlan_test_plan.rst
index e65b18d6..d46c550f 100644
--- a/test_plans/vxlan_test_plan.rst
+++ b/test_plans/vxlan_test_plan.rst
@@ -307,62 +307,4 @@ Add Cloud filter with invalid vlan "4097" will be failed.
Add Cloud filter with invalid vni "16777216" will be failed.
-Add Cloud filter with invalid queue id "64" will be failed.
-
-Test Case: Vxlan Checksum Offload Performance Benchmarking
-==========================================================
-The throughput is measured for each of these cases for vxlan tx checksum
-offload of "all by software", "L3 offload by hardware", "L4 offload by
-hardware", "l3&l4 offload by hardware".
-
-The results are printed in the following table:
-
-+----------------+--------+--------+------------+
-| Calculate Type | Queues | Mpps | % linerate |
-+================+========+========+============+
-| SOFTWARE ALL | Single | | |
-+----------------+--------+--------+------------+
-| HW L4 | Single | | |
-+----------------+--------+--------+------------+
-| HW L3&L4 | Single | | |
-+----------------+--------+--------+------------+
-| SOFTWARE ALL | Multi | | |
-+----------------+--------+--------+------------+
-| HW L4 | Multi | | |
-+----------------+--------+--------+------------+
-| HW L3&L4 | Multi | | |
-+----------------+--------+--------+------------+
-
-Test Case: Vxlan Tunnel filter Performance Benchmarking
-=======================================================
-The throughput is measured for different Vxlan tunnel filter types.
-Queue single mean there's only one flow and forwarded to the first queue.
-Queue multi mean there are two flows and configure to different queues.
-
-+--------+------------------+--------+--------+------------+
-| Packet | Filter | Queue | Mpps | % linerate |
-+========+==================+========+========+============+
-| Normal | None | Single | | |
-+--------+------------------+--------+--------+------------+
-| Vxlan | None | Single | | |
-+--------+------------------+--------+--------+------------+
-| Vxlan | imac-ivlan | Single | | |
-+--------+------------------+--------+--------+------------+
-| Vxlan | imac-ivlan-tenid | Single | | |
-+--------+------------------+--------+--------+------------+
-| Vxlan | imac-tenid | Single | | |
-+--------+------------------+--------+--------+------------+
-| Vxlan | imac | Single | | |
-+--------+------------------+--------+--------+------------+
-| Vxlan | omac-imac-tenid | Single | | |
-+--------+------------------+--------+--------+------------+
-| Vxlan | imac-ivlan | Multi | | |
-+--------+------------------+--------+--------+------------+
-| Vxlan | imac-ivlan-tenid | Multi | | |
-+--------+------------------+--------+--------+------------+
-| Vxlan | imac-tenid | Multi | | |
-+--------+------------------+--------+--------+------------+
-| Vxlan | imac | Multi | | |
-+--------+------------------+--------+--------+------------+
-| Vxlan | omac-imac-tenid | Multi | | |
-+--------+------------------+--------+--------+------------+
+Add Cloud filter with invalid queue id "64" will be failed.
\ No newline at end of file
diff --git a/tests/TestSuite_perf_vxlan.py b/tests/TestSuite_perf_vxlan.py
new file mode 100644
index 00000000..69e5603e
--- /dev/null
+++ b/tests/TestSuite_perf_vxlan.py
@@ -0,0 +1,1022 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2020 Intel Corporation
+#
+
+"""
+DPDK Test suite.
+
+Test VXLAN behaviour in DPDK.
+
+"""
+
+import os
+import re
+import string
+import time
+from random import randint
+
+from scapy.config import conf
+from scapy.layers.inet import IP, TCP, UDP, Ether
+from scapy.layers.inet6 import IPv6
+from scapy.layers.l2 import Dot1Q
+from scapy.layers.sctp import SCTP, SCTPChunkData
+from scapy.layers.vxlan import VXLAN
+from scapy.route import *
+from scapy.sendrecv import sniff
+from scapy.utils import rdpcap, wrpcap
+
+import framework.packet as packet
+import framework.utils as utils
+from framework.packet import IncreaseIP, IncreaseIPv6
+from framework.pktgen import PacketGeneratorHelper
+from framework.pmd_output import PmdOutput
+from framework.settings import FOLDERS, HEADER_SIZE
+from framework.test_case import TestCase
+
+#
+#
+# Test class.
+#
+
+VXLAN_PORT = 4789
+PACKET_LEN = 128
+MAX_TXQ_RXQ = 4
+BIDIRECT = True
+
+
+class VxlanTestConfig(object):
+
+ """
+ Module for config/create/transmit vxlan packet
+ """
+
+ def __init__(self, test_case, **kwargs):
+ self.test_case = test_case
+ self.init()
+ for name in kwargs:
+ setattr(self, name, kwargs[name])
+ self.pkt_obj = packet.Packet()
+
+ def init(self):
+ self.packets_config()
+
+ def packets_config(self):
+ """
+ Default vxlan packet format
+ """
+ self.pcap_file = packet.TMP_PATH + "vxlan.pcap"
+ self.capture_file = packet.TMP_PATH + "vxlan_capture.pcap"
+ self.outer_mac_src = "00:00:10:00:00:00"
+ self.outer_mac_dst = "11:22:33:44:55:66"
+ self.outer_vlan = "N/A"
+ self.outer_ip_src = "192.168.1.1"
+ self.outer_ip_dst = "192.168.1.2"
+ self.outer_ip_invalid = 0
+ self.outer_ip6_src = "N/A"
+ self.outer_ip6_dst = "N/A"
+ self.outer_ip6_invalid = 0
+ self.outer_udp_src = 63
+ self.outer_udp_dst = VXLAN_PORT
+ self.outer_udp_invalid = 0
+ self.vni = 1
+ self.inner_mac_src = "00:00:20:00:00:00"
+ self.inner_mac_dst = "00:00:20:00:00:01"
+ self.inner_vlan = "N/A"
+ self.inner_ip_src = "192.168.2.1"
+ self.inner_ip_dst = "192.168.2.2"
+ self.inner_ip_invalid = 0
+ self.inner_ip6_src = "N/A"
+ self.inner_ip6_dst = "N/A"
+ self.inner_ip6_invalid = 0
+ self.payload_size = 18
+ self.inner_l4_type = "UDP"
+ self.inner_l4_invalid = 0
+
+ def packet_type(self):
+ """
+ Return vxlan packet type
+ """
+ if self.outer_udp_dst != VXLAN_PORT:
+ if self.outer_ip6_src != "N/A":
+ return "L3_IPV6_EXT_UNKNOWN"
+ else:
+ return "L3_IPV4_EXT_UNKNOWN"
+ else:
+ if self.inner_ip6_src != "N/A":
+ return "L3_IPV6_EXT_UNKNOWN"
+ else:
+ return "L3_IPV4_EXT_UNKNOWN"
+
+ def create_pcap(self):
+ """
+ Create pcap file and copy it to tester if configured
+ Return scapy packet object for later usage
+ """
+ if self.inner_l4_type == "SCTP":
+ self.inner_payload = SCTPChunkData(data="X" * 16)
+ else:
+ self.inner_payload = "X" * self.payload_size
+
+ if self.inner_l4_type == "TCP":
+ l4_pro = TCP()
+ elif self.inner_l4_type == "SCTP":
+ l4_pro = SCTP()
+ else:
+ l4_pro = UDP()
+
+ if self.inner_ip6_src != "N/A":
+ inner_l3 = IPv6()
+ else:
+ inner_l3 = IP()
+
+ if self.inner_vlan != "N/A":
+ inner = Ether() / Dot1Q() / inner_l3 / l4_pro / self.inner_payload
+ inner[Dot1Q].vlan = self.inner_vlan
+ else:
+ inner = Ether() / inner_l3 / l4_pro / self.inner_payload
+
+ if self.inner_ip6_src != "N/A":
+ inner[inner_l3.name].src = self.inner_ip6_src
+ inner[inner_l3.name].dst = self.inner_ip6_dst
+ else:
+ inner[inner_l3.name].src = self.inner_ip_src
+ inner[inner_l3.name].dst = self.inner_ip_dst
+
+ if self.inner_ip_invalid == 1:
+ inner[inner_l3.name].chksum = 0
+
+ # when the udp checksum is 0, checksum validation is skipped
+ if self.inner_l4_invalid == 1:
+ if self.inner_l4_type == "SCTP":
+ inner[SCTP].chksum = 0
+ else:
+ inner[self.inner_l4_type].chksum = 1
+
+ inner[Ether].src = self.inner_mac_src
+ inner[Ether].dst = self.inner_mac_dst
+
+ if self.outer_ip6_src != "N/A":
+ outer_l3 = IPv6()
+ else:
+ outer_l3 = IP()
+
+ if self.outer_vlan != "N/A":
+ outer = Ether() / Dot1Q() / outer_l3 / UDP()
+ outer[Dot1Q].vlan = self.outer_vlan
+ else:
+ outer = Ether() / outer_l3 / UDP()
+
+ outer[Ether].src = self.outer_mac_src
+ outer[Ether].dst = self.outer_mac_dst
+
+ if self.outer_ip6_src != "N/A":
+ outer[outer_l3.name].src = self.outer_ip6_src
+ outer[outer_l3.name].dst = self.outer_ip6_dst
+ else:
+ outer[outer_l3.name].src = self.outer_ip_src
+ outer[outer_l3.name].dst = self.outer_ip_dst
+
+ outer[UDP].sport = self.outer_udp_src
+ outer[UDP].dport = self.outer_udp_dst
+
+ if self.outer_ip_invalid == 1:
+ outer[outer_l3.name].chksum = 0
+ # when the udp checksum is 0, checksum validation is skipped
+ if self.outer_udp_invalid == 1:
+ outer[UDP].chksum = 1
+
+ if self.outer_udp_dst == VXLAN_PORT:
+ self.pkt = outer / VXLAN(vni=self.vni) / inner
+ else:
+ self.pkt = outer / ("X" * self.payload_size)
+
+ wrpcap(self.pcap_file, self.pkt)
+
+ return self.pkt
+
+ def get_chksums(self, pkt=None):
+ """
+ get chksum values of Outer and Inner packet L3&L4
+ Skip outer udp for it will be calculated by software
+ """
+ chk_sums = {}
+ if pkt is None:
+ pkt = rdpcap(self.pcap_file)
+ else:
+ pkt = pkt.pktgen.pkt
+
+ time.sleep(1)
+ if pkt[0].guess_payload_class(pkt[0]).name == "802.1Q":
+ payload = pkt[0][Dot1Q]
+ else:
+ payload = pkt[0]
+
+ if payload.guess_payload_class(payload).name == "IP":
+ chk_sums["outer_ip"] = hex(payload[IP].chksum)
+
+ if pkt[0].haslayer("VXLAN") == 1:
+ inner = pkt[0]["VXLAN"]
+ if inner.haslayer(IP) == 1:
+ chk_sums["inner_ip"] = hex(inner[IP].chksum)
+ if inner[IP].proto == 6:
+ chk_sums["inner_tcp"] = hex(inner[TCP].chksum)
+ if inner[IP].proto == 17:
+ chk_sums["inner_udp"] = hex(inner[UDP].chksum)
+ if inner[IP].proto == 132:
+ chk_sums["inner_sctp"] = hex(inner[SCTP].chksum)
+ elif inner.haslayer(IPv6) == 1:
+ if inner[IPv6].nh == 6:
+ chk_sums["inner_tcp"] = hex(inner[TCP].chksum)
+ if inner[IPv6].nh == 17:
+ chk_sums["inner_udp"] = hex(inner[UDP].chksum)
+ # scapy cannot get the sctp checksum, so it is extracted manually
+ if inner[IPv6].nh == 59:
+ load = str(inner[IPv6].payload)
+ chk_sums["inner_sctp"] = hex(
+ (ord(load[8]) << 24)
+ | (ord(load[9]) << 16)
+ | (ord(load[10]) << 8)
+ | (ord(load[11]))
+ )
+
+ return chk_sums
+
+ def send_pcap(self, iface=""):
+ """
+ Send vxlan pcap file by iface
+ """
+ del self.pkt_obj.pktgen.pkts[:]
+ self.pkt_obj.pktgen.assign_pkt(self.pkt)
+ self.pkt_obj.pktgen.update_pkts()
+ self.pkt_obj.send_pkt(crb=self.test_case.tester, tx_port=iface)
+
+ def pcap_len(self):
+ """
+ Return length of pcap packet, will plus 4 bytes crc
+ """
+ # add four bytes crc
+ return len(self.pkt) + 4
+
+
+class TestVxlan(TestCase):
+ def set_up_all(self):
+ """
+ vxlan Prerequisites
+ """
+ # this feature is only enabled on Intel® Ethernet 700 Series for now
+ if self.nic in [
+ "I40E_10G-SFP_XL710",
+ "I40E_40G-QSFP_A",
+ "I40E_40G-QSFP_B",
+ "I40E_25G-25G_SFP28",
+ "I40E_10G-SFP_X722",
+ "I40E_10G-10G_BASE_T_X722",
+ "I40E_10G-10G_BASE_T_BC",
+ ]:
+ self.compile_switch = "CONFIG_RTE_LIBRTE_I40E_INC_VECTOR"
+ elif self.nic in ["IXGBE_10G-X550T", "IXGBE_10G-X550EM_X_10G_T"]:
+ self.compile_switch = "CONFIG_RTE_IXGBE_INC_VECTOR"
+ elif self.nic in [
+ "ICE_25G-E810C_SFP",
+ "ICE_100G-E810C_QSFP",
+ "ICE_25G-E823C_QSFP",
+ ]:
+ print("Intel® Ethernet 800 Series uses non-vector mode by default")
+ else:
+ self.verify(False, "%s not support this vxlan" % self.nic)
+ # Based on h/w type, choose how many ports to use
+ ports = self.dut.get_ports()
+
+ # Verify that enough ports are available
+ self.verify(len(ports) >= 2, "Insufficient ports for testing")
+ global valports
+ valports = [_ for _ in ports if self.tester.get_local_port(_) != -1]
+
+ self.portMask = utils.create_mask(valports[:2])
+
+ # Verify that enough threads are available
+ netdev = self.dut.ports_info[ports[0]]["port"]
+ self.ports_socket = netdev.socket
+
+ # start testpmd
+ self.pmdout = PmdOutput(self.dut)
+
+ # init port config
+ self.dut_port = valports[0]
+ self.dut_port_mac = self.dut.get_mac_address(self.dut_port)
+ tester_port = self.tester.get_local_port(self.dut_port)
+ self.tester_iface = self.tester.get_interface(tester_port)
+ self.recv_port = valports[1]
+ tester_recv_port = self.tester.get_local_port(self.recv_port)
+ self.recv_iface = self.tester.get_interface(tester_recv_port)
+
+ # invalid parameter
+ self.invalid_mac = "00:00:00:00:01"
+ self.invalid_ip = "192.168.1.256"
+ self.invalid_vlan = 4097
+ self.invalid_queue = 64
+ self.path = self.dut.apps_name["test-pmd"]
+
+ # vxlan payload length for performance test
+ # inner packet not contain crc, should need add four
+ self.vxlan_payload = (
+ PACKET_LEN
+ - HEADER_SIZE["eth"]
+ - HEADER_SIZE["ip"]
+ - HEADER_SIZE["udp"]
+ - HEADER_SIZE["vxlan"]
+ - HEADER_SIZE["eth"]
+ - HEADER_SIZE["ip"]
+ - HEADER_SIZE["udp"]
+ + 4
+ )
+
+ self.cal_type = [
+ {
+ "Type": "SOFTWARE ALL",
+ "csum": [],
+ "recvqueue": "Single",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Type": "HW L4",
+ "csum": ["udp"],
+ "recvqueue": "Single",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Type": "HW L3&L4",
+ "csum": ["ip", "udp", "outer-ip"],
+ "recvqueue": "Single",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Type": "SOFTWARE ALL",
+ "csum": [],
+ "recvqueue": "Multi",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Type": "HW L4",
+ "csum": ["udp"],
+ "recvqueue": "Multi",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Type": "HW L3&L4",
+ "csum": ["ip", "udp", "outer-ip"],
+ "recvqueue": "Multi",
+ "Mpps": {},
+ "pct": {},
+ },
+ ]
+
+ self.chksum_header = ["Calculate Type"]
+ self.chksum_header.append("Queues")
+ self.chksum_header.append("Mpps")
+ self.chksum_header.append("% linerate")
+
+ # tunnel filter performance test
+ self.default_vlan = 1
+ self.tunnel_multiqueue = 2
+ self.tunnel_header = ["Packet", "Filter", "Queue", "Mpps", "% linerate"]
+ self.tunnel_perf = [
+ {
+ "Packet": "Normal",
+ "tunnel_filter": "None",
+ "recvqueue": "Single",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Packet": "VXLAN",
+ "tunnel_filter": "None",
+ "recvqueue": "Single",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Packet": "VXLAN",
+ "tunnel_filter": "imac-ivlan",
+ "recvqueue": "Single",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Packet": "VXLAN",
+ "tunnel_filter": "imac-ivlan-tenid",
+ "recvqueue": "Single",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Packet": "VXLAN",
+ "tunnel_filter": "imac-tenid",
+ "recvqueue": "Single",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Packet": "VXLAN",
+ "tunnel_filter": "imac",
+ "recvqueue": "Single",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Packet": "VXLAN",
+ "tunnel_filter": "omac-imac-tenid",
+ "recvqueue": "Single",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Packet": "VXLAN",
+ "tunnel_filter": "None",
+ "recvqueue": "Multi",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Packet": "VXLAN",
+ "tunnel_filter": "imac-ivlan",
+ "recvqueue": "Multi",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Packet": "VXLAN",
+ "tunnel_filter": "imac-ivlan-tenid",
+ "recvqueue": "Multi",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Packet": "VXLAN",
+ "tunnel_filter": "imac-tenid",
+ "recvqueue": "Multi",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Packet": "VXLAN",
+ "tunnel_filter": "imac",
+ "recvqueue": "Multi",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Packet": "VXLAN",
+ "tunnel_filter": "omac-imac-tenid",
+ "recvqueue": "Multi",
+ },
+ ]
+
+ self.pktgen_helper = PacketGeneratorHelper()
+
+ def set_fields(self):
+ fields_config = {
+ "ip": {
+ "src": {"action": "random"},
+ "dst": {"action": "random"},
+ },
+ }
+ return fields_config
+
+ def suite_measure_throughput(self, tgen_input, use_vm=False):
+ vm_config = self.set_fields()
+ self.tester.pktgen.clear_streams()
+ streams = self.pktgen_helper.prepare_stream_from_tginput(
+ tgen_input, 100, vm_config if use_vm else None, self.tester.pktgen
+ )
+ result = self.tester.pktgen.measure_throughput(stream_ids=streams)
+
+ return result
+
+ def perf_tunnel_filter_set_rule(self, rule_config):
+ rule_list = {
+ # check inner mac + inner vlan filter can work
+ "imac-ivlan": f'flow create {rule_config.get("dut_port")} ingress pattern eth / '
+ f'ipv4 / udp / vxlan / eth dst is {rule_config.get("inner_mac_dst")} / '
+ f'vlan tci is {rule_config.get("inner_vlan")} / end actions pf / '
+ f'queue index {rule_config.get("queue")} / end',
+ # check inner mac + inner vlan + tunnel id filter can work
+ "imac-ivlan-tenid": f'flow create {rule_config.get("dut_port")} ingress pattern eth / '
+ f'ipv4 / udp / vxlan vni is {rule_config.get("vni")} / '
+ f'eth dst is {rule_config.get("inner_mac_dst")} / '
+ f'vlan tci is {rule_config.get("inner_vlan")} / '
+ f'end actions pf / queue index {rule_config.get("queue")} / end',
+ # check inner mac + tunnel id filter can work
+ "imac-tenid": f'flow create {rule_config.get("dut_port")} ingress pattern eth / '
+ f'ipv4 / udp / vxlan vni is {rule_config.get("vni")} / '
+ f'eth dst is {rule_config.get("inner_mac_dst")} / end actions pf / '
+ f'queue index {rule_config.get("queue")} / end',
+ # check inner mac filter can work
+ "imac": f'flow create {rule_config.get("dut_port")} ingress pattern eth / '
+ f'ipv4 / udp / vxlan / eth dst is {rule_config.get("inner_mac_dst")} / end actions pf / '
+ f'queue index {rule_config.get("queue")} / end',
+ # check outer mac + inner mac + tunnel id filter can work
+ "omac-imac-tenid": f'flow create {rule_config.get("dut_port")} ingress pattern '
+ f'eth dst is {rule_config.get("outer_mac_dst")} / '
+ f'ipv4 / udp / vxlan vni is {rule_config.get("vni")} / '
+ f'eth dst is {rule_config.get("inner_mac_dst")} / '
+ f'end actions pf / queue index {rule_config.get("queue")} / end',
+ }
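+ # for example, with dut_port=0, inner_mac_dst="00:00:20:00:00:01",
+ # inner_vlan=1 and queue=0 (illustrative values), the "imac-ivlan"
+ # template above renders to:
+ # flow create 0 ingress pattern eth / ipv4 / udp / vxlan / eth dst is
+ # 00:00:20:00:00:01 / vlan tci is 1 / end actions pf / queue index 0 / end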
+ rule = rule_list.get(rule_config.get("tun_filter"))
+ if not rule:
+ msg = "not support format"
+ self.logger.error(msg)
+ return
+ out = self.dut.send_expect(rule, "testpmd>", 3)
+ pat = "Flow rule #\d+ created"
+ self.verify(re.findall(pat, out, re.M), "Flow rule create failed")
+
+ def send_and_detect(self, **kwargs):
+ """
+ send vxlan packet and check whether testpmd detects the correct
+ packet type
+ """
+ arg_str = ""
+ for arg in kwargs:
+ arg_str += "[%s = %s]" % (arg, kwargs[arg])
+
+ # create pcap file with supplied arguments
+ self.logger.info("send vxlan pkts %s" % arg_str)
+ config = VxlanTestConfig(self, **kwargs)
+ # the cloud filter enables the L2 MAC filter by default, so the
+ # destination MAC must match the DUT port
+ config.outer_mac_dst = self.dut_port_mac
+ config.create_pcap()
+ self.dut.send_expect("start", "testpmd>", 10)
+ self.pmdout.wait_link_status_up(self.dut_port)
+ config.send_pcap(self.tester_iface)
+ # check whether detect vxlan type
+ out = self.dut.get_session_output(timeout=2)
+ print(out)
+ self.verify(config.packet_type() in out, "Vxlan Packet not detected")
+
+ def send_and_check(self, **kwargs):
+ """
+ send vxlan packet and check that the received packet carries the
+ correct checksum
+ """
+ # create pcap file with supplied arguments
+ outer_ipv6 = False
+ args = {}
+ for arg in kwargs:
+ if "invalid" not in arg:
+ args[arg] = kwargs[arg]
+ if "outer_ip6" in arg:
+ outer_ipv6 = True
+
+ # if the packet's outer L3 is IPv6, hardware checksum must not be enabled
+ if outer_ipv6:
+ self.csum_set_sw("outer-ip", self.dut_port)
+ self.csum_set_sw("outer-ip", self.recv_port)
+
+ config = VxlanTestConfig(self, **args)
+ # the cloud filter enables the L2 MAC filter by default, so the
+ # destination MAC must match the DUT port
+ config.outer_mac_dst = self.dut_port_mac
+ # the csum engine does not rewrite the outer IP source address, so
+ # keep the configured value on the packet to be sent
+ if config.outer_ip6_src != "N/A":
+ config.outer_ip6_src = config.outer_ip6_src
+ else:
+ config.outer_ip_src = config.outer_ip_src
+
+ # likewise keep the inner IP source address when the packet carries
+ # the VXLAN destination UDP port
+ if config.outer_udp_dst == VXLAN_PORT:
+ if config.inner_ip6_src != "N/A":
+ config.inner_ip6_src = config.inner_ip6_src
+ else:
+ config.inner_ip_src = config.inner_ip_src
+
+ # extract the checksum value of vxlan packet
+ config.create_pcap()
+ chksums_ref = config.get_chksums()
+ self.logger.info("chksums_ref" + str(chksums_ref))
+
+ # log the vxlan format
+ arg_str = ""
+ for arg in kwargs:
+ arg_str += "[%s = %s]" % (arg, kwargs[arg])
+
+ self.logger.info("vxlan packet %s" % arg_str)
+
+ out = self.dut.send_expect("start", "testpmd>", 10)
+
+ # create pcap file with supplied arguments
+ config = VxlanTestConfig(self, **kwargs)
+ config.outer_mac_dst = self.dut_port_mac
+ config.create_pcap()
+
+ self.pmdout.wait_link_status_up(self.dut_port)
+ # sniff the forwarded packets on the receive interface
+ inst = self.tester.tcpdump_sniff_packets(self.recv_iface)
+ config.send_pcap(self.tester_iface)
+ pkt = self.tester.load_tcpdump_sniff_packets(inst, timeout=3)
+
+ # extract the checksum offload from saved pcap file
+ chksums = config.get_chksums(pkt=pkt)
+ self.logger.info("chksums" + str(chksums))
+
+ out = self.dut.send_expect("stop", "testpmd>", 10)
+ print(out)
+
+ # verify detected l4 invalid checksum
+ if "inner_l4_invalid" in kwargs:
+ self.verify(
+ self.pmdout.get_pmd_value("Bad-l4csum:", out) == 1,
+ "Failed to count inner l4 chksum error",
+ )
+
+ # verify detected l3 invalid checksum
+ if "ip_invalid" in kwargs:
+ self.verify(
+ self.pmdout.get_pmd_value("Bad-ipcsum:", out) == self.iperr_num + 1,
+ "Failed to count inner ip chksum error",
+ )
+ self.iperr_num += 1
+
+ # verify that the captured checksums match the expected ones
+ for key in chksums_ref:
+ self.verify(
+ chksums[key] == chksums_ref[key],
+ "%s not matched to %s" % (key, chksums_ref[key]),
+ )
+
+ def filter_and_check(self, rule, config, queue_id):
+ """
+ send vxlan packet and check that it is received in the assigned queue
+ """
+ # create rule
+ self.tunnel_filter_add(rule)
+
+ # send vxlan packet
+ config.create_pcap()
+ self.dut.send_expect("start", "testpmd>", 10)
+ self.pmdout.wait_link_status_up(self.dut_port)
+ config.send_pcap(self.tester_iface)
+ out = self.dut.get_session_output(timeout=2)
+ print(out)
+
+ queue = -1
+ pattern = re.compile(r"- Receive queue=0x(\d)")
+ m = pattern.search(out)
+ if m is not None:
+ queue = m.group(1)
+
+ # verify received in expected queue
+ self.verify(queue_id == int(queue), "invalid receive queue")
+
+ # del rule
+ args = [self.dut_port]
+ self.tunnel_filter_del(*args)
+ self.dut.send_expect("stop", "testpmd>", 10)
+
+ def config_tunnelfilter(self, dut_port, recv_port, perf_config, pcapfile):
+ pkts = []
+ config = VxlanTestConfig(self, payload_size=self.vxlan_payload - 4)
+ config.inner_vlan = self.default_vlan
+ config.outer_mac_dst = self.dut.get_mac_address(dut_port)
+ config.pcap_file = pcapfile
+
+ tun_filter = perf_config["tunnel_filter"]
+ recv_queue = perf_config["recvqueue"]
+ # known issue: when VXLAN is enabled, RSS gets disabled
+ if tun_filter == "None" and recv_queue == "Multi":
+ print((utils.RED("RSS and tunnel filter can't be enabled at the same time")))
+ else:
+ self.enable_vxlan(dut_port)
+
+ if tun_filter != "None":
+ rule_config = {
+ "dut_port": dut_port,
+ "outer_mac_dst": config.outer_mac_dst,
+ "inner_mac_dst": config.inner_mac_dst,
+ "inner_ip_dst": config.inner_ip_dst,
+ "inner_vlan": config.inner_vlan,
+ "tun_filter": tun_filter,
+ "vni": config.vni,
+ "queue": 0,
+ }
+ self.perf_tunnel_filter_set_rule(rule_config)
+
+ if perf_config["Packet"] == "Normal":
+ config.outer_udp_dst = 63
+ config.outer_mac_dst = self.dut.get_mac_address(dut_port)
+ config.payload_size = (
+ PACKET_LEN - HEADER_SIZE["eth"] - HEADER_SIZE["ip"] - HEADER_SIZE["udp"]
+ )
+
+ # add default pkt into pkt list
+ pkt = config.create_pcap()
+ pkts.append(pkt)
+
+ # add more pkts to the list when multiple receive queues are enabled
+ if recv_queue == "Multi" and tun_filter != "None":
+ for queue in range(self.tunnel_multiqueue - 1):
+ if "imac" in tun_filter:
+ config.inner_mac_dst = "00:00:20:00:00:0%d" % (queue + 2)
+ if "ivlan" in tun_filter:
+ config.inner_vlan = queue + 2
+ if "tenid" in tun_filter:
+ config.vni = queue + 2
+
+ # add a tunnel filter matching this packet
+ pkt = config.create_pcap()
+ pkts.append(pkt)
+
+ rule_config = {
+ "dut_port": dut_port,
+ "outer_mac_dst": config.outer_mac_dst,
+ "inner_mac_dst": config.inner_mac_dst,
+ "inner_ip_dst": config.inner_ip_dst,
+ "inner_vlan": config.inner_vlan,
+ "tun_filter": tun_filter,
+ "vni": config.vni,
+ "queue": (queue + 1),
+ }
+ self.perf_tunnel_filter_set_rule(rule_config)
+
+ # save pkt list into pcap file
+ wrpcap(config.pcap_file, pkts)
+ self.tester.session.copy_file_to(config.pcap_file)
+
+ def combine_pcap(self, dest_pcap, src_pcap):
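+ # merge src_pcap's packets into dest_pcap; only done while dest_pcap
+ # still holds the single default packet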
+ pkts = rdpcap(dest_pcap)
+ if len(pkts) != 1:
+ return
+
+ pkts_src = rdpcap(src_pcap)
+ pkts += pkts_src
+
+ wrpcap(dest_pcap, pkts)
+
+ def test_perf_vxlan_tunnelfilter_performance_2ports(self):
+ self.result_table_create(self.tunnel_header)
+ core_list = self.dut.get_core_list(
+ "1S/%dC/1T" % (self.tunnel_multiqueue * 2 + 1), socket=self.ports_socket
+ )
+
+ pmd_temp = (
+ "./%s %s -- -i --disable-rss --rxq=2 --txq=2 --nb-cores=4 --portmask=%s"
+ )
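+ # --disable-rss with explicit --rxq/--txq leaves queue selection to
+ # the tunnel-filter flow rules instead of RSS hashing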
+
+ for perf_config in self.tunnel_perf:
+ tun_filter = perf_config["tunnel_filter"]
+ recv_queue = perf_config["recvqueue"]
+ print(
+ (
+ utils.GREEN(
+ "Measure tunnel performance of [%s %s %s]"
+ % (perf_config["Packet"], tun_filter, recv_queue)
+ )
+ )
+ )
+
+ if tun_filter == "None" and recv_queue == "Multi":
+ pmd_temp = (
+ "./%s %s -- -i --rss-udp --rxq=2 --txq=2 --nb-cores=4 --portmask=%s"
+ )
+
+ self.eal_para = self.dut.create_eal_parameters(cores=core_list)
+ pmd_cmd = pmd_temp % (self.path, self.eal_para, self.portMask)
+ self.dut.send_expect(pmd_cmd, "testpmd> ", 100)
+
+ # configure the tunnel filter flow and build the test pcap
+ self.config_tunnelfilter(
+ self.dut_port, self.recv_port, perf_config, "flow1.pcap"
+ )
+ # configure the generator flows
+ tgen_input = []
+ tgen_input.append(
+ (
+ self.tester.get_local_port(self.dut_port),
+ self.tester.get_local_port(self.recv_port),
+ "flow1.pcap",
+ )
+ )
+
+ if BIDIRECT:
+ self.config_tunnelfilter(
+ self.recv_port, self.dut_port, perf_config, "flow2.pcap"
+ )
+ tgen_input.append(
+ (
+ self.tester.get_local_port(self.recv_port),
+ self.tester.get_local_port(self.dut_port),
+ "flow2.pcap",
+ )
+ )
+
+ self.dut.send_expect("set fwd io", "testpmd>", 10)
+ self.dut.send_expect("start", "testpmd>", 10)
+ self.pmdout.wait_link_status_up(self.dut_port)
+ if BIDIRECT:
+ wirespeed = self.wirespeed(self.nic, PACKET_LEN, 2)
+ else:
+ wirespeed = self.wirespeed(self.nic, PACKET_LEN, 1)
+
+ # run traffic generator
+ use_vm = recv_queue == "Multi" and tun_filter == "None"
+ _, pps = self.suite_measure_throughput(tgen_input, use_vm=use_vm)
+
+ pps /= 1000000.0
+ perf_config["Mpps"] = pps
+ perf_config["pct"] = pps * 100 / wirespeed
+
+ out = self.dut.send_expect("stop", "testpmd>", 10)
+ self.dut.send_expect("quit", "# ", 10)
+
+ # verify that every queue saw traffic
+ check_queue = 0
+ if recv_queue == "Multi":
+ for queue in range(check_queue):
+ self.verify(
+ "Queue= %d -> TX Port" % (queue) in out,
+ "Queue %d no traffic" % queue,
+ )
+
+ table_row = [
+ perf_config["Packet"],
+ tun_filter,
+ recv_queue,
+ perf_config["Mpps"],
+ perf_config["pct"],
+ ]
+
+ self.result_table_add(table_row)
+
+ self.result_table_print()
+
+ def test_perf_vxlan_checksum_performance_2ports(self):
+ self.result_table_create(self.chksum_header)
+ vxlan = VxlanTestConfig(self, payload_size=self.vxlan_payload)
+ vxlan.outer_mac_dst = self.dut.get_mac_address(self.dut_port)
+ vxlan.pcap_file = "vxlan1.pcap"
+ vxlan.inner_mac_dst = "00:00:20:00:00:01"
+ vxlan.create_pcap()
+
+ vxlan_queue = VxlanTestConfig(self, payload_size=self.vxlan_payload)
+ vxlan_queue.outer_mac_dst = self.dut.get_mac_address(self.dut_port)
+ vxlan_queue.pcap_file = "vxlan1_1.pcap"
+ vxlan_queue.inner_mac_dst = "00:00:20:00:00:02"
+ vxlan_queue.create_pcap()
+
+ # socket/core/thread
+ core_list = self.dut.get_core_list(
+ "1S/%dC/1T" % (self.tunnel_multiqueue * 2 + 1), socket=self.ports_socket
+ )
+ core_mask = utils.create_mask(core_list)
+
+ self.dut_ports = self.dut.get_ports_performance(force_different_nic=False)
+ tx_port = self.tester.get_local_port(self.dut_ports[0])
+ rx_port = self.tester.get_local_port(self.dut_ports[1])
+
+ for cal in self.cal_type:
+ recv_queue = cal["recvqueue"]
+ print(
+ (
+ utils.GREEN(
+ "Measure checksum performance of [%s %s %s]"
+ % (cal["Type"], recv_queue, cal["csum"])
+ )
+ )
+ )
+
+ # configure flows
+ tgen_input = []
+ if recv_queue == "Multi":
+ tgen_input.append((tx_port, rx_port, "vxlan1.pcap"))
+ tgen_input.append((tx_port, rx_port, "vxlan1_1.pcap"))
+ else:
+ tgen_input.append((tx_port, rx_port, "vxlan1.pcap"))
+
+ # multi-queue and single-queue commands
+ if recv_queue == "Multi":
+ pmd_temp = "./%s %s -- -i --disable-rss --rxq=2 --txq=2 --nb-cores=4 --portmask=%s"
+ else:
+ pmd_temp = "./%s %s -- -i --nb-cores=2 --portmask=%s"
+
+ self.eal_para = self.dut.create_eal_parameters(cores=core_list)
+ pmd_cmd = pmd_temp % (self.path, self.eal_para, self.portMask)
+
+ self.dut.send_expect(pmd_cmd, "testpmd> ", 100)
+ self.dut.send_expect("set fwd csum", "testpmd>", 10)
+ self.enable_vxlan(self.dut_port)
+ self.enable_vxlan(self.recv_port)
+ self.pmdout.wait_link_status_up(self.dut_port)
+
+ # redirect flow to another queue by tunnel filter
+ rule_config = {
+ "dut_port": self.dut_port,
+ "outer_mac_dst": vxlan.outer_mac_dst,
+ "inner_mac_dst": vxlan.inner_mac_dst,
+ "inner_ip_dst": vxlan.inner_ip_dst,
+ "inner_vlan": 0,
+ "tun_filter": "imac",
+ "vni": vxlan.vni,
+ "queue": 0,
+ }
+ self.perf_tunnel_filter_set_rule(rule_config)
+
+ if recv_queue == "Multi":
+ rule_config = {
+ "dut_port": self.dut_port,
+ "outer_mac_dst": vxlan_queue.outer_mac_dst,
+ "inner_mac_dst": vxlan_queue.inner_mac_dst,
+ "inner_ip_dst": vxlan_queue.inner_ip_dst,
+ "inner_vlan": 0,
+ "tun_filter": "imac",
+ "vni": vxlan.vni,
+ "queue": 1,
+ }
+ self.perf_tunnel_filter_set_rule(rule_config)
+
+ for pro in cal["csum"]:
+ self.csum_set_type(pro, self.dut_port)
+ self.csum_set_type(pro, self.recv_port)
+
+ self.dut.send_expect("start", "testpmd>", 10)
+
+ wirespeed = self.wirespeed(self.nic, PACKET_LEN, 1)
+
+ # run traffic generator
+ _, pps = self.suite_measure_throughput(tgen_input)
+
+ pps /= 1000000.0
+ cal["Mpps"] = pps
+ cal["pct"] = pps * 100 / wirespeed
+
+ out = self.dut.send_expect("stop", "testpmd>", 10)
+ self.dut.send_expect("quit", "# ", 10)
+
+ # verify that every queue saw traffic
+ check_queue = 1
+ if recv_queue == "Multi":
+ for queue in range(check_queue):
+ self.verify(
+ "Queue= %d -> TX Port" % (queue) in out,
+ "Queue %d no traffic" % queue,
+ )
+
+ table_row = [cal["Type"], recv_queue, cal["Mpps"], cal["pct"]]
+ self.result_table_add(table_row)
+
+ self.result_table_print()
+
+ def enable_vxlan(self, port):
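+ # register VXLAN_PORT as a VXLAN UDP port in testpmd so the NIC
+ # recognizes and parses the VXLAN encapsulation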
+ self.dut.send_expect(
+ "rx_vxlan_port add %d %d" % (VXLAN_PORT, port), "testpmd>", 10
+ )
+
+ def csum_set_type(self, proto, port):
+ self.dut.send_expect("port stop all", "testpmd>")
+ out = self.dut.send_expect("csum set %s hw %d" % (proto, port), "testpmd>", 10)
+ self.dut.send_expect("port start all", "testpmd>")
+ self.verify("Bad arguments" not in out, "Failed to set vxlan csum")
+ self.verify("error" not in out, "Failed to set vxlan csum")
+
+ def csum_set_sw(self, proto, port):
+ self.dut.send_expect("port stop all", "testpmd>")
+ out = self.dut.send_expect("csum set %s sw %d" % (proto, port), "testpmd>", 10)
+ self.dut.send_expect("port start all", "testpmd>")
+ self.verify("Bad arguments" not in out, "Failed to set vxlan csum")
+ self.verify("error" not in out, "Failed to set vxlan csum")
+
+ def tunnel_filter_add(self, rule):
+ out = self.dut.send_expect(rule, "testpmd>", 3)
+ self.verify("Flow rule #0 created" in out, "Flow rule create failed")
+ return out
+
+ def tunnel_filter_add_nocheck(self, rule):
+ out = self.dut.send_expect(rule, "testpmd>", 3)
+ return out
+
+ def tunnel_filter_del(self, *args):
+ out = self.dut.send_expect("flow flush 0", "testpmd>", 10)
+ return out
+
+ def set_up(self):
+ """
+ Run before each test case.
+ """
+ pass
+
+ def tear_down(self):
+ """
+ Run after each test case.
+ """
+ self.dut.kill_all()
+
+ def tear_down_all(self):
+ """
+ Run after each test suite.
+ """
+ pass
diff --git a/tests/TestSuite_vxlan.py b/tests/TestSuite_vxlan.py
index 1bf12743..7c309a01 100644
--- a/tests/TestSuite_vxlan.py
+++ b/tests/TestSuite_vxlan.py
@@ -1175,219 +1175,6 @@ class TestVxlan(TestCase):
wrpcap(dest_pcap, pkts)
- def test_perf_vxlan_tunnelfilter_performance_2ports(self):
- self.result_table_create(self.tunnel_header)
- core_list = self.dut.get_core_list(
- "1S/%dC/1T" % (self.tunnel_multiqueue * 2 + 1), socket=self.ports_socket
- )
-
- pmd_temp = (
- "./%s %s -- -i --disable-rss --rxq=2 --txq=2 --nb-cores=4 --portmask=%s"
- )
-
- for perf_config in self.tunnel_perf:
- tun_filter = perf_config["tunnel_filter"]
- recv_queue = perf_config["recvqueue"]
- print(
- (
- utils.GREEN(
- "Measure tunnel performance of [%s %s %s]"
- % (perf_config["Packet"], tun_filter, recv_queue)
- )
- )
- )
-
- if tun_filter == "None" and recv_queue == "Multi":
- pmd_temp = (
- "./%s %s -- -i --rss-udp --rxq=2 --txq=2 --nb-cores=4 --portmask=%s"
- )
-
- self.eal_para = self.dut.create_eal_parameters(cores=core_list)
- pmd_cmd = pmd_temp % (self.path, self.eal_para, self.portMask)
- self.dut.send_expect(pmd_cmd, "testpmd> ", 100)
-
- # config flow
- self.config_tunnelfilter(
- self.dut_port, self.recv_port, perf_config, "flow1.pcap"
- )
- # config the flows
- tgen_input = []
- tgen_input.append(
- (
- self.tester.get_local_port(self.dut_port),
- self.tester.get_local_port(self.recv_port),
- "flow1.pcap",
- )
- )
-
- if BIDIRECT:
- self.config_tunnelfilter(
- self.recv_port, self.dut_port, perf_config, "flow2.pcap"
- )
- tgen_input.append(
- (
- self.tester.get_local_port(self.recv_port),
- self.tester.get_local_port(self.dut_port),
- "flow2.pcap",
- )
- )
-
- self.dut.send_expect("set fwd io", "testpmd>", 10)
- self.dut.send_expect("start", "testpmd>", 10)
- self.pmdout.wait_link_status_up(self.dut_port)
- if BIDIRECT:
- wirespeed = self.wirespeed(self.nic, PACKET_LEN, 2)
- else:
- wirespeed = self.wirespeed(self.nic, PACKET_LEN, 1)
-
- # run traffic generator
- use_vm = True if recv_queue == "Multi" and tun_filter == "None" else False
- _, pps = self.suite_measure_throughput(tgen_input, use_vm=use_vm)
-
- pps /= 1000000.0
- perf_config["Mpps"] = pps
- perf_config["pct"] = pps * 100 / wirespeed
-
- out = self.dut.send_expect("stop", "testpmd>", 10)
- self.dut.send_expect("quit", "# ", 10)
-
- # verify every queue work fine
- check_queue = 0
- if recv_queue == "Multi":
- for queue in range(check_queue):
- self.verify(
- "Queue= %d -> TX Port" % (queue) in out,
- "Queue %d no traffic" % queue,
- )
-
- table_row = [
- perf_config["Packet"],
- tun_filter,
- recv_queue,
- perf_config["Mpps"],
- perf_config["pct"],
- ]
-
- self.result_table_add(table_row)
-
- self.result_table_print()
-
- def test_perf_vxlan_checksum_performance_2ports(self):
- self.result_table_create(self.chksum_header)
- vxlan = VxlanTestConfig(self, payload_size=self.vxlan_payload)
- vxlan.outer_mac_dst = self.dut.get_mac_address(self.dut_port)
- vxlan.pcap_file = "vxlan1.pcap"
- vxlan.inner_mac_dst = "00:00:20:00:00:01"
- vxlan.create_pcap()
-
- vxlan_queue = VxlanTestConfig(self, payload_size=self.vxlan_payload)
- vxlan_queue.outer_mac_dst = self.dut.get_mac_address(self.dut_port)
- vxlan_queue.pcap_file = "vxlan1_1.pcap"
- vxlan_queue.inner_mac_dst = "00:00:20:00:00:02"
- vxlan_queue.create_pcap()
-
- # socket/core/thread
- core_list = self.dut.get_core_list(
- "1S/%dC/1T" % (self.tunnel_multiqueue * 2 + 1), socket=self.ports_socket
- )
- core_mask = utils.create_mask(core_list)
-
- self.dut_ports = self.dut.get_ports_performance(force_different_nic=False)
- tx_port = self.tester.get_local_port(self.dut_ports[0])
- rx_port = self.tester.get_local_port(self.dut_ports[1])
-
- for cal in self.cal_type:
- recv_queue = cal["recvqueue"]
- print(
- (
- utils.GREEN(
- "Measure checksum performance of [%s %s %s]"
- % (cal["Type"], recv_queue, cal["csum"])
- )
- )
- )
-
- # configure flows
- tgen_input = []
- if recv_queue == "Multi":
- tgen_input.append((tx_port, rx_port, "vxlan1.pcap"))
- tgen_input.append((tx_port, rx_port, "vxlan1_1.pcap"))
- else:
- tgen_input.append((tx_port, rx_port, "vxlan1.pcap"))
-
- # multi queue and signle queue commands
- if recv_queue == "Multi":
- pmd_temp = "./%s %s -- -i --disable-rss --rxq=2 --txq=2 --nb-cores=4 --portmask=%s"
- else:
- pmd_temp = "./%s %s -- -i --nb-cores=2 --portmask=%s"
-
- self.eal_para = self.dut.create_eal_parameters(cores=core_list)
- pmd_cmd = pmd_temp % (self.path, self.eal_para, self.portMask)
-
- self.dut.send_expect(pmd_cmd, "testpmd> ", 100)
- self.dut.send_expect("set fwd csum", "testpmd>", 10)
- self.enable_vxlan(self.dut_port)
- self.enable_vxlan(self.recv_port)
- self.pmdout.wait_link_status_up(self.dut_port)
-
- # redirect flow to another queue by tunnel filter
- rule_config = {
- "dut_port": self.dut_port,
- "outer_mac_dst": vxlan.outer_mac_dst,
- "inner_mac_dst": vxlan.inner_mac_dst,
- "inner_ip_dst": vxlan.inner_ip_dst,
- "inner_vlan": 0,
- "tun_filter": "imac",
- "vni": vxlan.vni,
- "queue": 0,
- }
- self.perf_tunnel_filter_set_rule(rule_config)
-
- if recv_queue == "Multi":
- rule_config = {
- "dut_port": self.dut_port,
- "outer_mac_dst": vxlan_queue.outer_mac_dst,
- "inner_mac_dst": vxlan_queue.inner_mac_dst,
- "inner_ip_dst": vxlan_queue.inner_ip_dst,
- "inner_vlan": 0,
- "tun_filter": "imac",
- "vni": vxlan.vni,
- "queue": 1,
- }
- self.perf_tunnel_filter_set_rule(rule_config)
-
- for pro in cal["csum"]:
- self.csum_set_type(pro, self.dut_port)
- self.csum_set_type(pro, self.recv_port)
-
- self.dut.send_expect("start", "testpmd>", 10)
-
- wirespeed = self.wirespeed(self.nic, PACKET_LEN, 1)
-
- # run traffic generator
- _, pps = self.suite_measure_throughput(tgen_input)
-
- pps /= 1000000.0
- cal["Mpps"] = pps
- cal["pct"] = pps * 100 / wirespeed
-
- out = self.dut.send_expect("stop", "testpmd>", 10)
- self.dut.send_expect("quit", "# ", 10)
-
- # verify every queue work fine
- check_queue = 1
- if recv_queue == "Multi":
- for queue in range(check_queue):
- self.verify(
- "Queue= %d -> TX Port" % (queue) in out,
- "Queue %d no traffic" % queue,
- )
-
- table_row = [cal["Type"], recv_queue, cal["Mpps"], cal["pct"]]
- self.result_table_add(table_row)
-
- self.result_table_print()
-
def enable_vxlan(self, port):
self.dut.send_expect(
"rx_vxlan_port add %d %d" % (VXLAN_PORT, port), "testpmd>", 10
--
2.17.1
* [dts][PATCH V1 5/7] tests/ipfrag: Separated performance cases
2023-01-09 18:46 [dts][PATCH V1 1/7] tests/efd: Separated performance cases Hongbo Li
` (2 preceding siblings ...)
2023-01-09 18:46 ` [dts][PATCH V1 4/7] tests/vxlan: " Hongbo Li
@ 2023-01-09 18:46 ` Hongbo Li
2023-01-09 18:46 ` [dts][PATCH V1 6/7] tests/multiprocess: " Hongbo Li
2023-01-09 18:46 ` [dts][PATCH V1 7/7] tests/checksum_offload: " Hongbo Li
5 siblings, 0 replies; 8+ messages in thread
From: Hongbo Li @ 2023-01-09 18:46 UTC (permalink / raw)
To: dts; +Cc: Hongbo Li
Separated performance test cases
Signed-off-by: Hongbo Li <hongbox.li@intel.com>
---
test_plans/ipfrag_test_plan.rst | 33 +--
test_plans/perf_ipfrag_test_plan.rst | 141 ++++++++++
tests/TestSuite_ipfrag.py | 21 --
tests/TestSuite_perf_ipfrag.py | 386 +++++++++++++++++++++++++++
4 files changed, 528 insertions(+), 53 deletions(-)
create mode 100644 test_plans/perf_ipfrag_test_plan.rst
create mode 100644 tests/TestSuite_perf_ipfrag.py
diff --git a/test_plans/ipfrag_test_plan.rst b/test_plans/ipfrag_test_plan.rst
index e30a5961..17ea5c3c 100644
--- a/test_plans/ipfrag_test_plan.rst
+++ b/test_plans/ipfrag_test_plan.rst
@@ -129,35 +129,4 @@ For each of them check that:
#. Check number of output packets.
#. Check header of each output packet: length, ID, fragment offset, flags.
-#. Check payload: size and contents as expected, not corrupted.
-
-
-
-Test Case 4: Throughput test
-============================
-
-The test report should provide the throughput rate measurements (in mpps and % of the line rate for 2x NIC ports)
-for the following input frame sizes: 64 bytes, 1518 bytes, 1519 bytes, 2K, 9k.
-
-The following configurations should be tested:
-
-|
-
-+----------+-------------------------+----------------------+
-|# of ports| Socket/Core/HyperThread|Total # of sw threads |
-+----------+-------------------------+----------------------+
-| 2 | 1S/1C/1T | 1 |
-+----------+-------------------------+----------------------+
-| 2 | 1S/1C/2T | 2 |
-+----------+-------------------------+----------------------+
-| 2 | 1S/2C/1T | 2 |
-+----------+-------------------------+----------------------+
-| 2 | 2S/1C/1T | 2 |
-+----------+-------------------------+----------------------+
-
-|
-
-Command line::
-
- ./x86_64-native-linuxapp-gcc/examples/dpdk-ip_fragmentation -c <LCOREMASK> -n 4 -- [-P] -p PORTMASK
- -q <NUM_OF_PORTS_PER_THREAD>
+#. Check payload: size and contents as expected, not corrupted.
\ No newline at end of file
diff --git a/test_plans/perf_ipfrag_test_plan.rst b/test_plans/perf_ipfrag_test_plan.rst
new file mode 100644
index 00000000..96dca62d
--- /dev/null
+++ b/test_plans/perf_ipfrag_test_plan.rst
@@ -0,0 +1,141 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2011-2017 Intel Corporation
+
+======================
+IP fragmentation Tests
+======================
+
+The IP fragmentation results are produced using the ``ip_fragmentation``
+application. The test application should run with both IPv4 and IPv6 fragmentation.
+
+This suite supports the following NICs: Intel® Ethernet 700 Series, Intel® Ethernet 800 Series, 82599, and NICs using the igc driver.
+
+Prerequisites
+=============
+
+1. Hardware requirements:
+
+ - For each CPU socket, each memory channel should be populated with at least 1x DIMM
+ - Board is populated with at least 2x 1GbE or 10GbE ports. Special PCIe restrictions may
+ be required for performance. For example, the following requirements should be
+ met for Intel 82599 NICs:
+
+ - NICs are plugged into PCIe Gen2 or Gen3 slots
+ - For PCIe Gen2 slots, the number of lanes should be 8x or higher
+ - A single port from each NIC should be used, so for 2x ports, 2x NICs should
+ be used
+
+ - NIC ports connected to traffic generator. It is assumed that the NIC ports
+ P0, P1, P2, P3 (as identified by the DPDK application) are connected to the
+ traffic generator ports TG0, TG1, TG2, TG3. The application-side port mask of
+ NIC ports P0, P1, P2, P3 is noted as PORTMASK in this section.
+ Traffic generator should support sending jumbo frames with size up to 9K.
+
+2. BIOS requirements:
+
+ - Intel Hyper-Threading Technology is ENABLED
+ - Hardware Prefetcher is DISABLED
+ - Adjacent Cache Line Prefetch is DISABLED
+ - Direct Cache Access is DISABLED
+
+3. Linux kernel requirements:
+
+ - Linux kernel has the following features enabled: huge page support, UIO, HPET
+ - Appropriate number of huge pages are reserved at kernel boot time
+ - The IDs of the hardware threads (logical cores) per each CPU socket can be
+ determined by parsing the file /proc/cpuinfo. The naming convention for the
+ logical cores is: C{x.y.z} = hyper-thread z of physical core y of CPU socket x,
+ with typical values of x = 0 .. 3, y = 0 .. 7, z = 0 .. 1. Logical cores
+ C{0.0.0} and C{0.0.1} should be avoided while executing the test, as they are
+ used by the Linux kernel for running regular processes.
+
+4. Software application requirements
+
+5. If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS. When
+ using vfio, use the following commands to load the vfio driver and bind it
+ to the device under test::
+
+ modprobe vfio
+ modprobe vfio-pci
+ usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
+
+ - The test can be run with IPv4 packets. The LPM table used for IPv4 packet routing is:
+
+ +-------+-------------------------------------+-----------+
+ |Entry #|LPM prefix (IP/length) |Output port|
+ +-------+-------------------------------------+-----------+
+ | 0 | 100.10.0.0/16 | P2 |
+ +-------+-------------------------------------+-----------+
+ | 1 | 100.20.0.0/16 | P2 |
+ +-------+-------------------------------------+-----------+
+ | 2 | 100.30.0.0/16 | P0 |
+ +-------+-------------------------------------+-----------+
+ | 3 | 100.40.0.0/16 | P0 |
+ +-------+-------------------------------------+-----------+
+
+
+ - The test can be run with IPv6 packets, which follow the rules below.
+
+ - There is no support for Hop-by-Hop or Routing extension headers in the packet
+ to be fragmented. All other optional headers, which are not part of the
+ unfragmentable part of the IPv6 packet are supported.
+
+ - When a fragment is generated, its identification field in the IPv6
+ fragmentation extension header is set to 0. This is not RFC compliant, but
+ proper identification number generation is out of the scope of the application
+ and routers in an IPv6 path are not allowed to fragment in the first place...
+ Generating that identification number is the job of a proper IP stack.
+
+ - The LPM table used for IPv6 packet routing is:
+
+ +-------+-------------------------------------+-----------+
+ |Entry #|LPM prefix (IP/length) |Output port|
+ +-------+-------------------------------------+-----------+
+ | 0 | 101:101:101:101:101:101:101:101/48| P2 |
+ +-------+-------------------------------------+-----------+
+ | 1 | 201:101:101:101:101:101:101:101/48| P2 |
+ +-------+-------------------------------------+-----------+
+ | 2 | 301:101:101:101:101:101:101:101/48| P0 |
+ +-------+-------------------------------------+-----------+
+ | 3 | 401:101:101:101:101:101:101:101/48| P0 |
+ +-------+-------------------------------------+-----------+
+
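+ For example, an IPv4 packet with destination address 100.30.0.1 matches
+ entry 2 of the IPv4 table and is fragmented out of port P0; likewise an
+ IPv6 packet destined to 301:101:101:101:101:101:101:101 matches entry 2
+ above and exits on P0. A scapy sketch of such an IPv4 input frame
+ (addresses and payload size are illustrative)::
+
+ Ether(dst=dut_port_mac)/IP(src="1.2.3.4", dst="100.30.0.1")/("X"*2000)
+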
+ The following items are configured through the command line interface of the application:
+
+ - The set of one or several RX queues to be enabled for each NIC port
+ - The set of logical cores to execute the packet forwarding task
+ - Mapping of the NIC RX queues to logical cores handling them.
+
+6. Compile examples/ip_fragmentation::
+
+ meson configure -Dexamples=ip_fragmentation x86_64-native-linuxapp-gcc
+ ninja -C x86_64-native-linuxapp-gcc
+
+Test Case 1: Throughput test
+============================
+
+The test report should provide the throughput rate measurements (in Mpps and % of the line rate for 2x NIC ports)
+for the following input frame sizes: 64 bytes, 1518 bytes, 1519 bytes, 2K, 9K.
+
+The following configurations should be tested:
+
+|
+
++----------+-------------------------+----------------------+
+|# of ports| Socket/Core/HyperThread|Total # of sw threads |
++----------+-------------------------+----------------------+
+| 2 | 1S/1C/1T | 1 |
++----------+-------------------------+----------------------+
+| 2 | 1S/1C/2T | 2 |
++----------+-------------------------+----------------------+
+| 2 | 1S/2C/1T | 2 |
++----------+-------------------------+----------------------+
+| 2 | 2S/1C/1T | 2 |
++----------+-------------------------+----------------------+
+
+|
+
+Command line::
+
+ ./x86_64-native-linuxapp-gcc/examples/dpdk-ip_fragmentation -c <LCOREMASK> -n 4 -- [-P] -p PORTMASK
+ -q <NUM_OF_PORTS_PER_THREAD>
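+
+ For example, with ports 0 and 1 enabled and one queue per thread
+ (the core mask is illustrative and depends on the core layout)::
+
+ ./x86_64-native-linuxapp-gcc/examples/dpdk-ip_fragmentation -c 0x6 -n 4 -- -p 0x3 -q 1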
diff --git a/tests/TestSuite_ipfrag.py b/tests/TestSuite_ipfrag.py
index 95f99281..169df06b 100644
--- a/tests/TestSuite_ipfrag.py
+++ b/tests/TestSuite_ipfrag.py
@@ -372,27 +372,6 @@ class TestIpfrag(TestCase):
self.dut.send_expect("^C", "#")
- def test_perf_ipfrag_throughtput(self):
- """
- Performance test for 64, 1518, 1519, 2k and 9k.
- """
- sizes = [64, 1518, 1519, 2000, 9000]
-
- tblheader = ["Ports", "S/C/T", "SW threads"]
- for size in sizes:
- tblheader.append("%dB Mpps" % size)
- tblheader.append("%d" % size)
-
- self.result_table_create(tblheader)
-
- lcores = [("1S/1C/1T", 2), ("1S/1C/2T", 2), ("1S/2C/1T", 2), ("2S/1C/1T", 2)]
- index = 1
- for (lcore, numThr) in lcores:
- self.benchmark(index, lcore, numThr, sizes)
- index += 1
-
- self.result_table_print()
-
def tear_down(self):
"""
Run after each test case.
diff --git a/tests/TestSuite_perf_ipfrag.py b/tests/TestSuite_perf_ipfrag.py
new file mode 100644
index 00000000..030aa378
--- /dev/null
+++ b/tests/TestSuite_perf_ipfrag.py
@@ -0,0 +1,386 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+#
+
+"""
+DPDK Test suite.
+Test IPv4 fragmentation features in DPDK.
+"""
+
+import os
+import re
+import string
+import time
+
+import framework.utils as utils
+from framework.packet import Packet
+from framework.pktgen import PacketGeneratorHelper
+from framework.settings import HEADER_SIZE
+from framework.test_case import TestCase
+
+lpm_table_ipv6 = [
+ "{{1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}, 48, P1}",
+ "{{2,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}, 48, P1}",
+ "{{3,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}, 48, P0}",
+ "{{4,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}, 48, P0}",
+ "{{5,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}, 48, P1}",
+ "{{6,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}, 48, P1}",
+ "{{7,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}, 48, P0}",
+ "{{8,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}, 48, P0}",
+]
+
+
+class TestIpfrag(TestCase):
+ def portRepl(self, match):
+ """
+ Function to replace P([0123]) pattern in tables
+ """
+
+ portid = match.group(1)
+ self.verify(int(portid) in range(4), "invalid port id")
+ return "%s" % eval("P" + str(portid))
+
+ def set_up_all(self):
+ """
+ ip_fragmentation Prerequisites
+ """
+
+ # Based on h/w type, choose how many ports to use
+ self.ports = self.dut.get_ports()
+
+ # Verify that enough ports are available
+ self.verify(len(self.ports) >= 2, "Insufficient ports for testing")
+
+ self.ports_socket = self.dut.get_numa_id(self.ports[0])
+
+ # Verify that enough threads are available
+ cores = self.dut.get_core_list("1S/1C/1T")
+ self.verify(cores is not None, "Insufficient cores for speed testing")
+
+ global P0, P1
+ P0 = self.ports[0]
+ P1 = self.ports[1]
+
+ # make application
+ out = self.dut.build_dpdk_apps("examples/ip_fragmentation")
+ self.verify("Error" not in out, "compilation error 1")
+ self.verify("No such file" not in out, "compilation error 2")
+
+ self.eal_para = self.dut.create_eal_parameters(
+ socket=self.ports_socket, ports=self.ports
+ )
+ portmask = utils.create_mask([P0, P1])
+ numPortThread = len([P0, P1]) / len(cores)
+
+ # run ipv4_frag
+ self.app_ip_fragmentation_path = self.dut.apps_name["ip_fragmentation"]
+ self.dut.send_expect(
+ "%s %s -- -p %s -q %s"
+ % (
+ self.app_ip_fragmentation_path,
+ self.eal_para,
+ portmask,
+ int(numPortThread),
+ ),
+ "Link [Uu]p",
+ 120,
+ )
+
+ time.sleep(2)
+ self.txItf = self.tester.get_interface(self.tester.get_local_port(P0))
+ self.rxItf = self.tester.get_interface(self.tester.get_local_port(P1))
+ self.dmac = self.dut.get_mac_address(P0)
+
+ # get dts output path
+ if self.logger.log_path.startswith(os.sep):
+ self.output_path = self.logger.log_path
+ else:
+ cur_path = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
+ self.output_path = os.sep.join([cur_path, self.logger.log_path])
+ # create an instance to set stream field setting
+ self.pktgen_helper = PacketGeneratorHelper()
+
+ def functional_check_ipv4(self, pkt_sizes, burst=1, flag=None):
+ """
+ Perform functional fragmentation checks.
+ """
+ for size in pkt_sizes[::burst]:
+ # simulate to set TG properties
+ if flag == "frag":
+ # do fragment, each packet max length 1518 - 18 - 20 = 1480
+ expPkts = int((size - HEADER_SIZE["eth"] - HEADER_SIZE["ip"]) / 1480)
+ if (size - HEADER_SIZE["eth"] - HEADER_SIZE["ip"]) % 1480:
+ expPkts += 1
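+ # e.g. a 9000-byte frame: (9000 - 18 - 20) // 1480 = 6 full
+ # fragments plus a remainder, so 7 packets are expected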
+ val = 0
+ elif flag == "nofrag":
+ expPkts = 0
+ val = 2
+ else:
+ expPkts = 1
+ val = 2
+
+ inst = self.tester.tcpdump_sniff_packets(intf=self.rxItf)
+ # send packet
+ for times in range(burst):
+ pkt_size = pkt_sizes[pkt_sizes.index(size) + times]
+ pkt = Packet(pkt_type="UDP", pkt_len=pkt_size)
+ pkt.config_layer("ether", {"dst": "%s" % self.dmac})
+ pkt.config_layer(
+ "ipv4", {"dst": "100.20.0.1", "src": "1.2.3.4", "flags": val}
+ )
+ pkt.send_pkt(self.tester, tx_port=self.txItf)
+
+ # verify normal packets by count only; verify fragmented packets field by field
+ pkts = self.tester.load_tcpdump_sniff_packets(inst)
+ self.verify(
+ len(pkts) == expPkts,
+ "in functional_check_ipv4(): failed on forward packet size "
+ + str(size),
+ )
+ if flag == "frag":
+ idx = 1
+ for i in range(len(pkts)):
+ pkt_id = pkts.strip_element_layer3("id", p_index=i)
+ if idx == 1:
+ prev_idx = pkt_id
+ self.verify(
+ prev_idx == pkt_id, "Fragmented packets index not match"
+ )
+ prev_idx = pkt_id
+
+ # last flags should be 0
+ flags = pkts.strip_element_layer3("flags", p_index=i)
+ if idx == expPkts:
+ self.verify(
+ flags == 0, "Fragmented last packet flags not match"
+ )
+ else:
+ self.verify(flags == 1, "Fragmented packets flags not match")
+
+ # fragment offset should be correct
+ frag = pkts.strip_element_layer3("frag", p_index=i)
+ self.verify(
+ (frag == ((idx - 1) * 185)), "Fragment packet frag not match"
+ )
+ idx += 1
+
+ def functional_check_ipv6(self, pkt_sizes, burst=1, flag=None, function=None):
+ """
+ Perform functional fragmentation checks.
+ """
+ for size in pkt_sizes[::burst]:
+ # simulate to set TG properties
+ if flag == "frag":
+ # each packet max len: 1518 - 18 (eth) - 40 (ipv6) - 8 (ipv6 ext hdr) = 1452
+ expPkts = int((size - HEADER_SIZE["eth"] - HEADER_SIZE["ipv6"]) / 1452)
+ if (size - HEADER_SIZE["eth"] - HEADER_SIZE["ipv6"]) % 1452:
+ expPkts += 1
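+ # e.g. a 9000-byte frame: (9000 - 18 - 40) // 1452 = 6 full
+ # fragments plus a remainder, so 7 packets are expected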
+ val = 0
+ else:
+ expPkts = 1
+ val = 2
+
+ inst = self.tester.tcpdump_sniff_packets(intf=self.rxItf)
+ # send packet
+ for times in range(burst):
+ pkt_size = pkt_sizes[pkt_sizes.index(size) + times]
+ pkt = Packet(pkt_type="IPv6_UDP", pkt_len=pkt_size)
+ pkt.config_layer("ether", {"dst": "%s" % self.dmac})
+ pkt.config_layer(
+ "ipv6",
+ {
+ "dst": "201:101:101:101:101:101:101:101",
+ "src": "ee80:ee80:ee80:ee80:ee80:ee80:ee80:ee80",
+ },
+ )
+ pkt.send_pkt(self.tester, tx_port=self.txItf)
+
+ # verify normal packets by count only; verify fragmented packets field by field
+ pkts = self.tester.load_tcpdump_sniff_packets(inst)
+ self.verify(
+ len(pkts) == expPkts,
+ "In functional_check_ipv6(): failed on forward packet size "
+ + str(size),
+ )
+ if flag == "frag":
+ idx = 1
+ for i in range(len(pkts)):
+ pkt_id = pkts.strip_element_layer4("id", p_index=i)
+ if idx == 1:
+ prev_idx = pkt_id
+ self.verify(
+ prev_idx == pkt_id, "Fragmented packets index not match"
+ )
+ prev_idx = pkt_id
+
+ # last flags should be 0
+ flags = pkts.strip_element_layer4("m", p_index=i)
+ if idx == expPkts:
+ self.verify(
+ flags == 0, "Fragmented last packet flags not match"
+ )
+ else:
+ self.verify(flags == 1, "Fragmented packets flags not match")
+
+ # fragment offset should be correct
+ frag = pkts.strip_element_layer4("offset", p_index=i)
+ self.verify(
+ (frag == int((idx - 1) * 181)), "Fragment packet frag not match"
+ )
+ idx += 1
+
+ def set_up(self):
+ """
+ Run before each test case.
+ """
+ self.tester.send_expect(
+ "ifconfig %s mtu 9200"
+ % self.tester.get_interface(self.tester.get_local_port(P0)),
+ "#",
+ )
+ self.tester.send_expect(
+ "ifconfig %s mtu 9200"
+ % self.tester.get_interface(self.tester.get_local_port(P1)),
+ "#",
+ )
+
+ def benchmark(self, index, lcore, num_pthreads, size_list):
+ """
+ Measure IPv4 and IPv6 throughput for the selected core configuration.
+ """
+
+ Bps = dict()
+ Pps = dict()
+ Pct = dict()
+
+ if int(lcore[0]) == 1:
+ eal_param = self.dut.create_eal_parameters(
+ cores=lcore, socket=self.ports_socket, ports=self.ports
+ )
+ else:
+ eal_param = self.dut.create_eal_parameters(cores=lcore, ports=self.ports)
+ portmask = utils.create_mask([P0, P1])
+ self.dut.send_expect("^c", "# ", 120)
+ self.dut.send_expect(
+ "%s %s -- -p %s -q %s"
+ % (self.app_ip_fragmentation_path, eal_param, portmask, num_pthreads),
+ "IP_FRAG:",
+ 120,
+ )
+ result = [2, lcore, num_pthreads]
+ for size in size_list:
+ dmac = self.dut.get_mac_address(P0)
+ flows_p0 = [
+ 'Ether(dst="%s")/IP(src="1.2.3.4", dst="100.10.0.1", flags=0)/("X"*%d)'
+ % (dmac, size - 38),
+ 'Ether(dst="%s")/IP(src="1.2.3.4", dst="100.20.0.1", flags=0)/("X"*%d)'
+ % (dmac, size - 38),
+ 'Ether(dst="%s")/IPv6(dst="101:101:101:101:101:101:101:101",src="ee80:ee80:ee80:ee80:ee80:ee80:ee80:ee80")/Raw(load="X"*%d)'
+ % (dmac, size - 58),
+ 'Ether(dst="%s")/IPv6(dst="201:101:101:101:101:101:101:101",src="ee80:ee80:ee80:ee80:ee80:ee80:ee80:ee80")/Raw(load="X"*%d)'
+ % (dmac, size - 58),
+ ]
+
+ # reserved for rx/tx bidirectional test
+ dmac = self.dut.get_mac_address(P1)
+ flows_p1 = [
+ 'Ether(dst="%s")/IP(src="1.2.3.4", dst="100.30.0.1", flags=0)/("X"*%d)'
+ % (dmac, size - 38),
+ 'Ether(dst="%s")/IP(src="1.2.3.4", dst="100.40.0.1", flags=0)/("X"*%d)'
+ % (dmac, size - 38),
+ 'Ether(dst="%s")/IPv6(dst="301:101:101:101:101:101:101:101",src="ee80:ee80:ee80:ee80:ee80:ee80:ee80:ee80")/Raw(load="X"*%d)'
+ % (dmac, size - 58),
+ 'Ether(dst="%s")/IPv6(dst="401:101:101:101:101:101:101:101",src="ee80:ee80:ee80:ee80:ee80:ee80:ee80:ee80")/Raw(load="X"*%d)'
+ % (dmac, size - 58),
+ ]
+ flow_len = len(flows_p0)
+ tgenInput = []
+ for i in range(flow_len):
+
+ pcap0 = os.sep.join([self.output_path, "p0_{}.pcap".format(i)])
+ self.tester.scapy_append('wrpcap("%s", [%s])' % (pcap0, flows_p0[i]))
+ pcap1 = os.sep.join([self.output_path, "p1_{}.pcap".format(i)])
+ self.tester.scapy_append('wrpcap("%s", [%s])' % (pcap1, flows_p1[i]))
+ self.tester.scapy_execute()
+
+ tgenInput.append(
+ (
+ self.tester.get_local_port(P0),
+ self.tester.get_local_port(P1),
+ pcap0,
+ )
+ )
+ tgenInput.append(
+ (
+ self.tester.get_local_port(P1),
+ self.tester.get_local_port(P0),
+ pcap1,
+ )
+ )
+
+ # ceil(size / 1518): approximate number of wire frames each input
+ # frame fragments into; integer division keeps the factor exact
+ factor = (size + 1517) // 1518
+ # wireSpd = 2 * 10000.0 / ((20 + size) * 8)
+
+ # clear streams before add new streams
+ self.tester.pktgen.clear_streams()
+ # run packet generator
+ streams = self.pktgen_helper.prepare_stream_from_tginput(
+ tgenInput, 100, None, self.tester.pktgen
+ )
+ Bps[str(size)], Pps[str(size)] = self.tester.pktgen.measure_throughput(
+ stream_ids=streams
+ )
+
+ self.verify(Pps[str(size)] > 0, "No traffic detected")
+ Pps[str(size)] *= 1.0 / factor / 1000000
+ Pct[str(size)] = (1.0 * Bps[str(size)] * 100) / (2 * 10000000000)
+
+ result.append(Pps[str(size)])
+ result.append(Pct[str(size)])
+
+ self.result_table_add(result)
+
+ self.dut.send_expect("^C", "#")
+
+ def test_perf_ipfrag_throughtput(self):
+ """
+ Performance test for 64, 1518, 1519, 2k and 9k.
+ """
+ sizes = [64, 1518, 1519, 2000, 9000]
+
+ tblheader = ["Ports", "S/C/T", "SW threads"]
+ for size in sizes:
+ tblheader.append("%dB Mpps" % size)
+ tblheader.append("%d" % size)
+
+ self.result_table_create(tblheader)
+
+ lcores = [("1S/1C/1T", 2), ("1S/1C/2T", 2), ("1S/2C/1T", 2), ("2S/1C/1T", 2)]
+ index = 1
+ for (lcore, numThr) in lcores:
+ self.benchmark(index, lcore, numThr, sizes)
+ index += 1
+
+ self.result_table_print()
+
+ def tear_down(self):
+ """
+ Run after each test case.
+ """
+ self.tester.send_expect(
+ "ifconfig %s mtu 1500"
+ % self.tester.get_interface(self.tester.get_local_port(P0)),
+ "#",
+ )
+ self.tester.send_expect(
+ "ifconfig %s mtu 1500"
+ % self.tester.get_interface(self.tester.get_local_port(P1)),
+ "#",
+ )
+
+ def tear_down_all(self):
+ """
+ Run after each test suite.
+ """
+ self.dut.send_expect("^C", "#")
+ pass
--
2.17.1
* [dts][PATCH V1 6/7] tests/multiprocess: Separated performance cases
2023-01-09 18:46 [dts][PATCH V1 1/7] tests/efd: Separated performance cases Hongbo Li
` (3 preceding siblings ...)
2023-01-09 18:46 ` [dts][PATCH V1 5/7] tests/ipfrag: " Hongbo Li
@ 2023-01-09 18:46 ` Hongbo Li
2023-01-09 18:46 ` [dts][PATCH V1 7/7] tests/checksum_offload: " Hongbo Li
5 siblings, 0 replies; 8+ messages in thread
From: Hongbo Li @ 2023-01-09 18:46 UTC (permalink / raw)
To: dts; +Cc: Hongbo Li
Separated performance test cases
Signed-off-by: Hongbo Li <hongbox.li@intel.com>
---
test_plans/multiprocess_test_plan.rst | 48 -
test_plans/perf_multiprocess_test_plan.rst | 141 +++
tests/TestSuite_multiprocess.py | 210 -----
tests/TestSuite_perf_multiprocess.py | 994 +++++++++++++++++++++
4 files changed, 1135 insertions(+), 258 deletions(-)
create mode 100644 test_plans/perf_multiprocess_test_plan.rst
create mode 100644 tests/TestSuite_perf_multiprocess.py
diff --git a/test_plans/multiprocess_test_plan.rst b/test_plans/multiprocess_test_plan.rst
index c7aae44b..9f5ef8fa 100644
--- a/test_plans/multiprocess_test_plan.rst
+++ b/test_plans/multiprocess_test_plan.rst
@@ -196,27 +196,6 @@ run should remain the same, except for the ``num-procs`` value, which should be
adjusted appropriately.
-Test Case: Performance Tests
-----------------------------
-
-Run the multiprocess application using standard IP traffic - varying source
-and destination address information to allow RSS to evenly distribute packets
-among RX queues. Record traffic throughput results as below.
-
-+-------------------+-----+-----+-----+-----+-----+-----+
-| Num-procs | 1 | 2 | 2 | 4 | 4 | 8 |
-+-------------------+-----+-----+-----+-----+-----+-----+
-| Cores/Threads | 1/1 | 1/2 | 2/1 | 2/2 | 4/1 | 4/2 |
-+-------------------+-----+-----+-----+-----+-----+-----+
-| Num Ports | 2 | 2 | 2 | 2 | 2 | 2 |
-+-------------------+-----+-----+-----+-----+-----+-----+
-| Packet Size | 64 | 64 | 64 | 64 | 64 | 64 |
-+-------------------+-----+-----+-----+-----+-----+-----+
-| %-age Line Rate | X | X | X | X | X | X |
-+-------------------+-----+-----+-----+-----+-----+-----+
-| Packet Rate(mpps) | X | X | X | X | X | X |
-+-------------------+-----+-----+-----+-----+-----+-----+
-
Test Case: Function Tests
-------------------------
start 2 symmetric_mp process, send some packets, the number of packets is a random value between 20 and 256.
@@ -294,33 +273,6 @@ An example commands to run 8 client processes is as follows::
root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 40000 --proc-type=secondary -- -n 6 &
root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 100000 --proc-type=secondary -- -n 7 &
-Test Case: Performance Measurement
-----------------------------------
-
-- On the traffic generator set up a traffic flow in both directions specifying
- IP traffic.
-- Run the server and client applications as above.
-- Start the traffic and record the throughput for transmitted and received packets.
-
-An example set of results is shown below.
-
-+----------------------+-----+-----+-----+-----+-----+-----+
-| Server threads | 1 | 1 | 1 | 1 | 1 | 1 |
-+----------------------+-----+-----+-----+-----+-----+-----+
-| Server Cores/Threads | 1/1 | 1/1 | 1/1 | 1/1 | 1/1 | 1/1 |
-+----------------------+-----+-----+-----+-----+-----+-----+
-| Num-clients | 1 | 2 | 2 | 4 | 4 | 8 |
-+----------------------+-----+-----+-----+-----+-----+-----+
-| Client Cores/Threads | 1/1 | 1/2 | 2/1 | 2/2 | 4/1 | 4/2 |
-+----------------------+-----+-----+-----+-----+-----+-----+
-| Num Ports | 2 | 2 | 2 | 2 | 2 | 2 |
-+----------------------+-----+-----+-----+-----+-----+-----+
-| Packet Size | 64 | 64 | 64 | 64 | 64 | 64 |
-+----------------------+-----+-----+-----+-----+-----+-----+
-| %-age Line Rate | X | X | X | X | X | X |
-+----------------------+-----+-----+-----+-----+-----+-----+
-| Packet Rate(mpps) | X | X | X | X | X | X |
-+----------------------+-----+-----+-----+-----+-----+-----+
Test Case: Function Tests
-------------------------
diff --git a/test_plans/perf_multiprocess_test_plan.rst b/test_plans/perf_multiprocess_test_plan.rst
new file mode 100644
index 00000000..c1e7ff87
--- /dev/null
+++ b/test_plans/perf_multiprocess_test_plan.rst
@@ -0,0 +1,141 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2010-2017 Intel Corporation
+
+=======================================
+Sample Application Tests: Multi-Process
+=======================================
+
+Simple MP Application Test
+==========================
+
+Description
+-----------
+
+This test is a basic multi-process test which demonstrates the basics of sharing
+information between DPDK processes. The same application binary is run
+twice - once as a primary instance, and once as a secondary instance. Messages
+are sent from primary to secondary and vice versa, demonstrating the processes
+are sharing memory and can communicate using rte_ring structures.
+
+Prerequisites
+-------------
+
+If using vfio the kernel must be >= 3.6+ and VT-d must be enabled in bios.When
+using vfio, use the following commands to load the vfio driver and bind it
+to the device under test::
+
+ modprobe vfio
+ modprobe vfio-pci
+ usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
+
+Assuming that a DPDK build has been set up and the multi-process sample
+applications have been built.
+
+
+Test Case: Performance Tests
+----------------------------
+
+Run the multiprocess application using standard IP traffic - varying source
+and destination address information to allow RSS to evenly distribute packets
+among RX queues. Record traffic throughput results as below.
+
++-------------------+-----+-----+-----+-----+-----+-----+
+| Num-procs | 1 | 2 | 2 | 4 | 4 | 8 |
++-------------------+-----+-----+-----+-----+-----+-----+
+| Cores/Threads | 1/1 | 1/2 | 2/1 | 2/2 | 4/1 | 4/2 |
++-------------------+-----+-----+-----+-----+-----+-----+
+| Num Ports | 2 | 2 | 2 | 2 | 2 | 2 |
++-------------------+-----+-----+-----+-----+-----+-----+
+| Packet Size | 64 | 64 | 64 | 64 | 64 | 64 |
++-------------------+-----+-----+-----+-----+-----+-----+
+| %-age Line Rate | X | X | X | X | X | X |
++-------------------+-----+-----+-----+-----+-----+-----+
+| Packet Rate(mpps) | X | X | X | X | X | X |
++-------------------+-----+-----+-----+-----+-----+-----+
+
+
+Client Server Multiprocess Tests
+================================
+
+Description
+-----------
+
+The client-server sample application demonstrates the ability of Intel® DPDK
+to use multiple processes in which a server process performs packet I/O and one
+or multiple client processes perform packet processing. The server process
+controls load balancing on the traffic received from a number of input ports to
+a user-specified number of clients. The client processes forward the received
+traffic, outputting the packets directly by writing them to the TX rings of the
+outgoing ports.
+
+Prerequisites
+-------------
+
+Assuming that an Intel® DPDK build has been set up and the multi-process
+sample application has been built.
+Also assuming a traffic generator is connected to the ports "0" and "1".
+
+It is important to run the server application before the client application,
+as the server application manages both the NIC ports with packet transmission
+and reception, as well as shared memory areas and client queues.
+
+Run the Server Application:
+
+- Provide the core mask on which the server process is to run using -c, e.g. -c 3 (bitmask number).
+- Set the number of ports to be engaged using -p, e.g. -p 3 refers to ports 0 & 1.
+- Define the maximum number of clients using -n, e.g. -n 8.
+
+The command line below is an example on how to start the server process on
+logical core 2 to handle a maximum of 8 client processes configured to
+run on socket 0 to handle traffic from NIC ports 0 and 1::
+
+ root@host:mp_server# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_server -c 2 -- -p 3 -n 8
+
+NOTE: If an additional second core is given in the coremask to the server process,
+that second core will be used to print statistics. When benchmarking, only a
+single lcore is needed for the server process.
+
+Run the Client application:
+
+- In another terminal run the client application.
+- Give each client a distinct core mask with -c.
+- Give each client a unique client-id with -n.
+
+Example commands to run 8 client processes are as follows::
+
+ root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 40 --proc-type=secondary -- -n 0 &
+ root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 100 --proc-type=secondary -- -n 1 &
+ root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 400 --proc-type=secondary -- -n 2 &
+ root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 1000 --proc-type=secondary -- -n 3 &
+ root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 4000 --proc-type=secondary -- -n 4 &
+ root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 10000 --proc-type=secondary -- -n 5 &
+ root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 40000 --proc-type=secondary -- -n 6 &
+ root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 100000 --proc-type=secondary -- -n 7 &
+
+Test Case: Performance Measurement
+----------------------------------
+
+- On the traffic generator set up a traffic flow in both directions specifying
+ IP traffic.
+- Run the server and client applications as above.
+- Start the traffic and record the throughput for transmitted and received packets.
+
+An example set of results is shown below.
+
++----------------------+-----+-----+-----+-----+-----+-----+
+| Server threads | 1 | 1 | 1 | 1 | 1 | 1 |
++----------------------+-----+-----+-----+-----+-----+-----+
+| Server Cores/Threads | 1/1 | 1/1 | 1/1 | 1/1 | 1/1 | 1/1 |
++----------------------+-----+-----+-----+-----+-----+-----+
+| Num-clients | 1 | 2 | 2 | 4 | 4 | 8 |
++----------------------+-----+-----+-----+-----+-----+-----+
+| Client Cores/Threads | 1/1 | 1/2 | 2/1 | 2/2 | 4/1 | 4/2 |
++----------------------+-----+-----+-----+-----+-----+-----+
+| Num Ports | 2 | 2 | 2 | 2 | 2 | 2 |
++----------------------+-----+-----+-----+-----+-----+-----+
+| Packet Size | 64 | 64 | 64 | 64 | 64 | 64 |
++----------------------+-----+-----+-----+-----+-----+-----+
+| %-age Line Rate | X | X | X | X | X | X |
++----------------------+-----+-----+-----+-----+-----+-----+
+| Packet Rate(mpps) | X | X | X | X | X | X |
++----------------------+-----+-----+-----+-----+-----+-----+
\ No newline at end of file
diff --git a/tests/TestSuite_multiprocess.py b/tests/TestSuite_multiprocess.py
index 099ce6e7..a52622c9 100644
--- a/tests/TestSuite_multiprocess.py
+++ b/tests/TestSuite_multiprocess.py
@@ -1714,216 +1714,6 @@ class TestMultiprocess(TestCase):
"core dump" not in out, "Core dump occurred in the secondary process!!!"
)
- def test_perf_multiprocess_performance(self):
- """
- Benchmark Multiprocess performance.
- #"""
- packet_count = 16
- self.dut.send_expect("fg", "# ")
- txPort = self.tester.get_local_port(self.dut_ports[0])
- rxPort = self.tester.get_local_port(self.dut_ports[1])
- mac = self.tester.get_mac(txPort)
- dmac = self.dut.get_mac_address(self.dut_ports[0])
- tgenInput = []
-
- # create mutative src_ip+dst_ip package
- for i in range(packet_count):
- package = (
- r'flows = [Ether(src="%s", dst="%s")/IP(src="192.168.1.%d", dst="192.168.1.%d")/("X"*26)]'
- % (mac, dmac, i + 1, i + 2)
- )
- self.tester.scapy_append(package)
- pcap = os.sep.join([self.output_path, "test_%d.pcap" % i])
- self.tester.scapy_append('wrpcap("%s", flows)' % pcap)
- tgenInput.append([txPort, rxPort, pcap])
- self.tester.scapy_execute()
-
- # run multiple symmetric_mp process
- validExecutions = []
- for execution in executions:
- if len(self.dut.get_core_list(execution["cores"])) == execution["nprocs"]:
- validExecutions.append(execution)
-
- portMask = utils.create_mask(self.dut_ports)
-
- for n in range(len(validExecutions)):
- execution = validExecutions[n]
- # get coreList form execution['cores']
- coreList = self.dut.get_core_list(execution["cores"], socket=self.socket)
- # to run a set of symmetric_mp instances, like test plan
- dutSessionList = []
- for index in range(len(coreList)):
- dut_new_session = self.dut.new_session()
- dutSessionList.append(dut_new_session)
- # add -a option when tester and dut in same server
- dut_new_session.send_expect(
- self.app_symmetric_mp
- + " -c %s --proc-type=auto %s -- -p %s --num-procs=%d --proc-id=%d"
- % (
- utils.create_mask([coreList[index]]),
- self.eal_param,
- portMask,
- execution["nprocs"],
- index,
- ),
- "Finished Process Init",
- )
-
- # clear streams before add new streams
- self.tester.pktgen.clear_streams()
- # run packet generator
- streams = self.pktgen_helper.prepare_stream_from_tginput(
- tgenInput, 100, None, self.tester.pktgen
- )
- _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
-
- execution["pps"] = pps
-
- # close all symmetric_mp process
- self.dut.send_expect("killall symmetric_mp", "# ")
- # close all dut sessions
- for dut_session in dutSessionList:
- self.dut.close_session(dut_session)
-
- # get rate and mpps data
- for n in range(len(executions)):
- self.verify(executions[n]["pps"] is not 0, "No traffic detected")
- self.result_table_create(
- [
- "Num-procs",
- "Sockets/Cores/Threads",
- "Num Ports",
- "Frame Size",
- "%-age Line Rate",
- "Packet Rate(mpps)",
- ]
- )
-
- for execution in validExecutions:
- self.result_table_add(
- [
- execution["nprocs"],
- execution["cores"],
- 2,
- 64,
- execution["pps"] / float(100000000 / (8 * 84)),
- execution["pps"] / float(1000000),
- ]
- )
-
- self.result_table_print()
-
- def test_perf_multiprocess_client_serverperformance(self):
- """
- Benchmark Multiprocess client-server performance.
- """
- self.dut.kill_all()
- self.dut.send_expect("fg", "# ")
- txPort = self.tester.get_local_port(self.dut_ports[0])
- rxPort = self.tester.get_local_port(self.dut_ports[1])
- mac = self.tester.get_mac(txPort)
-
- self.tester.scapy_append(
- 'dmac="%s"' % self.dut.get_mac_address(self.dut_ports[0])
- )
- self.tester.scapy_append('smac="%s"' % mac)
- self.tester.scapy_append(
- 'flows = [Ether(src=smac, dst=dmac)/IP(src="192.168.1.1", dst="192.168.1.1")/("X"*26)]'
- )
-
- pcap = os.sep.join([self.output_path, "test.pcap"])
- self.tester.scapy_append('wrpcap("%s", flows)' % pcap)
- self.tester.scapy_execute()
-
- validExecutions = []
- for execution in executions:
- if len(self.dut.get_core_list(execution["cores"])) == execution["nprocs"]:
- validExecutions.append(execution)
-
- for execution in validExecutions:
- coreList = self.dut.get_core_list(execution["cores"], socket=self.socket)
- # get core with socket parameter to specified which core dut used when tester and dut in same server
- coreMask = utils.create_mask(
- self.dut.get_core_list("1S/1C/1T", socket=self.socket)
- )
- portMask = utils.create_mask(self.dut_ports)
- # specified mp_server core and add -a option when tester and dut in same server
- self.dut.send_expect(
- self.app_mp_server
- + " -n %d -c %s %s -- -p %s -n %d"
- % (
- self.dut.get_memory_channels(),
- coreMask,
- self.eal_param,
- portMask,
- execution["nprocs"],
- ),
- "Finished Process Init",
- 20,
- )
- self.dut.send_expect("^Z", "\r\n")
- self.dut.send_expect("bg", "# ")
-
- for n in range(execution["nprocs"]):
- time.sleep(5)
- # use next core as mp_client core, different from mp_server
- coreMask = utils.create_mask([str(int(coreList[n]) + 1)])
- self.dut.send_expect(
- self.app_mp_client
- + " -n %d -c %s --proc-type=secondary %s -- -n %d"
- % (self.dut.get_memory_channels(), coreMask, self.eal_param, n),
- "Finished Process Init",
- )
- self.dut.send_expect("^Z", "\r\n")
- self.dut.send_expect("bg", "# ")
-
- tgenInput = []
- tgenInput.append([txPort, rxPort, pcap])
-
- # clear streams before add new streams
- self.tester.pktgen.clear_streams()
- # run packet generator
- streams = self.pktgen_helper.prepare_stream_from_tginput(
- tgenInput, 100, None, self.tester.pktgen
- )
- _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
-
- execution["pps"] = pps
- self.dut.kill_all()
- time.sleep(5)
-
- for n in range(len(executions)):
- self.verify(executions[n]["pps"] is not 0, "No traffic detected")
-
- self.result_table_create(
- [
- "Server threads",
- "Server Cores/Threads",
- "Num-procs",
- "Sockets/Cores/Threads",
- "Num Ports",
- "Frame Size",
- "%-age Line Rate",
- "Packet Rate(mpps)",
- ]
- )
-
- for execution in validExecutions:
- self.result_table_add(
- [
- 1,
- "1S/1C/1T",
- execution["nprocs"],
- execution["cores"],
- 2,
- 64,
- execution["pps"] / float(100000000 / (8 * 84)),
- execution["pps"] / float(1000000),
- ]
- )
-
- self.result_table_print()
-
def set_fields(self):
"""set ip protocol field behavior"""
fields_config = {
diff --git a/tests/TestSuite_perf_multiprocess.py b/tests/TestSuite_perf_multiprocess.py
new file mode 100644
index 00000000..179574a5
--- /dev/null
+++ b/tests/TestSuite_perf_multiprocess.py
@@ -0,0 +1,994 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+#
+
+"""
+DPDK Test suite.
+Multi-process Test.
+"""
+
+import copy
+import os
+import random
+import re
+import time
+import traceback
+from collections import OrderedDict
+
+import framework.utils as utils
+from framework.exception import VerifyFailure
+from framework.packet import Packet
+from framework.pktgen import PacketGeneratorHelper
+from framework.pmd_output import PmdOutput
+from framework.test_case import TestCase, check_supported_nic
+from framework.utils import GREEN, RED
+
+from .rte_flow_common import FdirProcessing as fdirprocess
+from .rte_flow_common import RssProcessing as rssprocess
+
+executions = []
+
+
+class TestMultiprocess(TestCase):
+
+ support_nic = ["ICE_100G-E810C_QSFP", "ICE_25G-E810C_SFP", "ICE_25G-E810_XXV_SFP"]
+
+ def set_up_all(self):
+ """
+ Run at the start of each test suite.
+
+ Multiprocess prerequisites.
+ Requirements:
+ OS is not freeBSD
+ DUT core number >= 4
+ multi_process build pass
+ """
+ # self.verify('bsdapp' not in self.target, "Multiprocess not support freebsd")
+
+ self.verify(len(self.dut.get_all_cores()) >= 4, "Not enough Cores")
+ self.pkt = Packet()
+ self.dut_ports = self.dut.get_ports()
+ self.socket = self.dut.get_numa_id(self.dut_ports[0])
+ extra_option = "-Dexamples='multi_process/client_server_mp/mp_server,multi_process/client_server_mp/mp_client,multi_process/simple_mp,multi_process/symmetric_mp'"
+ self.dut.build_install_dpdk(target=self.target, extra_options=extra_option)
+ self.app_mp_client = self.dut.apps_name["mp_client"]
+ self.app_mp_server = self.dut.apps_name["mp_server"]
+ self.app_simple_mp = self.dut.apps_name["simple_mp"]
+ self.app_symmetric_mp = self.dut.apps_name["symmetric_mp"]
+
+ executions.append({"nprocs": 1, "cores": "1S/1C/1T", "pps": 0})
+ executions.append({"nprocs": 2, "cores": "1S/1C/2T", "pps": 0})
+ executions.append({"nprocs": 2, "cores": "1S/2C/1T", "pps": 0})
+ executions.append({"nprocs": 4, "cores": "1S/2C/2T", "pps": 0})
+ executions.append({"nprocs": 4, "cores": "1S/4C/1T", "pps": 0})
+ executions.append({"nprocs": 8, "cores": "1S/4C/2T", "pps": 0})
+
+ self.eal_param = ""
+ for i in self.dut_ports:
+ self.eal_param += " -a %s" % self.dut.ports_info[i]["pci"]
+
+ self.eal_para = self.dut.create_eal_parameters(cores="1S/2C/1T")
+ # start new session to run secondary
+ self.session_secondary = self.dut.new_session()
+
+ # get dts output path
+ if self.logger.log_path.startswith(os.sep):
+ self.output_path = self.logger.log_path
+ else:
+ cur_path = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
+ self.output_path = os.sep.join([cur_path, self.logger.log_path])
+ # create an instance to set stream field setting
+ self.pktgen_helper = PacketGeneratorHelper()
+ self.dport_info0 = self.dut.ports_info[self.dut_ports[0]]
+ self.pci0 = self.dport_info0["pci"]
+ self.tester_ifaces = [
+ self.tester.get_interface(self.dut.ports_map[port])
+ for port in self.dut_ports
+ ]
+ rxq = 1
+ self.session_list = []
+ self.logfmt = "*" * 20
+
+ def set_up(self):
+ """
+ Run before each test case.
+ """
+ pass
+
+ def launch_multi_testpmd(self, proc_type, queue_num, process_num, **kwargs):
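+        """
+        Launch one testpmd instance per process, each in its own DUT session.
+
+        :param proc_type: EAL --proc-type value; a list means [primary type, secondary type]
+        :param queue_num: number of rx/tx queues per port
+        :param process_num: number of testpmd processes to start
+        :param kwargs: optional 'options' appended to the testpmd parameters and 'timeout'
+        """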
+ self.session_list = [
+ self.dut.new_session("process_{}".format(i)) for i in range(process_num)
+ ]
+ self.pmd_output_list = [
+ PmdOutput(self.dut, self.session_list[i]) for i in range(process_num)
+ ]
+ self.dut.init_reserved_core()
+ proc_type_list = []
+ self.out_list = []
+ if isinstance(proc_type, list):
+ proc_type_list = copy.deepcopy(proc_type)
+ proc_type = proc_type_list[0]
+ for i in range(process_num):
+ cores = self.dut.get_reserved_core("2C", socket=0)
+ if i != 0 and proc_type_list:
+ proc_type = proc_type_list[1]
+ eal_param = "--proc-type={} -a {} --log-level=ice,7".format(
+ proc_type, self.pci0
+ )
+ param = "--rxq={0} --txq={0} --num-procs={1} --proc-id={2}".format(
+ queue_num, process_num, i
+ )
+ if kwargs.get("options") is not None:
+ param = "".join([param, kwargs.get("options")])
+ out = self.pmd_output_list[i].start_testpmd(
+ cores=cores,
+ eal_param=eal_param,
+ param=param,
+ timeout=kwargs.get("timeout", 20),
+ )
+ self.out_list.append(out)
+ self.pmd_output_list[i].execute_cmd("set fwd rxonly")
+ self.pmd_output_list[i].execute_cmd("set verbose 1")
+ self.pmd_output_list[i].execute_cmd("start")
+ self.pmd_output_list[i].execute_cmd("clear port stats all")
+
+ def get_pkt_statistic_process(self, out, **kwargs):
+ """
+ :param out: information received by testpmd process after sending packets and port statistics
+        :return: forward statistic dict, eg: {'rx-packets': 1, 'tx-packets': 0, 'tx-dropped': 1}
+ """
+ p = re.compile(
+ r"Forward\s+Stats\s+for\s+RX\s+Port=\s+{}/Queue=([\s\d+]\d+)\s+.*\n.*RX-packets:\s+(\d+)\s+TX-packets:\s+(\d+)\s+TX-dropped:\s+(\d+)\s".format(
+ kwargs.get("port_id")
+ )
+ )
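+        # the pattern matches testpmd "stop" output such as (a sketch, spacing may vary):
+        #   Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 0/Queue= 1
+        #   RX-packets: 32  TX-packets: 32  TX-dropped: 0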
+ item_name = ["rx-packets", "tx-packets", "tx-dropped"]
+ statistic = p.findall(out)
+ if statistic:
+ rx_pkt_total, tx_pkt_total, tx_drop_total = 0, 0, 0
+ queue_set = set()
+ for item in statistic:
+ queue, rx_pkt, tx_pkt, tx_drop = map(int, item)
+ queue_set.add(queue)
+ rx_pkt_total += rx_pkt
+ tx_pkt_total += tx_pkt
+ tx_drop_total += tx_drop
+ static_dict = {
+ k: v
+ for k, v in zip(item_name, [rx_pkt_total, tx_pkt_total, tx_drop_total])
+ }
+ static_dict["queue"] = queue_set
+ return static_dict
+ else:
+            raise Exception("got wrong output, does not match pattern {}".format(p.pattern))
+
+ def random_packet(self, pkt_num):
+ pkt = Packet()
+ if self.kdriver == "i40e":
+ pkt.generate_random_pkts(
+ pktnum=pkt_num,
+ dstmac="00:11:22:33:44:55",
+ random_type=["IP_RAW", "IPv6_RAW"],
+ )
+ else:
+ pkt.generate_random_pkts(
+ pktnum=pkt_num,
+ dstmac="00:11:22:33:44:55",
+ )
+ pkt.send_pkt(crb=self.tester, tx_port=self.tester_ifaces[0], count=1)
+
+ def specify_packet(self, que_num):
+        # create rules to direct each flow to a specific queue, covering the queues of every process
+ rule_str = "flow create 0 ingress pattern eth / ipv4 src is 192.168.{0}.3 / end actions queue index {0} / end"
+ rules = [rule_str.format(i) for i in range(que_num)]
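+        # e.g. i=0 expands to:
+        #   flow create 0 ingress pattern eth / ipv4 src is 192.168.0.3 / end actions queue index 0 / end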
+ fdirprocess(
+ self, self.pmd_output_list[0], self.tester_ifaces, rxq=que_num
+ ).create_rule(rules)
+        # send one packet per queue; each process should receive (queue_num / proc_num) packets
+ pkt = Packet()
+ pkt_num = que_num
+ self.logger.info("packet num:{}".format(pkt_num))
+ packets = [
+ 'Ether(dst="00:11:22:33:44:55") / IP(src="192.168.{0}.3", dst="192.168.0.21") / Raw("x" * 80)'.format(
+ i
+ )
+ for i in range(pkt_num)
+ ]
+ pkt.update_pkt(packets)
+ pkt.send_pkt(crb=self.tester, tx_port=self.tester_ifaces[0], count=1)
+
+ def _multiprocess_data_pass(self, case):
+ que_num, proc_num = case.get("queue_num"), case.get("proc_num")
+ pkt_num = case.setdefault("pkt_num", que_num)
+ step = int(que_num / proc_num)
+ proc_queue = [set(range(i, i + step)) for i in range(0, que_num, step)]
+ queue_dict = {
+ k: v
+ for k, v in zip(
+                ["process_{}".format(i) for i in range(proc_num)], proc_queue
+ )
+ }
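+        # e.g. que_num=8, proc_num=2 gives step=4 and
+        # queue_dict = {"process_0": {0, 1, 2, 3}, "process_1": {4, 5, 6, 7}}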
+ # start testpmd multi-process
+ self.launch_multi_testpmd(
+ proc_type=case.get("proc_type"), queue_num=que_num, process_num=proc_num
+ )
+ # send random or specify packets
+ packet_func = getattr(self, case.get("packet_type") + "_packet")
+ packet_func(pkt_num)
+ # get output for each process
+ process_static = {}
+ for i in range(len(self.pmd_output_list)):
+ out = self.pmd_output_list[i].execute_cmd("stop")
+ static = self.get_pkt_statistic_process(out, port_id=0)
+ process_static["process_{}".format(i)] = static
+ self.logger.info("process output static:{}".format(process_static))
+        # check that each process receives packets and that each process receives packets only on its own queues
+ for k, v in process_static.items():
+ self.verify(
+ v.get("rx-packets") > 0,
+ "fail:process:{} does not receive packet".format(k),
+ )
+ self.verify(
+ v.get("queue").issubset(queue_dict.get(k)),
+ "fail: {} is not a subset of {}, "
+ "process should use its own queues".format(
+ v.get("queue"), queue_dict.get(k)
+ ),
+ )
+ self.logger.info("pass:each process receives packets and uses its own queue")
+ # check whether the sum of packets received by all processes is equal to the number of packets sent
+ received_pkts = sum(
+ int(v.get("rx-packets", 0)) for v in process_static.values()
+ )
+ self.verify(
+ received_pkts == pkt_num,
+ "the number of packets received is not equal to packets sent,"
+ "send packet:{}, received packet:{}".format(pkt_num, received_pkts),
+ )
+ self.logger.info(
+ "pass:the number of packets received is {}, equal to packets sent".format(
+ received_pkts
+ )
+ )
+
+ def check_rss(self, out, **kwargs):
+ """
+ check whether the packet directed by rss or not according to the specified parameters
+ :param out: information received by testpmd after sending packets and port statistics
+ :param kwargs: some specified parameters, such as: rxq, stats
+ :return: queue value list
+ usage:
+ check_rss(out, rxq=rxq, stats=stats)
+ """
+ self.logger.info("{0} check rss {0}".format(self.logfmt))
+ rxq = kwargs.get("rxq")
+        p = re.compile(r"RSS\shash=(\w+)\s-\sRSS\squeue=(\w+)")
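+        # matches testpmd verbose output such as (a sketch):
+        #   ... - RSS hash=0xdeadbeef - RSS queue=0x0 ...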
+ pkt_info = p.findall(out)
+ self.verify(
+ pkt_info,
+            "no information matching the pattern was found, pattern: {}".format(
+ p.pattern
+ ),
+ )
+ pkt_queue = set([int(i[1], 16) for i in pkt_info])
+ if kwargs.get("stats"):
+ self.verify(
+ all([int(i[0], 16) % rxq == int(i[1], 16) for i in pkt_info]),
+ "some pkt not directed by rss.",
+ )
+ self.logger.info((GREEN("pass: all pkts directed by rss")))
+ else:
+ self.verify(
+ not any([int(i[0], 16) % rxq == int(i[1], 16) for i in pkt_info]),
+ "some pkt directed by rss, expect not directed by rss",
+ )
+ self.logger.info((GREEN("pass: no pkt directed by rss")))
+ return pkt_queue
+
+ def check_queue(self, out, check_param, **kwargs):
+ """
+ verify that queue value matches the expected value
+ :param out: information received by testpmd after sending packets and port statistics
+ :param check_param: check item name and value, eg
+ "check_param": {"port_id": 0, "queue": 2}
+ :param kwargs: some specified parameters, such as: pkt_num, port_id, stats
+ :return:
+ """
+ self.logger.info("{0} check queue {0}".format(self.logfmt))
+ queue = check_param["queue"]
+ if isinstance(check_param["queue"], int):
+ queue = [queue]
+ patt = re.compile(
+ r"port\s+{}/queue(.+?):\s+received\s+(\d+)\s+packets".format(
+ kwargs.get("port_id")
+ )
+ )
+ res = patt.findall(out)
+ if res:
+ pkt_queue = set([int(i[0]) for i in res])
+ if kwargs.get("stats"):
+ self.verify(
+ all(q in queue for q in pkt_queue),
+ "fail: queue id not matched, expect queue {}, got {}".format(
+ queue, pkt_queue
+ ),
+ )
+ self.logger.info((GREEN("pass: queue id {} matched".format(pkt_queue))))
+ else:
+ try:
+ self.verify(
+ not any(q in queue for q in pkt_queue),
+ "fail: queue id should not matched, {} should not in {}".format(
+ pkt_queue, queue
+ ),
+ )
+ self.logger.info(
+ (GREEN("pass: queue id {} not matched".format(pkt_queue)))
+ )
+ except VerifyFailure:
+ self.logger.info(
+                        "queue id {} contains the queue {} specified in the rule, so we need to"
+                        " check whether the packet was directed by rss or not".format(
+ pkt_queue, queue
+ )
+ )
+                    # for mismatched packets the 'stats' parameter is False; change it to True
+ kwargs["stats"] = True
+ self.check_rss(out, **kwargs)
+
+ else:
+            raise Exception("got wrong output, does not match pattern")
+
+ def check_mark_id(self, out, check_param, **kwargs):
+ """
+ verify that the mark ID matches the expected value
+ :param out: information received by testpmd after sending packets
+ :param check_param: check item name and value, eg
+ "check_param": {"port_id": 0, "mark_id": 1}
+ :param kwargs: some specified parameters,eg: stats
+ :return: None
+ usage:
+ check_mark_id(out, check_param, stats=stats)
+ """
+ self.logger.info("{0} check mark id {0}".format(self.logfmt))
+        fdir_scanner = re.compile(r"FDIR matched ID=(0x\w+)")
+ all_mark = fdir_scanner.findall(out)
+ stats = kwargs.get("stats")
+ if stats:
+ mark_list = set(int(i, 16) for i in all_mark)
+ self.verify(
+ all([i == check_param["mark_id"] for i in mark_list]) and mark_list,
+ "failed: some packet mark id of {} not match expect {}".format(
+ mark_list, check_param["mark_id"]
+ ),
+ )
+ self.logger.info((GREEN("pass: all packets mark id are matched ")))
+ else:
+            # for mismatched packets, verify there is no mark id in the output of received packets
+ self.verify(
+ not all_mark, "mark id {} in output, expect no mark id".format(all_mark)
+ )
+ self.logger.info((GREEN("pass: no mark id in output")))
+
+ def check_drop(self, out, **kwargs):
+ """
+ check the drop number of packets according to the specified parameters
+ :param out: information received by testpmd after sending packets and port statistics
+ :param kwargs: some specified parameters, such as: pkt_num, port_id, stats
+ :return: None
+ usage:
+            check_drop(out, pkt_num=pkt_num, port_id=portid, stats=stats)
+ """
+ self.logger.info("{0} check drop {0}".format(self.logfmt))
+ pkt_num = kwargs.get("pkt_num")
+ stats = kwargs.get("stats")
+ res = self.get_pkt_statistic(out, **kwargs)
+ self.verify(
+ pkt_num == res["rx-total"],
+            "failed: got wrong packet count {}, expected {}".format(
+ res["rx-total"], pkt_num
+ ),
+ )
+ drop_packet_num = res["rx-dropped"]
+ if stats:
+ self.verify(
+ drop_packet_num == pkt_num,
+                "failed: {} packets dropped, expected {} dropped".format(
+ drop_packet_num, pkt_num
+ ),
+ )
+ self.logger.info(
+ (
+ GREEN(
+ "pass: drop packet number {} is matched".format(drop_packet_num)
+ )
+ )
+ )
+ else:
+ self.verify(
+ drop_packet_num == 0 and res["rx-packets"] == pkt_num,
+                "failed: {} packets dropped, expected 0 packets dropped".format(
+ drop_packet_num
+ ),
+ )
+ self.logger.info(
+ (
+ GREEN(
+ "pass: drop packet number {} is matched".format(drop_packet_num)
+ )
+ )
+ )
+
+ @staticmethod
+ def get_pkt_statistic(out, **kwargs):
+ """
+ :param out: information received by testpmd after sending packets and port statistics
+ :return: rx statistic dict, eg: {'rx-packets':1, 'rx-dropped':0, 'rx-total':1}
+ """
+ p = re.compile(
+ r"Forward\sstatistics\s+for\s+port\s+{}\s+.*\n.*RX-packets:\s(\d+)\s+RX-dropped:\s(\d+)\s+RX-total:\s(\d+)\s".format(
+ kwargs.get("port_id")
+ )
+ )
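+        # matches the per-port summary printed by "stop", e.g. (a sketch):
+        #   ---------------------- Forward statistics for port 0 ----------------------
+        #   RX-packets: 4              RX-dropped: 0             RX-total: 4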
+ item_name = ["rx-packets", "rx-dropped", "rx-total"]
+ statistic = p.findall(out)
+ if statistic:
+ static_dict = {
+ k: v for k, v in zip(item_name, list(map(int, list(statistic[0]))))
+ }
+ return static_dict
+ else:
+ raise Exception(
+ "got wrong output, not match pattern {}".format(p.pattern).replace(
+ "\\\\", "\\"
+ )
+ )
+
+ def send_pkt_get_output(
+ self, instance_obj, pkts, port_id=0, count=1, interval=0, get_stats=False
+ ):
+ instance_obj.pmd_output.execute_cmd("clear port stats all")
+ tx_port = self.tester_ifaces[port_id]
+ self.logger.info("----------send packet-------------")
+ self.logger.info("{}".format(pkts))
+ if not isinstance(pkts, list):
+ pkts = [pkts]
+ self.pkt.update_pkt(pkts)
+ self.pkt.send_pkt(
+ crb=self.tester,
+ tx_port=tx_port,
+ count=count,
+ interval=interval,
+ )
+ out1 = instance_obj.pmd_output.get_output(timeout=1)
+ if get_stats:
+ out2 = instance_obj.pmd_output.execute_cmd("show port stats all")
+ instance_obj.pmd_output.execute_cmd("stop")
+ else:
+ out2 = instance_obj.pmd_output.execute_cmd("stop")
+ instance_obj.pmd_output.execute_cmd("start")
+ return "".join([out1, out2])
+
+ def check_pkt_num(self, out, **kwargs):
+ """
+ check number of received packets matches the expected value
+ :param out: information received by testpmd after sending packets and port statistics
+ :param kwargs: some specified parameters, such as: pkt_num, port_id
+ :return: rx statistic dict
+ """
+ self.logger.info(
+ "{0} check pkt num for port:{1} {0}".format(
+ self.logfmt, kwargs.get("port_id")
+ )
+ )
+ pkt_num = kwargs.get("pkt_num")
+ res = self.get_pkt_statistic(out, **kwargs)
+ res_num = res["rx-total"]
+ self.verify(
+ res_num == pkt_num,
+            "fail: got wrong number of packets, expected packet number {}, got {}".format(
+ pkt_num, res_num
+ ),
+ )
+ self.logger.info(
+ (GREEN("pass: pkt num is {} same as expected".format(pkt_num)))
+ )
+ return res
+
+ def check_with_param(self, out, pkt_num, check_param, stats=True):
+ """
+ according to the key and value of the check parameter,
+ perform the corresponding verification in the out information
+ :param out: information received by testpmd after sending packets and port statistics
+ :param pkt_num: number of packets sent
+ :param check_param: check item name and value, eg:
+ "check_param": {"port_id": 0, "mark_id": 1, "queue": 1}
+ "check_param": {"port_id": 0, "drop": 1}
+ :param stats: effective status of rule, True or False, default is True
+ :return:
+ usage:
+ check_with_param(out, pkt_num, check_param, stats)
+ check_with_param(out, pkt_num, check_param=check_param)
+ """
+ rxq = check_param.get("rxq")
+ port_id = (
+ check_param["port_id"] if check_param.get("port_id") is not None else 0
+ )
+ match_flag = True
+ """
+        check_dict lists the supported check items; the key is the item name and the value is the
+        check priority. The smaller the value, the higher the priority; the default priority is 999.
+        To add a new check item, add it to the dict and implement a corresponding method named
+        'check_<itemname>', eg: check_queue
+ """
+ self.matched_queue = []
+ default_pri = 999
+ check_dict = {
+ "queue": default_pri,
+ "drop": default_pri,
+ "mark_id": 1,
+ "rss": default_pri,
+ }
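+        # e.g. check_param {"queue": 2, "mark_id": 1} is reordered below so that
+        # "mark_id" (priority 1) is verified before "queue" (priority 999)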
+ params = {"port_id": port_id, "rxq": rxq, "pkt_num": pkt_num, "stats": stats}
+ # sort check_param order by priority, from high to low, set priority as 999 if key not in check_dict
+ check_param = OrderedDict(
+ sorted(
+ check_param.items(),
+ key=lambda item: check_dict.get(item[0], default_pri),
+ )
+ )
+ if not check_param.get("drop"):
+ self.check_pkt_num(out, **params)
+ for k in check_param:
+ parameter = copy.deepcopy(params)
+ if k not in check_dict:
+ continue
+ func_name = "check_{}".format(k)
+ try:
+ func = getattr(self, func_name)
+ except AttributeError:
+                emsg = "{}, this func is not implemented, please check!".format(
+ traceback.format_exc()
+ )
+ raise Exception(emsg)
+ else:
+                # for mismatched packets, if the check item is 'rss', also verify the packets are distributed by rss
+ if k == "rss" and not stats:
+ parameter["stats"] = True
+ match_flag = False
+ res = func(out=out, check_param=check_param, **parameter)
+ if k == "rss" and match_flag:
+ self.matched_queue.append(res)
+
+ def destroy_rule(self, instance_obj, port_id=0, rule_id=None):
+ rule_id = 0 if rule_id is None else rule_id
+ if not isinstance(rule_id, list):
+ rule_id = [rule_id]
+ for i in rule_id:
+ out = instance_obj.pmd_output.execute_cmd(
+ "flow destroy {} rule {}".format(port_id, i)
+ )
+ p = re.compile(r"Flow rule #(\d+) destroyed")
+ m = p.search(out)
+ self.verify(m, "flow rule {} delete failed".format(rule_id))
+
+ def multiprocess_flow_data(self, case, **pmd_param):
+ que_num, proc_num = pmd_param.get("queue_num"), pmd_param.get("proc_num")
+ # start testpmd multi-process
+ self.launch_multi_testpmd(
+ proc_type=pmd_param.get("proc_type"),
+ queue_num=que_num,
+ process_num=proc_num,
+ )
+ self.pmd_output_list[0].execute_cmd("flow flush 0")
+ check_param = case["check_param"]
+ check_param["rxq"] = pmd_param.get("queue_num")
+ if check_param.get("rss"):
+ [pmd.execute_cmd("port config all rss all") for pmd in self.pmd_output_list]
+ fdir_pro = fdirprocess(
+ self,
+ self.pmd_output_list[0],
+ self.tester_ifaces,
+ rxq=pmd_param.get("queue_num"),
+ )
+ fdir_pro.create_rule(case.get("rule"))
+ # send match and mismatch packet
+ packets = [case.get("packet")["match"], case.get("packet")["mismatch"]]
+ for i in range(2):
+ out1 = self.send_pkt_get_output(fdir_pro, packets[i])
+ patt = re.compile(
+ r"port\s+{}/queue(.+?):\s+received\s+(\d+)\s+packets".format(
+ check_param.get("port_id")
+ )
+ )
+ if patt.findall(out1) and check_param.get("rss"):
+ self.logger.info(
+ "check whether the packets received by the primary process are distributed by RSS"
+ )
+ self.check_rss(out1, stats=True, **check_param)
+ for proc_pmd in self.pmd_output_list[1:]:
+ out2 = proc_pmd.get_output(timeout=1)
+ out3 = proc_pmd.execute_cmd("stop")
+ out1 = "".join([out1, out2, out3])
+ proc_pmd.execute_cmd("start")
+ if patt.findall(out2) and check_param.get("rss"):
+ self.logger.info(
+ "check whether the packets received by the secondary process are distributed by RSS"
+ )
+ self.check_rss(out2, stats=True, **check_param)
+ pkt_num = len(packets[i])
+ self.check_with_param(
+ out1,
+ pkt_num=pkt_num,
+ check_param=check_param,
+ stats=True if i == 0 else False,
+ )
+
+ def _handle_test(self, tests, instance_obj, port_id=0):
+ instance_obj.pmd_output.wait_link_status_up(port_id)
+ for test in tests:
+ if "send_packet" in test:
+ out = self.send_pkt_get_output(
+ instance_obj, test["send_packet"], port_id
+ )
+ for proc_pmd in self.pmd_output_list[1:]:
+ out1 = proc_pmd.get_output(timeout=1)
+ out = "".join([out, out1])
+ if "action" in test:
+ instance_obj.handle_actions(out, test["action"])
+
+ def multiprocess_rss_data(self, case, **pmd_param):
+ que_num, proc_num = pmd_param.get("queue_num"), pmd_param.get("proc_num")
+ # start testpmd multi-process
+ self.launch_multi_testpmd(
+ proc_type=pmd_param.get("proc_type"),
+ queue_num=que_num,
+ process_num=proc_num,
+ options=pmd_param.get("options", None),
+ )
+ self.pmd_output_list[0].execute_cmd("flow flush 0")
+ rss_pro = rssprocess(
+ self,
+ self.pmd_output_list[0],
+ self.tester_ifaces,
+ rxq=pmd_param.get("queue_num"),
+ )
+ rss_pro.error_msgs = []
+ # handle tests
+ tests = case["test"]
+ port_id = case["port_id"]
+ self.logger.info("------------handle test--------------")
+ # validate rule
+ rule = case.get("rule", None)
+ if rule:
+ rss_pro.validate_rule(rule=rule)
+ rule_ids = rss_pro.create_rule(rule=rule)
+ rss_pro.check_rule(rule_list=rule_ids)
+ self._handle_test(tests, rss_pro, port_id)
+ # handle post-test
+ if "post-test" in case:
+ self.logger.info("------------handle post-test--------------")
+ self.destroy_rule(rss_pro, port_id=port_id, rule_id=rule_ids)
+ rss_pro.check_rule(port_id=port_id, stats=False)
+ self._handle_test(case["post-test"], rss_pro, port_id)
+ if rss_pro.error_msgs:
+ self.verify(
+ False,
+ " ".join([errs.replace("'", " ") for errs in rss_pro.error_msgs[:500]]),
+ )
+
+ def rte_flow(self, case_list, func_name, **kwargs):
+ """
+ main flow of case:
+ 1. iterate the case list and do the below steps:
+ a. get the subcase name and init dict to save result
+ b. call method by func name to execute case step
+ c. record case result and err msg if case failed
+ d. clear flow rule
+ 2. calculate the case passing rate according to the result dict
+ 3. record case result and pass rate in the case log file
+ 4. verify whether the case pass rate is equal to 100, if not, mark the case as failed and raise the err msg
+ :param case_list: case list, each item is a subcase of case
+        :param func_name: handle-case method name, eg:
+ 'flow_rule_operate': a method of 'FlowRuleProcessing' class,
+            used to handle flow rule related suites, such as fdir and switch_filter
+ 'handle_rss_distribute_cases': a method of 'RssProcessing' class,
+ used to handle rss related suites
+ :return:
+ usage:
+ for flow rule related:
+ rte_flow(caselist, flow_rule_operate)
+ for rss related:
+ rte_flow(caselist, handle_rss_distribute_cases)
+ """
+ if not isinstance(case_list, list):
+ case_list = [case_list]
+ test_results = dict()
+ for case in case_list:
+ case_name = case.get("sub_casename")
+ test_results[case_name] = {}
+ try:
+ self.logger.info("{0} case_name:{1} {0}".format("*" * 20, case_name))
+ func_name(case, **kwargs)
+ except Exception:
+ test_results[case_name]["result"] = "failed"
+ test_results[case_name]["err"] = re.sub(
+ r"['\r\n]", "", str(traceback.format_exc(limit=1))
+ ).replace("\\\\", "\\")
+ self.logger.info(
+ (
+ RED(
+ "case failed:{}, err:{}".format(
+ case_name, traceback.format_exc()
+ )
+ )
+ )
+ )
+ else:
+ test_results[case_name]["result"] = "passed"
+ self.logger.info((GREEN("case passed: {}".format(case_name))))
+ finally:
+ self.session_list[0].send_command("flow flush 0", timeout=1)
+ for sess in self.session_list:
+ self.dut.close_session(sess)
+ pass_rate = (
+ round(
+ sum(1 for k in test_results if "passed" in test_results[k]["result"])
+ / len(test_results),
+ 4,
+ )
+ * 100
+ )
+ self.logger.info(
+ [
+ "{}:{}".format(sub_name, test_results[sub_name]["result"])
+ for sub_name in test_results
+ ]
+ )
+ self.logger.info("pass rate is: {}".format(pass_rate))
+ msg = [
+ "subcase_name:{}:{},err:{}".format(
+ name, test_results[name].get("result"), test_results[name].get("err")
+ )
+ for name in test_results.keys()
+ if "failed" in test_results[name]["result"]
+ ]
+ self.verify(
+ int(pass_rate) == 100,
+ "some subcases failed, detail as below:{}".format(msg),
+ )
+
+ def test_perf_multiprocess_performance(self):
+ """
+ Benchmark Multiprocess performance.
+        """
+ packet_count = 16
+ self.dut.send_expect("fg", "# ")
+ txPort = self.tester.get_local_port(self.dut_ports[0])
+ rxPort = self.tester.get_local_port(self.dut_ports[1])
+ mac = self.tester.get_mac(txPort)
+ dmac = self.dut.get_mac_address(self.dut_ports[0])
+ tgenInput = []
+
+        # create packets with varying src_ip/dst_ip pairs
+ for i in range(packet_count):
+ package = (
+ r'flows = [Ether(src="%s", dst="%s")/IP(src="192.168.1.%d", dst="192.168.1.%d")/("X"*26)]'
+ % (mac, dmac, i + 1, i + 2)
+ )
+ self.tester.scapy_append(package)
+ pcap = os.sep.join([self.output_path, "test_%d.pcap" % i])
+ self.tester.scapy_append('wrpcap("%s", flows)' % pcap)
+ tgenInput.append([txPort, rxPort, pcap])
+ self.tester.scapy_execute()
+
+ # run multiple symmetric_mp process
+ validExecutions = []
+ for execution in executions:
+ if len(self.dut.get_core_list(execution["cores"])) == execution["nprocs"]:
+ validExecutions.append(execution)
+
+ portMask = utils.create_mask(self.dut_ports)
+
+ for n in range(len(validExecutions)):
+ execution = validExecutions[n]
+            # get coreList from execution['cores']
+ coreList = self.dut.get_core_list(execution["cores"], socket=self.socket)
+            # run a set of symmetric_mp instances, as described in the test plan
+ dutSessionList = []
+ for index in range(len(coreList)):
+ dut_new_session = self.dut.new_session()
+ dutSessionList.append(dut_new_session)
+                # add the -a option when tester and dut are in the same server
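+                # the resulting command is e.g. (core mask and PCI address hypothetical):
+                #   dpdk-symmetric_mp -c 0x2 --proc-type=auto -a 0000:18:00.0 \
+                #     -- -p 0x3 --num-procs=2 --proc-id=0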
+ dut_new_session.send_expect(
+ self.app_symmetric_mp
+ + " -c %s --proc-type=auto %s -- -p %s --num-procs=%d --proc-id=%d"
+ % (
+ utils.create_mask([coreList[index]]),
+ self.eal_param,
+ portMask,
+ execution["nprocs"],
+ index,
+ ),
+ "Finished Process Init",
+ )
+
+            # clear streams before adding new streams
+ self.tester.pktgen.clear_streams()
+ # run packet generator
+ streams = self.pktgen_helper.prepare_stream_from_tginput(
+ tgenInput, 100, None, self.tester.pktgen
+ )
+ _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
+
+ execution["pps"] = pps
+
+ # close all symmetric_mp process
+ self.dut.send_expect("killall symmetric_mp", "# ")
+ # close all dut sessions
+ for dut_session in dutSessionList:
+ self.dut.close_session(dut_session)
+
+ # get rate and mpps data
+        for n in range(len(validExecutions)):
+            self.verify(validExecutions[n]["pps"] != 0, "No traffic detected")
+ self.result_table_create(
+ [
+ "Num-procs",
+ "Sockets/Cores/Threads",
+ "Num Ports",
+ "Frame Size",
+ "%-age Line Rate",
+ "Packet Rate(mpps)",
+ ]
+ )
+
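+        # %-age Line Rate assumes a 64-byte frame occupies 84 bytes on the wire
+        # (64B frame + 8B preamble + 12B inter-frame gap), i.e. line-rate pps = link_bps / (8 * 84)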
+ for execution in validExecutions:
+ self.result_table_add(
+ [
+ execution["nprocs"],
+ execution["cores"],
+ 2,
+ 64,
+ execution["pps"] / float(100000000 / (8 * 84)),
+ execution["pps"] / float(1000000),
+ ]
+ )
+
+ self.result_table_print()
+
+ def test_perf_multiprocess_client_serverperformance(self):
+ """
+ Benchmark Multiprocess client-server performance.
+ """
+ self.dut.kill_all()
+ self.dut.send_expect("fg", "# ")
+ txPort = self.tester.get_local_port(self.dut_ports[0])
+ rxPort = self.tester.get_local_port(self.dut_ports[1])
+ mac = self.tester.get_mac(txPort)
+
+ self.tester.scapy_append(
+ 'dmac="%s"' % self.dut.get_mac_address(self.dut_ports[0])
+ )
+ self.tester.scapy_append('smac="%s"' % mac)
+ self.tester.scapy_append(
+ 'flows = [Ether(src=smac, dst=dmac)/IP(src="192.168.1.1", dst="192.168.1.1")/("X"*26)]'
+ )
+
+ pcap = os.sep.join([self.output_path, "test.pcap"])
+ self.tester.scapy_append('wrpcap("%s", flows)' % pcap)
+ self.tester.scapy_execute()
+
+ validExecutions = []
+ for execution in executions:
+ if len(self.dut.get_core_list(execution["cores"])) == execution["nprocs"]:
+ validExecutions.append(execution)
+
+ for execution in validExecutions:
+ coreList = self.dut.get_core_list(execution["cores"], socket=self.socket)
+            # get cores with the socket parameter to specify which cores the dut uses when tester and dut are in the same server
+ coreMask = utils.create_mask(
+ self.dut.get_core_list("1S/1C/1T", socket=self.socket)
+ )
+ portMask = utils.create_mask(self.dut_ports)
+            # specify the mp_server core and add the -a option when tester and dut are in the same server
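+            # the resulting command is e.g. (core mask and PCI address hypothetical):
+            #   dpdk-mp_server -n 4 -c 0x2 -a 0000:18:00.0 -- -p 0x3 -n 2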
+ self.dut.send_expect(
+ self.app_mp_server
+ + " -n %d -c %s %s -- -p %s -n %d"
+ % (
+ self.dut.get_memory_channels(),
+ coreMask,
+ self.eal_param,
+ portMask,
+ execution["nprocs"],
+ ),
+ "Finished Process Init",
+ 20,
+ )
+ self.dut.send_expect("^Z", "\r\n")
+ self.dut.send_expect("bg", "# ")
+
+ for n in range(execution["nprocs"]):
+ time.sleep(5)
+ # use next core as mp_client core, different from mp_server
+ coreMask = utils.create_mask([str(int(coreList[n]) + 1)])
+ self.dut.send_expect(
+ self.app_mp_client
+ + " -n %d -c %s --proc-type=secondary %s -- -n %d"
+ % (self.dut.get_memory_channels(), coreMask, self.eal_param, n),
+ "Finished Process Init",
+ )
+ self.dut.send_expect("^Z", "\r\n")
+ self.dut.send_expect("bg", "# ")
+
+ tgenInput = []
+ tgenInput.append([txPort, rxPort, pcap])
+
+            # clear streams before adding new streams
+ self.tester.pktgen.clear_streams()
+ # run packet generator
+ streams = self.pktgen_helper.prepare_stream_from_tginput(
+ tgenInput, 100, None, self.tester.pktgen
+ )
+ _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
+
+ execution["pps"] = pps
+ self.dut.kill_all()
+ time.sleep(5)
+
+        for n in range(len(validExecutions)):
+            self.verify(validExecutions[n]["pps"] != 0, "No traffic detected")
+
+ self.result_table_create(
+ [
+ "Server threads",
+ "Server Cores/Threads",
+ "Num-procs",
+ "Sockets/Cores/Threads",
+ "Num Ports",
+ "Frame Size",
+ "%-age Line Rate",
+ "Packet Rate(mpps)",
+ ]
+ )
+
+ for execution in validExecutions:
+ self.result_table_add(
+ [
+ 1,
+ "1S/1C/1T",
+ execution["nprocs"],
+ execution["cores"],
+ 2,
+ 64,
+ execution["pps"] / float(100000000 / (8 * 84)),
+ execution["pps"] / float(1000000),
+ ]
+ )
+
+ self.result_table_print()
+
+ def set_fields(self):
+ """set ip protocol field behavior"""
+ fields_config = {
+ "ip": {
+ "src": {"range": 64, "action": "inc"},
+ "dst": {"range": 64, "action": "inc"},
+ },
+ }
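+        # with this config the packet generator increments both src and dst IPv4
+        # addresses across a range of 64 values for the generated streams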
+
+ return fields_config
+
+ def tear_down(self):
+ """
+ Run after each test case.
+ """
+ if self.session_list:
+ for sess in self.session_list:
+ self.dut.close_session(sess)
+ self.dut.kill_all()
+
+ def tear_down_all(self):
+ """
+ Run after each test suite.
+ """
+ self.dut.kill_all()
--
2.17.1
^ permalink raw reply [flat|nested] 8+ messages in thread
* [dts][PATCH V1 7/7] tests/checksum_offload: Separated performance cases
2023-01-09 18:46 [dts][PATCH V1 1/7] tests/efd: Separated performance cases Hongbo Li
` (4 preceding siblings ...)
2023-01-09 18:46 ` [dts][PATCH V1 6/7] tests/multiprocess: " Hongbo Li
@ 2023-01-09 18:46 ` Hongbo Li
5 siblings, 0 replies; 8+ messages in thread
From: Hongbo Li @ 2023-01-09 18:46 UTC (permalink / raw)
To: dts; +Cc: Hongbo Li
Separated performance test cases
Signed-off-by: Hongbo Li <hongbox.li@intel.com>
---
test_plans/index.rst | 7 +
.../perf_checksum_offload_test_plan.rst | 67 ++
tests/TestSuite_checksum_offload.py | 54 --
tests/TestSuite_perf_checksum_offload.py | 624 ++++++++++++++++++
4 files changed, 698 insertions(+), 54 deletions(-)
create mode 100644 test_plans/perf_checksum_offload_test_plan.rst
create mode 100644 tests/TestSuite_perf_checksum_offload.py
diff --git a/test_plans/index.rst b/test_plans/index.rst
index f56f99a8..01ac637e 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -116,6 +116,13 @@ The following are the test plans for the DPDK DTS automated test system.
ntb_test_plan
nvgre_test_plan
perf_virtio_user_loopback_test_plan
+ perf_efd_test_plan
+ perf_ipfrag_test_plan
+ perf_l2fwd_test_plan
+ perf_multiprocess_test_plan
+ perf_tso_test_plan
+ perf_vxlan_test_plan
+ perf_checksum_offload_test_plan
pf_smoke_test_plan
pipeline_test_plan
pvp_virtio_user_multi_queues_port_restart_test_plan
diff --git a/test_plans/perf_checksum_offload_test_plan.rst b/test_plans/perf_checksum_offload_test_plan.rst
new file mode 100644
index 00000000..d4c0a865
--- /dev/null
+++ b/test_plans/perf_checksum_offload_test_plan.rst
@@ -0,0 +1,67 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2010-2017 Intel Corporation
+ Copyright(c) 2018-2019 The University of New Hampshire
+
+============================
+RX/TX Checksum Offload Tests
+============================
+
+The support of RX/TX L3/L4 Checksum offload features by Poll Mode Drivers consists of:
+
+On the RX side:
+
+- Verify IPv4 checksum by hardware for received packets.
+- Verify UDP/TCP/SCTP checksum by hardware for received packets.
+
+On the TX side:
+
+- IPv4 checksum insertion by hardware in transmitted packets.
+- IPv4/UDP checksum insertion by hardware in transmitted packets.
+- IPv4/TCP checksum insertion by hardware in transmitted packets.
+- IPv4/SCTP checksum insertion by hardware in transmitted packets (sctp
+ length in 4 bytes).
+- IPv6/UDP checksum insertion by hardware in transmitted packets.
+- IPv6/TCP checksum insertion by hardware in transmitted packets.
+- IPv6/SCTP checksum insertion by hardware in transmitted packets (sctp
+ length in 4 bytes).
+
+On the RX/TX side, the insertion of an L3/L4 checksum by hardware can be enabled
+with the following commands of the ``testpmd`` application, running in a
+dedicated checksum forwarding mode::
+
+ set fwd csum
+ csum set ip|tcp|udp|sctp|outer-ip|outer-udp hw|sw port_id
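+
+For example, to offload IPv4 and UDP checksums to hardware on port 0 (a sketch;
+the port id depends on the setup)::
+
+  testpmd> set fwd csum
+  testpmd> csum set ip hw 0
+  testpmd> csum set udp hw 0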
+
+The transmission of packets is done with the ``start`` command of the ``testpmd``
+application, which will receive packets and then transmit them out on all
+configured ports.
+
+
+Prerequisites
+=============
+
+If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS.
+When using vfio, use the following commands to load the vfio driver and bind it
+to the device under test::
+
+ modprobe vfio
+ modprobe vfio-pci
+ usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
+
+Assuming that ports ``0`` and ``2`` are connected to a traffic generator,
+launch the ``testpmd`` with the following arguments::
+
+  ./build/app/dpdk-testpmd -cffffff -n 1 -- -i --burst=1 --txpt=32 \
+  --txht=8 --txwt=0 --txfreet=0 --rxfreet=64 --mbcache=250 --portmask=0x5 \
+  --enable-rx-cksum
+
+Set the verbose level to 1 to display information for each received packet::
+
+ testpmd> set verbose 1
+
+DPDK commit 236bc417e2da ("app/testpmd: fix MAC header in checksum forward engine")
+changed the checksum functions and added a switch to control whether MAC addresses
+are swapped. Our test scripts assume MAC addresses are not swapped, so mac-swap
+must be disabled::
+
+  testpmd> csum mac-swap off 0
+
diff --git a/tests/TestSuite_checksum_offload.py b/tests/TestSuite_checksum_offload.py
index 0214231c..9ba06a19 100644
--- a/tests/TestSuite_checksum_offload.py
+++ b/tests/TestSuite_checksum_offload.py
@@ -756,60 +756,6 @@ class TestChecksumOffload(TestCase):
self.result_table_add(result)
- def test_perf_checksum_throughtput(self):
- """
- Test checksum offload performance.
- """
- # Verify that enough ports are available
- self.verify(len(self.dut_ports) >= 2, "Insufficient ports for testing")
- self.dut.send_expect("quit", "#")
-
- # sizes = [64, 128, 256, 512, 1024]
- sizes = [64, 128]
- pkts = {
- "IP/UDP": 'Ether(dst="%s", src="52:00:00:00:00:00")/IP()/UDP()/("X"*(%d-46))',
- "IP/TCP": 'Ether(dst="%s", src="52:00:00:00:00:00")/IP()/TCP()/("X"*(%d-58))',
- "IP/SCTP": 'Ether(dst="%s", src="52:00:00:00:00:00")/IP()/SCTP()/("X"*(%d-50+2))',
- "IPv6/UDP": 'Ether(dst="%s", src="52:00:00:00:00:00")/IPv6()/UDP()/("X"* (lambda x: x - 66 if x > 66 else 0)(%d))',
- "IPv6/TCP": 'Ether(dst="%s", src="52:00:00:00:00:00")/IPv6()/TCP()/("X"* (lambda x: x - 78 if x > 78 else 0)(%d))',
- }
-
- if self.kdriver in DRIVER_TEST_LACK_CAPA["sctp_tx_offload"]:
- del pkts["IP/SCTP"]
-
- lcore = "1S/2C/1T"
- portMask = utils.create_mask([self.dut_ports[0], self.dut_ports[1]])
- for mode in ["sw", "hw"]:
- self.logger.info("%s performance" % mode)
- tblheader = ["Ports", "S/C/T", "Packet Type", "Mode"]
- for size in sizes:
- tblheader.append("%sB mpps" % str(size))
- tblheader.append("%sB %% " % str(size))
- self.result_table_create(tblheader)
- self.pmdout.start_testpmd(
- lcore,
- "--portmask=%s" % self.portMask
- + " --enable-rx-cksum "
- + "--port-topology=loop",
- socket=self.ports_socket,
- )
-
- self.dut.send_expect("set fwd csum", "testpmd> ")
- if mode == "hw":
- self.checksum_enablehw(self.dut_ports[0])
- self.checksum_enablehw(self.dut_ports[1])
- else:
- self.checksum_enablesw(self.dut_ports[0])
- self.checksum_enablesw(self.dut_ports[1])
-
- self.dut.send_expect("start", "testpmd> ", 3)
- for ptype in list(pkts.keys()):
- self.benchmark(lcore, ptype, mode, pkts[ptype], sizes, self.nic)
-
- self.dut.send_expect("stop", "testpmd> ")
- self.dut.send_expect("quit", "#", 10)
- self.result_table_print()
-
def test_hardware_checksum_check_ip_rx(self):
self.tester.send_expect("scapy", ">>>")
self.checksum_enablehw(self.dut_ports[0])
diff --git a/tests/TestSuite_perf_checksum_offload.py b/tests/TestSuite_perf_checksum_offload.py
new file mode 100644
index 00000000..a2288613
--- /dev/null
+++ b/tests/TestSuite_perf_checksum_offload.py
@@ -0,0 +1,624 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+# Copyright(c) 2018-2019 The University of New Hampshire
+#
+
+"""
+DPDK Test suite.
+
+Test support of RX/TX Checksum Offload Features by Poll Mode Drivers.
+
+"""
+
+import os
+import re
+import subprocess
+import time
+from typing import List, Pattern, Tuple, Union
+
+from scapy.layers.inet import IP, TCP, UDP
+from scapy.layers.inet6 import IPv6
+from scapy.layers.l2 import GRE, Ether
+from scapy.layers.sctp import SCTP
+from scapy.layers.vxlan import VXLAN
+from scapy.packet import Raw
+from scapy.utils import rdpcap, wrpcap
+
+import framework.packet as packet
+import framework.utils as utils
+from framework.exception import VerifyFailure
+from framework.pktgen import PacketGeneratorHelper
+from framework.pmd_output import PmdOutput
+from framework.rst import RstReport
+from framework.settings import FOLDERS
+from framework.test_capabilities import DRIVER_TEST_LACK_CAPA
+from framework.test_case import TestCase
+
+l3_proto_classes = [IP, IPv6]
+
+l4_proto_classes = [
+ UDP,
+ TCP,
+]
+
+tunnelling_proto_classes = [
+ VXLAN,
+ GRE,
+]
+
+l3_protos = ["IP", "IPv6"]
+
+l4_protos = [
+ "UDP",
+ "TCP",
+ "SCTP",
+]
+
+
+class TestChecksumOffload(TestCase):
+ def set_up_all(self):
+ """
+ Run at the start of each test suite.
+ Checksum offload prerequisites.
+ """
+ # Based on h/w type, choose how many ports to use
+ self.dut_ports = self.dut.get_ports(self.nic)
+ # Verify that enough ports are available
+ self.verify(len(self.dut_ports) >= 1, "Insufficient ports for testing")
+ self.pmdout: PmdOutput = PmdOutput(self.dut)
+ self.portMask = utils.create_mask([self.dut_ports[0]])
+ self.ports_socket = self.dut.get_numa_id(self.dut_ports[0])
+ # get dts output path
+ if self.logger.log_path.startswith(os.sep):
+ self.output_path = self.logger.log_path
+ else:
+ cur_path = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
+ self.output_path = os.sep.join([cur_path, self.logger.log_path])
+        # packet counter used in debug logs
+ self.count = 0
+
+ def set_up(self):
+ """
+ Run before each test case.
+ """
+ self.pmdout.start_testpmd(
+ "Default",
+ "--portmask=%s " % (self.portMask)
+ + " --enable-rx-cksum "
+ + "--port-topology=loop",
+ socket=self.ports_socket,
+ )
+ self.dut.send_expect("set verbose 1", "testpmd>")
+ self.dut.send_expect("set fwd csum", "testpmd>")
+ self.dut.send_expect("csum mac-swap off 0", "testpmd>")
+
+ def checksum_enablehw(self, port):
+ self.dut.send_expect("port stop all", "testpmd>")
+ self.dut.send_expect("rx_vxlan_port add 4789 0 ", "testpmd>")
+ self.dut.send_expect("csum set ip hw %d" % port, "testpmd>")
+ self.dut.send_expect("csum set udp hw %d" % port, "testpmd>")
+ self.dut.send_expect("csum set tcp hw %d" % port, "testpmd>")
+ self.dut.send_expect("csum set sctp hw %d" % port, "testpmd>")
+ self.dut.send_expect("csum set outer-ip hw %d" % port, "testpmd>")
+ self.dut.send_expect("csum set outer-udp hw %d" % port, "testpmd>")
+ self.dut.send_expect("csum parse-tunnel on %d" % port, "testpmd>")
+ self.dut.send_expect("port start all", "testpmd>")
+
+ def checksum_enablesw(self, port):
+ self.dut.send_expect("port stop all", "testpmd>")
+ self.dut.send_expect("csum set ip sw %d" % port, "testpmd>")
+ self.dut.send_expect("csum set udp sw %d" % port, "testpmd>")
+ self.dut.send_expect("csum set tcp sw %d" % port, "testpmd>")
+ self.dut.send_expect("csum set sctp sw %d" % port, "testpmd>")
+ self.dut.send_expect("port start all", "testpmd>")
+
+ def get_chksum_values(self, packets_expected):
+ """
+ Validate the checksum flags.
+ """
+ checksum_pattern = re.compile("chksum.*=.*(0x[0-9a-z]+)")
+
+ chksum = dict()
+
+ self.tester.send_expect("scapy", ">>> ")
+
+ for packet_type in list(packets_expected.keys()):
+ self.tester.send_expect("p = %s" % packets_expected[packet_type], ">>>")
+ out = self.tester.send_command("p.show2()", timeout=1)
+ chksums = checksum_pattern.findall(out)
+ chksum[packet_type] = chksums
+
+ self.tester.send_expect("exit()", "#")
+
+ return chksum
+
+ def checksum_valid_flags(self, packets_sent, flag):
+ """
+ Sends packets and check the checksum valid-flags.
+ """
+ self.dut.send_expect("start", "testpmd>")
+ tx_interface = self.tester.get_interface(
+ self.tester.get_local_port(self.dut_ports[0])
+ )
+ for packet_type in list(packets_sent.keys()):
+ self.pkt = packet.Packet(pkt_str=packets_sent[packet_type])
+ self.pkt.send_pkt(self.tester, tx_interface, count=4)
+ out = self.dut.get_session_output(timeout=1)
+ lines = out.split("\r\n")
+
+ # collect the checksum result
+ for line in lines:
+ line = line.strip()
+ if len(line) != 0 and line.startswith("rx"):
+                    # IPv6 has no IP header checksum, so only the L4 flag is checked
+ if packet_type.startswith("IPv6"):
+ if "RTE_MBUF_F_RX_L4_CKSUM" not in line:
+ self.verify(0, "There is no checksum flags appeared!")
+ else:
+ if flag == 1:
+ self.verify(
+ "RTE_MBUF_F_RX_L4_CKSUM_GOOD" in line,
+ "Packet Rx L4 checksum valid-flags error!",
+ )
+ elif flag == 0:
+ self.verify(
+ "RTE_MBUF_F_RX_L4_CKSUM_BAD" in line
+ or "RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN" in line,
+ "Packet Rx L4 checksum valid-flags error!",
+ )
+ else:
+ if "RTE_MBUF_F_RX_L4_CKSUM" not in line:
+ self.verify(0, "There is no L4 checksum flags appeared!")
+ elif "RTE_MBUF_F_RX_IP_CKSUM" not in line:
+ self.verify(0, "There is no IP checksum flags appeared!")
+ else:
+ if flag == 1:
+ self.verify(
+ "RTE_MBUF_F_RX_L4_CKSUM_GOOD" in line,
+ "Packet Rx L4 checksum valid-flags error!",
+ )
+ self.verify(
+ "RTE_MBUF_F_RX_IP_CKSUM_GOOD" in line,
+ "Packet Rx IP checksum valid-flags error!",
+ )
+ elif flag == 0:
+ self.verify(
+ "RTE_MBUF_F_RX_L4_CKSUM_BAD" in line
+ or "RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN" in line,
+ "Packet Rx L4 checksum valid-flags error!",
+ )
+ self.verify(
+ "RTE_MBUF_F_RX_IP_CKSUM_BAD" in line,
+ "Packet Rx IP checksum valid-flags error!",
+ )
+
+ self.dut.send_expect("stop", "testpmd>")
+
+ def checksum_validate(self, packets_sent, packets_expected):
+ """
+ Validate the checksum.
+ """
+ tx_interface = self.tester.get_interface(
+ self.tester.get_local_port(self.dut_ports[0])
+ )
+ rx_interface = self.tester.get_interface(
+ self.tester.get_local_port(self.dut_ports[0])
+ )
+
+ sniff_src = "52:00:00:00:00:00"
+ result = dict()
+
+ chksum = self.get_chksum_values(packets_expected)
+ inst = self.tester.tcpdump_sniff_packets(
+ intf=rx_interface,
+ count=len(packets_sent) * 4,
+ filters=[{"layer": "ether", "config": {"src": sniff_src}}],
+ )
+
+ self.pkt = packet.Packet()
+ for packet_type in list(packets_sent.keys()):
+ self.pkt.append_pkt(packets_sent[packet_type])
+ self.pkt.send_pkt(crb=self.tester, tx_port=tx_interface, count=4)
+
+ p = self.tester.load_tcpdump_sniff_packets(inst)
+ nr_packets = len(p)
+ print(p)
+ packets_received = [
+ p[i].sprintf("%IP.chksum%;%TCP.chksum%;%UDP.chksum%;%SCTP.chksum%")
+ for i in range(nr_packets)
+ ]
+ print(len(packets_sent), len(packets_received))
+ self.verify(
+ len(packets_sent) * 4 == len(packets_received), "Unexpected Packets Drop"
+ )
+
+ for packet_received in packets_received:
+ (
+ ip_checksum,
+ tcp_checksum,
+ udp_checksum,
+ sctp_checksum,
+ ) = packet_received.split(";")
+
+ packet_type = ""
+ l4_checksum = ""
+ if tcp_checksum != "??":
+ packet_type = "TCP"
+ l4_checksum = tcp_checksum
+ elif udp_checksum != "??":
+ packet_type = "UDP"
+ l4_checksum = udp_checksum
+ elif sctp_checksum != "??":
+ packet_type = "SCTP"
+ l4_checksum = sctp_checksum
+
+ if ip_checksum != "??":
+ packet_type = "IP/" + packet_type
+ if chksum[packet_type] != [ip_checksum, l4_checksum]:
+ result[packet_type] = packet_type + " checksum error"
+ else:
+ packet_type = "IPv6/" + packet_type
+ if chksum[packet_type] != [l4_checksum]:
+ result[packet_type] = packet_type + " checksum error"
+
+ return result
+
+ def send_scapy_packet(self, packet: str):
+ itf = self.tester.get_interface(self.tester.get_local_port(self.dut_ports[0]))
+
+ self.tester.scapy_foreground()
+ self.tester.scapy_append(f'sendp({packet}, iface="{itf}")')
+ return self.tester.scapy_execute()
+
+ def get_pkt_rx_l4_cksum(self, testpmd_output: str) -> bool:
+ return self.checksum_flags_are_good("RTE_MBUF_F_RX_L4_CKSUM_", testpmd_output)
+
+ def get_pkt_rx_ip_cksum(self, testpmd_output: str) -> bool:
+ return self.checksum_flags_are_good("RTE_MBUF_F_RX_IP_CKSUM_", testpmd_output)
+
+ def send_pkt_expect_good_bad_from_flag(
+ self, pkt_str: str, flag: str, test_name: str, should_pass: bool = True
+ ):
+ self.pmdout.get_output(timeout=1) # Remove any old output
+ self.scapy_exec(f"sendp({pkt_str}, iface=iface)")
+ testpmd_output: str = self.pmdout.get_output(timeout=1)
+ self.verify(
+ flag in testpmd_output,
+ f"Flag {flag[:-1]} not found for test {test_name}, please run test_rx_checksum_valid_flags.",
+ )
+ self.verify(
+ (flag + "UNKNOWN") not in testpmd_output,
+ f"Flag {flag[:-1]} was found to be unknown for test {test_name}, indicating a possible lack of support",
+ )
+ if should_pass:
+ if flag + "GOOD" in testpmd_output:
+ return
+ else: # flag + "BAD" in testpmd_output
+ self.verify(
+ False, f"{flag}BAD was found in output, expecting {flag}GOOD."
+ )
+ else:
+ if flag + "BAD" in testpmd_output:
+ return
+ else: # flag + "GOOD" in testpmd_output
+ self.verify(
+ False, f"{flag}GOOD was found in output, expecting {flag}BAD."
+ )
+
+ def send_pkt_expect_good_bad_from_flag_catch_failure(
+ self, pkt_str: str, flag: str, test_name: str, should_pass: bool = True
+ ) -> Union[VerifyFailure, None]:
+ try:
+ self.send_pkt_expect_good_bad_from_flag(
+ pkt_str, flag, test_name, should_pass=should_pass
+ )
+ except VerifyFailure as vf:
+ return vf
+
+ return None
+
+ def validate_checksum(self, pkt, pkt_type, inner_flag=False) -> bool:
+ """
+ @param pkt: The packet to validate the checksum of.
+ @return: Whether the checksum was valid.
+ """
+ if pkt is None:
+ return False
+ for i in range(0, len(l3_protos)):
+ if l3_protos[i] in pkt:
+ l3 = l3_protos[i]
+ for j in range(0, len(l4_protos)):
+ if l4_protos[j] in pkt:
+ layer = l4_proto_classes[j]
+ csum = pkt[layer].chksum
+ if csum is None:
+ csum = 0
+ del pkt[layer].chksum
+ # Converting it to raw will calculate the checksum
+ correct_csum = layer(bytes(Raw(pkt[layer]))).chksum
+ if correct_csum == csum:
+ # checksum value is correct
+ return False
+ else:
+ if inner_flag:
+ print(
+ "{} pkg[{}] VXLAN/{}/{} inner checksum {} is not correct {}".format(
+ pkt_type,
+ self.count,
+ l3,
+ l4_protos[j],
+ hex(correct_csum),
+ hex(csum),
+ )
+ )
+ else:
+ print(
+ "{} pkg[{}] {}/{} outer checksum {} is not correct {}".format(
+ pkt_type,
+ self.count,
+ l3,
+ l4_protos[j],
+ hex(correct_csum),
+ hex(csum),
+ )
+ )
+ return True
+ return False
+
+ def scapy_exec(self, cmd: str, timeout=1) -> str:
+ return self.tester.send_expect(cmd, ">>>", timeout=timeout)
+
+ def get_packets(self, dut_mac, tester_mac):
+ eth = Ether(dst=dut_mac, src=tester_mac)
+ packets = []
+ checksum_options = (
+ {},
+ {"chksum": 0xF},
+ )
+ # Untunneled
+ for l3 in l3_proto_classes:
+ for l4 in l4_proto_classes:
+ for chksum in checksum_options:
+                    # The packet's payload identifies how the packet was constructed,
+                    # so ordering does not matter
+ pkt = eth / l3() / l4(**chksum) / (b"X" * 48)
+ # Prevents the default behavior which adds DNS headers
+ if l4 == UDP:
+ pkt[UDP].dport, pkt[UDP].sport = 1001, 1001
+ packets.append(pkt)
+
+ # Tunneled
+ # VXLAN
+ for l3 in l3_proto_classes:
+ for l4 in l4_proto_classes:
+ for outer_arg in checksum_options:
+ for inner_arg in checksum_options:
+ pkt = (
+ eth
+ / l3()
+ / UDP(**outer_arg)
+ / VXLAN()
+ / Ether()
+ / l3()
+ / l4(**inner_arg)
+ / (b"Y" * 48)
+ )
+ # Prevents the default behavior which adds DNS headers
+ if l4 == UDP:
+ pkt[VXLAN][UDP].dport, pkt[VXLAN][UDP].sport = 1001, 1001
+ packets.append(pkt)
+ # GRE
+ for l3 in l3_proto_classes:
+ for l4 in l4_proto_classes:
+ for chksum in checksum_options:
+ pkt = eth / l3() / GRE() / l3() / l4(**chksum) / (b"Z" * 48)
+ # Prevents the default behavior which adds DNS headers
+ if l4 == UDP:
+ pkt[GRE][UDP].dport, pkt[GRE][UDP].sport = 1001, 1001
+ packets.append(pkt)
+
+ return packets
+
+ def send_tx_package(
+ self, packet_file_path, capture_file_path, packets, iface, dut_mac
+ ):
+ if os.path.isfile(capture_file_path):
+ os.remove(capture_file_path)
+ src_mac = "52:00:00:00:00:00"
+ self.tester.send_expect(
+ f"tcpdump -i '{iface}' ether src {src_mac} -s 0 -w {capture_file_path} -Q in &",
+ "# ",
+ )
+
+ if os.path.isfile(packet_file_path):
+ os.remove(packet_file_path)
+ wrpcap(packet_file_path, packets)
+ self.tester.session.copy_file_to(packet_file_path, packet_file_path)
+
+ # send packet
+ self.tester.send_expect("scapy", ">>>")
+ self.scapy_exec(f"packets = rdpcap('{packet_file_path}')")
+ for i in range(0, len(packets)):
+            self.scapy_exec(f"packets[{i}].show()")
+ self.scapy_exec(f"sendp(packets[{i}], iface='{iface}')")
+ self.pmdout.get_output(timeout=0.5)
+ self.dut.send_expect(
+ "show port stats {}".format(self.dut_ports[0]), "testpmd>"
+ )
+ self.tester.send_expect("quit()", "# ")
+
+ time.sleep(1)
+ self.tester.send_expect("killall tcpdump", "#")
+ time.sleep(1)
+ self.tester.send_expect('echo "Cleaning buffer"', "#")
+ time.sleep(1)
+ return
+
+ def validate_packet_list_checksums(self, packets):
+ error_messages = []
+ untunnelled_error_message = (
+            "un-tunneled checksum state for pkg[%s] with an invalid checksum."
+ )
+ vxlan_error_message = (
+            "VXLAN tunnelled checksum state for pkg[%s] with an invalid checksum."
+ )
+ gre_error_message = (
+            "GRE tunnelled checksum state for pkg[%s] with an invalid checksum."
+ )
+
+ for packet in packets:
+ self.count = self.count + 1
+            payload: str
+            # decoding may raise UnicodeDecodeError for tunnelling protocols,
+            # which would mean an additional layer cast is needed
+            payload = packet[Raw].load.decode("utf-8")
+ if "X" in payload:
+ if self.validate_checksum(packet, "un-tunneled"):
+ error_messages.append(untunnelled_error_message % self.count)
+ elif "Y" in payload:
+ if self.validate_checksum(
+ packet[VXLAN][Ether], "VXLAN", inner_flag=True
+ ):
+ error_messages.append(vxlan_error_message % self.count)
+                # Intel® Ethernet 700 Series does not support outer UDP checksum
+ if self.is_eth_series_nic(700):
+ continue
+ if self.validate_checksum(packet, "VXLAN"):
+ error_messages.append(vxlan_error_message % self.count)
+ elif "Z" in payload:
+ if self.validate_checksum(packet, "GRE"):
+ error_messages.append(gre_error_message % self.count)
+ return error_messages
+
+ #
+ #
+ #
+ # Test Cases
+ #
+ def benchmark(self, lcore, ptype, mode, flow_format, size_list, nic):
+ """
+        Test and report checksum offload performance for the given parameters.
+ """
+ Bps = dict()
+ Pps = dict()
+ Pct = dict()
+ dmac = self.dut.get_mac_address(self.dut_ports[0])
+ dmac1 = self.dut.get_mac_address(self.dut_ports[1])
+
+ result = [2, lcore, ptype, mode]
+ for size in size_list:
+ flow = flow_format % (dmac, size)
+ pcap = os.sep.join([self.output_path, "test.pcap"])
+ self.tester.scapy_append('wrpcap("%s", [%s])' % (pcap, flow))
+ self.tester.scapy_execute()
+ flow = flow_format % (dmac1, size)
+ pcap = os.sep.join([self.output_path, "test1.pcap"])
+ self.tester.scapy_append('wrpcap("%s", [%s])' % (pcap, flow))
+ self.tester.scapy_execute()
+
+ tgenInput = []
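+ # Each traffic-generator input tuple is (tx port, rx port, pcap to replay).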
+ pcap = os.sep.join([self.output_path, "test.pcap"])
+ tgenInput.append(
+ (
+ self.tester.get_local_port(self.dut_ports[0]),
+ self.tester.get_local_port(self.dut_ports[1]),
+ pcap,
+ )
+ )
+ pcap = os.sep.join([self.output_path, "test1.pcap"])
+ tgenInput.append(
+ (
+ self.tester.get_local_port(self.dut_ports[1]),
+ self.tester.get_local_port(self.dut_ports[0]),
+ pcap,
+ )
+ )
+
+ # clear streams before add new streams
+ self.tester.pktgen.clear_streams()
+ # create an instance to set stream field setting
+ # Moved here because it messes with the ability of the functional tests to use scapy.
+ self.pktgen_helper = PacketGeneratorHelper()
+ # run packet generator
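+ # 100 is the stream rate percent (full line rate); None means no
+ # per-stream option overrides.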
+ streams = self.pktgen_helper.prepare_stream_from_tginput(
+ tgenInput, 100, None, self.tester.pktgen
+ )
+ Bps[str(size)], Pps[str(size)] = self.tester.pktgen.measure_throughput(
+ stream_ids=streams
+ )
+ self.verify(Pps[str(size)] > 0, "No traffic detected")
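+ # Convert pps to Mpps and express throughput as a percentage of the
+ # two-port wirespeed for this frame size.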
+ Pps[str(size)] /= 1e6
+ Pct[str(size)] = (Pps[str(size)] * 100) / self.wirespeed(self.nic, size, 2)
+
+ result.append(Pps[str(size)])
+ result.append(Pct[str(size)])
+
+ self.result_table_add(result)
+
+ def test_perf_checksum_throughput(self):
+ """
+ Test checksum offload performance.
+ """
+ # Verify that enough ports are available
+ self.verify(len(self.dut_ports) >= 2, "Insufficient ports for testing")
+ self.dut.send_expect("quit", "#")
+
+ # full size sweep: [64, 128, 256, 512, 1024]
+ sizes = [64, 128]
+ pkts = {
+ "IP/UDP": 'Ether(dst="%s", src="52:00:00:00:00:00")/IP()/UDP()/("X"*(%d-46))',
+ "IP/TCP": 'Ether(dst="%s", src="52:00:00:00:00:00")/IP()/TCP()/("X"*(%d-58))',
+ "IP/SCTP": 'Ether(dst="%s", src="52:00:00:00:00:00")/IP()/SCTP()/("X"*(%d-50+2))',
+ "IPv6/UDP": 'Ether(dst="%s", src="52:00:00:00:00:00")/IPv6()/UDP()/("X"* (lambda x: x - 66 if x > 66 else 0)(%d))',
+ "IPv6/TCP": 'Ether(dst="%s", src="52:00:00:00:00:00")/IPv6()/TCP()/("X"* (lambda x: x - 78 if x > 78 else 0)(%d))',
+ }
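+ # The constants subtracted from %d are the Ethernet/IP(v6)/L4 header plus
+ # CRC overhead for each packet type (e.g. 46 = 14 + 20 + 8 + 4 for IP/UDP),
+ # so the "X" payload pads every frame to the requested wire size.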
+
+ if self.kdriver in DRIVER_TEST_LACK_CAPA["sctp_tx_offload"]:
+ del pkts["IP/SCTP"]
+
+ lcore = "1S/2C/1T"
+ self.portMask = utils.create_mask([self.dut_ports[0], self.dut_ports[1]])
+ for mode in ["sw", "hw"]:
+ self.logger.info("%s performance" % mode)
+ tblheader = ["Ports", "S/C/T", "Packet Type", "Mode"]
+ for size in sizes:
+ tblheader.append("%sB mpps" % str(size))
+ tblheader.append("%sB %% " % str(size))
+ self.result_table_create(tblheader)
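+ # --enable-rx-cksum turns on RX checksum offload; with
+ # --port-topology=loop each port forwards packets back out the
+ # port they arrived on.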
+ self.pmdout.start_testpmd(
+ lcore,
+ "--portmask=%s" % self.portMask
+ + " --enable-rx-cksum "
+ + "--port-topology=loop",
+ socket=self.ports_socket,
+ )
+
+ self.dut.send_expect("set fwd csum", "testpmd> ")
+ if mode == "hw":
+ self.checksum_enablehw(self.dut_ports[0])
+ self.checksum_enablehw(self.dut_ports[1])
+ else:
+ self.checksum_enablesw(self.dut_ports[0])
+ self.checksum_enablesw(self.dut_ports[1])
+
+ self.dut.send_expect("start", "testpmd> ", 3)
+ for ptype in list(pkts.keys()):
+ self.benchmark(lcore, ptype, mode, pkts[ptype], sizes, self.nic)
+
+ self.dut.send_expect("stop", "testpmd> ")
+ self.dut.send_expect("quit", "#", 10)
+ self.result_table_print()
+
+ def tear_down(self):
+ """
+ Run after each test case.
+ """
+ self.dut.send_expect("quit", "#")
+
+ def tear_down_all(self):
+ """
+ Run after each test suite.
+ """
+ pass
--
2.17.1