* [dts][PATCH V1 2/7] tests/kni:Separated performance cases
2022-09-22 14:29 [dts][PATCH V1 1/7] tests/efd:Separated performance cases Hongbo Li
@ 2022-09-22 14:29 ` Hongbo Li
2022-09-22 14:29 ` [dts][PATCH V1 3/7] tests/l2fwd:Separated " Hongbo Li
` (4 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Hongbo Li @ 2022-09-22 14:29 UTC (permalink / raw)
To: dts; +Cc: Hongbo Li
Separated performance cases
Signed-off-by: Hongbo Li <hongbox.li@intel.com>
---
test_plans/kni_test_plan.rst | 200 -----
test_plans/perf_kni_test_plan.rst | 295 +++++++
tests/TestSuite_kni.py | 487 ------------
tests/TestSuite_perf_kni.py | 1191 +++++++++++++++++++++++++++++
4 files changed, 1486 insertions(+), 687 deletions(-)
create mode 100644 test_plans/perf_kni_test_plan.rst
create mode 100644 tests/TestSuite_perf_kni.py
diff --git a/test_plans/kni_test_plan.rst b/test_plans/kni_test_plan.rst
index 3e21bbb3..050b2abe 100644
--- a/test_plans/kni_test_plan.rst
+++ b/test_plans/kni_test_plan.rst
@@ -385,203 +385,3 @@ Using ``dmesg`` to check whether kernel module is loaded with the specified
parameters. Some permutations, those with wrong values, must fail to
success. For permutations with valid parameter values, verify the application can be
successfully launched and then close the application using CTRL+C.
-
-Test Case: loopback mode performance testing
-============================================
-
-Compare performance results for loopback mode using:
-
- - lo_mode: lo_mode_fifo and lo_mode_fifo_skb.
- - kthread_mode: single and multiple.
- - Number of ports: 1 and 2.
- - Number of virtual interfaces per port: 1 and 2
- - Frame sizes: 64 and 256.
- - Cores combinations:
-
- - Different cores for Rx, Tx and Kernel.
- - Shared core between Rx and Kernel.
- - Shared cores between Rx and Tx.
- - Shared cores between Rx, Tx and Kernel.
- - Multiple cores for Kernel, implies multiple virtual interfaces per port.
-
-::
-
- insmod ./x86_64-default-linuxapp-gcc/kmod/igb_uio.ko
- insmod ./x86_64-default-linuxapp-gcc/kmod/rte_kni.ko <lo_mode and kthread_mode parameters>
- ./<build_target>/examples/dpdk-kni -c <Core mask> -n 4 -- -P -p <Port mask> --config="<Ports/Cores configuration>" &
-
-
-At this point, the throughput is measured and recorded for the different
-frame sizes. After this, the application is closed using CTRL+C.
-
-The measurements are presented in a table format.
-
-+------------------+--------------+-------+-----------------+--------+--------+
-| lo_mode | kthread_mode | Ports | Config | 64 | 256 |
-+==================+==============+=======+=================+========+========+
-| | | | | | |
-+------------------+--------------+-------+-----------------+--------+--------+
-
-
-Test Case: bridge mode performance testing
-==========================================
-
-Compare performance results for bridge mode using:
-
- - kthread_mode: single and multiple.
- - Number of ports: 2
- - Number of ports: 1 and 2.
- - Number of flows per port: 1 and 2
- - Number of virtual interfaces per port: 1 and 2
- - Frame size: 64.
- - Cores combinations:
-
- - Different cores for Rx, Tx and Kernel.
- - Shared core between Rx and Kernel.
- - Shared cores between Rx and Tx.
- - Shared cores between Rx, Tx and Kernel.
- - Multiple cores for Kernel, implies multiple virtual interfaces per port.
-
-The application is launched and the bridge is setup using the commands below::
-
- insmod ./x86_64-default-linuxapp-gcc/kmod/rte_kni.ko <kthread_mode parameter>
- ./<build_target>/examples/dpdk-kni -c <Core mask> -n 4 -- -P -p <Port mask> --config="<Ports/Cores configuration>" &
-
- ifconfig vEth2_0 up
- ifconfig vEth3_0 up
- brctl addbr "br_kni"
- brctl addif br_kni vEth2_0
- brctl addif br_kni vEth3_0
- ifconfig br_kni up
-
-
-At this point, the throughput is measured and recorded. After this, the
-application is closed using CTRL+C and the bridge deleted::
-
- ifconfig br_kni down
- brctl delbr br_kni
-
-
-The measurements are presented in a table format.
-
-+--------------+-------+-----------------------------+-------+
-| kthread_mode | Flows | Config | 64 |
-+==============+=======+=============================+=======+
-| | | | |
-+--------------+-------+-----------------------------+-------+
-
-Test Case: bridge mode without KNI performance testing
-======================================================
-
-Compare performance results for bridge mode using only Kernel bridge, no DPDK
-support. Use:
-
- - Number of ports: 2
- - Number of flows per port: 1 and 2
- - Frame size: 64.
-
-Set up the interfaces and the bridge::
-
- rmmod rte_kni
- ifconfig vEth2_0 up
- ifconfig vEth3_0 up
- brctl addbr "br1"
- brctl addif br1 vEth2_0
- brctl addif br1 vEth3_0
- ifconfig br1 up
-
-
-At this point, the throughput is measured and recorded. After this, the
-application is closed using CTRL+C and the bridge deleted::
-
- ifconfig br1 down
- brctl delbr br1
-
-
-The measurements are presented in a table format.
-
-+-------+-------+
-| Flows | 64 |
-+=======+=======+
-| 1 | |
-+-------+-------+
-| 2 | |
-+-------+-------+
-
-Test Case: routing mode performance testing
-===========================================
-
-Compare performance results for routing mode using:
-
- - kthread_mode: single and multiple.
- - Number of ports: 2
- - Number of ports: 1 and 2.
- - Number of virtual interfaces per port: 1 and 2
- - Frame size: 64 and 256.
- - Cores combinations:
-
- - Different cores for Rx, Tx and Kernel.
- - Shared core between Rx and Kernel.
- - Shared cores between Rx and Tx.
- - Shared cores between Rx, Tx and Kernel.
- - Multiple cores for Kernel, implies multiple virtual interfaces per port.
-
-The application is launched and the bridge is setup using the commands below::
-
- echo 1 > /proc/sys/net/ipv4/ip_forward
-
- insmod ./x86_64-default-linuxapp-gcc/kmod/rte_kni.ko <kthread_mode parameter>
- ./<build_target>/examples/dpdk-kni -c <Core mask> -n 4 -- -P -p <Port mask> --config="<Ports/Cores configuration>" &
-
- ifconfig vEth2_0 192.170.2.1
- ifconfig vEth3_0 192.170.3.1
- route add -net 192.170.2.0 netmask 255.255.255.0 gw 192.170.2.1
- route add -net 192.170.3.0 netmask 255.255.255.0 gw 192.170.3.1
- arp -s 192.170.2.2 vEth2_0
- arp -s 192.170.3.2 vEth3_0
-
-At this point, the throughput is measured and recorded. After this, the
-application is closed using CTRL+C.
-
-The measurements are presented in a table format.
-
-+--------------+-------+-----------------------------+-------+-------+
-| kthread_mode | Ports | Config | 64 | 256 |
-+==============+=======+=============================+=======+=======+
-| | | | | |
-+--------------+-------+-----------------------------+-------+-------+
-
-
-Test Case: routing mode without KNI performance testing
-=======================================================
-
-Compare performance results for routing mode using only Kernel, no DPDK
-support. Use:
-
- - Number of ports: 2
- - Frame size: 64 and 256
-
-Set up the interfaces and the bridge::
-
-
- echo 1 > /proc/sys/net/ipv4/ip_forward
- rmmod rte_kni
- ifconfig vEth2_0 192.170.2.1
- ifconfig vEth3_0 192.170.3.1
- route add -net 192.170.2.0 netmask 255.255.255.0 gw 192.170.2.1
- route add -net 192.170.3.0 netmask 255.255.255.0 gw 192.170.3.1
- arp -s 192.170.2.2 vEth2_0
- arp -s 192.170.3.2 vEth3_0
-
-At this point, the throughput is measured and recorded. After this, the
-application is closed using CTRL+C.
-
-The measurements are presented in a table format.
-
-+-------+-------+-------+
-| Ports | 64 | 256 |
-+=======+=======+=======+
-| 1 | | |
-+-------+-------+-------+
-| 2 | | |
-+-------+-------+-------+
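The routing cases above all repeat the same per-interface address pattern: one 192.170.N.0/24 subnet per interface, the .1 address on the DUT, and a static ARP entry for the tester-side .2 address. A minimal sketch that generates those commands; `routing_setup_cmds` is an illustrative helper, not part of DTS:

```python
def routing_setup_cmds(interfaces, base_subnet=2):
    """Generate the ifconfig/route/arp commands from the routing test plan."""
    cmds = ["echo 1 > /proc/sys/net/ipv4/ip_forward"]
    for i, intf in enumerate(interfaces):
        net = "192.170.%d" % (base_subnet + i)
        cmds.append("ifconfig %s %s.1" % (intf, net))
        # one /24 route per interface, via the local .1 address
        cmds.append("route add -net %s.0 netmask 255.255.255.0 gw %s.1" % (net, net))
        # static ARP entry for the tester-side .2 address
        cmds.append("arp -s %s.2 %s" % (net, intf))
    return cmds

for cmd in routing_setup_cmds(["vEth2_0", "vEth3_0"]):
    print(cmd)
```

For the two interfaces in the plan this reproduces the `ifconfig vEth2_0 192.170.2.1` / `arp -s 192.170.3.2 vEth3_0` command sequence shown above.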
diff --git a/test_plans/perf_kni_test_plan.rst b/test_plans/perf_kni_test_plan.rst
new file mode 100644
index 00000000..6e63c207
--- /dev/null
+++ b/test_plans/perf_kni_test_plan.rst
@@ -0,0 +1,295 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2010-2017 Intel Corporation
+
+============================================
+Kernel NIC Interface (KNI) Performance Tests
+============================================
+
+Description
+===========
+
+This document provides the plan for performance testing of the Kernel NIC
+Interface application, with support of the rte_kni kernel module.
+Kernel NIC Interface is a DPDK alternative to the existing Linux tun/tap
+interface for the exception path. Kernel NIC Interface allows the standard
+Linux networking tools (ethtool/ifconfig/tcpdump) to manage a DPDK port,
+and at the same time it adds an interface to the kernel network stack.
+The test supports multi-threaded KNI.
+
+Details of all rte_kni module parameters can be found in the user guide: http://dpdk.org/doc/guides/sample_app_ug/kernel_nic_interface.html
+
+The ``rte_kni`` kernel module can be loaded with a ``lo_mode`` parameter.
+
+loopback disabled::
+
+ insmod rte_kni.ko
+ insmod rte_kni.ko "lo_mode=lo_mode_none"
+ insmod rte_kni.ko "lo_mode=unsupported string"
+
+loopback mode=lo_mode_ring enabled::
+
+ insmod rte_kni.ko "lo_mode=lo_mode_ring"
+
+loopback mode=lo_mode_ring_skb enabled::
+
+ insmod rte_kni.ko "lo_mode=lo_mode_ring_skb"
+
+The ``rte_kni`` kernel module can also be loaded with a ``kthread_mode``
+parameter. This parameter is ``single`` by default.
+
+kthread single::
+
+ insmod rte_kni.ko
+ insmod rte_kni.ko "kthread_mode=single"
+
+kthread multiple::
+
+ insmod rte_kni.ko
+ insmod rte_kni.ko "kthread_mode=multiple"
+
+
+The ``kni`` application is run with EAL parameters and parameters for the
+application itself. For details about the EAL parameters, see the relevant
+DPDK **Getting Started Guide**. This application supports two parameters for
+itself.
+
+- ``--config="(port id, rx lcore, tx lcore, kthread lcore, kthread lcore, ...)"``:
+  Port and core selection. Kernel thread lcores are ignored if ``kthread_mode``
+  is not ``multiple``.
+
+For example::
+
+    --config="(0,1,2),(1,3,4)"              No kernel thread specified.
+    --config="(0,1,2,21),(1,3,4,23)"        One kernel thread in use.
+    --config="(0,1,2,21,22),(1,3,4,23,25)"  Two kernel threads in use.
+
+- ``-P``: Promiscuous mode. This is off by default.
+
+Prerequisites
+=============
+
+If using VFIO, the kernel must be >= 3.6 and VT-d must be enabled in the
+BIOS. When using VFIO, use the following commands to load the vfio driver
+and bind it to the device under test::
+
+ modprobe vfio
+ modprobe vfio-pci
+ usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
+
+The DUT has at least 2 DPDK-supported IXGBE NIC ports.
+
+The DUT has to be able to load the rte_kni kernel module and launch the kni
+application with a default configuration (this configuration may vary from
+one system to another)::
+
+ rmmod rte_kni
+ rmmod igb_uio
+ insmod ./x86_64-default-linuxapp-gcc/kmod/igb_uio.ko
+ insmod ./x86_64-default-linuxapp-gcc/kmod/rte_kni.ko
+ ./<build_target>/examples/dpdk-kni -c 0xa0001e -n 4 -- -P -p 0x3 --config="(0,1,2,21),(1,3,4,23)" &
+
+Case config::
+
+    To enable KNI features, DPDK must be built with '-Denable_kmods=True'.
+
+Test Case: loopback mode performance testing
+============================================
+
+Compare performance results for loopback mode using:
+
+ - lo_mode: lo_mode_fifo and lo_mode_fifo_skb.
+ - kthread_mode: single and multiple.
+ - Number of ports: 1 and 2.
+ - Number of virtual interfaces per port: 1 and 2
+ - Frame sizes: 64 and 256.
+ - Cores combinations:
+
+ - Different cores for Rx, Tx and Kernel.
+ - Shared core between Rx and Kernel.
+ - Shared cores between Rx and Tx.
+ - Shared cores between Rx, Tx and Kernel.
+ - Multiple cores for Kernel, implies multiple virtual interfaces per port.
+
+::
+
+ insmod ./x86_64-default-linuxapp-gcc/kmod/igb_uio.ko
+ insmod ./x86_64-default-linuxapp-gcc/kmod/rte_kni.ko <lo_mode and kthread_mode parameters>
+ ./<build_target>/examples/dpdk-kni -c <Core mask> -n 4 -- -P -p <Port mask> --config="<Ports/Cores configuration>" &
+
+
+At this point, the throughput is measured and recorded for the different
+frame sizes. After this, the application is closed using CTRL+C.
+
+The measurements are presented in a table format.
+
++------------------+--------------+-------+-----------------+--------+--------+
+| lo_mode | kthread_mode | Ports | Config | 64 | 256 |
++==================+==============+=======+=================+========+========+
+| | | | | | |
++------------------+--------------+-------+-----------------+--------+--------+
+
+
+Test Case: bridge mode performance testing
+==========================================
+
+Compare performance results for bridge mode using:
+
+ - kthread_mode: single and multiple.
+ - Number of ports: 2
+ - Number of flows per port: 1 and 2
+ - Number of virtual interfaces per port: 1 and 2
+ - Frame size: 64.
+ - Cores combinations:
+
+ - Different cores for Rx, Tx and Kernel.
+ - Shared core between Rx and Kernel.
+ - Shared cores between Rx and Tx.
+ - Shared cores between Rx, Tx and Kernel.
+ - Multiple cores for Kernel, implies multiple virtual interfaces per port.
+
+The application is launched and the bridge is set up using the commands below::
+
+ insmod ./x86_64-default-linuxapp-gcc/kmod/rte_kni.ko <kthread_mode parameter>
+ ./<build_target>/examples/dpdk-kni -c <Core mask> -n 4 -- -P -p <Port mask> --config="<Ports/Cores configuration>" &
+
+ ifconfig vEth2_0 up
+ ifconfig vEth3_0 up
+ brctl addbr "br_kni"
+ brctl addif br_kni vEth2_0
+ brctl addif br_kni vEth3_0
+ ifconfig br_kni up
+
+
+At this point, the throughput is measured and recorded. After this, the
+application is closed using CTRL+C and the bridge deleted::
+
+ ifconfig br_kni down
+ brctl delbr br_kni
+
+
+The measurements are presented in a table format.
+
++--------------+-------+-----------------------------+-------+
+| kthread_mode | Flows | Config | 64 |
++==============+=======+=============================+=======+
+| | | | |
++--------------+-------+-----------------------------+-------+
+
+Test Case: bridge mode without KNI performance testing
+======================================================
+
+Compare performance results for bridge mode using only Kernel bridge, no DPDK
+support. Use:
+
+ - Number of ports: 2
+ - Number of flows per port: 1 and 2
+ - Frame size: 64.
+
+Set up the interfaces and the bridge::
+
+ rmmod rte_kni
+ ifconfig vEth2_0 up
+ ifconfig vEth3_0 up
+ brctl addbr "br1"
+ brctl addif br1 vEth2_0
+ brctl addif br1 vEth3_0
+ ifconfig br1 up
+
+
+At this point, the throughput is measured and recorded. After this, the
+bridge is deleted::
+
+ ifconfig br1 down
+ brctl delbr br1
+
+
+The measurements are presented in a table format.
+
++-------+-------+
+| Flows | 64 |
++=======+=======+
+| 1 | |
++-------+-------+
+| 2 | |
++-------+-------+
+
+Test Case: routing mode performance testing
+===========================================
+
+Compare performance results for routing mode using:
+
+ - kthread_mode: single and multiple.
+ - Number of ports: 1 and 2.
+ - Number of virtual interfaces per port: 1 and 2
+ - Frame size: 64 and 256.
+ - Cores combinations:
+
+ - Different cores for Rx, Tx and Kernel.
+ - Shared core between Rx and Kernel.
+ - Shared cores between Rx and Tx.
+ - Shared cores between Rx, Tx and Kernel.
+ - Multiple cores for Kernel, implies multiple virtual interfaces per port.
+
+The application is launched and routing is set up using the commands below::
+
+ echo 1 > /proc/sys/net/ipv4/ip_forward
+
+ insmod ./x86_64-default-linuxapp-gcc/kmod/rte_kni.ko <kthread_mode parameter>
+ ./<build_target>/examples/dpdk-kni -c <Core mask> -n 4 -- -P -p <Port mask> --config="<Ports/Cores configuration>" &
+
+ ifconfig vEth2_0 192.170.2.1
+ ifconfig vEth3_0 192.170.3.1
+ route add -net 192.170.2.0 netmask 255.255.255.0 gw 192.170.2.1
+ route add -net 192.170.3.0 netmask 255.255.255.0 gw 192.170.3.1
+ arp -s 192.170.2.2 vEth2_0
+ arp -s 192.170.3.2 vEth3_0
+
+At this point, the throughput is measured and recorded. After this, the
+application is closed using CTRL+C.
+
+The measurements are presented in a table format.
+
++--------------+-------+-----------------------------+-------+-------+
+| kthread_mode | Ports | Config | 64 | 256 |
++==============+=======+=============================+=======+=======+
+| | | | | |
++--------------+-------+-----------------------------+-------+-------+
+
+
+Test Case: routing mode without KNI performance testing
+=======================================================
+
+Compare performance results for routing mode using only Kernel, no DPDK
+support. Use:
+
+ - Number of ports: 2
+ - Frame size: 64 and 256
+
+Set up the interfaces and the routes::
+
+ echo 1 > /proc/sys/net/ipv4/ip_forward
+ rmmod rte_kni
+ ifconfig vEth2_0 192.170.2.1
+ ifconfig vEth3_0 192.170.3.1
+ route add -net 192.170.2.0 netmask 255.255.255.0 gw 192.170.2.1
+ route add -net 192.170.3.0 netmask 255.255.255.0 gw 192.170.3.1
+ arp -s 192.170.2.2 vEth2_0
+ arp -s 192.170.3.2 vEth3_0
+
+At this point, the throughput is measured and recorded.
+
+The measurements are presented in a table format.
+
++-------+-------+-------+
+| Ports | 64 | 256 |
++=======+=======+=======+
+| 1 | | |
++-------+-------+-------+
+| 2 | | |
++-------+-------+-------+
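The only magic number in the launch commands quoted by the plan is the EAL ``-c`` core mask: it is simply the bitwise OR of one bit per lcore named in ``--config``. A minimal sketch of that arithmetic; the ``coremask`` helper is illustrative, not part of DTS:

```python
def coremask(lcores):
    """Build an EAL -c core mask from a list of lcore ids."""
    mask = 0
    for lcore in lcores:
        mask |= 1 << lcore  # one bit per lcore
    return mask

# --config="(0,1,2,21),(1,3,4,23)" uses rx/tx lcores 1-4 plus kernel lcores 21 and 23
print(hex(coremask([1, 2, 3, 4, 21, 23])))  # → 0xa0001e
```

This matches the ``-c 0xa0001e`` mask used with that default configuration in the prerequisites section.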
diff --git a/tests/TestSuite_kni.py b/tests/TestSuite_kni.py
index 0a8c3009..53b77db3 100644
--- a/tests/TestSuite_kni.py
+++ b/tests/TestSuite_kni.py
@@ -1061,493 +1061,6 @@ class TestKni(TestCase):
# some permutations have to fail
pass
- def test_perf_loopback(self):
- """
- KNI loopback performance
- """
- self.dut.kill_all()
-
- header = loopback_perf_results_header
- for size in packet_sizes_loopback:
- header.append("%d (pps)" % size)
-
- self.result_table_create(header)
-
- # Execute the permutations of the test
- for step in loopback_performance_steps:
-
- self.extract_ports_cores_config(step["config"])
-
- total_cores = len(
- self.config["tx_cores"]
- + self.config["rx_cores"]
- + self.config["kernel_cores"]
- )
- if total_cores > self.dut_physical_cores():
- self.logger.info(
- "Skipping step %s (%d cores needed, got %d)"
- % (step["config"], total_cores, self.dut_physical_cores())
- )
- continue
-
- self.start_kni(lo_mode=step["lo_mode"], kthread_mode=step["kthread_mode"])
-
- pps_results = []
-
- for size in packet_sizes_loopback:
-
- payload_size = size - 38
-
- ports_number = len(self.config["ports"])
-
- # Set up the flows for the ports
- tgen_input = []
- for port in self.config["ports"]:
-
- rx_mac = self.dut.get_mac_address(port)
- tx_port = self.tester.get_local_port(port)
- self.tester.scapy_append('dstmac = "%s"' % rx_mac)
- self.tester.scapy_append(
- 'flows = [Ether(dst=dstmac)/IP()/("X"*%d)]' % payload_size
- )
- pcap = os.sep.join(
- [self.output_path, "tester{0}.pcap".format(tx_port)]
- )
- self.tester.scapy_append('wrpcap("%s",flows)' % pcap)
- self.tester.scapy_execute()
- tgen_input.append((tx_port, tx_port, pcap))
-
- time.sleep(1)
-
- # clear streams before add new streams
- self.tester.pktgen.clear_streams()
- # run packet generator
- streams = self.pktgen_helper.prepare_stream_from_tginput(
- tgen_input, 100, None, self.tester.pktgen
- )
- _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
-
- pps_results.append(float(pps) / 1000000)
-
- ports_number = len(self.config["ports"])
- results_row = [
- step["lo_mode"],
- step["kthread_mode"],
- ports_number,
- self.stripped_config_param(),
- ] + pps_results
- self.result_table_add(results_row)
-
- self.dut.kill_all()
-
- self.result_table_print()
-
- def test_perf_bridge(self):
- """
- KNI performance bridge mode.
- """
- self.result_table_create(bridge_perf_results_header)
-
- self.tester.scapy_append('srcmac="00:00:00:00:00:01"')
- pcap = os.sep.join([self.output_path, "kni.pcap"])
- self.tester.scapy_append(
- 'wrpcap("%s", [Ether(src=srcmac, dst="ff:ff:ff:ff:ff:ff")/IP(len=46)/UDP()/("X"*18)])'
- % pcap
- )
- self.tester.scapy_execute()
-
- for step in bridge_performance_steps:
-
- self.extract_ports_cores_config(step["config"])
-
- total_cores = len(
- self.config["tx_cores"]
- + self.config["rx_cores"]
- + self.config["kernel_cores"]
- )
- if total_cores > self.dut_physical_cores():
- self.logger.info(
- "Skipping step %s (%d cores needed, got %d)"
- % (step["config"], total_cores, self.dut_physical_cores())
- )
- continue
-
- port_virtual_interaces = []
- for port in self.config["port_details"]:
- for i in range(len(port["kernel_cores"])):
- port_virtual_interaces.append(
- self.virtual_interface_name(port["port"], i)
- )
-
- self.dut.kill_all()
- self.start_kni(lo_mode=None, kthread_mode=step["kthread_mode"])
-
- for virtual_interace in port_virtual_interaces:
- out = self.dut.send_expect("ifconfig %s up" % virtual_interace, "# ")
- self.verify("ERROR" not in out, "Virtual interface not found")
-
- self.dut.send_expect('brctl addbr "br_kni"', "# ")
-
- for virtual_interace in port_virtual_interaces:
- out = self.dut.send_expect(
- "brctl addif br_kni %s" % virtual_interace, "# "
- )
- self.verify("ERROR" not in out, "Device not found")
-
- self.dut.send_expect("ifconfig br_kni up", "# ")
-
- tx_port = self.tester.get_local_port(self.config["ports"][0])
- rx_port = self.tester.get_local_port(self.config["ports"][1])
-
- tgenInput = []
- tgenInput.append((tx_port, rx_port, pcap))
-
- if step["flows"] == 2:
- tgenInput.append((rx_port, tx_port, pcap))
- self.verify(self.dut.is_interface_up(intf="br_kni"), "br_kni should be up")
-
- # clear streams before add new streams
- self.tester.pktgen.clear_streams()
- # run packet generator
- streams = self.pktgen_helper.prepare_stream_from_tginput(
- tgenInput, 100, None, self.tester.pktgen
- )
- _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
- step["pps"] = float(pps) / 10**6
-
- results_row = [
- step["kthread_mode"],
- step["flows"],
- self.stripped_config_param(),
- (float(pps) / 10**6),
- ]
-
- self.result_table_add(results_row)
-
- self.dut.send_expect("ifconfig br_kni down", "# ")
- self.dut.send_expect('brctl delbr "br_kni"', "# ", 10)
-
- self.result_table_print()
-
- def test_perf_bridge_without_kni(self):
- """
- Bridge mode performance without KNI.
- """
- self.result_table_create(bridge_perf_no_kni_results_header)
-
- self.dut.kill_all()
-
- dut_ports = self.dut.get_ports(self.nic)
-
- self.tester.scapy_append('srcmac="00:00:00:00:00:01"')
- pcap = os.sep.join([self.output_path, "kni.pcap"])
- self.tester.scapy_append(
- 'wrpcap("%s", [Ether(src=srcmac, dst="ff:ff:ff:ff:ff:ff")/IP(len=46)/UDP()/("X"*18)])'
- % pcap
- )
- self.tester.scapy_execute()
-
- allow_list = self.make_allow_list(self.target, self.nic)
- port_virtual_interaces = []
- for port in allow_list:
- information = self.dut.send_expect(
- "./usertools/dpdk-devbind.py --status | grep '%s'" % port, "# "
- )
- data = information.split(" ")
- for field in data:
- if field.rfind("if=") != -1:
- port_virtual_interaces.append(field.replace("if=", ""))
-
- self.dut.send_expect("ifconfig %s up" % port_virtual_interaces[0], "# ")
- self.dut.send_expect("ifconfig %s up" % port_virtual_interaces[1], "# ")
- self.dut.send_expect('brctl addbr "br1"', "# ")
- self.dut.send_expect("brctl addif br1 %s" % port_virtual_interaces[0], "# ")
- self.dut.send_expect("brctl addif br1 %s" % port_virtual_interaces[1], "# ")
- self.dut.send_expect("ifconfig br1 up", "# ")
- time.sleep(3)
-
- tx_port = self.tester.get_local_port(dut_ports[0])
- rx_port = self.tester.get_local_port(dut_ports[1])
-
- for flows in range(1, flows_without_kni + 1):
- tgen_input = []
- tgen_input.append((tx_port, rx_port, pcap))
-
- if flows == 2:
- tgen_input.append((rx_port, tx_port, pcap))
-
- # clear streams before add new streams
- self.tester.pktgen.clear_streams()
- # run packet generator
- streams = self.pktgen_helper.prepare_stream_from_tginput(
- tgen_input, 100, None, self.tester.pktgen
- )
- _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
-
- self.result_table_add([flows, float(pps) / 10**6])
-
- self.dut.send_expect("ifconfig br1 down", "# ")
- self.dut.send_expect('brctl delbr "br1"', "# ", 30)
-
- for port in allow_list:
- self.dut.send_expect(
- "./usertools/dpdk-devbind.py -b igb_uio %s" % (port), "# "
- )
- self.result_table_print()
-
- def test_perf_routing(self):
- """
- Routing performance.
- """
-
- header = routing_perf_results_header
-
- for size in packet_sizes_routing:
- header.append("%d Mpps" % size)
-
- self.result_table_create(header)
-
- self.dut.send_expect("echo 1 > /proc/sys/net/ipv4/ip_forward", "# ")
-
- # Run the test steps
- for step in routing_performance_steps:
- self.extract_ports_cores_config(step["config"])
-
- resutls_row = [
- step["kthread_mode"],
- len(self.config["ports"]),
- self.stripped_config_param(),
- ]
-
- self.dut.kill_all()
- self.start_kni()
-
- # Set up the IP addresses, routes and arp entries of the virtual
- # interfaces
- virtual_interaces = {}
- ip_subnet = 0
- for port in self.config["port_details"]:
-
- port_number = port["port"]
-
- # Get the virtual interfaces base on the number of kernel
- # lcores
- port_virtual_interaces = []
- for i in range(len(port["kernel_cores"])):
- port_virtual_interaces.append(
- self.virtual_interface_name(port_number, i)
- )
-
- virtual_interaces[port_number] = port_virtual_interaces
-
- # Setup IP, ARP and route for each virtual interface
- for interface in range(len(virtual_interaces[port_number])):
- tx_port = self.tester.get_local_port(port_number)
-
- self.dut.send_expect(
- "ifconfig %s 192.170.%d.1"
- % (virtual_interaces[port_number][interface], ip_subnet),
- "# ",
- )
- self.dut.send_expect(
- "route add -net 192.170.%d.0 netmask 255.255.255.0 gw 192.170.%d.1"
- % (ip_subnet, ip_subnet),
- "# ",
- )
- self.dut.send_expect(
- "arp -s 192.170.%d.2 %s"
- % (ip_subnet, self.tester.get_mac(tx_port)),
- "# ",
- )
- ip_subnet += 1
-
- # Get performance for each frame size
- for packet_size in packet_sizes_routing:
- payload_size = packet_size - 38
- tgen_input = []
-
- # Test one port
- tx_port = self.tester.get_local_port(self.config["ports"][0])
- rx_mac = self.dut.get_mac_address(self.config["ports"][0])
-
- port_iterator = 0
- cnt = 0
- for port in self.config["port_details"]:
- port_number = port["port"]
-
- rx_mac = self.dut.get_mac_address(port_number)
- tx_port = self.tester.get_local_port(port_number)
-
- num_interfaces_per_port = len(virtual_interaces[port_number])
-
- # Set flows from and to virtual interfaces in the same port
- src_ip_subnet = port_iterator * num_interfaces_per_port
- for interface in range(len(virtual_interaces[port_number])):
- dst_ip_subnet = (src_ip_subnet + 1) % num_interfaces_per_port
- dst_ip_subnet += port_iterator * num_interfaces_per_port
- self.tester.scapy_append("flows = []")
- self.tester.scapy_append(
- 'flows.append(Ether(dst="%s")/IP(src="192.170.%d.2",dst="192.170.%d.2")/("X"*%d))'
- % (rx_mac, src_ip_subnet, dst_ip_subnet, payload_size)
- )
- src_ip_subnet += 1
- pcap = os.sep.join(
- [self.output_path, "routePerf_{0}.pcap".format(cnt)]
- )
- self.tester.scapy_append('wrpcap("%s",flows)' % pcap)
- self.tester.scapy_execute()
- tgen_input.append((tx_port, tx_port, pcap))
- cnt += 1
- time.sleep(1)
-
- # clear streams before add new streams
- self.tester.pktgen.clear_streams()
- # run packet generator
- streams = self.pktgen_helper.prepare_stream_from_tginput(
- tgen_input, 100, None, self.tester.pktgen
- )
- _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
-
- resutls_row.append(float(pps) / 10**6)
-
- self.result_table_add(resutls_row)
-
- self.result_table_print()
-
- def test_perf_routing_without_kni(self):
- """
- Routing performance without KNI.
- """
-
- header = routing_perf_no_kni_results_header
-
- for size in packet_sizes_routing:
- header.append("%d Mpps" % size)
-
- self.result_table_create(header)
-
- self.dut.kill_all()
- self.dut.send_expect("rmmod rte_kni", "# ", 20)
-
- self.dut.send_expect("systemctl stop NetworkManager.service", "# ")
-
- dut_ports = self.dut.get_ports(self.nic)
-
- allow_list = self.make_allow_list(self.target, self.nic)
- port_virtual_interaces = []
-
- for port in allow_list:
-
- # Enables the interfaces
- information = self.dut.send_expect(
- "./usertools/dpdk-devbind.py --status | grep '%s'" % port, "# "
- )
- data = information.split(" ")
- for field in data:
- if field.rfind("if=") != -1:
- interface_aux = field.replace("if=", "")
- port_virtual_interaces.append(interface_aux)
- self.dut.send_expect("ifconfig %s up" % interface_aux, "# ")
-
- self.dut.send_expect("echo 1 > /proc/sys/net/ipv4/ip_forward", "# ")
-
- for port in range(0, ports_without_kni):
- tx_port = self.tester.get_local_port(dut_ports[port])
- self.dut.send_expect(
- "ifconfig %s 192.170.%d.1 up"
- % (port_virtual_interaces[port], port + 100),
- "# ",
- )
- self.dut.send_expect(
- "route add -net 192.170.%d.0 netmask 255.255.255.0 gw 192.170.%d.1"
- % (port + 100, port + 100),
- "# ",
- )
- self.dut.send_expect(
- "arp -s 192.170.%d.2 %s" % (port + 100, self.tester.get_mac(tx_port)),
- "# ",
- )
-
- one_port_resutls_row = [1]
- two_port_resutls_row = [2]
- for packet_size in packet_sizes_routing:
-
- payload_size = packet_size - 38
- tgen_input = []
-
- # Prepare test with 1 port
- tx_port = self.tester.get_local_port(dut_ports[0])
- rx_mac = self.dut.get_mac_address(dut_ports[0])
- self.tester.scapy_append("flows = []")
- self.tester.scapy_append(
- 'flows.append(Ether(dst="%s")/IP(src="192.170.100.2",dst="192.170.100.2")/("X"*%d))'
- % (rx_mac, payload_size)
- )
- pcap = os.sep.join([self.output_path, "routePerf_1.pcap"])
- self.tester.scapy_append('wrpcap("%s",flows)' % pcap)
- self.tester.scapy_execute()
-
- tgen_input = []
- tgen_input.append((tx_port, tx_port, pcap))
-
- # Get throughput with 1 port
-
- # clear streams before add new streams
- self.tester.pktgen.clear_streams()
- # run packet generator
- streams = self.pktgen_helper.prepare_stream_from_tginput(
- tgen_input, 100, None, self.tester.pktgen
- )
- _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
-
- one_port_resutls_row.append(float(pps) / 10**6)
- self.result_table_add(one_port_resutls_row)
-
- # Prepare test with 'ports_without_kni' ports
- cnt = 0
- for port in range(ports_without_kni):
- rx_mac = self.dut.get_mac_address(dut_ports[port])
- tx_port = self.tester.get_local_port(dut_ports[port])
- self.tester.scapy_append("flows = []")
- self.tester.scapy_append(
- 'flows.append(Ether(dst="%s")/IP(src="192.170.%d.2",dst="192.170.%d.2")/("X"*%d))'
- % (
- rx_mac,
- 100 + port,
- 100 + (port + 1) % ports_without_kni,
- payload_size,
- )
- )
- pcap = os.sep.join(
- [
- self.output_path,
- "routePerf_{0}_{1}.pcap".format(ports_without_kni, cnt),
- ]
- )
- tgen_input.append((tx_port, tx_port, pcap))
- self.tester.scapy_append('wrpcap("%s",flows)' % pcap)
- self.tester.scapy_execute()
- cnt += 1
-
- # Get throughput with 'ports_without_kni' ports
- # clear streams before add new streams
- self.tester.pktgen.clear_streams()
- # run packet generator
- streams = self.pktgen_helper.prepare_stream_from_tginput(
- tgen_input, 100, None, self.tester.pktgen
- )
- _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
-
- two_port_resutls_row.append(float(pps) / 10**6)
- self.result_table_add(two_port_resutls_row)
-
- self.result_table_print()
-
- for port in allow_list:
- self.dut.send_expect(
- "./usertools/dpdk-devbind.py -b %s %s" % (self.drivername, port), "# "
- )
-
def tear_down(self):
"""
Run after each test case.
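The performance cases above derive the scapy payload length as ``size - 38``; the constant is the fixed per-frame overhead of the generated Ether/IP packets. A quick sketch of the arithmetic, assuming the standard 14-byte Ethernet header, 20-byte IPv4 header, and 4-byte frame check sequence:

```python
ETHER_HDR = 14  # dst MAC + src MAC + ethertype
IPV4_HDR = 20   # IPv4 header without options
FCS = 4         # Ethernet frame check sequence

OVERHEAD = ETHER_HDR + IPV4_HDR + FCS  # 38 bytes, matching payload_size = size - 38

def payload_size(frame_size):
    """Payload bytes to generate for a given wire frame size."""
    return frame_size - OVERHEAD

print(payload_size(64), payload_size(256))  # → 26 218
```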
diff --git a/tests/TestSuite_perf_kni.py b/tests/TestSuite_perf_kni.py
new file mode 100644
index 00000000..52ee5a85
--- /dev/null
+++ b/tests/TestSuite_perf_kni.py
@@ -0,0 +1,1191 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2019 Intel Corporation
+#
+
+"""
+DPDK Test suite.
+
+Test Kernel NIC Interface.
+"""
+
+import os
+import re
+import time
+from random import randint
+
+import framework.packet as packet
+import framework.utils as utils
+from framework.pktgen import PacketGeneratorHelper
+from framework.test_case import TestCase
+
+dut_ports = []
+port_virtual_interaces = []
+
+ports_without_kni = 2
+flows_without_kni = 2
+
+packet_sizes_loopback = [64, 256]
+packet_sizes_routing = [64, 256]
+
+ports_cores_template = (
+ "\(P([0123]),(C\{\d.\d.\d\}),(C\{\d.\d.\d\}),(C\{\d.\d.\d\}),?(C\{\d.\d.\d\})?\),?"
+)
+
+default_1_port_cores_config = "(P0,C{1.0.0},C{1.1.0},C{1.2.0})"
+default_2_port_cores_config = (
+ "(P0,C{1.0.0},C{1.1.0},C{1.2.0}),(P1,C{1.3.0},C{1.4.0},C{1.5.0})"
+)
+
+stress_test_iterations = 50
+stress_test_random_iterations = 50
+
+routing_performance_steps = [
+ {"kthread_mode": "single", "config": "(P0,C{1.0.0},C{1.1.0},C{1.0.1})"},
+ {"kthread_mode": "single", "config": "(P0,C{1.0.0},C{1.0.1},C{1.1.0})"},
+ {"kthread_mode": "single", "config": "(P0,C{1.0.0},C{1.1.0},C{1.2.0})"},
+ {
+ "kthread_mode": "single",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.0.1}),(P1,C{1.1.0},C{1.3.0},C{1.0.1})",
+ },
+ {
+ "kthread_mode": "single",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.4.0}),(P1,C{1.1.0},C{1.3.0},C{1.4.0})",
+ },
+ {
+ "kthread_mode": "single",
+ "config": "(P0,C{1.0.0},C{1.0.1},C{1.2.0}),(P1,C{1.1.0},C{1.1.1},C{1.2.0})",
+ },
+ {
+ "kthread_mode": "single",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.0.1}),(P1,C{1.1.0},C{1.3.0},C{1.0.1})",
+ },
+ {
+ "kthread_mode": "single",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.4.0}),(P1,C{1.1.0},C{1.3.0},C{1.4.0})",
+ },
+ {
+ "kthread_mode": "single",
+ "config": "(P0,C{1.0.0},C{1.0.1},C{1.2.0}),(P1,C{1.1.0},C{1.1.1},C{1.2.0})",
+ },
+ {
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.0.1}),(P1,C{1.1.0},C{1.3.0},C{1.1.1})",
+ },
+ {
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.0.1},C{1.2.0}),(P1,C{1.1.0},C{1.1.1},C{1.3.0})",
+ },
+ {
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.4.0}),(P1,C{1.1.0},C{1.3.0},C{1.5.0})",
+ },
+ {
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.0.1}),(P1,C{1.1.0},C{1.3.0},C{1.1.1})",
+ },
+ {
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.0.1},C{1.2.0}),(P1,C{1.1.0},C{1.1.1},C{1.3.0})",
+ },
+ {
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.4.0}),(P1,C{1.1.0},C{1.3.0},C{1.5.0})",
+ },
+ {
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.0.1},C{1.2.1}),(P1,C{1.1.0},C{1.3.0},C{1.1.1},C{1.2.1})",
+ },
+ {
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.0.1},C{1.4.0}),(P1,C{1.1.0},C{1.3.0},C{1.1.1},C{1.5.0})",
+ },
+ {
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.4.0},C{1.6.0}),(P1,C{1.1.0},C{1.3.0},C{1.5.0},C{1.7.0})",
+ },
+]
+
+bridge_performance_steps = [
+ {
+ "flows": 1,
+ "kthread_mode": "single",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.0.1}),(P1,C{1.1.0},C{1.3.0},C{1.0.1})",
+ },
+ {
+ "flows": 1,
+ "kthread_mode": "single",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.4.0}),(P1,C{1.1.0},C{1.3.0},C{1.4.0})",
+ },
+ {
+ "flows": 1,
+ "kthread_mode": "single",
+ "config": "(P0,C{1.0.0},C{1.0.1},C{1.2.0}),(P1,C{1.1.0},C{1.1.1},C{1.2.0})",
+ },
+ {
+ "flows": 2,
+ "kthread_mode": "single",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.0.1}),(P1,C{1.1.0},C{1.3.0},C{1.0.1})",
+ },
+ {
+ "flows": 2,
+ "kthread_mode": "single",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.4.0}),(P1,C{1.1.0},C{1.3.0},C{1.4.0})",
+ },
+ {
+ "flows": 2,
+ "kthread_mode": "single",
+ "config": "(P0,C{1.0.0},C{1.0.1},C{1.2.0}),(P1,C{1.1.0},C{1.1.1},C{1.2.0})",
+ },
+ {
+ "flows": 1,
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.0.1}),(P1,C{1.1.0},C{1.3.0},C{1.1.1})",
+ },
+ {
+ "flows": 1,
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.0.1},C{1.2.0}),(P1,C{1.1.0},C{1.1.1},C{1.3.0})",
+ },
+ {
+ "flows": 1,
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.4.0}),(P1,C{1.1.0},C{1.3.0},C{1.5.0})",
+ },
+ {
+ "flows": 2,
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.0.1}),(P1,C{1.1.0},C{1.3.0},C{1.1.1})",
+ },
+ {
+ "flows": 2,
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.0.1},C{1.2.0}),(P1,C{1.1.0},C{1.1.1},C{1.3.0})",
+ },
+ {
+ "flows": 2,
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.4.0}),(P1,C{1.1.0},C{1.3.0},C{1.5.0})",
+ },
+ {
+ "flows": 1,
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.0.1},C{1.2.1}),(P1,C{1.1.0},C{1.3.0},C{1.1.1},C{1.2.1})",
+ },
+ {
+ "flows": 1,
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.0.1},C{1.4.0}),(P1,C{1.1.0},C{1.3.0},C{1.1.1},C{1.5.0})",
+ },
+ {
+ "flows": 1,
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.4.0},C{1.6.0}),(P1,C{1.1.0},C{1.3.0},C{1.5.0},C{1.7.0})",
+ },
+ {
+ "flows": 2,
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.0.1},C{1.2.1}),(P1,C{1.1.0},C{1.3.0},C{1.1.1},C{1.2.1})",
+ },
+ {
+ "flows": 2,
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.0.1},C{1.4.0}),(P1,C{1.1.0},C{1.3.0},C{1.1.1},C{1.5.0})",
+ },
+ {
+ "flows": 2,
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.4.0},C{1.6.0}),(P1,C{1.1.0},C{1.3.0},C{1.5.0},C{1.7.0})",
+ },
+]
+
+loopback_performance_steps = [
+ {
+ "lo_mode": "lo_mode_fifo",
+ "kthread_mode": "single",
+ "config": "(P0,C{1.0.0},C{1.1.0},C{1.0.1})",
+ },
+ {
+ "lo_mode": "lo_mode_fifo",
+ "kthread_mode": "single",
+ "config": "(P0,C{1.0.0},C{1.0.1},C{1.1.0})",
+ },
+ {
+ "lo_mode": "lo_mode_fifo",
+ "kthread_mode": "single",
+ "config": "(P0,C{1.0.0},C{1.1.0},C{1.2.0})",
+ },
+ {
+ "lo_mode": "lo_mode_fifo",
+ "kthread_mode": "single",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.0.1}),(P1,C{1.1.0},C{1.3.0},C{1.0.1})",
+ },
+ {
+ "lo_mode": "lo_mode_fifo",
+ "kthread_mode": "single",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.4.0}),(P1,C{1.1.0},C{1.3.0},C{1.4.0})",
+ },
+ {
+ "lo_mode": "lo_mode_fifo",
+ "kthread_mode": "single",
+ "config": "(P0,C{1.0.0},C{1.0.1},C{1.2.0}),(P1,C{1.1.0},C{1.1.1},C{1.2.0})",
+ },
+ {
+ "lo_mode": "lo_mode_fifo",
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.1.0},C{1.0.1})",
+ },
+ {
+ "lo_mode": "lo_mode_fifo",
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.0.1},C{1.1.0})",
+ },
+ {
+ "lo_mode": "lo_mode_fifo",
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.1.0},C{1.2.0})",
+ },
+ {
+ "lo_mode": "lo_mode_fifo",
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.0.1}),(P1,C{1.1.0},C{1.3.0},C{1.1.1})",
+ },
+ {
+ "lo_mode": "lo_mode_fifo",
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.0.1},C{1.2.0}),(P1,C{1.1.0},C{1.1.1},C{1.3.0})",
+ },
+ {
+ "lo_mode": "lo_mode_fifo",
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.4.0}),(P1,C{1.1.0},C{1.3.0},C{1.5.0})",
+ },
+ {
+ "lo_mode": "lo_mode_fifo",
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.0.1},C{1.2.1}),(P1,C{1.1.0},C{1.3.0},C{1.1.1},C{1.3.1})",
+ },
+ {
+ "lo_mode": "lo_mode_fifo",
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.0.1},C{1.4.0}),(P1,C{1.1.0},C{1.3.0},C{1.1.1},C{1.5.0})",
+ },
+ {
+ "lo_mode": "lo_mode_fifo",
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.4.0},C{1.6.0}),(P1,C{1.1.0},C{1.3.0},C{1.5.0},C{1.7.0})",
+ },
+ {
+ "lo_mode": "lo_mode_fifo_skb",
+ "kthread_mode": "single",
+ "config": "(P0,C{1.0.0},C{1.1.0},C{1.0.1})",
+ },
+ {
+ "lo_mode": "lo_mode_fifo_skb",
+ "kthread_mode": "single",
+ "config": "(P0,C{1.0.0},C{1.0.1},C{1.1.0})",
+ },
+ {
+ "lo_mode": "lo_mode_fifo_skb",
+ "kthread_mode": "single",
+ "config": "(P0,C{1.0.0},C{1.1.0},C{1.2.0})",
+ },
+ {
+ "lo_mode": "lo_mode_fifo_skb",
+ "kthread_mode": "single",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.0.1}),(P1,C{1.1.0},C{1.3.0},C{1.0.1})",
+ },
+ {
+ "lo_mode": "lo_mode_fifo_skb",
+ "kthread_mode": "single",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.4.0}),(P1,C{1.1.0},C{1.3.0},C{1.4.0})",
+ },
+ {
+ "lo_mode": "lo_mode_fifo_skb",
+ "kthread_mode": "single",
+ "config": "(P0,C{1.0.0},C{1.0.1},C{1.2.0}),(P1,C{1.1.0},C{1.1.1},C{1.2.0})",
+ },
+ {
+ "lo_mode": "lo_mode_fifo_skb",
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.1.0},C{1.0.1})",
+ },
+ {
+ "lo_mode": "lo_mode_fifo_skb",
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.0.1},C{1.1.0})",
+ },
+ {
+ "lo_mode": "lo_mode_fifo_skb",
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.1.0},C{1.2.0})",
+ },
+ {
+ "lo_mode": "lo_mode_fifo_skb",
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.0.1}),(P1,C{1.1.0},C{1.3.0},C{1.1.1})",
+ },
+ {
+ "lo_mode": "lo_mode_fifo_skb",
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.0.1},C{1.2.0}),(P1,C{1.1.0},C{1.1.1},C{1.3.0})",
+ },
+ {
+ "lo_mode": "lo_mode_fifo_skb",
+ "kthread_mode": "multiple",
+ "config": "(P0,C{1.0.0},C{1.2.0},C{1.4.0}),(P1,C{1.1.0},C{1.3.0},C{1.5.0})",
+ },
+]
+
+loopback_perf_results_header = ["lo_mode", "kthread_mode", "Ports", "Config"]
+bridge_perf_results_header = ["kthread_mode", "Flows", "Config", "64 Mpps"]
+bridge_perf_no_kni_results_header = ["Flows", "64 Mpps"]
+routing_perf_results_header = ["kthread_mode", "Ports", "Config"]
+routing_perf_no_kni_results_header = ["Ports"]
+
+stress_modes_output = [
+ {
+ "lo_mode": None,
+ "kthread_mode": None,
+ "output": "loopback disabled.*DPDK kni module loaded.*Single kernel thread",
+ },
+ {
+ "lo_mode": "lo_mode_none",
+ "kthread_mode": None,
+ "output": "loopback disabled.*DPDK kni module loaded.*Single kernel thread",
+ },
+ {
+ "lo_mode": "lo_mode_fifo",
+ "kthread_mode": None,
+ "output": "loopback mode=lo_mode_fifo enabled.*Single kernel thread",
+ },
+ {
+ "lo_mode": "lo_mode_fifo_skb",
+ "kthread_mode": None,
+ "output": "loopback mode=lo_mode_fifo_skb enabled.*Single kernel thread",
+ },
+ {
+ "lo_mode": "lo_mode_random",
+ "kthread_mode": None,
+ "output": "Incognizant parameter, loopback disabled.*DPDK kni module loaded.*Single kernel thread",
+ },
+ {
+ "lo_mode": None,
+ "kthread_mode": "single",
+ "output": "loopback disabled.*DPDK kni module loaded.*Single kernel thread",
+ },
+ {
+ "lo_mode": None,
+ "kthread_mode": "multiple",
+ "output": "loopback disabled.*DPDK kni module loaded.*Multiple kernel thread",
+ },
+ {
+ "lo_mode": None,
+ "kthread_mode": "singlemulti",
+ "output": "KNI.* Invalid parameter for kthread_mode",
+ },
+ {
+ "lo_mode": "lo_mode_fifo",
+ "kthread_mode": "multiple",
+ "output": "loopback mode=lo_mode_fifo enabled.*Multiple kernel thread",
+ },
+ {
+ "lo_mode": "lo_mode_fifo_skb",
+ "kthread_mode": "multiple",
+ "output": "loopback mode=lo_mode_fifo_skb enabled.*Multiple kernel thread",
+ },
+ {
+ "lo_mode": "lo_mode_fifo_skb",
+ "kthread_mode": "singlemulti",
+ "output": "Invalid parameter for kthread_mode",
+ },
+ {
+ "lo_mode": "lo_mode_random",
+ "kthread_mode": "multiple",
+ "output": "KNI.* Incognizant parameter, loopback disabled",
+ },
+]
+
+#
+#
+# Test class.
+#
+
+
+class TestKni(TestCase):
+
+ #
+ #
+ # Utility methods and other non-test code.
+ #
+ # Insert or move non-test functions here.
+
+ def set_up_all(self):
+ """
+ Run at the start of each test suite.
+
+ KNI Prerequisites
+ """
+ out = self.dut.send_expect("which brctl", "# ")
+ self.verify(
+ "no brctl" not in out,
+ "The linux tool brctl is needed to run this test suite",
+ )
+
+ out = self.dut.build_dpdk_apps("./examples/kni")
+ self.app_kni_path = self.dut.apps_name["kni"]
+ self.verify("Error" not in out, "Compilation failed")
+ p0_pci = self.dut.ports_info[0]["pci"]
+ numa_node = int(
+ self.dut.send_expect(
+ "cat /sys/bus/pci/devices/%s/numa_node" % p0_pci, "# ", 30
+ )
+ )
+ socket_id = numa_node if numa_node > 0 else 0
+ if socket_id == 0:
+ global default_1_port_cores_config
+ global default_2_port_cores_config
+ global routing_performance_steps
+ global bridge_performance_steps
+ global loopback_performance_steps
+
+ default_1_port_cores_config = default_1_port_cores_config.replace(
+ "C{1.", "C{0."
+ )
+            default_2_port_cores_config = default_2_port_cores_config.replace(
+                "C{1.", "C{0."
+            )
+ for i in range(len(routing_performance_steps)):
+ routing_performance_steps[i]["config"] = routing_performance_steps[i][
+ "config"
+ ].replace("C{1.", "C{0.")
+ for j in range(len(bridge_performance_steps)):
+ bridge_performance_steps[j]["config"] = bridge_performance_steps[j][
+ "config"
+ ].replace("C{1.", "C{0.")
+ for k in range(len(loopback_performance_steps)):
+ loopback_performance_steps[k]["config"] = loopback_performance_steps[k][
+ "config"
+ ].replace("C{1.", "C{0.")
+
+ self.extract_ports_cores_config(default_1_port_cores_config)
+ out = self.start_kni()
+ self.verify("Error" not in out, "Error found during kni start")
+ out = self.dut.send_expect("cat /etc/os-release", "# ")
+ if "Ubuntu" in out:
+ self.dut.send_expect("ufw disable", "# ")
+ else:
+ self.dut.send_expect("service iptables stop", "# ")
+ self.dut.send_expect("service firewalld stop", "# ")
+
+ # get dts output path
+ if self.logger.log_path.startswith(os.sep):
+ self.output_path = self.logger.log_path
+ else:
+ cur_path = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
+ self.output_path = os.sep.join([cur_path, self.logger.log_path])
+ # create an instance to set stream field setting
+ self.pktgen_helper = PacketGeneratorHelper()
+ # enable dut ipv6
+ self.dut.enable_ipv6("all")
+
+ def set_up(self):
+ """
+ Run before each test case.
+ """
+ pass
+
+ def start_kni(self, lo_mode=None, kthread_mode=None):
+ """
+        Insert the igb_uio and rte_kni modules with the given parameters and
+        launch the kni application.
+ """
+ module_param = ""
+ if lo_mode is not None:
+ module_param += "lo_mode=%s " % lo_mode
+
+ if kthread_mode is not None:
+ module_param += "kthread_mode=%s" % kthread_mode
+ self.dut.kill_all()
+ out = self.dut.send_expect("rmmod rte_kni", "# ", 10)
+ self.verify("in use" not in out, "Error unloading KNI module: " + out)
+ if self.drivername == "igb_uio":
+ self.dut.send_expect("rmmod igb_uio", "# ", 5)
+ self.dut.send_expect(
+ "insmod ./%s/kmod/igb_uio.ko" % (self.target), "# ", 20
+ )
+ self.dut.bind_interfaces_linux(self.drivername)
+ out = self.dut.send_expect(
+ "insmod ./%s/kmod/rte_kni.ko %s" % (self.target, module_param), "# ", 10
+ )
+
+ self.verify("Error" not in out, "Error loading KNI module: " + out)
+
+ port_mask = utils.create_mask(self.config["ports"])
+ core_mask = utils.create_mask(
+ self.config["rx_cores"]
+ + self.config["tx_cores"]
+ + self.config["kernel_cores"]
+ )
+
+ config_param = self.build_config_param()
+
+ eal_para = self.dut.create_eal_parameters(
+ cores=self.config["rx_cores"]
+ + self.config["tx_cores"]
+ + self.config["kernel_cores"]
+ )
+ out_kni = self.dut.send_expect(
+ "./%s %s -- -P -p %s %s -m &"
+ % (self.app_kni_path, eal_para, port_mask, config_param),
+ "Link [Uu]p",
+ 20,
+ )
+
+ time.sleep(5)
+ if kthread_mode == "single":
+ kthread_mask = utils.create_mask(self.config["kernel_cores"])
+ out = self.dut.send_expect(
+ "taskset -p `pgrep -fl kni_single | awk '{print $1}'`", "#"
+ )
+ self.verify("current affinity mask" in out, "Unable to set core affinity")
+
+ return out_kni
+
+ def extract_ports_cores_config(self, ports_cores_config):
+ """
+ Parses a ports/cores configuration string into the 'self.config' variable.
+ """
+ ports_cores_pattern = re.compile(ports_cores_template)
+ port_configs = ports_cores_pattern.findall(ports_cores_config)
+ dut_ports = self.dut.get_ports(self.nic)
+
+ config = {}
+ ports = []
+ rx_cores = []
+ tx_cores = []
+ k_cores = []
+ port_details = []
+ for port_config in port_configs:
+ details = {}
+
+ port_number = int(port_config[0])
+ self.verify(port_number < len(dut_ports), "Not enough ports available")
+
+ ports.append(dut_ports[port_number])
+ details["port"] = dut_ports[port_number]
+ rx_cores.append(self.dut.get_lcore_id(port_config[1]))
+ details["rx_core"] = self.dut.get_lcore_id(port_config[1])
+ tx_cores.append(self.dut.get_lcore_id(port_config[2]))
+ details["tx_core"] = self.dut.get_lcore_id(port_config[2])
+
+ details["kernel_cores"] = []
+ for k_core in port_config[3:]:
+ if k_core != "":
+ k_cores.append(self.dut.get_lcore_id(k_core))
+ details["kernel_cores"].append(self.dut.get_lcore_id(k_core))
+
+ port_details.append(details)
+
+ config["ports"] = ports
+ config["rx_cores"] = rx_cores
+ config["tx_cores"] = tx_cores
+ config["kernel_cores"] = k_cores
+ config["port_details"] = port_details
+
+ self.config = config
+
+ def build_config_param(self):
+ """
+        Formats the '--config="(xxx)"' parameter for kni application calls.
+ """
+ config_param = '--config="%s'
+ port_cores = "(%s,%s,%s)"
+ port_cores_kernel = "(%s,%s,%s,"
+
+ for port in self.config["port_details"]:
+
+ # Kernel threads specified
+ if len(port["kernel_cores"]) > 0:
+
+ port_config = port_cores_kernel % (
+ port["port"],
+ port["rx_core"],
+ port["tx_core"],
+ )
+
+ for core in port["kernel_cores"]:
+ port_config += str(core) + ","
+
+ port_config = port_config[:-1] + ")"
+
+ # No kernel threads specified
+ else:
+ port_config = port_cores % (
+ port["port"],
+ port["rx_core"],
+ port["tx_core"],
+ )
+
+ config_param = config_param % port_config + ",%s"
+
+ config_param = config_param.replace(",%s", '"')
+
+ return config_param
+
+ def stripped_config_param(self):
+ """
+ Removes the '--config=' prefix from the config string.
+ Used for reporting.
+ """
+ config_param = self.build_config_param()
+ config_param = config_param.replace('--config="', "")
+ config_param = config_param.replace('"', "")
+ return config_param
+
+ def virtual_interface_name(self, port, sub_port=0):
+ """
+        Given a port and sub-port index, formats the virtual interface name.
+ """
+ return "vEth%d_%d" % (port, sub_port)
+
+ def dut_physical_cores(self):
+ """
+        Returns the number of physical cores in the first socket.
+ """
+ dut_cores = self.dut.get_all_cores()
+
+ first_core = dut_cores[0]
+ cores = []
+
+ for core in dut_cores[1:]:
+ if core["core"] not in cores and core["socket"] == first_core["socket"]:
+ cores.append(core["core"])
+
+ return len(cores)
+
+ def make_allow_list(self, target, nic):
+ """
+        Create an allow list containing the PCI addresses of the DUT ports.
+ """
+ allow_list = []
+ dut_ports = self.dut.get_ports(self.nic)
+ self.dut.restore_interfaces()
+ allPort = self.dut.ports_info
+ if self.drivername in ["igb_uio"]:
+ self.dut.send_expect("insmod ./" + self.target + "/kmod/igb_uio.ko", "#")
+ for port in range(0, len(allPort)):
+ if port in dut_ports:
+ allow_list.append(allPort[port]["pci"])
+ return allow_list
+
+ #
+ #
+ #
+ # Test cases.
+ #
+
+ def test_perf_loopback(self):
+ """
+ KNI loopback performance
+ """
+ self.dut.kill_all()
+
+ header = loopback_perf_results_header
+ for size in packet_sizes_loopback:
+ header.append("%d (pps)" % size)
+
+ self.result_table_create(header)
+
+ # Execute the permutations of the test
+ for step in loopback_performance_steps:
+
+ self.extract_ports_cores_config(step["config"])
+
+ total_cores = len(
+ self.config["tx_cores"]
+ + self.config["rx_cores"]
+ + self.config["kernel_cores"]
+ )
+ if total_cores > self.dut_physical_cores():
+ self.logger.info(
+ "Skipping step %s (%d cores needed, got %d)"
+ % (step["config"], total_cores, self.dut_physical_cores())
+ )
+ continue
+
+ self.start_kni(lo_mode=step["lo_mode"], kthread_mode=step["kthread_mode"])
+
+ pps_results = []
+
+ for size in packet_sizes_loopback:
+
+                payload_size = size - 38  # 38 bytes = Ether(14) + IP(20) + FCS(4) overhead
+
+ ports_number = len(self.config["ports"])
+
+ # Set up the flows for the ports
+ tgen_input = []
+ for port in self.config["ports"]:
+
+ rx_mac = self.dut.get_mac_address(port)
+ tx_port = self.tester.get_local_port(port)
+ self.tester.scapy_append('dstmac = "%s"' % rx_mac)
+ self.tester.scapy_append(
+ 'flows = [Ether(dst=dstmac)/IP()/("X"*%d)]' % payload_size
+ )
+ pcap = os.sep.join(
+ [self.output_path, "tester{0}.pcap".format(tx_port)]
+ )
+ self.tester.scapy_append('wrpcap("%s",flows)' % pcap)
+ self.tester.scapy_execute()
+ tgen_input.append((tx_port, tx_port, pcap))
+
+ time.sleep(1)
+
+                # clear streams before adding new streams
+ self.tester.pktgen.clear_streams()
+ # run packet generator
+ streams = self.pktgen_helper.prepare_stream_from_tginput(
+ tgen_input, 100, None, self.tester.pktgen
+ )
+ _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
+
+ pps_results.append(float(pps) / 1000000)
+
+ ports_number = len(self.config["ports"])
+ results_row = [
+ step["lo_mode"],
+ step["kthread_mode"],
+ ports_number,
+ self.stripped_config_param(),
+ ] + pps_results
+ self.result_table_add(results_row)
+
+ self.dut.kill_all()
+
+ self.result_table_print()
+
+ def test_perf_bridge(self):
+ """
+ KNI performance bridge mode.
+ """
+ self.result_table_create(bridge_perf_results_header)
+
+ self.tester.scapy_append('srcmac="00:00:00:00:00:01"')
+ pcap = os.sep.join([self.output_path, "kni.pcap"])
+ self.tester.scapy_append(
+ 'wrpcap("%s", [Ether(src=srcmac, dst="ff:ff:ff:ff:ff:ff")/IP(len=46)/UDP()/("X"*18)])'
+ % pcap
+ )
+ self.tester.scapy_execute()
+
+ for step in bridge_performance_steps:
+
+ self.extract_ports_cores_config(step["config"])
+
+ total_cores = len(
+ self.config["tx_cores"]
+ + self.config["rx_cores"]
+ + self.config["kernel_cores"]
+ )
+ if total_cores > self.dut_physical_cores():
+ self.logger.info(
+ "Skipping step %s (%d cores needed, got %d)"
+ % (step["config"], total_cores, self.dut_physical_cores())
+ )
+ continue
+
+ port_virtual_interaces = []
+ for port in self.config["port_details"]:
+ for i in range(len(port["kernel_cores"])):
+ port_virtual_interaces.append(
+ self.virtual_interface_name(port["port"], i)
+ )
+
+ self.dut.kill_all()
+ self.start_kni(lo_mode=None, kthread_mode=step["kthread_mode"])
+
+ for virtual_interace in port_virtual_interaces:
+ out = self.dut.send_expect("ifconfig %s up" % virtual_interace, "# ")
+ self.verify("ERROR" not in out, "Virtual interface not found")
+
+ self.dut.send_expect('brctl addbr "br_kni"', "# ")
+
+ for virtual_interace in port_virtual_interaces:
+ out = self.dut.send_expect(
+ "brctl addif br_kni %s" % virtual_interace, "# "
+ )
+ self.verify("ERROR" not in out, "Device not found")
+
+ self.dut.send_expect("ifconfig br_kni up", "# ")
+
+ tx_port = self.tester.get_local_port(self.config["ports"][0])
+ rx_port = self.tester.get_local_port(self.config["ports"][1])
+
+ tgenInput = []
+ tgenInput.append((tx_port, rx_port, pcap))
+
+ if step["flows"] == 2:
+ tgenInput.append((rx_port, tx_port, pcap))
+ self.verify(self.dut.is_interface_up(intf="br_kni"), "br_kni should be up")
+
+            # clear streams before adding new streams
+ self.tester.pktgen.clear_streams()
+ # run packet generator
+ streams = self.pktgen_helper.prepare_stream_from_tginput(
+ tgenInput, 100, None, self.tester.pktgen
+ )
+ _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
+ step["pps"] = float(pps) / 10**6
+
+ results_row = [
+ step["kthread_mode"],
+ step["flows"],
+ self.stripped_config_param(),
+ (float(pps) / 10**6),
+ ]
+
+ self.result_table_add(results_row)
+
+ self.dut.send_expect("ifconfig br_kni down", "# ")
+ self.dut.send_expect('brctl delbr "br_kni"', "# ", 10)
+
+ self.result_table_print()
+
+ def test_perf_bridge_without_kni(self):
+ """
+ Bridge mode performance without KNI.
+ """
+ self.result_table_create(bridge_perf_no_kni_results_header)
+
+ self.dut.kill_all()
+
+ dut_ports = self.dut.get_ports(self.nic)
+
+ self.tester.scapy_append('srcmac="00:00:00:00:00:01"')
+ pcap = os.sep.join([self.output_path, "kni.pcap"])
+ self.tester.scapy_append(
+ 'wrpcap("%s", [Ether(src=srcmac, dst="ff:ff:ff:ff:ff:ff")/IP(len=46)/UDP()/("X"*18)])'
+ % pcap
+ )
+ self.tester.scapy_execute()
+
+ allow_list = self.make_allow_list(self.target, self.nic)
+ port_virtual_interaces = []
+ for port in allow_list:
+ information = self.dut.send_expect(
+ "./usertools/dpdk-devbind.py --status | grep '%s'" % port, "# "
+ )
+ data = information.split(" ")
+ for field in data:
+ if field.rfind("if=") != -1:
+ port_virtual_interaces.append(field.replace("if=", ""))
+
+ self.dut.send_expect("ifconfig %s up" % port_virtual_interaces[0], "# ")
+ self.dut.send_expect("ifconfig %s up" % port_virtual_interaces[1], "# ")
+ self.dut.send_expect('brctl addbr "br1"', "# ")
+ self.dut.send_expect("brctl addif br1 %s" % port_virtual_interaces[0], "# ")
+ self.dut.send_expect("brctl addif br1 %s" % port_virtual_interaces[1], "# ")
+ self.dut.send_expect("ifconfig br1 up", "# ")
+ time.sleep(3)
+
+ tx_port = self.tester.get_local_port(dut_ports[0])
+ rx_port = self.tester.get_local_port(dut_ports[1])
+
+ for flows in range(1, flows_without_kni + 1):
+ tgen_input = []
+ tgen_input.append((tx_port, rx_port, pcap))
+
+ if flows == 2:
+ tgen_input.append((rx_port, tx_port, pcap))
+
+            # clear streams before adding new streams
+ self.tester.pktgen.clear_streams()
+ # run packet generator
+ streams = self.pktgen_helper.prepare_stream_from_tginput(
+ tgen_input, 100, None, self.tester.pktgen
+ )
+ _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
+
+ self.result_table_add([flows, float(pps) / 10**6])
+
+ self.dut.send_expect("ifconfig br1 down", "# ")
+ self.dut.send_expect('brctl delbr "br1"', "# ", 30)
+
+ for port in allow_list:
+ self.dut.send_expect(
+ "./usertools/dpdk-devbind.py -b igb_uio %s" % (port), "# "
+ )
+ self.result_table_print()
+
+ def test_perf_routing(self):
+ """
+ Routing performance.
+ """
+
+ header = routing_perf_results_header
+
+ for size in packet_sizes_routing:
+ header.append("%d Mpps" % size)
+
+ self.result_table_create(header)
+
+ self.dut.send_expect("echo 1 > /proc/sys/net/ipv4/ip_forward", "# ")
+
+ # Run the test steps
+ for step in routing_performance_steps:
+ self.extract_ports_cores_config(step["config"])
+
+ resutls_row = [
+ step["kthread_mode"],
+ len(self.config["ports"]),
+ self.stripped_config_param(),
+ ]
+
+ self.dut.kill_all()
+ self.start_kni()
+
+            # Set up the IP addresses, routes and ARP entries of the virtual
+            # interfaces
+ virtual_interaces = {}
+ ip_subnet = 0
+ for port in self.config["port_details"]:
+
+ port_number = port["port"]
+
+                # Get the virtual interfaces based on the number of kernel
+                # lcores
+ port_virtual_interaces = []
+ for i in range(len(port["kernel_cores"])):
+ port_virtual_interaces.append(
+ self.virtual_interface_name(port_number, i)
+ )
+
+ virtual_interaces[port_number] = port_virtual_interaces
+
+                # Set up IP, ARP and route entries for each virtual interface
+ for interface in range(len(virtual_interaces[port_number])):
+ tx_port = self.tester.get_local_port(port_number)
+
+ self.dut.send_expect(
+ "ifconfig %s 192.170.%d.1"
+ % (virtual_interaces[port_number][interface], ip_subnet),
+ "# ",
+ )
+ self.dut.send_expect(
+ "route add -net 192.170.%d.0 netmask 255.255.255.0 gw 192.170.%d.1"
+ % (ip_subnet, ip_subnet),
+ "# ",
+ )
+ self.dut.send_expect(
+ "arp -s 192.170.%d.2 %s"
+ % (ip_subnet, self.tester.get_mac(tx_port)),
+ "# ",
+ )
+ ip_subnet += 1
+
+ # Get performance for each frame size
+ for packet_size in packet_sizes_routing:
+                payload_size = packet_size - 38  # 38 bytes = Ether(14) + IP(20) + FCS(4) overhead
+ tgen_input = []
+
+ # Test one port
+ tx_port = self.tester.get_local_port(self.config["ports"][0])
+ rx_mac = self.dut.get_mac_address(self.config["ports"][0])
+
+ port_iterator = 0
+ cnt = 0
+ for port in self.config["port_details"]:
+ port_number = port["port"]
+
+ rx_mac = self.dut.get_mac_address(port_number)
+ tx_port = self.tester.get_local_port(port_number)
+
+ num_interfaces_per_port = len(virtual_interaces[port_number])
+
+                    # Set up flows between virtual interfaces on the same port
+ src_ip_subnet = port_iterator * num_interfaces_per_port
+ for interface in range(len(virtual_interaces[port_number])):
+ dst_ip_subnet = (src_ip_subnet + 1) % num_interfaces_per_port
+ dst_ip_subnet += port_iterator * num_interfaces_per_port
+ self.tester.scapy_append("flows = []")
+ self.tester.scapy_append(
+ 'flows.append(Ether(dst="%s")/IP(src="192.170.%d.2",dst="192.170.%d.2")/("X"*%d))'
+ % (rx_mac, src_ip_subnet, dst_ip_subnet, payload_size)
+ )
+ src_ip_subnet += 1
+ pcap = os.sep.join(
+ [self.output_path, "routePerf_{0}.pcap".format(cnt)]
+ )
+ self.tester.scapy_append('wrpcap("%s",flows)' % pcap)
+ self.tester.scapy_execute()
+ tgen_input.append((tx_port, tx_port, pcap))
+ cnt += 1
+ time.sleep(1)
+
+                # clear streams before adding new streams
+ self.tester.pktgen.clear_streams()
+ # run packet generator
+ streams = self.pktgen_helper.prepare_stream_from_tginput(
+ tgen_input, 100, None, self.tester.pktgen
+ )
+ _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
+
+ resutls_row.append(float(pps) / 10**6)
+
+ self.result_table_add(resutls_row)
+
+ self.result_table_print()
+
+ def test_perf_routing_without_kni(self):
+ """
+ Routing performance without KNI.
+ """
+
+ header = routing_perf_no_kni_results_header
+
+ for size in packet_sizes_routing:
+ header.append("%d Mpps" % size)
+
+ self.result_table_create(header)
+
+ self.dut.kill_all()
+ self.dut.send_expect("rmmod rte_kni", "# ", 20)
+
+ self.dut.send_expect("systemctl stop NetworkManager.service", "# ")
+
+ dut_ports = self.dut.get_ports(self.nic)
+
+ allow_list = self.make_allow_list(self.target, self.nic)
+ port_virtual_interaces = []
+
+ for port in allow_list:
+
+            # Find the kernel interface bound to this port and bring it up
+ information = self.dut.send_expect(
+ "./usertools/dpdk-devbind.py --status | grep '%s'" % port, "# "
+ )
+ data = information.split(" ")
+ for field in data:
+ if field.rfind("if=") != -1:
+ interface_aux = field.replace("if=", "")
+ port_virtual_interaces.append(interface_aux)
+ self.dut.send_expect("ifconfig %s up" % interface_aux, "# ")
+
+ self.dut.send_expect("echo 1 > /proc/sys/net/ipv4/ip_forward", "# ")
+
+ for port in range(0, ports_without_kni):
+ tx_port = self.tester.get_local_port(dut_ports[port])
+ self.dut.send_expect(
+ "ifconfig %s 192.170.%d.1 up"
+ % (port_virtual_interaces[port], port + 100),
+ "# ",
+ )
+ self.dut.send_expect(
+ "route add -net 192.170.%d.0 netmask 255.255.255.0 gw 192.170.%d.1"
+ % (port + 100, port + 100),
+ "# ",
+ )
+ self.dut.send_expect(
+ "arp -s 192.170.%d.2 %s" % (port + 100, self.tester.get_mac(tx_port)),
+ "# ",
+ )
+
+ one_port_resutls_row = [1]
+ two_port_resutls_row = [2]
+ for packet_size in packet_sizes_routing:
+
+            payload_size = packet_size - 38  # 38 bytes = Ether(14) + IP(20) + FCS(4) overhead
+ tgen_input = []
+
+ # Prepare test with 1 port
+ tx_port = self.tester.get_local_port(dut_ports[0])
+ rx_mac = self.dut.get_mac_address(dut_ports[0])
+ self.tester.scapy_append("flows = []")
+ self.tester.scapy_append(
+ 'flows.append(Ether(dst="%s")/IP(src="192.170.100.2",dst="192.170.100.2")/("X"*%d))'
+ % (rx_mac, payload_size)
+ )
+ pcap = os.sep.join([self.output_path, "routePerf_1.pcap"])
+ self.tester.scapy_append('wrpcap("%s",flows)' % pcap)
+ self.tester.scapy_execute()
+
+ tgen_input = []
+ tgen_input.append((tx_port, tx_port, pcap))
+
+ # Get throughput with 1 port
+
+            # clear streams before adding new streams
+ self.tester.pktgen.clear_streams()
+ # run packet generator
+ streams = self.pktgen_helper.prepare_stream_from_tginput(
+ tgen_input, 100, None, self.tester.pktgen
+ )
+ _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
+
+ one_port_resutls_row.append(float(pps) / 10**6)
+ self.result_table_add(one_port_resutls_row)
+
+ # Prepare test with 'ports_without_kni' ports
+ cnt = 0
+ for port in range(ports_without_kni):
+ rx_mac = self.dut.get_mac_address(dut_ports[port])
+ tx_port = self.tester.get_local_port(dut_ports[port])
+ self.tester.scapy_append("flows = []")
+ self.tester.scapy_append(
+ 'flows.append(Ether(dst="%s")/IP(src="192.170.%d.2",dst="192.170.%d.2")/("X"*%d))'
+ % (
+ rx_mac,
+ 100 + port,
+ 100 + (port + 1) % ports_without_kni,
+ payload_size,
+ )
+ )
+ pcap = os.sep.join(
+ [
+ self.output_path,
+ "routePerf_{0}_{1}.pcap".format(ports_without_kni, cnt),
+ ]
+ )
+ tgen_input.append((tx_port, tx_port, pcap))
+ self.tester.scapy_append('wrpcap("%s",flows)' % pcap)
+ self.tester.scapy_execute()
+ cnt += 1
+
+ # Get throughput with 'ports_without_kni' ports
+            # clear streams before adding new streams
+ self.tester.pktgen.clear_streams()
+ # run packet generator
+ streams = self.pktgen_helper.prepare_stream_from_tginput(
+ tgen_input, 100, None, self.tester.pktgen
+ )
+ _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
+
+ two_port_resutls_row.append(float(pps) / 10**6)
+ self.result_table_add(two_port_resutls_row)
+
+ self.result_table_print()
+
+ for port in allow_list:
+ self.dut.send_expect(
+ "./usertools/dpdk-devbind.py -b %s %s" % (self.drivername, port), "# "
+ )
+
+ def tear_down(self):
+ """
+ Run after each test case.
+ """
+ if self._suite_result.test_case == "test_ping":
+ for port in self.config["ports"]:
+ tx_port = self.tester.get_local_port(port)
+ tx_interface = self.tester.get_interface(tx_port)
+ self.tester.send_expect("ip addr flush %s" % tx_interface, "# ")
+
+ def tear_down_all(self):
+ """
+ Run after each test suite.
+ """
+ self.dut.kill_all()
+ self.dut.send_expect("rmmod rte_kni", "# ", 10)
+ # disable dut ipv6
+ self.dut.disable_ipv6("all")
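In the routing-performance loop above, scapy payload lengths are derived as `packet_size - 38`. A hedged sketch of that arithmetic, assuming the 38 bytes cover a 14-byte Ethernet header, a 20-byte IPv4 header without options, and the 4-byte CRC counted in the frame size:

```python
ETH_HDR = 14  # Ethernet header
IP_HDR = 20   # IPv4 header without options
CRC = 4       # frame check sequence, included in the frame size


def scapy_payload_len(packet_size):
    """Payload length handed to scapy for a target frame size."""
    return packet_size - (ETH_HDR + IP_HDR + CRC)
```

For a 64-byte frame this leaves 26 bytes of "X" padding in the generated packet.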
--
2.25.1
^ permalink raw reply [flat|nested] 7+ messages in thread
* [dts][PATCH V1 3/7] tests/l2fwd:Separated performance cases
2022-09-22 14:29 [dts][PATCH V1 1/7] tests/efd:Separated performance cases Hongbo Li
2022-09-22 14:29 ` [dts][PATCH V1 2/7] tests/kni:Separated " Hongbo Li
@ 2022-09-22 14:29 ` Hongbo Li
2022-09-22 14:29 ` [dts][PATCH V1 4/7] tests/tso:Separated " Hongbo Li
` (3 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Hongbo Li @ 2022-09-22 14:29 UTC (permalink / raw)
To: dts; +Cc: Hongbo Li
Separated performance cases
Signed-off-by: Hongbo Li <hongbox.li@intel.com>
---
test_plans/l2fwd_test_plan.rst | 36 ------
test_plans/perf_l2fwd_test_plan.rst | 90 +++++++++++++
tests/TestSuite_l2fwd.py | 106 ---------------
tests/TestSuite_perf_l2fwd.py | 192 ++++++++++++++++++++++++++++
4 files changed, 282 insertions(+), 142 deletions(-)
create mode 100644 test_plans/perf_l2fwd_test_plan.rst
create mode 100644 tests/TestSuite_perf_l2fwd.py
diff --git a/test_plans/l2fwd_test_plan.rst b/test_plans/l2fwd_test_plan.rst
index d8803cb5..1bf9ad45 100644
--- a/test_plans/l2fwd_test_plan.rst
+++ b/test_plans/l2fwd_test_plan.rst
@@ -69,39 +69,3 @@ Trigger the packet generator of bursting packets from ``port A``, then check if
``port 0`` could receive them and ``port 1`` could forward them back. Stop it
and then trigger the packet generator of bursting packets from ``port B``, then
check if ``port 1`` could receive them and ``port 0`` could forward them back.
-
-Test Case: ``64/128/256/512/1024/1500`` bytes packet forwarding test
-====================================================================
-
-Set the packet stream to be sent out from packet generator before testing as below.
-
-+-------+---------+---------+---------+-----------+
-| Frame | 1q | 2q | 4q | 8 q |
-| Size | | | | |
-+-------+---------+---------+---------+-----------+
-| 64 | | | | |
-+-------+---------+---------+---------+-----------+
-| 65 | | | | |
-+-------+---------+---------+---------+-----------+
-| 128 | | | | |
-+-------+---------+---------+---------+-----------+
-| 256 | | | | |
-+-------+---------+---------+---------+-----------+
-| 512 | | | | |
-+-------+---------+---------+---------+-----------+
-| 1024 | | | | |
-+-------+---------+---------+---------+-----------+
-| 1280 | | | | |
-+-------+---------+---------+---------+-----------+
-| 1518 | | | | |
-+-------+---------+---------+---------+-----------+
-
-Then run the test application as below::
-
- $ ./x86_64-native-linuxapp-gcc/examples/dpdk-l2fwd -n 2 -c f -- -q 1 -p 0x3
-
-The -n command is used to select the number of memory channels. It should match the number of memory channels on that setup.
-
-Trigger the packet generator of bursting packets to the port 0 and 1 on the onboard
-NIC to be tested. Then measure the forwarding throughput for different packet sizes
-and different number of queues.
diff --git a/test_plans/perf_l2fwd_test_plan.rst b/test_plans/perf_l2fwd_test_plan.rst
new file mode 100644
index 00000000..e5fbaf30
--- /dev/null
+++ b/test_plans/perf_l2fwd_test_plan.rst
@@ -0,0 +1,90 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2010-2017 Intel Corporation
+
+===================
+L2 Forwarding Tests
+===================
+
+This test application is a basic packet processing application using Intel®
+DPDK. It is a layer-2 (L2) forwarding application which takes traffic from
+a single RX port and transmits it, with few modifications, on a single TX port.
+
+A packet received on an RX port (RX_PORT) is transmitted from the TX port
+TX_PORT=RX_PORT+1 if RX_PORT is even, or from TX_PORT=RX_PORT-1 if RX_PORT
+is odd. Before transmission, the source MAC address of the packet is
+replaced by the MAC address of the TX port, while the destination MAC
+address is replaced by 00:09:c0:00:00:TX_PORT_ID.
+
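The pairing and MAC-rewrite rule above can be sketched in Python (the function names are illustrative, not part of the l2fwd sources):

```python
def paired_tx_port(rx_port):
    """TX port paired with rx_port: even RX ports pair with the next
    port, odd RX ports with the previous one."""
    return rx_port + 1 if rx_port % 2 == 0 else rx_port - 1


def rewritten_dst_mac(tx_port_id):
    """Destination MAC written by l2fwd: 00:09:c0:00:00:<TX_PORT_ID>."""
    return "00:09:c0:00:00:%02x" % tx_port_id
```

So traffic received on port 0 leaves on port 1 with destination MAC 00:09:c0:00:00:01, and vice versa.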
+The test application should be run with the desired paired ports configured
+using the portmask parameter on the command line; i.e. ports 0 and 1 are a
+valid pair, while ports 1 and 2 are not. The test is performed by running
+the application against a traffic generator: packets of various sizes are
+received from the generator and forwarded back to it. Packet loss and
+throughput are the metrics to be measured.
+
+The ``l2fwd`` application is run with EAL parameters and parameters for
+the application itself. For details about the EAL parameters, see the relevant
+DPDK **Getting Started Guide**. This application supports two parameters for
+itself.
+
+- ``-p PORTMASK``: hexadecimal bitmask of ports to configure
+- ``-q NQ``: number of queues per lcore (default is 1)
+
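The ``-p`` bitmask can be computed from a list of port IDs; a minimal sketch, equivalent in spirit to the ``utils.create_mask`` helper the suites use (that helper's exact output format may differ):

```python
def portmask(ports):
    """Hexadecimal bitmask for -p: bit N is set for each port ID N."""
    mask = 0
    for port in ports:
        mask |= 1 << port
    return hex(mask)
```

For ports 0 and 1 this yields "0x3", matching the ``-p 0x3`` used below.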
+Prerequisites
+=============
+
+If using VFIO, the kernel must be >= 3.6 and VT-d must be enabled in the
+BIOS. When using VFIO, use the following commands to load the vfio driver
+and bind it to the device under test::
+
+ modprobe vfio
+ modprobe vfio-pci
+ usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
+
+Assuming ports 0 and 1 are connected to the traffic generator, run the test
+application in a linuxapp environment with 4 lcores, 2 ports and 8 RX queues
+per lcore::
+
+ $ ./x86_64-native-linuxapp-gcc/examples/dpdk-l2fwd -n 1 -c f -- -q 8 -p 0x3
+
+If the ports to be tested are different, the port mask should be changed
+accordingly. The lcores used to run the test application and the number of
+queues per lcore can also be changed. For benchmarking, the EAL parameters
+and the application parameters should be kept the same across test cases.
+
+Test Case: ``64/128/256/512/1024/1500`` bytes packet forwarding test
+====================================================================
+
+Set the packet streams to be sent out from the packet generator before testing, as below.
+
++-------+---------+---------+---------+-----------+
+| Frame | 1q | 2q | 4q | 8 q |
+| Size | | | | |
++-------+---------+---------+---------+-----------+
+| 64 | | | | |
++-------+---------+---------+---------+-----------+
+| 65 | | | | |
++-------+---------+---------+---------+-----------+
+| 128 | | | | |
++-------+---------+---------+---------+-----------+
+| 256 | | | | |
++-------+---------+---------+---------+-----------+
+| 512 | | | | |
++-------+---------+---------+---------+-----------+
+| 1024 | | | | |
++-------+---------+---------+---------+-----------+
+| 1280 | | | | |
++-------+---------+---------+---------+-----------+
+| 1518 | | | | |
++-------+---------+---------+---------+-----------+
+
+Then run the test application as below::
+
+ $ ./x86_64-native-linuxapp-gcc/examples/dpdk-l2fwd -n 2 -c f -- -q 1 -p 0x3
+
+The -n option selects the number of memory channels. It should match the number of memory channels on that setup.
+
+Trigger the packet generator to burst packets to ports 0 and 1 on the
+onboard NIC under test, then measure the forwarding throughput for the
+different packet sizes and numbers of queues.
diff --git a/tests/TestSuite_l2fwd.py b/tests/TestSuite_l2fwd.py
index 9e8fd2d5..7594958b 100644
--- a/tests/TestSuite_l2fwd.py
+++ b/tests/TestSuite_l2fwd.py
@@ -150,112 +150,6 @@ class TestL2fwd(TestCase):
self.quit_l2fwd()
- def test_perf_l2fwd_performance(self):
- """
- Benchmark performance for frame_sizes.
- """
- ports = []
- for port in range(self.number_of_ports):
- ports.append(self.dut_ports[port])
-
- port_mask = utils.create_mask(ports)
- cores = self.dut.get_core_list(self.core_config, socket=self.ports_socket)
-
- eal_params = self.dut.create_eal_parameters(cores=cores)
- eal_param = ""
- for i in ports:
- eal_param += " -a %s" % self.dut.ports_info[i]["pci"]
-
- for frame_size in self.frame_sizes:
-
- payload_size = frame_size - self.headers_size
-
- tgen_input = []
- cnt = 1
- for port in range(self.number_of_ports):
- rx_port = self.tester.get_local_port(
- self.dut_ports[port % self.number_of_ports]
- )
- tx_port = self.tester.get_local_port(
- self.dut_ports[(port + 1) % self.number_of_ports]
- )
- destination_mac = self.dut.get_mac_address(
- self.dut_ports[(port + 1) % self.number_of_ports]
- )
- pcap = os.sep.join(
- [self.output_path, "l2fwd_{0}_{1}.pcap".format(port, cnt)]
- )
- self.tester.scapy_append(
- 'wrpcap("%s", [Ether(dst="%s")/IP()/UDP()/("X"*%d)])'
- % (pcap, destination_mac, payload_size)
- )
- tgen_input.append((tx_port, rx_port, pcap))
- time.sleep(3)
- self.tester.scapy_execute()
- cnt += 1
-
- for queues in self.test_queues:
-
- command_line = "./%s %s %s -- -q %s -p %s &" % (
- self.app_l2fwd_path,
- eal_params,
- eal_param,
- str(queues["queues"]),
- port_mask,
- )
-
- # self.dut.send_expect(command_line, "memory mapped", 60)
- self.dut.send_expect(command_line, "L2FWD: entering main loop", 60)
- # wait 5 second after l2fwd boot up.
- # It is aimed to make sure trex detect link up status.
- if self.tester.is_pktgen:
- time.sleep(5)
- info = (
- "Executing l2fwd using %s queues, frame size %d and %s setup.\n"
- % (queues["queues"], frame_size, self.core_config)
- )
-
- self.logger.info(info)
- self.rst_report(info, annex=True)
- self.rst_report(command_line + "\n\n", frame=True, annex=True)
-
- # clear streams before add new streams
- self.tester.pktgen.clear_streams()
- # run packet generator
- streams = self.pktgen_helper.prepare_stream_from_tginput(
- tgen_input, 100, None, self.tester.pktgen
- )
- _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
-
- Mpps = pps / 1000000.0
- queues["Mpps"][frame_size] = Mpps
- queues["pct"][frame_size] = (
- Mpps
- * 100
- / float(self.wirespeed(self.nic, frame_size, self.number_of_ports))
- )
-
- self.quit_l2fwd()
-
- # Look for transmission error in the results
- for frame_size in self.frame_sizes:
- for n in range(len(self.test_queues)):
- self.verify(
- self.test_queues[n]["Mpps"][frame_size] > 0, "No traffic detected"
- )
-
- # Prepare the results for table
- for frame_size in self.frame_sizes:
- results_row = []
- results_row.append(frame_size)
- for queue in self.test_queues:
- results_row.append(queue["Mpps"][frame_size])
- results_row.append(queue["pct"][frame_size])
-
- self.result_table_add(results_row)
-
- self.result_table_print()
-
def tear_down(self):
"""
Run after each test case.
diff --git a/tests/TestSuite_perf_l2fwd.py b/tests/TestSuite_perf_l2fwd.py
new file mode 100644
index 00000000..b1674580
--- /dev/null
+++ b/tests/TestSuite_perf_l2fwd.py
@@ -0,0 +1,192 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2019 Intel Corporation
+#
+
+"""
+DPDK Test suite.
+Test Layer-2 Forwarding support
+"""
+import os
+import time
+
+import framework.utils as utils
+from framework.pktgen import PacketGeneratorHelper
+from framework.settings import HEADER_SIZE
+from framework.test_case import TestCase
+
+
+class TestL2fwd(TestCase):
+ def set_up_all(self):
+ """
+ Run at the start of each test suite.
+
+ L2fwd prerequisites.
+ """
+ self.frame_sizes = [64, 65, 128, 256, 512, 1024, 1280, 1518]
+
+ self.test_queues = [
+ {"queues": 1, "Mpps": {}, "pct": {}},
+ {"queues": 2, "Mpps": {}, "pct": {}},
+ {"queues": 4, "Mpps": {}, "pct": {}},
+ {"queues": 8, "Mpps": {}, "pct": {}},
+ ]
+
+ self.core_config = "1S/4C/1T"
+ self.number_of_ports = 2
+ self.headers_size = HEADER_SIZE["eth"] + HEADER_SIZE["ip"] + HEADER_SIZE["udp"]
+
+ self.dut_ports = self.dut.get_ports_performance(force_different_nic=False)
+
+ self.verify(
+ len(self.dut_ports) >= self.number_of_ports,
+ "Not enough ports for " + self.nic,
+ )
+
+ self.ports_socket = self.dut.get_numa_id(self.dut_ports[0])
+
+ # compile
+ out = self.dut.build_dpdk_apps("./examples/l2fwd")
+ self.app_l2fwd_path = self.dut.apps_name["l2fwd"]
+ self.verify("Error" not in out, "Compilation error")
+ self.verify("No such" not in out, "Compilation error")
+
+ self.table_header = ["Frame"]
+ for queue in self.test_queues:
+ self.table_header.append("%d queues Mpps" % queue["queues"])
+ self.table_header.append("% linerate")
+
+ self.result_table_create(self.table_header)
+
+ # get dts output path
+ if self.logger.log_path.startswith(os.sep):
+ self.output_path = self.logger.log_path
+ else:
+ cur_path = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
+ self.output_path = os.sep.join([cur_path, self.logger.log_path])
+ # create an instance to set stream field setting
+ self.pktgen_helper = PacketGeneratorHelper()
+
+ def set_up(self):
+ """
+ Run before each test case.
+ """
+ pass
+
+ def test_perf_l2fwd_performance(self):
+ """
+ Benchmark performance for frame_sizes.
+ """
+ ports = []
+ for port in range(self.number_of_ports):
+ ports.append(self.dut_ports[port])
+
+ port_mask = utils.create_mask(ports)
+ cores = self.dut.get_core_list(self.core_config, socket=self.ports_socket)
+
+ eal_params = self.dut.create_eal_parameters(cores=cores)
+ eal_param = ""
+ for i in ports:
+ eal_param += " -a %s" % self.dut.ports_info[i]["pci"]
+
+ for frame_size in self.frame_sizes:
+
+ payload_size = frame_size - self.headers_size
+
+ tgen_input = []
+ cnt = 1
+ for port in range(self.number_of_ports):
+ rx_port = self.tester.get_local_port(
+ self.dut_ports[port % self.number_of_ports]
+ )
+ tx_port = self.tester.get_local_port(
+ self.dut_ports[(port + 1) % self.number_of_ports]
+ )
+ destination_mac = self.dut.get_mac_address(
+ self.dut_ports[(port + 1) % self.number_of_ports]
+ )
+ pcap = os.sep.join(
+ [self.output_path, "l2fwd_{0}_{1}.pcap".format(port, cnt)]
+ )
+ self.tester.scapy_append(
+ 'wrpcap("%s", [Ether(dst="%s")/IP()/UDP()/("X"*%d)])'
+ % (pcap, destination_mac, payload_size)
+ )
+ tgen_input.append((tx_port, rx_port, pcap))
+ time.sleep(3)
+ self.tester.scapy_execute()
+ cnt += 1
+
+ for queues in self.test_queues:
+
+ command_line = "./%s %s %s -- -q %s -p %s &" % (
+ self.app_l2fwd_path,
+ eal_params,
+ eal_param,
+ str(queues["queues"]),
+ port_mask,
+ )
+
+ # self.dut.send_expect(command_line, "memory mapped", 60)
+ self.dut.send_expect(command_line, "L2FWD: entering main loop", 60)
+ # wait 5 seconds after l2fwd boots up;
+ # it is aimed to make sure trex detects link-up status.
+ if self.tester.is_pktgen:
+ time.sleep(5)
+ info = (
+ "Executing l2fwd using %s queues, frame size %d and %s setup.\n"
+ % (queues["queues"], frame_size, self.core_config)
+ )
+
+ self.logger.info(info)
+ self.rst_report(info, annex=True)
+ self.rst_report(command_line + "\n\n", frame=True, annex=True)
+
+ # clear streams before adding new streams
+ self.tester.pktgen.clear_streams()
+ # run packet generator
+ streams = self.pktgen_helper.prepare_stream_from_tginput(
+ tgen_input, 100, None, self.tester.pktgen
+ )
+ _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
+
+ Mpps = pps / 1000000.0
+ queues["Mpps"][frame_size] = Mpps
+ queues["pct"][frame_size] = (
+ Mpps
+ * 100
+ / float(self.wirespeed(self.nic, frame_size, self.number_of_ports))
+ )
+
+ self.quit_l2fwd()
+
+ # Look for transmission error in the results
+ for frame_size in self.frame_sizes:
+ for n in range(len(self.test_queues)):
+ self.verify(
+ self.test_queues[n]["Mpps"][frame_size] > 0, "No traffic detected"
+ )
+
+ # Prepare the results for table
+ for frame_size in self.frame_sizes:
+ results_row = []
+ results_row.append(frame_size)
+ for queue in self.test_queues:
+ results_row.append(queue["Mpps"][frame_size])
+ results_row.append(queue["pct"][frame_size])
+
+ self.result_table_add(results_row)
+
+ self.result_table_print()
+
+ def tear_down(self):
+ """
+ Run after each test case.
+ """
+ self.dut.send_expect("fg", "l2fwd|# ", 5)
+ self.dut.send_expect("^C", "# ", 5)
+
+ def tear_down_all(self):
+ """
+ Run after each test suite.
+ """
+ pass
--
2.25.1
* [dts][PATCH V1 4/7] tests/tso:Separated performance cases
2022-09-22 14:29 [dts][PATCH V1 1/7] tests/efd:Separated performance cases Hongbo Li
2022-09-22 14:29 ` [dts][PATCH V1 2/7] tests/kni:Separated " Hongbo Li
2022-09-22 14:29 ` [dts][PATCH V1 3/7] tests/l2fwd:Separated " Hongbo Li
@ 2022-09-22 14:29 ` Hongbo Li
2022-09-22 14:29 ` [dts][PATCH V1 5/7] tests/vxlan:Separated " Hongbo Li
` (2 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Hongbo Li @ 2022-09-22 14:29 UTC (permalink / raw)
To: dts; +Cc: Hongbo Li
Separated performance cases
Signed-off-by: Hongbo Li <hongbox.li@intel.com>
---
test_plans/perf_tso_test_plan.rst | 82 +++++++++++
test_plans/tso_test_plan.rst | 35 -----
tests/TestSuite_perf_tso.py | 221 ++++++++++++++++++++++++++++++
tests/TestSuite_tso.py | 128 -----------------
4 files changed, 303 insertions(+), 163 deletions(-)
create mode 100644 test_plans/perf_tso_test_plan.rst
create mode 100644 tests/TestSuite_perf_tso.py
diff --git a/test_plans/perf_tso_test_plan.rst b/test_plans/perf_tso_test_plan.rst
new file mode 100644
index 00000000..9dcdd4c3
--- /dev/null
+++ b/test_plans/perf_tso_test_plan.rst
@@ -0,0 +1,82 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2015-2017 Intel Corporation
+
+=========================================
+Transmit Segmentation Offload (TSO) Tests
+=========================================
+
+Description
+===========
+
+This document provides the plan for testing the TSO (Transmit Segmentation
+Offload, also called Large Send Offload - LSO) feature of Intel Ethernet
+Controllers, including the Intel 82599 10GbE Ethernet Controller and the
+Intel® Ethernet Converged Network Adapter XL710-QDA2. TSO enables the TCP/IP
+stack to pass the network device a ULP datagram larger than the Maximum
+Transmission Unit (MTU); the NIC divides the large ULP datagram into
+multiple segments according to the MTU size.
+
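The segmentation TSO performs can be illustrated with a short sketch: the stack hands down one large payload, and the NIC emits MSS-sized segments (this helper is illustrative, not part of DPDK):

```python
def tso_segment_lengths(payload_len, mss):
    """Lengths of the TCP segments the NIC produces for one large payload."""
    full, remainder = divmod(payload_len, mss)
    return [mss] * full + ([remainder] if remainder else [])
```

With ``tso set 800`` as used below, a 2500-byte payload becomes segments of 800, 800, 800 and 100 bytes.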
+
+Prerequisites
+=============
+
+One of the DUT's Ethernet controller ports must be connected to a port on
+another device that is controlled by the Scapy packet generator.
+
+The Ethernet interface identifier of the port that Scapy will use must be known.
+On the tester, all offload features should be disabled on the TX port, and capture started on the RX port::
+
+ ethtool -K <tx port> rx off tx off tso off gso off gro off lro off
+ ip l set <tx port> up
+ tcpdump -n -e -i <rx port> -s 0 -w /tmp/cap
+
+
+On the DUT, run the PMD with the parameter "--enable-rx-cksum". Then enable
+TSO on the TX port and checksum on the RX port. The test commands are below::
+
+ #enable hw checksum on rx port
+ tx_checksum set ip hw 0
+ tx_checksum set udp hw 0
+ tx_checksum set tcp hw 0
+ tx_checksum set sctp hw 0
+ set fwd csum
+
+ # enable TSO on tx port
+ tso set 800 1
+
+
+
+Test case: TSO performance
+==========================
+
+Set the packet stream as below before testing
+
++-------+---------+---------+---------+----------+----------+
+| Frame | 1S/1C/1T| 1S/1C/1T| 1S/2C/1T| 1S/2C/2T | 1S/2C/2T |
+| Size | | | | | |
++-------+---------+---------+---------+----------+----------+
+| 64 | | | | | |
++-------+---------+---------+---------+----------+----------+
+| 65 | | | | | |
++-------+---------+---------+---------+----------+----------+
+| 128 | | | | | |
++-------+---------+---------+---------+----------+----------+
+| 256 | | | | | |
++-------+---------+---------+---------+----------+----------+
+| 512 | | | | | |
++-------+---------+---------+---------+----------+----------+
+| 1024 | | | | | |
++-------+---------+---------+---------+----------+----------+
+| 1280 | | | | | |
++-------+---------+---------+---------+----------+----------+
+| 1518 | | | | | |
++-------+---------+---------+---------+----------+----------+
+
+Then run the test application as below::
+
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xffffffff -n 2 -- -i --rxd=512 --txd=512
+ --burst=32 --rxfreet=64 --mbcache=128 --portmask=0x3 --txpt=36 --txht=0 --txwt=0
+ --txfreet=32 --txrst=32 --enable-rx-cksum
+
+The -n option selects the number of memory channels. It should match the
+number of memory channels on that setup.
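In the companion suite, each tested frame size is the TCP payload (the "loading size") plus the Ethernet, IPv4 and TCP headers. A sketch of that arithmetic, assuming 14/20/20-byte headers without options (the framework's HEADER_SIZE table may count the CRC differently):

```python
# Assumed header lengths; the framework's HEADER_SIZE values may differ.
ETH_HDR, IPV4_HDR, TCP_HDR = 14, 20, 20


def frame_size_for_payload(loading_size):
    """Frame size corresponding to a TCP payload of loading_size bytes."""
    return loading_size + ETH_HDR + IPV4_HDR + TCP_HDR
```

So the 800-byte loading size used with ``tso set 800`` corresponds to an 854-byte frame.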
diff --git a/test_plans/tso_test_plan.rst b/test_plans/tso_test_plan.rst
index d0f96b2b..7b7077b7 100644
--- a/test_plans/tso_test_plan.rst
+++ b/test_plans/tso_test_plan.rst
@@ -131,38 +131,3 @@ Test nvgre() in scapy::
sendp([Ether(dst="%s",src="52:00:00:00:00:00")/IP(src="192.168.1.1",dst="192.168.1.2",proto=47)/NVGRE()/Ether(dst=%s,src="52:00:00:00:00:00")/IP(src="192.168.1.1",dst="192.168.1.2")/TCP(sport="1021",dport="1021")/("X"*%s)], iface="%s")
-Test case: TSO performance
-==========================
-
-Set the packet stream to be sent out from packet generator before testing as
-below.
-
-+-------+---------+---------+---------+----------+----------+
-| Frame | 1S/1C/1T| 1S/1C/1T| 1S/2C/1T| 1S/2C/2T | 1S/2C/2T |
-| Size | | | | | |
-+-------+---------+---------+---------+----------+----------+
-| 64 | | | | | |
-+-------+---------+---------+---------+----------+----------+
-| 65 | | | | | |
-+-------+---------+---------+---------+----------+----------+
-| 128 | | | | | |
-+-------+---------+---------+---------+----------+----------+
-| 256 | | | | | |
-+-------+---------+---------+---------+----------+----------+
-| 512 | | | | | |
-+-------+---------+---------+---------+----------+----------+
-| 1024 | | | | | |
-+-------+---------+---------+---------+----------+----------+
-| 1280 | | | | | |
-+-------+---------+---------+---------+----------+----------+
-| 1518 | | | | | |
-+-------+---------+---------+---------+----------+----------+
-
-Then run the test application as below::
-
- ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xffffffff -n 2 -- -i --rxd=512 --txd=512
- --burst=32 --rxfreet=64 --mbcache=128 --portmask=0x3 --txpt=36 --txht=0 --txwt=0
- --txfreet=32 --txrst=32 --enable-rx-cksum
-
-The -n command is used to select the number of memory channels. It should match the
-number of memory channels on that setup.
diff --git a/tests/TestSuite_perf_tso.py b/tests/TestSuite_perf_tso.py
new file mode 100644
index 00000000..3821ecd1
--- /dev/null
+++ b/tests/TestSuite_perf_tso.py
@@ -0,0 +1,221 @@
+import os
+import time
+
+import framework.utils as utils
+from framework.pktgen import PacketGeneratorHelper
+from framework.settings import HEADER_SIZE
+from framework.test_case import TestCase
+
+DEFAULT_MUT = 1500
+TSO_MTU = 9000
+
+
+class TestTSO(TestCase):
+ dut_ports = []
+
+ def set_up_all(self):
+ """
+ Run at the start of each test suite.
+ """
+ # Based on h/w type, choose how many ports to use
+ self.dut_ports = self.dut.get_ports(self.nic)
+
+ # Verify that enough ports are available
+ self.verify(len(self.dut_ports) >= 2, "Insufficient ports for testing")
+
+ # Verify that enough threads are available
+ self.all_cores_mask = utils.create_mask(self.dut.get_core_list("all"))
+ self.portMask = utils.create_mask([self.dut_ports[0], self.dut_ports[1]])
+ self.ports_socket = self.dut.get_numa_id(self.dut_ports[0])
+
+ self.loading_sizes = [128, 800, 801, 1700, 2500]
+ self.rxfreet_values = [0, 8, 16, 32, 64, 128]
+
+ self.test_cycles = [{"cores": "", "Mpps": {}, "pct": {}}]
+
+ self.table_header = ["Frame Size"]
+ for test_cycle in self.test_cycles:
+ self.table_header.append("%s Mpps" % test_cycle["cores"])
+ self.table_header.append("% linerate")
+
+ self.eal_param = self.dut.create_eal_parameters(
+ cores="1S/2C/1T", socket=self.ports_socket, ports=self.dut_ports
+ )
+
+ self.headers_size = HEADER_SIZE["eth"] + HEADER_SIZE["ip"] + HEADER_SIZE["tcp"]
+ self.tester.send_expect(
+ "ifconfig %s mtu %s"
+ % (
+ self.tester.get_interface(
+ self.tester.get_local_port(self.dut_ports[0])
+ ),
+ TSO_MTU,
+ ),
+ "# ",
+ )
+ # get dts output path
+ if self.logger.log_path.startswith(os.sep):
+ self.output_path = self.logger.log_path
+ else:
+ cur_path = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
+ self.output_path = os.sep.join([cur_path, self.logger.log_path])
+ # create an instance to set stream field setting
+ self.pktgen_helper = PacketGeneratorHelper()
+ self.path = self.dut.apps_name["test-pmd"]
+
+ def set_up(self):
+ """
+ Run before each test case.
+ """
+ pass
+
+ def test_perf_TSO_2ports(self):
+ """
+ TSO Performance Benchmarking with 2 ports.
+ """
+
+ # prepare traffic generator input
+ tgen_input = []
+
+ # run testpmd for each core config
+ for test_cycle in self.test_cycles:
+ core_config = test_cycle["cores"]
+ cores = self.dut.get_core_list(core_config, socket=self.ports_socket)
+ self.coreMask = utils.create_mask(cores)
+ if len(cores) > 2:
+ queues = len(cores) // 2
+ else:
+ queues = 1
+
+ command_line = (
+ "%s %s -- -i --rxd=512 --txd=512 --burst=32 --rxfreet=64 --mbcache=128 --portmask=%s --max-pkt-len=%s --txpt=36 --txht=0 --txwt=0 --txfreet=32 --txrst=32 "
+ % (self.path, self.eal_param, self.portMask, TSO_MTU)
+ )
+ info = "Executing PMD using %s\n" % test_cycle["cores"]
+ self.logger.info(info)
+ self.rst_report(info, annex=True)
+ self.rst_report(command_line + "\n\n", frame=True, annex=True)
+
+ self.dut.send_expect(command_line, "testpmd> ", 120)
+ self.dut.send_expect("port stop all", "testpmd> ", 120)
+ self.dut.send_expect(
+ "csum set ip hw %d" % self.dut_ports[0], "testpmd> ", 120
+ )
+ self.dut.send_expect(
+ "csum set udp hw %d" % self.dut_ports[0], "testpmd> ", 120
+ )
+ self.dut.send_expect(
+ "csum set tcp hw %d" % self.dut_ports[0], "testpmd> ", 120
+ )
+ self.dut.send_expect(
+ "csum set sctp hw %d" % self.dut_ports[0], "testpmd> ", 120
+ )
+ self.dut.send_expect(
+ "csum set outer-ip hw %d" % self.dut_ports[0], "testpmd> ", 120
+ )
+ self.dut.send_expect(
+ "csum parse-tunnel on %d" % self.dut_ports[0], "testpmd> ", 120
+ )
+ self.dut.send_expect(
+ "csum set ip hw %d" % self.dut_ports[1], "testpmd> ", 120
+ )
+ self.dut.send_expect(
+ "csum set udp hw %d" % self.dut_ports[1], "testpmd> ", 120
+ )
+ self.dut.send_expect(
+ "csum set tcp hw %d" % self.dut_ports[1], "testpmd> ", 120
+ )
+ self.dut.send_expect(
+ "csum set sctp hw %d" % self.dut_ports[1], "testpmd> ", 120
+ )
+ self.dut.send_expect(
+ "csum set outer-ip hw %d" % self.dut_ports[1], "testpmd> ", 120
+ )
+ self.dut.send_expect(
+ "csum parse-tunnel on %d" % self.dut_ports[1], "testpmd> ", 120
+ )
+ self.dut.send_expect("tso set 800 %d" % self.dut_ports[1], "testpmd> ", 120)
+ self.dut.send_expect("set fwd csum", "testpmd> ", 120)
+ self.dut.send_expect("port start all", "testpmd> ", 120)
+ self.dut.send_expect("set promisc all off", "testpmd> ", 120)
+ self.dut.send_expect("start", "testpmd> ")
+ for loading_size in self.loading_sizes:
+ frame_size = loading_size + self.headers_size
+ wirespeed = self.wirespeed(self.nic, frame_size, 2)
+
+ # create pcap file
+ self.logger.info("Running with frame size %d " % frame_size)
+ payload_size = frame_size - self.headers_size
+ for _port in range(2):
+ mac = self.dut.get_mac_address(self.dut_ports[_port])
+
+ pcap = os.sep.join([self.output_path, "dts{0}.pcap".format(_port)])
+ self.tester.scapy_append(
+ 'wrpcap("%s", [Ether(dst="%s",src="52:00:00:00:00:01")/IP(src="192.168.1.1",dst="192.168.1.2")/TCP(sport=1021,dport=1021)/("X"*%d)])'
+ % (pcap, mac, payload_size)
+ )
+ tgen_input.append(
+ (
+ self.tester.get_local_port(self.dut_ports[_port]),
+ self.tester.get_local_port(self.dut_ports[1 - _port]),
+ "%s" % pcap,
+ )
+ )
+ self.tester.scapy_execute()
+
+ # clear streams before adding new streams
+ self.tester.pktgen.clear_streams()
+ # run packet generator
+ streams = self.pktgen_helper.prepare_stream_from_tginput(
+ tgen_input, 100, None, self.tester.pktgen
+ )
+ _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
+
+ pps /= 1000000.0
+ test_cycle["Mpps"][loading_size] = pps
+ test_cycle["pct"][loading_size] = pps * 100 // wirespeed
+
+ self.dut.send_expect("stop", "testpmd> ")
+ self.dut.send_expect("quit", "# ", 30)
+ time.sleep(5)
+
+ for n in range(len(self.test_cycles)):
+ for loading_size in self.loading_sizes:
+ self.verify(
+ self.test_cycles[n]["Mpps"][loading_size] > 0, "No traffic detected"
+ )
+
+ # Print results
+ self.result_table_create(self.table_header)
+ for loading_size in self.loading_sizes:
+ table_row = [loading_size]
+ for test_cycle in self.test_cycles:
+ table_row.append(test_cycle["Mpps"][loading_size])
+ table_row.append(test_cycle["pct"][loading_size])
+
+ self.result_table_add(table_row)
+
+ self.result_table_print()
+
+ def tear_down(self):
+ """
+ Run after each test case.
+ """
+ self.dut.send_expect("quit", "# ")
+ self.dut.kill_all()
+ time.sleep(2)
+
+ def tear_down_all(self):
+ """
+ Run after each test suite.
+ """
+ self.tester.send_expect(
+ "ifconfig %s mtu %s"
+ % (
+ self.tester.get_interface(
+ self.tester.get_local_port(self.dut_ports[0])
+ ),
+ DEFAULT_MUT,
+ ),
+ "# ",
+ )
diff --git a/tests/TestSuite_tso.py b/tests/TestSuite_tso.py
index 49db011d..c4f25afc 100644
--- a/tests/TestSuite_tso.py
+++ b/tests/TestSuite_tso.py
@@ -487,134 +487,6 @@ class TestTSO(TestCase):
)
self.get_chksum_value_and_verify(dump_pcap, save_file, Nic_list)
- def test_perf_TSO_2ports(self):
- """
- TSO Performance Benchmarking with 2 ports.
- """
-
- # prepare traffic generator input
- tgen_input = []
-
- # run testpmd for each core config
- for test_cycle in self.test_cycles:
- core_config = test_cycle["cores"]
- cores = self.dut.get_core_list(core_config, socket=self.ports_socket)
- self.coreMask = utils.create_mask(cores)
- if len(cores) > 2:
- queues = len(cores) // 2
- else:
- queues = 1
-
- command_line = (
- "%s %s -- -i --rxd=512 --txd=512 --burst=32 --rxfreet=64 --mbcache=128 --portmask=%s --max-pkt-len=%s --txpt=36 --txht=0 --txwt=0 --txfreet=32 --txrst=32 "
- % (self.path, self.eal_param, self.portMask, TSO_MTU)
- )
- info = "Executing PMD using %s\n" % test_cycle["cores"]
- self.logger.info(info)
- self.rst_report(info, annex=True)
- self.rst_report(command_line + "\n\n", frame=True, annex=True)
-
- self.dut.send_expect(command_line, "testpmd> ", 120)
- self.dut.send_expect("port stop all", "testpmd> ", 120)
- self.dut.send_expect(
- "csum set ip hw %d" % self.dut_ports[0], "testpmd> ", 120
- )
- self.dut.send_expect(
- "csum set udp hw %d" % self.dut_ports[0], "testpmd> ", 120
- )
- self.dut.send_expect(
- "csum set tcp hw %d" % self.dut_ports[0], "testpmd> ", 120
- )
- self.dut.send_expect(
- "csum set sctp hw %d" % self.dut_ports[0], "testpmd> ", 120
- )
- self.dut.send_expect(
- "csum set outer-ip hw %d" % self.dut_ports[0], "testpmd> ", 120
- )
- self.dut.send_expect(
- "csum parse-tunnel on %d" % self.dut_ports[0], "testpmd> ", 120
- )
- self.dut.send_expect(
- "csum set ip hw %d" % self.dut_ports[1], "testpmd> ", 120
- )
- self.dut.send_expect(
- "csum set udp hw %d" % self.dut_ports[1], "testpmd> ", 120
- )
- self.dut.send_expect(
- "csum set tcp hw %d" % self.dut_ports[1], "testpmd> ", 120
- )
- self.dut.send_expect(
- "csum set sctp hw %d" % self.dut_ports[1], "testpmd> ", 120
- )
- self.dut.send_expect(
- "csum set outer-ip hw %d" % self.dut_ports[1], "testpmd> ", 120
- )
- self.dut.send_expect(
- "csum parse-tunnel on %d" % self.dut_ports[1], "testpmd> ", 120
- )
- self.dut.send_expect("tso set 800 %d" % self.dut_ports[1], "testpmd> ", 120)
- self.dut.send_expect("set fwd csum", "testpmd> ", 120)
- self.dut.send_expect("port start all", "testpmd> ", 120)
- self.dut.send_expect("set promisc all off", "testpmd> ", 120)
- self.dut.send_expect("start", "testpmd> ")
- for loading_size in self.loading_sizes:
- frame_size = loading_size + self.headers_size
- wirespeed = self.wirespeed(self.nic, frame_size, 2)
-
- # create pcap file
- self.logger.info("Running with frame size %d " % frame_size)
- payload_size = frame_size - self.headers_size
- for _port in range(2):
- mac = self.dut.get_mac_address(self.dut_ports[_port])
-
- pcap = os.sep.join([self.output_path, "dts{0}.pcap".format(_port)])
- self.tester.scapy_append(
- 'wrpcap("%s", [Ether(dst="%s",src="52:00:00:00:00:01")/IP(src="192.168.1.1",dst="192.168.1.2")/TCP(sport=1021,dport=1021)/("X"*%d)])'
- % (pcap, mac, payload_size)
- )
- tgen_input.append(
- (
- self.tester.get_local_port(self.dut_ports[_port]),
- self.tester.get_local_port(self.dut_ports[1 - _port]),
- "%s" % pcap,
- )
- )
- self.tester.scapy_execute()
-
- # clear streams before add new streams
- self.tester.pktgen.clear_streams()
- # run packet generator
- streams = self.pktgen_helper.prepare_stream_from_tginput(
- tgen_input, 100, None, self.tester.pktgen
- )
- _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
-
- pps /= 1000000.0
- test_cycle["Mpps"][loading_size] = pps
- test_cycle["pct"][loading_size] = pps * 100 // wirespeed
-
- self.dut.send_expect("stop", "testpmd> ")
- self.dut.send_expect("quit", "# ", 30)
- time.sleep(5)
-
- for n in range(len(self.test_cycles)):
- for loading_size in self.loading_sizes:
- self.verify(
- self.test_cycles[n]["Mpps"][loading_size] > 0, "No traffic detected"
- )
-
- # Print results
- self.result_table_create(self.table_header)
- for loading_size in self.loading_sizes:
- table_row = [loading_size]
- for test_cycle in self.test_cycles:
- table_row.append(test_cycle["Mpps"][loading_size])
- table_row.append(test_cycle["pct"][loading_size])
-
- self.result_table_add(table_row)
-
- self.result_table_print()
-
def tear_down(self):
"""
Run after each test case.
--
2.25.1
^ permalink raw reply [flat|nested] 7+ messages in thread
* [dts][PATCH V1 5/7] tests/vxlan:Separated performance cases
2022-09-22 14:29 [dts][PATCH V1 1/7] tests/efd:Separated performance cases Hongbo Li
` (2 preceding siblings ...)
2022-09-22 14:29 ` [dts][PATCH V1 4/7] tests/tso:Separated " Hongbo Li
@ 2022-09-22 14:29 ` Hongbo Li
2022-09-22 14:29 ` [dts][PATCH V1 6/7] tests/ipfrag:Separated " Hongbo Li
2022-09-22 14:29 ` [PATCH V1 7/7] tests/multiprocess:Separated " Hongbo Li
5 siblings, 0 replies; 7+ messages in thread
From: Hongbo Li @ 2022-09-22 14:29 UTC (permalink / raw)
To: dts; +Cc: Hongbo Li
Separated performance cases
Signed-off-by: Hongbo Li <hongbox.li@intel.com>
---
test_plans/perf_vxlan_test_plan.rst | 84 ++++
test_plans/vxlan_test_plan.rst | 57 ---
tests/TestSuite_perf_vxlan.py | 691 ++++++++++++++++++++++++++++
tests/TestSuite_vxlan.py | 213 ---------
4 files changed, 775 insertions(+), 270 deletions(-)
create mode 100644 test_plans/perf_vxlan_test_plan.rst
create mode 100644 tests/TestSuite_perf_vxlan.py
diff --git a/test_plans/perf_vxlan_test_plan.rst b/test_plans/perf_vxlan_test_plan.rst
new file mode 100644
index 00000000..18347a94
--- /dev/null
+++ b/test_plans/perf_vxlan_test_plan.rst
@@ -0,0 +1,84 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2014-2017 Intel Corporation
+
+======================================
+Intel® Ethernet 700 Series Vxlan Tests
+======================================
+Cloud providers build virtual network overlays over existing network
+infrastructure that provide tenant isolation and scaling. Tunneling
+layers added to the packets carry the virtual networking frames over
+existing Layer 2 and IP networks. Conceptually, this is similar to
+creating virtual private networks over the Internet. Intel® Ethernet
+700 Series hardware processes these tunneling layers.
+
+This document provides the test plan for Intel® Ethernet 700 Series VXLAN
+packet detection, checksum computation and filtering.
+
+Prerequisites
+=============
+1x Intel® X710 (Intel® Ethernet 700 Series) NICs (2x 40GbE full duplex
+optical ports per NIC) plugged into the available PCIe Gen3 8-lane slot.
+
+1x Intel® XL710-DA4 (Eagle Fountain) (1x 10GbE full duplex optical port per NIC)
+plugged into the available PCIe Gen3 8-lane slot.
+
+The DUT must be a two-socket system, and each CPU must have more than 8 lcores.
+
+
+Test Case: Vxlan Checksum Offload Performance Benchmarking
+==========================================================
+Throughput is measured for each of the following VXLAN TX checksum offload
+cases: "all by software", "L3 offload by hardware", "L4 offload by
+hardware" and "L3&L4 offload by hardware".
+
+The results are printed in the following table:
+
++----------------+--------+--------+------------+
+| Calculate Type | Queues | Mpps | % linerate |
++================+========+========+============+
+| SOFTWARE ALL | Single | | |
++----------------+--------+--------+------------+
+| HW L4 | Single | | |
++----------------+--------+--------+------------+
+| HW L3&L4 | Single | | |
++----------------+--------+--------+------------+
+| SOFTWARE ALL | Multi | | |
++----------------+--------+--------+------------+
+| HW L4 | Multi | | |
++----------------+--------+--------+------------+
+| HW L3&L4 | Multi | | |
++----------------+--------+--------+------------+
+
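The "% linerate" column is derived from the theoretical wirespeed for the tested frame size. A minimal sketch of that arithmetic (the suite's own ``wirespeed`` helper is assumed to compute something equivalent; the 20 bytes of preamble/SFD/inter-frame gap per frame are standard Ethernet overhead):

```python
def wirespeed_mpps(link_gbps: float, frame_size: int, ports: int = 1) -> float:
    """Theoretical maximum packet rate in Mpps for a given frame size.

    Each frame occupies 20 extra bytes on the wire: 7B preamble + 1B SFD
    + 12B inter-frame gap (the 4B CRC is already part of frame_size).
    """
    bits_per_frame = (frame_size + 20) * 8
    return link_gbps * 1e9 / bits_per_frame / 1e6 * ports


def pct_linerate(mpps: float, link_gbps: float, frame_size: int) -> float:
    """Measured throughput as a percentage of theoretical wirespeed."""
    return mpps * 100 / wirespeed_mpps(link_gbps, frame_size)


# 64-byte frames on a single 10GbE port -> 14.88 Mpps theoretical
print(round(wirespeed_mpps(10, 64), 2))
```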
+Test Case: Vxlan Tunnel filter Performance Benchmarking
+=======================================================
+The throughput is measured for different Vxlan tunnel filter types.
+Queue single means there is only one flow, forwarded to the first queue.
+Queue multi means there are two flows, configured to different queues.
+
++--------+------------------+--------+--------+------------+
+| Packet | Filter | Queue | Mpps | % linerate |
++========+==================+========+========+============+
+| Normal | None | Single | | |
++--------+------------------+--------+--------+------------+
+| Vxlan | None | Single | | |
++--------+------------------+--------+--------+------------+
+| Vxlan | imac-ivlan | Single | | |
++--------+------------------+--------+--------+------------+
+| Vxlan | imac-ivlan-tenid | Single | | |
++--------+------------------+--------+--------+------------+
+| Vxlan | imac-tenid | Single | | |
++--------+------------------+--------+--------+------------+
+| Vxlan | imac | Single | | |
++--------+------------------+--------+--------+------------+
+| Vxlan | omac-imac-tenid | Single | | |
++--------+------------------+--------+--------+------------+
+| Vxlan | imac-ivlan | Multi | | |
++--------+------------------+--------+--------+------------+
+| Vxlan | imac-ivlan-tenid | Multi | | |
++--------+------------------+--------+--------+------------+
+| Vxlan | imac-tenid | Multi | | |
++--------+------------------+--------+--------+------------+
+| Vxlan | imac | Multi | | |
++--------+------------------+--------+--------+------------+
+| Vxlan | omac-imac-tenid | Multi | | |
++--------+------------------+--------+--------+------------+
diff --git a/test_plans/vxlan_test_plan.rst b/test_plans/vxlan_test_plan.rst
index d59cee31..96fdea0e 100644
--- a/test_plans/vxlan_test_plan.rst
+++ b/test_plans/vxlan_test_plan.rst
@@ -307,60 +307,3 @@ Add Cloud filter with invalid vni "16777216" will be failed.
Add Cloud filter with invalid queue id "64" will be failed.
-Test Case: Vxlan Checksum Offload Performance Benchmarking
-==========================================================
-The throughput is measured for each of these cases for vxlan tx checksum
-offload of "all by software", "L3 offload by hardware", "L4 offload by
-hardware", "l3&l4 offload by hardware".
-
-The results are printed in the following table:
-
-+----------------+--------+--------+------------+
-| Calculate Type | Queues | Mpps | % linerate |
-+================+========+========+============+
-| SOFTWARE ALL | Single | | |
-+----------------+--------+--------+------------+
-| HW L4 | Single | | |
-+----------------+--------+--------+------------+
-| HW L3&L4 | Single | | |
-+----------------+--------+--------+------------+
-| SOFTWARE ALL | Multi | | |
-+----------------+--------+--------+------------+
-| HW L4 | Multi | | |
-+----------------+--------+--------+------------+
-| HW L3&L4 | Multi | | |
-+----------------+--------+--------+------------+
-
-Test Case: Vxlan Tunnel filter Performance Benchmarking
-=======================================================
-The throughput is measured for different Vxlan tunnel filter types.
-Queue single mean there's only one flow and forwarded to the first queue.
-Queue multi mean there are two flows and configure to different queues.
-
-+--------+------------------+--------+--------+------------+
-| Packet | Filter | Queue | Mpps | % linerate |
-+========+==================+========+========+============+
-| Normal | None | Single | | |
-+--------+------------------+--------+--------+------------+
-| Vxlan | None | Single | | |
-+--------+------------------+--------+--------+------------+
-| Vxlan | imac-ivlan | Single | | |
-+--------+------------------+--------+--------+------------+
-| Vxlan | imac-ivlan-tenid | Single | | |
-+--------+------------------+--------+--------+------------+
-| Vxlan | imac-tenid | Single | | |
-+--------+------------------+--------+--------+------------+
-| Vxlan | imac | Single | | |
-+--------+------------------+--------+--------+------------+
-| Vxlan | omac-imac-tenid | Single | | |
-+--------+------------------+--------+--------+------------+
-| Vxlan | imac-ivlan | Multi | | |
-+--------+------------------+--------+--------+------------+
-| Vxlan | imac-ivlan-tenid | Multi | | |
-+--------+------------------+--------+--------+------------+
-| Vxlan | imac-tenid | Multi | | |
-+--------+------------------+--------+--------+------------+
-| Vxlan | imac | Multi | | |
-+--------+------------------+--------+--------+------------+
-| Vxlan | omac-imac-tenid | Multi | | |
-+--------+------------------+--------+--------+------------+
diff --git a/tests/TestSuite_perf_vxlan.py b/tests/TestSuite_perf_vxlan.py
new file mode 100644
index 00000000..4bd50c21
--- /dev/null
+++ b/tests/TestSuite_perf_vxlan.py
@@ -0,0 +1,691 @@
+import os
+import re
+import string
+import time
+from random import randint
+
+from scapy.config import conf
+from scapy.layers.inet import IP, TCP, UDP, Ether
+from scapy.layers.inet6 import IPv6
+from scapy.layers.l2 import Dot1Q
+from scapy.layers.sctp import SCTP, SCTPChunkData
+from scapy.layers.vxlan import VXLAN
+from scapy.route import *
+from scapy.sendrecv import sniff
+from scapy.utils import rdpcap, wrpcap
+
+import framework.packet as packet
+import framework.utils as utils
+from framework.packet import IncreaseIP, IncreaseIPv6
+from framework.pktgen import PacketGeneratorHelper
+from framework.pmd_output import PmdOutput
+from framework.settings import FOLDERS, HEADER_SIZE
+from framework.test_case import TestCase
+
+VXLAN_PORT = 4789
+PACKET_LEN = 128
+MAX_TXQ_RXQ = 4
+BIDIRECT = True
+
+
+class VxlanTestConfig(object):
+
+ """
+    Helper for configuring, creating and transmitting VXLAN packets
+ """
+
+ def __init__(self, test_case, **kwargs):
+ self.test_case = test_case
+ self.init()
+ for name in kwargs:
+ setattr(self, name, kwargs[name])
+ self.pkt_obj = packet.Packet()
+
+ def init(self):
+ self.packets_config()
+
+ def packets_config(self):
+ """
+ Default vxlan packet format
+ """
+ self.pcap_file = packet.TMP_PATH + "vxlan.pcap"
+ self.capture_file = packet.TMP_PATH + "vxlan_capture.pcap"
+ self.outer_mac_src = "00:00:10:00:00:00"
+ self.outer_mac_dst = "11:22:33:44:55:66"
+ self.outer_vlan = "N/A"
+ self.outer_ip_src = "192.168.1.1"
+ self.outer_ip_dst = "192.168.1.2"
+ self.outer_ip_invalid = 0
+ self.outer_ip6_src = "N/A"
+ self.outer_ip6_dst = "N/A"
+ self.outer_ip6_invalid = 0
+ self.outer_udp_src = 63
+ self.outer_udp_dst = VXLAN_PORT
+ self.outer_udp_invalid = 0
+ self.vni = 1
+ self.inner_mac_src = "00:00:20:00:00:00"
+ self.inner_mac_dst = "00:00:20:00:00:01"
+ self.inner_vlan = "N/A"
+ self.inner_ip_src = "192.168.2.1"
+ self.inner_ip_dst = "192.168.2.2"
+ self.inner_ip_invalid = 0
+ self.inner_ip6_src = "N/A"
+ self.inner_ip6_dst = "N/A"
+ self.inner_ip6_invalid = 0
+ self.payload_size = 18
+ self.inner_l4_type = "UDP"
+ self.inner_l4_invalid = 0
+
+ def packet_type(self):
+ """
+ Return vxlan packet type
+ """
+ if self.outer_udp_dst != VXLAN_PORT:
+ if self.outer_ip6_src != "N/A":
+ return "L3_IPV6_EXT_UNKNOWN"
+ else:
+ return "L3_IPV4_EXT_UNKNOWN"
+ else:
+ if self.inner_ip6_src != "N/A":
+ return "L3_IPV6_EXT_UNKNOWN"
+ else:
+ return "L3_IPV4_EXT_UNKNOWN"
+
+ def create_pcap(self):
+ """
+ Create pcap file and copy it to tester if configured
+ Return scapy packet object for later usage
+ """
+ if self.inner_l4_type == "SCTP":
+ self.inner_payload = SCTPChunkData(data="X" * 16)
+ else:
+ self.inner_payload = "X" * self.payload_size
+
+ if self.inner_l4_type == "TCP":
+ l4_pro = TCP()
+ elif self.inner_l4_type == "SCTP":
+ l4_pro = SCTP()
+ else:
+ l4_pro = UDP()
+
+ if self.inner_ip6_src != "N/A":
+ inner_l3 = IPv6()
+ else:
+ inner_l3 = IP()
+
+ if self.inner_vlan != "N/A":
+ inner = Ether() / Dot1Q() / inner_l3 / l4_pro / self.inner_payload
+ inner[Dot1Q].vlan = self.inner_vlan
+ else:
+ inner = Ether() / inner_l3 / l4_pro / self.inner_payload
+
+ if self.inner_ip6_src != "N/A":
+ inner[inner_l3.name].src = self.inner_ip6_src
+ inner[inner_l3.name].dst = self.inner_ip6_dst
+ else:
+ inner[inner_l3.name].src = self.inner_ip_src
+ inner[inner_l3.name].dst = self.inner_ip_dst
+
+ if self.inner_ip_invalid == 1:
+ inner[inner_l3.name].chksum = 0
+
+        # when the UDP checksum is 0, checksum verification is skipped
+ if self.inner_l4_invalid == 1:
+ if self.inner_l4_type == "SCTP":
+ inner[SCTP].chksum = 0
+ else:
+ inner[self.inner_l4_type].chksum = 1
+
+ inner[Ether].src = self.inner_mac_src
+ inner[Ether].dst = self.inner_mac_dst
+
+ if self.outer_ip6_src != "N/A":
+ outer_l3 = IPv6()
+ else:
+ outer_l3 = IP()
+
+ if self.outer_vlan != "N/A":
+ outer = Ether() / Dot1Q() / outer_l3 / UDP()
+ outer[Dot1Q].vlan = self.outer_vlan
+ else:
+ outer = Ether() / outer_l3 / UDP()
+
+ outer[Ether].src = self.outer_mac_src
+ outer[Ether].dst = self.outer_mac_dst
+
+ if self.outer_ip6_src != "N/A":
+ outer[outer_l3.name].src = self.outer_ip6_src
+ outer[outer_l3.name].dst = self.outer_ip6_dst
+ else:
+ outer[outer_l3.name].src = self.outer_ip_src
+ outer[outer_l3.name].dst = self.outer_ip_dst
+
+ outer[UDP].sport = self.outer_udp_src
+ outer[UDP].dport = self.outer_udp_dst
+
+ if self.outer_ip_invalid == 1:
+ outer[outer_l3.name].chksum = 0
+        # when the UDP checksum is 0, checksum verification is skipped
+ if self.outer_udp_invalid == 1:
+ outer[UDP].chksum = 1
+
+ if self.outer_udp_dst == VXLAN_PORT:
+ self.pkt = outer / VXLAN(vni=self.vni) / inner
+ else:
+ self.pkt = outer / ("X" * self.payload_size)
+
+ wrpcap(self.pcap_file, self.pkt)
+
+ return self.pkt
+
+ def get_chksums(self, pkt=None):
+ """
+        Get the L3&L4 checksum values of the outer and inner packets.
+        Skip the outer UDP checksum since it is calculated by software.
+ """
+ chk_sums = {}
+ if pkt is None:
+ pkt = rdpcap(self.pcap_file)
+ else:
+ pkt = pkt.pktgen.pkt
+
+ time.sleep(1)
+ if pkt[0].guess_payload_class(pkt[0]).name == "802.1Q":
+ payload = pkt[0][Dot1Q]
+ else:
+ payload = pkt[0]
+
+ if payload.guess_payload_class(payload).name == "IP":
+ chk_sums["outer_ip"] = hex(payload[IP].chksum)
+
+ if pkt[0].haslayer("VXLAN") == 1:
+ inner = pkt[0]["VXLAN"]
+ if inner.haslayer(IP) == 1:
+ chk_sums["inner_ip"] = hex(inner[IP].chksum)
+ if inner[IP].proto == 6:
+ chk_sums["inner_tcp"] = hex(inner[TCP].chksum)
+ if inner[IP].proto == 17:
+ chk_sums["inner_udp"] = hex(inner[UDP].chksum)
+ if inner[IP].proto == 132:
+ chk_sums["inner_sctp"] = hex(inner[SCTP].chksum)
+ elif inner.haslayer(IPv6) == 1:
+ if inner[IPv6].nh == 6:
+ chk_sums["inner_tcp"] = hex(inner[TCP].chksum)
+ if inner[IPv6].nh == 17:
+ chk_sums["inner_udp"] = hex(inner[UDP].chksum)
+                # scapy cannot parse the SCTP checksum here, so extract it manually
+ if inner[IPv6].nh == 59:
+ load = str(inner[IPv6].payload)
+ chk_sums["inner_sctp"] = hex(
+ (ord(load[8]) << 24)
+ | (ord(load[9]) << 16)
+ | (ord(load[10]) << 8)
+ | (ord(load[11]))
+ )
+
+ return chk_sums
+
+ def send_pcap(self, iface=""):
+ """
+ Send vxlan pcap file by iface
+ """
+ del self.pkt_obj.pktgen.pkts[:]
+ self.pkt_obj.pktgen.assign_pkt(self.pkt)
+ self.pkt_obj.pktgen.update_pkts()
+ self.pkt_obj.send_pkt(crb=self.test_case.tester, tx_port=iface)
+
+ def pcap_len(self):
+ """
+        Return the length of the pcap packet plus 4 bytes of CRC
+ """
+ # add four bytes crc
+ return len(self.pkt) + 4
+
+
+class TestVxlan(TestCase):
+ def set_up_all(self):
+ """
+ vxlan Prerequisites
+ """
+        # this feature is currently only enabled on Intel® Ethernet 700 Series
+ if self.nic in [
+ "I40E_10G-SFP_XL710",
+ "I40E_40G-QSFP_A",
+ "I40E_40G-QSFP_B",
+ "I40E_25G-25G_SFP28",
+ "I40E_10G-SFP_X722",
+ "I40E_10G-10G_BASE_T_X722",
+ "I40E_10G-10G_BASE_T_BC",
+ ]:
+ self.compile_switch = "CONFIG_RTE_LIBRTE_I40E_INC_VECTOR"
+ elif self.nic in ["IXGBE_10G-X550T", "IXGBE_10G-X550EM_X_10G_T"]:
+ self.compile_switch = "CONFIG_RTE_IXGBE_INC_VECTOR"
+ elif self.nic in ["ICE_25G-E810C_SFP", "ICE_100G-E810C_QSFP"]:
+            print("Intel® Ethernet 700 Series supports non-vector mode by default")
+ else:
+            self.verify(False, "%s does not support vxlan" % self.nic)
+ # Based on h/w type, choose how many ports to use
+ ports = self.dut.get_ports()
+
+ # Verify that enough ports are available
+ self.verify(len(ports) >= 2, "Insufficient ports for testing")
+ global valports
+ valports = [_ for _ in ports if self.tester.get_local_port(_) != -1]
+
+ self.portMask = utils.create_mask(valports[:2])
+
+ # Verify that enough threads are available
+ netdev = self.dut.ports_info[ports[0]]["port"]
+ self.ports_socket = netdev.socket
+
+ # start testpmd
+ self.pmdout = PmdOutput(self.dut)
+
+ # init port config
+ self.dut_port = valports[0]
+ self.dut_port_mac = self.dut.get_mac_address(self.dut_port)
+ tester_port = self.tester.get_local_port(self.dut_port)
+ self.tester_iface = self.tester.get_interface(tester_port)
+ self.recv_port = valports[1]
+ tester_recv_port = self.tester.get_local_port(self.recv_port)
+ self.recv_iface = self.tester.get_interface(tester_recv_port)
+
+ # invalid parameter
+ self.invalid_mac = "00:00:00:00:01"
+ self.invalid_ip = "192.168.1.256"
+ self.invalid_vlan = 4097
+ self.invalid_queue = 64
+ self.path = self.dut.apps_name["test-pmd"]
+
+ # vxlan payload length for performance test
+    # the inner packet length does not include the CRC, so add four bytes
+ self.vxlan_payload = (
+ PACKET_LEN
+ - HEADER_SIZE["eth"]
+ - HEADER_SIZE["ip"]
+ - HEADER_SIZE["udp"]
+ - HEADER_SIZE["vxlan"]
+ - HEADER_SIZE["eth"]
+ - HEADER_SIZE["ip"]
+ - HEADER_SIZE["udp"]
+ + 4
+ )
+
+ self.cal_type = [
+ {
+ "Type": "SOFTWARE ALL",
+ "csum": [],
+ "recvqueue": "Single",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Type": "HW L4",
+ "csum": ["udp"],
+ "recvqueue": "Single",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Type": "HW L3&L4",
+ "csum": ["ip", "udp", "outer-ip"],
+ "recvqueue": "Single",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Type": "SOFTWARE ALL",
+ "csum": [],
+ "recvqueue": "Multi",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Type": "HW L4",
+ "csum": ["udp"],
+ "recvqueue": "Multi",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Type": "HW L3&L4",
+ "csum": ["ip", "udp", "outer-ip"],
+ "recvqueue": "Multi",
+ "Mpps": {},
+ "pct": {},
+ },
+ ]
+
+ self.chksum_header = ["Calculate Type"]
+ self.chksum_header.append("Queues")
+ self.chksum_header.append("Mpps")
+ self.chksum_header.append("% linerate")
+
+ # tunnel filter performance test
+ self.default_vlan = 1
+ self.tunnel_multiqueue = 2
+ self.tunnel_header = ["Packet", "Filter", "Queue", "Mpps", "% linerate"]
+ self.tunnel_perf = [
+ {
+ "Packet": "Normal",
+ "tunnel_filter": "None",
+ "recvqueue": "Single",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Packet": "VXLAN",
+ "tunnel_filter": "None",
+ "recvqueue": "Single",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Packet": "VXLAN",
+ "tunnel_filter": "imac-ivlan",
+ "recvqueue": "Single",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Packet": "VXLAN",
+ "tunnel_filter": "imac-ivlan-tenid",
+ "recvqueue": "Single",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Packet": "VXLAN",
+ "tunnel_filter": "imac-tenid",
+ "recvqueue": "Single",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Packet": "VXLAN",
+ "tunnel_filter": "imac",
+ "recvqueue": "Single",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Packet": "VXLAN",
+ "tunnel_filter": "omac-imac-tenid",
+ "recvqueue": "Single",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Packet": "VXLAN",
+ "tunnel_filter": "None",
+ "recvqueue": "Multi",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Packet": "VXLAN",
+ "tunnel_filter": "imac-ivlan",
+ "recvqueue": "Multi",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Packet": "VXLAN",
+ "tunnel_filter": "imac-ivlan-tenid",
+ "recvqueue": "Multi",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Packet": "VXLAN",
+ "tunnel_filter": "imac-tenid",
+ "recvqueue": "Multi",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Packet": "VXLAN",
+ "tunnel_filter": "imac",
+ "recvqueue": "Multi",
+ "Mpps": {},
+ "pct": {},
+ },
+ {
+ "Packet": "VXLAN",
+ "tunnel_filter": "omac-imac-tenid",
+                "recvqueue": "Multi",
+                "Mpps": {},
+                "pct": {},
+            },
+ ]
+
+ self.pktgen_helper = PacketGeneratorHelper()
+
+ def set_up(self):
+ """
+ Run before each test case.
+ """
+ pass
+
+ def test_perf_vxlan_tunnelfilter_performance_2ports(self):
+ self.result_table_create(self.tunnel_header)
+ core_list = self.dut.get_core_list(
+ "1S/%dC/1T" % (self.tunnel_multiqueue * 2 + 1), socket=self.ports_socket
+ )
+
+ pmd_temp = (
+ "./%s %s -- -i --disable-rss --rxq=2 --txq=2 --nb-cores=4 --portmask=%s"
+ )
+
+ for perf_config in self.tunnel_perf:
+ tun_filter = perf_config["tunnel_filter"]
+ recv_queue = perf_config["recvqueue"]
+ print(
+ (
+ utils.GREEN(
+ "Measure tunnel performance of [%s %s %s]"
+ % (perf_config["Packet"], tun_filter, recv_queue)
+ )
+ )
+ )
+
+ if tun_filter == "None" and recv_queue == "Multi":
+ pmd_temp = (
+ "./%s %s -- -i --rss-udp --rxq=2 --txq=2 --nb-cores=4 --portmask=%s"
+ )
+
+ self.eal_para = self.dut.create_eal_parameters(cores=core_list)
+ pmd_cmd = pmd_temp % (self.path, self.eal_para, self.portMask)
+ self.dut.send_expect(pmd_cmd, "testpmd> ", 100)
+
+ # config flow
+ self.config_tunnelfilter(
+ self.dut_port, self.recv_port, perf_config, "flow1.pcap"
+ )
+ # config the flows
+ tgen_input = []
+ tgen_input.append(
+ (
+ self.tester.get_local_port(self.dut_port),
+ self.tester.get_local_port(self.recv_port),
+ "flow1.pcap",
+ )
+ )
+
+ if BIDIRECT:
+ self.config_tunnelfilter(
+ self.recv_port, self.dut_port, perf_config, "flow2.pcap"
+ )
+ tgen_input.append(
+ (
+ self.tester.get_local_port(self.recv_port),
+ self.tester.get_local_port(self.dut_port),
+ "flow2.pcap",
+ )
+ )
+
+ self.dut.send_expect("set fwd io", "testpmd>", 10)
+ self.dut.send_expect("start", "testpmd>", 10)
+ self.pmdout.wait_link_status_up(self.dut_port)
+ if BIDIRECT:
+ wirespeed = self.wirespeed(self.nic, PACKET_LEN, 2)
+ else:
+ wirespeed = self.wirespeed(self.nic, PACKET_LEN, 1)
+
+ # run traffic generator
+ use_vm = True if recv_queue == "Multi" and tun_filter == "None" else False
+ _, pps = self.suite_measure_throughput(tgen_input, use_vm=use_vm)
+
+ pps /= 1000000.0
+ perf_config["Mpps"] = pps
+ perf_config["pct"] = pps * 100 / wirespeed
+
+ out = self.dut.send_expect("stop", "testpmd>", 10)
+ self.dut.send_expect("quit", "# ", 10)
+
+ # verify every queue work fine
+ check_queue = 0
+ if recv_queue == "Multi":
+ for queue in range(check_queue):
+ self.verify(
+ "Queue= %d -> TX Port" % (queue) in out,
+ "Queue %d no traffic" % queue,
+ )
+
+ table_row = [
+ perf_config["Packet"],
+ tun_filter,
+ recv_queue,
+ perf_config["Mpps"],
+ perf_config["pct"],
+ ]
+
+ self.result_table_add(table_row)
+
+ self.result_table_print()
+
+ def test_perf_vxlan_checksum_performance_2ports(self):
+ self.result_table_create(self.chksum_header)
+ vxlan = VxlanTestConfig(self, payload_size=self.vxlan_payload)
+ vxlan.outer_mac_dst = self.dut.get_mac_address(self.dut_port)
+ vxlan.pcap_file = "vxlan1.pcap"
+ vxlan.inner_mac_dst = "00:00:20:00:00:01"
+ vxlan.create_pcap()
+
+ vxlan_queue = VxlanTestConfig(self, payload_size=self.vxlan_payload)
+ vxlan_queue.outer_mac_dst = self.dut.get_mac_address(self.dut_port)
+ vxlan_queue.pcap_file = "vxlan1_1.pcap"
+ vxlan_queue.inner_mac_dst = "00:00:20:00:00:02"
+ vxlan_queue.create_pcap()
+
+ # socket/core/thread
+ core_list = self.dut.get_core_list(
+ "1S/%dC/1T" % (self.tunnel_multiqueue * 2 + 1), socket=self.ports_socket
+ )
+ core_mask = utils.create_mask(core_list)
+
+ self.dut_ports = self.dut.get_ports_performance(force_different_nic=False)
+ tx_port = self.tester.get_local_port(self.dut_ports[0])
+ rx_port = self.tester.get_local_port(self.dut_ports[1])
+
+ for cal in self.cal_type:
+ recv_queue = cal["recvqueue"]
+ print(
+ (
+ utils.GREEN(
+ "Measure checksum performance of [%s %s %s]"
+ % (cal["Type"], recv_queue, cal["csum"])
+ )
+ )
+ )
+
+ # configure flows
+ tgen_input = []
+ if recv_queue == "Multi":
+ tgen_input.append((tx_port, rx_port, "vxlan1.pcap"))
+ tgen_input.append((tx_port, rx_port, "vxlan1_1.pcap"))
+ else:
+ tgen_input.append((tx_port, rx_port, "vxlan1.pcap"))
+
+            # multi-queue and single-queue commands
+ if recv_queue == "Multi":
+ pmd_temp = "./%s %s -- -i --disable-rss --rxq=2 --txq=2 --nb-cores=4 --portmask=%s"
+ else:
+ pmd_temp = "./%s %s -- -i --nb-cores=2 --portmask=%s"
+
+ self.eal_para = self.dut.create_eal_parameters(cores=core_list)
+ pmd_cmd = pmd_temp % (self.path, self.eal_para, self.portMask)
+
+ self.dut.send_expect(pmd_cmd, "testpmd> ", 100)
+ self.dut.send_expect("set fwd csum", "testpmd>", 10)
+ self.enable_vxlan(self.dut_port)
+ self.enable_vxlan(self.recv_port)
+ self.pmdout.wait_link_status_up(self.dut_port)
+
+ # redirect flow to another queue by tunnel filter
+ rule_config = {
+ "dut_port": self.dut_port,
+ "outer_mac_dst": vxlan.outer_mac_dst,
+ "inner_mac_dst": vxlan.inner_mac_dst,
+ "inner_ip_dst": vxlan.inner_ip_dst,
+ "inner_vlan": 0,
+ "tun_filter": "imac",
+ "vni": vxlan.vni,
+ "queue": 0,
+ }
+ self.perf_tunnel_filter_set_rule(rule_config)
+
+ if recv_queue == "Multi":
+ rule_config = {
+ "dut_port": self.dut_port,
+ "outer_mac_dst": vxlan_queue.outer_mac_dst,
+ "inner_mac_dst": vxlan_queue.inner_mac_dst,
+ "inner_ip_dst": vxlan_queue.inner_ip_dst,
+ "inner_vlan": 0,
+ "tun_filter": "imac",
+ "vni": vxlan.vni,
+ "queue": 1,
+ }
+ self.perf_tunnel_filter_set_rule(rule_config)
+
+ for pro in cal["csum"]:
+ self.csum_set_type(pro, self.dut_port)
+ self.csum_set_type(pro, self.recv_port)
+
+ self.dut.send_expect("start", "testpmd>", 10)
+
+ wirespeed = self.wirespeed(self.nic, PACKET_LEN, 1)
+
+ # run traffic generator
+ _, pps = self.suite_measure_throughput(tgen_input)
+
+ pps /= 1000000.0
+ cal["Mpps"] = pps
+ cal["pct"] = pps * 100 / wirespeed
+
+ out = self.dut.send_expect("stop", "testpmd>", 10)
+ self.dut.send_expect("quit", "# ", 10)
+
+        # verify that every rx queue carried traffic
+        check_queue = self.tunnel_multiqueue
+ if recv_queue == "Multi":
+ for queue in range(check_queue):
+ self.verify(
+ "Queue= %d -> TX Port" % (queue) in out,
+ "Queue %d no traffic" % queue,
+ )
+
+ table_row = [cal["Type"], recv_queue, cal["Mpps"], cal["pct"]]
+ self.result_table_add(table_row)
+
+ self.result_table_print()
+
+ def tear_down(self):
+ """
+ Run after each test case.
+ """
+ self.dut.kill_all()
+
+ def tear_down_all(self):
+ """
+ Run after each test suite.
+ """
+ pass
diff --git a/tests/TestSuite_vxlan.py b/tests/TestSuite_vxlan.py
index c69d7903..5dd49ecd 100644
--- a/tests/TestSuite_vxlan.py
+++ b/tests/TestSuite_vxlan.py
@@ -1163,219 +1163,6 @@ class TestVxlan(TestCase):
wrpcap(dest_pcap, pkts)
- def test_perf_vxlan_tunnelfilter_performance_2ports(self):
- self.result_table_create(self.tunnel_header)
- core_list = self.dut.get_core_list(
- "1S/%dC/1T" % (self.tunnel_multiqueue * 2 + 1), socket=self.ports_socket
- )
-
- pmd_temp = (
- "./%s %s -- -i --disable-rss --rxq=2 --txq=2 --nb-cores=4 --portmask=%s"
- )
-
- for perf_config in self.tunnel_perf:
- tun_filter = perf_config["tunnel_filter"]
- recv_queue = perf_config["recvqueue"]
- print(
- (
- utils.GREEN(
- "Measure tunnel performance of [%s %s %s]"
- % (perf_config["Packet"], tun_filter, recv_queue)
- )
- )
- )
-
- if tun_filter == "None" and recv_queue == "Multi":
- pmd_temp = (
- "./%s %s -- -i --rss-udp --rxq=2 --txq=2 --nb-cores=4 --portmask=%s"
- )
-
- self.eal_para = self.dut.create_eal_parameters(cores=core_list)
- pmd_cmd = pmd_temp % (self.path, self.eal_para, self.portMask)
- self.dut.send_expect(pmd_cmd, "testpmd> ", 100)
-
- # config flow
- self.config_tunnelfilter(
- self.dut_port, self.recv_port, perf_config, "flow1.pcap"
- )
- # config the flows
- tgen_input = []
- tgen_input.append(
- (
- self.tester.get_local_port(self.dut_port),
- self.tester.get_local_port(self.recv_port),
- "flow1.pcap",
- )
- )
-
- if BIDIRECT:
- self.config_tunnelfilter(
- self.recv_port, self.dut_port, perf_config, "flow2.pcap"
- )
- tgen_input.append(
- (
- self.tester.get_local_port(self.recv_port),
- self.tester.get_local_port(self.dut_port),
- "flow2.pcap",
- )
- )
-
- self.dut.send_expect("set fwd io", "testpmd>", 10)
- self.dut.send_expect("start", "testpmd>", 10)
- self.pmdout.wait_link_status_up(self.dut_port)
- if BIDIRECT:
- wirespeed = self.wirespeed(self.nic, PACKET_LEN, 2)
- else:
- wirespeed = self.wirespeed(self.nic, PACKET_LEN, 1)
-
- # run traffic generator
- use_vm = True if recv_queue == "Multi" and tun_filter == "None" else False
- _, pps = self.suite_measure_throughput(tgen_input, use_vm=use_vm)
-
- pps /= 1000000.0
- perf_config["Mpps"] = pps
- perf_config["pct"] = pps * 100 / wirespeed
-
- out = self.dut.send_expect("stop", "testpmd>", 10)
- self.dut.send_expect("quit", "# ", 10)
-
- # verify every queue work fine
- check_queue = 0
- if recv_queue == "Multi":
- for queue in range(check_queue):
- self.verify(
- "Queue= %d -> TX Port" % (queue) in out,
- "Queue %d no traffic" % queue,
- )
-
- table_row = [
- perf_config["Packet"],
- tun_filter,
- recv_queue,
- perf_config["Mpps"],
- perf_config["pct"],
- ]
-
- self.result_table_add(table_row)
-
- self.result_table_print()
-
- def test_perf_vxlan_checksum_performance_2ports(self):
- self.result_table_create(self.chksum_header)
- vxlan = VxlanTestConfig(self, payload_size=self.vxlan_payload)
- vxlan.outer_mac_dst = self.dut.get_mac_address(self.dut_port)
- vxlan.pcap_file = "vxlan1.pcap"
- vxlan.inner_mac_dst = "00:00:20:00:00:01"
- vxlan.create_pcap()
-
- vxlan_queue = VxlanTestConfig(self, payload_size=self.vxlan_payload)
- vxlan_queue.outer_mac_dst = self.dut.get_mac_address(self.dut_port)
- vxlan_queue.pcap_file = "vxlan1_1.pcap"
- vxlan_queue.inner_mac_dst = "00:00:20:00:00:02"
- vxlan_queue.create_pcap()
-
- # socket/core/thread
- core_list = self.dut.get_core_list(
- "1S/%dC/1T" % (self.tunnel_multiqueue * 2 + 1), socket=self.ports_socket
- )
- core_mask = utils.create_mask(core_list)
-
- self.dut_ports = self.dut.get_ports_performance(force_different_nic=False)
- tx_port = self.tester.get_local_port(self.dut_ports[0])
- rx_port = self.tester.get_local_port(self.dut_ports[1])
-
- for cal in self.cal_type:
- recv_queue = cal["recvqueue"]
- print(
- (
- utils.GREEN(
- "Measure checksum performance of [%s %s %s]"
- % (cal["Type"], recv_queue, cal["csum"])
- )
- )
- )
-
- # configure flows
- tgen_input = []
- if recv_queue == "Multi":
- tgen_input.append((tx_port, rx_port, "vxlan1.pcap"))
- tgen_input.append((tx_port, rx_port, "vxlan1_1.pcap"))
- else:
- tgen_input.append((tx_port, rx_port, "vxlan1.pcap"))
-
- # multi queue and signle queue commands
- if recv_queue == "Multi":
- pmd_temp = "./%s %s -- -i --disable-rss --rxq=2 --txq=2 --nb-cores=4 --portmask=%s"
- else:
- pmd_temp = "./%s %s -- -i --nb-cores=2 --portmask=%s"
-
- self.eal_para = self.dut.create_eal_parameters(cores=core_list)
- pmd_cmd = pmd_temp % (self.path, self.eal_para, self.portMask)
-
- self.dut.send_expect(pmd_cmd, "testpmd> ", 100)
- self.dut.send_expect("set fwd csum", "testpmd>", 10)
- self.enable_vxlan(self.dut_port)
- self.enable_vxlan(self.recv_port)
- self.pmdout.wait_link_status_up(self.dut_port)
-
- # redirect flow to another queue by tunnel filter
- rule_config = {
- "dut_port": self.dut_port,
- "outer_mac_dst": vxlan.outer_mac_dst,
- "inner_mac_dst": vxlan.inner_mac_dst,
- "inner_ip_dst": vxlan.inner_ip_dst,
- "inner_vlan": 0,
- "tun_filter": "imac",
- "vni": vxlan.vni,
- "queue": 0,
- }
- self.perf_tunnel_filter_set_rule(rule_config)
-
- if recv_queue == "Multi":
- rule_config = {
- "dut_port": self.dut_port,
- "outer_mac_dst": vxlan_queue.outer_mac_dst,
- "inner_mac_dst": vxlan_queue.inner_mac_dst,
- "inner_ip_dst": vxlan_queue.inner_ip_dst,
- "inner_vlan": 0,
- "tun_filter": "imac",
- "vni": vxlan.vni,
- "queue": 1,
- }
- self.perf_tunnel_filter_set_rule(rule_config)
-
- for pro in cal["csum"]:
- self.csum_set_type(pro, self.dut_port)
- self.csum_set_type(pro, self.recv_port)
-
- self.dut.send_expect("start", "testpmd>", 10)
-
- wirespeed = self.wirespeed(self.nic, PACKET_LEN, 1)
-
- # run traffic generator
- _, pps = self.suite_measure_throughput(tgen_input)
-
- pps /= 1000000.0
- cal["Mpps"] = pps
- cal["pct"] = pps * 100 / wirespeed
-
- out = self.dut.send_expect("stop", "testpmd>", 10)
- self.dut.send_expect("quit", "# ", 10)
-
- # verify every queue work fine
- check_queue = 1
- if recv_queue == "Multi":
- for queue in range(check_queue):
- self.verify(
- "Queue= %d -> TX Port" % (queue) in out,
- "Queue %d no traffic" % queue,
- )
-
- table_row = [cal["Type"], recv_queue, cal["Mpps"], cal["pct"]]
- self.result_table_add(table_row)
-
- self.result_table_print()
-
def enable_vxlan(self, port):
self.dut.send_expect(
"rx_vxlan_port add %d %d" % (VXLAN_PORT, port), "testpmd>", 10
--
2.25.1
^ permalink raw reply [flat|nested] 7+ messages in thread
* [dts][PATCH V1 6/7] tests/ipfrag:Separated performance cases
2022-09-22 14:29 [dts][PATCH V1 1/7] tests/efd:Separated performance cases Hongbo Li
` (3 preceding siblings ...)
2022-09-22 14:29 ` [dts][PATCH V1 5/7] tests/vxlan:Separated " Hongbo Li
@ 2022-09-22 14:29 ` Hongbo Li
2022-09-22 14:29 ` [PATCH V1 7/7] tests/multiprocess:Separated " Hongbo Li
5 siblings, 0 replies; 7+ messages in thread
From: Hongbo Li @ 2022-09-22 14:29 UTC (permalink / raw)
To: dts; +Cc: Hongbo Li
Separated performance cases
Signed-off-by: Hongbo Li <hongbox.li@intel.com>
---
test_plans/ipfrag_test_plan.rst | 31 ---
test_plans/perf_ipfrag_test_plan.rst | 140 ++++++++++
tests/TestSuite_ipfrag.py | 119 ---------
tests/TestSuite_perf_ipfrag.py | 386 +++++++++++++++++++++++++++
4 files changed, 526 insertions(+), 150 deletions(-)
create mode 100644 test_plans/perf_ipfrag_test_plan.rst
create mode 100644 tests/TestSuite_perf_ipfrag.py
diff --git a/test_plans/ipfrag_test_plan.rst b/test_plans/ipfrag_test_plan.rst
index 4cf8c436..52f6a973 100644
--- a/test_plans/ipfrag_test_plan.rst
+++ b/test_plans/ipfrag_test_plan.rst
@@ -128,34 +128,3 @@ For each of them check that:
#. Check number of output packets.
#. Check header of each output packet: length, ID, fragment offset, flags.
#. Check payload: size and contents as expected, not corrupted.
-
-
-
-Test Case 4: Throughput test
-============================
-
-The test report should provide the throughput rate measurements (in mpps and % of the line rate for 2x NIC ports)
-for the following input frame sizes: 64 bytes, 1518 bytes, 1519 bytes, 2K, 9k.
-
-The following configurations should be tested:
-
-|
-
-+----------+-------------------------+----------------------+
-|# of ports| Socket/Core/HyperThread|Total # of sw threads |
-+----------+-------------------------+----------------------+
-| 2 | 1S/1C/1T | 1 |
-+----------+-------------------------+----------------------+
-| 2 | 1S/1C/2T | 2 |
-+----------+-------------------------+----------------------+
-| 2 | 1S/2C/1T | 2 |
-+----------+-------------------------+----------------------+
-| 2 | 2S/1C/1T | 2 |
-+----------+-------------------------+----------------------+
-
-|
-
-Command line::
-
- ./x86_64-native-linuxapp-gcc/examples/dpdk-ip_fragmentation -c <LCOREMASK> -n 4 -- [-P] -p PORTMASK
- -q <NUM_OF_PORTS_PER_THREAD>
diff --git a/test_plans/perf_ipfrag_test_plan.rst b/test_plans/perf_ipfrag_test_plan.rst
new file mode 100644
index 00000000..3502f9ab
--- /dev/null
+++ b/test_plans/perf_ipfrag_test_plan.rst
@@ -0,0 +1,140 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2011-2017 Intel Corporation
+
+======================
+IP fragmentation Tests
+======================
+
+The IP fragmentation results are produced using the ``ip_fragmentation`` application.
+The test application should run with both IPv4 and IPv6 fragmentation.
+
+Prerequisites
+=============
+
+1. Hardware requirements:
+
+ - For each CPU socket, each memory channel should be populated with at least 1x DIMM
+ - Board is populated with at least 2x 1GbE or 10GbE ports. Special PCIe restrictions may
+ be required for performance. For example, the following requirements should be
+ met for Intel 82599 NICs:
+
+ - NICs are plugged into PCIe Gen2 or Gen3 slots
+ - For PCIe Gen2 slots, the number of lanes should be 8x or higher
+ - A single port from each NIC should be used, so for 2x ports, 2x NICs should
+ be used
+
+ - NIC ports connected to traffic generator. It is assumed that the NIC ports
+ P0, P1, P2, P3 (as identified by the DPDK application) are connected to the
+ traffic generator ports TG0, TG1, TG2, TG3. The application-side port mask of
+ NIC ports P0, P1, P2, P3 is noted as PORTMASK in this section.
+ Traffic generator should support sending jumbo frames with size up to 9K.
+
+2. BIOS requirements:
+
+ - Intel Hyper-Threading Technology is ENABLED
+ - Hardware Prefetcher is DISABLED
+ - Adjacent Cache Line Prefetch is DISABLED
+ - Direct Cache Access is DISABLED
+
+3. Linux kernel requirements:
+
+ - Linux kernel has the following features enabled: huge page support, UIO, HPET
+ - Appropriate number of huge pages are reserved at kernel boot time
+ - The IDs of the hardware threads (logical cores) per each CPU socket can be
+ determined by parsing the file /proc/cpuinfo. The naming convention for the
+ logical cores is: C{x.y.z} = hyper-thread z of physical core y of CPU socket x,
+ with typical values of x = 0 .. 3, y = 0 .. 7, z = 0 .. 1. Logical cores
+ C{0.0.0} and C{0.0.1} should be avoided while executing the test, as they are
+ used by the Linux kernel for running regular processes.
+
+4. Software application requirements
+
+5. If using VFIO, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS. When
+ using vfio, use the following commands to load the vfio driver and bind it
+ to the device under test::
+
+ modprobe vfio
+ modprobe vfio-pci
+ usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
+
+ - The test can be run with IPv4 packets. The LPM table used for IPv4 packet routing is:
+
+ +-------+-------------------------------------+-----------+
+ |Entry #|LPM prefix (IP/length) |Output port|
+ +-------+-------------------------------------+-----------+
+ | 0 | 100.10.0.0/16 | P2 |
+ +-------+-------------------------------------+-----------+
+ | 1 | 100.20.0.0/16 | P2 |
+ +-------+-------------------------------------+-----------+
+ | 2 | 100.30.0.0/16 | P0 |
+ +-------+-------------------------------------+-----------+
+ | 3 | 100.40.0.0/16 | P0 |
+ +-------+-------------------------------------+-----------+
+
+
+ - The test can be run with IPv6 packets, which follow the rules below.
+
+ - There is no support for Hop-by-Hop or Routing extension headers in the packet
+ to be fragmented. All other optional headers, which are not part of the
+ unfragmentable part of the IPv6 packet, are supported.
+
+ - When a fragment is generated, its identification field in the IPv6
+ fragmentation extension header is set to 0. This is not RFC compliant, but
+ proper identification number generation is out of the scope of the application
+ and routers in an IPv6 path are not allowed to fragment in the first place.
+ Generating that identification number is the job of a proper IP stack.
+
+ - The LPM table used for IPv6 packet routing is:
+
+ +-------+-------------------------------------+-----------+
+ |Entry #|LPM prefix (IP/length) |Output port|
+ +-------+-------------------------------------+-----------+
+ | 0 | 101:101:101:101:101:101:101:101/48| P2 |
+ +-------+-------------------------------------+-----------+
+ | 1 | 201:101:101:101:101:101:101:101/48| P2 |
+ +-------+-------------------------------------+-----------+
+ | 2 | 301:101:101:101:101:101:101:101/48| P0 |
+ +-------+-------------------------------------+-----------+
+ | 3 | 401:101:101:101:101:101:101:101/48| P0 |
+ +-------+-------------------------------------+-----------+
+
+ The following items are configured through the command line interface of the application:
+
+ - The set of one or several RX queues to be enabled for each NIC port
+ - The set of logical cores to execute the packet forwarding task
+ - Mapping of the NIC RX queues to logical cores handling them.
+
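The LPM tables above describe longest-prefix-match routing of test traffic to output ports. As a minimal, hypothetical sketch (not part of the DTS suite or the sample application; it only mirrors the IPv4 table above using the stdlib `ipaddress` module), the lookup behaves like this:

```python
# Hypothetical sketch: longest-prefix-match lookup over the IPv4 LPM table
# from the test plan. Port names P0/P2 are the plan's labels, not real objects.
import ipaddress

LPM_TABLE = [
    (ipaddress.ip_network("100.10.0.0/16"), "P2"),
    (ipaddress.ip_network("100.20.0.0/16"), "P2"),
    (ipaddress.ip_network("100.30.0.0/16"), "P0"),
    (ipaddress.ip_network("100.40.0.0/16"), "P0"),
]

def lookup(dst_ip):
    """Return the output port of the longest matching prefix, or None."""
    dst = ipaddress.ip_address(dst_ip)
    best = None
    for net, port in LPM_TABLE:
        if dst in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, port)
    return best[1] if best else None

print(lookup("100.20.0.1"))  # -> P2
print(lookup("100.30.5.9"))  # -> P0
```

This is why the benchmark flows below send to destinations such as 100.10.0.1 and 100.30.0.1: each flow deterministically exits a known port.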
+6. Compile examples/ip_fragmentation::
+
+ meson configure -Dexamples=ip_fragmentation x86_64-native-linuxapp-gcc
+ ninja -C x86_64-native-linuxapp-gcc
+
+
+Test Case: Throughput test
+===========================
+
+The test report should provide the throughput rate measurements (in Mpps and as a % of the line rate for 2x NIC ports)
+for the following input frame sizes: 64 bytes, 1518 bytes, 1519 bytes, 2K, 9K.
+
+The following configurations should be tested:
+
+|
+
++----------+-------------------------+----------------------+
+|# of ports| Socket/Core/HyperThread|Total # of sw threads |
++----------+-------------------------+----------------------+
+| 2 | 1S/1C/1T | 1 |
++----------+-------------------------+----------------------+
+| 2 | 1S/1C/2T | 2 |
++----------+-------------------------+----------------------+
+| 2 | 1S/2C/1T | 2 |
++----------+-------------------------+----------------------+
+| 2 | 2S/1C/1T | 2 |
++----------+-------------------------+----------------------+
+
+|
+
+Command line::
+
+ ./x86_64-native-linuxapp-gcc/examples/dpdk-ip_fragmentation -c <LCOREMASK> -n 4 -- [-P] -p PORTMASK
+ -q <NUM_OF_PORTS_PER_THREAD>
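The frame sizes above straddle the fragmentation boundary on purpose: 1518 passes through whole while 1519 must be split. A hypothetical sketch of the fragment-count arithmetic (not part of the suite; it assumes the suite's accounting of a 1518-byte max output frame with 18 bytes of Ethernet overhead including FCS and a 20-byte option-less IPv4 header) is:

```python
# Hypothetical sketch: expected number of IPv4 output packets per input frame,
# assuming at most 1518 - 18 - 20 = 1480 payload bytes per fragment.
ETH_OVERHEAD = 18   # Ethernet header + FCS, as counted by the suite
IPV4_HDR = 20       # IPv4 header without options
MAX_FRAG_PAYLOAD = 1518 - ETH_OVERHEAD - IPV4_HDR   # 1480 bytes

def expected_fragments(frame_size):
    """Expected number of output packets for one input frame."""
    payload = frame_size - ETH_OVERHEAD - IPV4_HDR
    frags, rem = divmod(payload, MAX_FRAG_PAYLOAD)
    if rem:
        frags += 1
    return max(frags, 1)   # frames at or below the limit pass through whole

for size in (64, 1518, 1519, 2000, 9000):
    print(size, expected_fragments(size))   # -> 1, 1, 2, 2, 7
```

Fragment offsets are carried in 8-byte units, so consecutive fragments advance by 1480 / 8 = 185, which is the value the functional checks compare against.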
diff --git a/tests/TestSuite_ipfrag.py b/tests/TestSuite_ipfrag.py
index 62170865..1f6d74f1 100644
--- a/tests/TestSuite_ipfrag.py
+++ b/tests/TestSuite_ipfrag.py
@@ -274,125 +274,6 @@ class TestIpfrag(TestCase):
self.functional_check_ipv4(sizelist, 1, "frag")
self.functional_check_ipv6(sizelist, 1, "frag")
- def benchmark(self, index, lcore, num_pthreads, size_list):
- """
- Just Test IPv4 Throughput for selected parameters.
- """
-
- Bps = dict()
- Pps = dict()
- Pct = dict()
-
- if int(lcore[0]) == 1:
- eal_param = self.dut.create_eal_parameters(
- cores=lcore, socket=self.ports_socket, ports=self.ports
- )
- else:
- eal_param = self.dut.create_eal_parameters(cores=lcore, ports=self.ports)
- portmask = utils.create_mask([P0, P1])
- self.dut.send_expect("^c", "# ", 120)
- self.dut.send_expect(
- "%s %s -- -p %s -q %s"
- % (self.app_ip_fragmentation_path, eal_param, portmask, num_pthreads),
- "IP_FRAG:",
- 120,
- )
- result = [2, lcore, num_pthreads]
- for size in size_list:
- dmac = self.dut.get_mac_address(P0)
- flows_p0 = [
- 'Ether(dst="%s")/IP(src="1.2.3.4", dst="100.10.0.1", flags=0)/("X"*%d)'
- % (dmac, size - 38),
- 'Ether(dst="%s")/IP(src="1.2.3.4", dst="100.20.0.1", flags=0)/("X"*%d)'
- % (dmac, size - 38),
- 'Ether(dst="%s")/IPv6(dst="101:101:101:101:101:101:101:101",src="ee80:ee80:ee80:ee80:ee80:ee80:ee80:ee80")/Raw(load="X"*%d)'
- % (dmac, size - 58),
- 'Ether(dst="%s")/IPv6(dst="201:101:101:101:101:101:101:101",src="ee80:ee80:ee80:ee80:ee80:ee80:ee80:ee80")/Raw(load="X"*%d)'
- % (dmac, size - 58),
- ]
-
- # reserved for rx/tx bidirection test
- dmac = self.dut.get_mac_address(P1)
- flows_p1 = [
- 'Ether(dst="%s")/IP(src="1.2.3.4", dst="100.30.0.1", flags=0)/("X"*%d)'
- % (dmac, size - 38),
- 'Ether(dst="%s")/IP(src="1.2.3.4", dst="100.40.0.1", flags=0)/("X"*%d)'
- % (dmac, size - 38),
- 'Ether(dst="%s")/IPv6(dst="301:101:101:101:101:101:101:101",src="ee80:ee80:ee80:ee80:ee80:ee80:ee80:ee80")/Raw(load="X"*%d)'
- % (dmac, size - 58),
- 'Ether(dst="%s")/IPv6(dst="401:101:101:101:101:101:101:101",src="ee80:ee80:ee80:ee80:ee80:ee80:ee80:ee80")/Raw(load="X"*%d)'
- % (dmac, size - 58),
- ]
- flow_len = len(flows_p0)
- tgenInput = []
- for i in range(flow_len):
-
- pcap0 = os.sep.join([self.output_path, "p0_{}.pcap".format(i)])
- self.tester.scapy_append('wrpcap("%s", [%s])' % (pcap0, flows_p0[i]))
- pcap1 = os.sep.join([self.output_path, "p1_{}.pcap".format(i)])
- self.tester.scapy_append('wrpcap("%s", [%s])' % (pcap1, flows_p1[i]))
- self.tester.scapy_execute()
-
- tgenInput.append(
- (
- self.tester.get_local_port(P0),
- self.tester.get_local_port(P1),
- pcap0,
- )
- )
- tgenInput.append(
- (
- self.tester.get_local_port(P1),
- self.tester.get_local_port(P0),
- pcap1,
- )
- )
-
- factor = (size + 1517) / 1518
- # wireSpd = 2 * 10000.0 / ((20 + size) * 8)
-
- # clear streams before add new streams
- self.tester.pktgen.clear_streams()
- # run packet generator
- streams = self.pktgen_helper.prepare_stream_from_tginput(
- tgenInput, 100, None, self.tester.pktgen
- )
- Bps[str(size)], Pps[str(size)] = self.tester.pktgen.measure_throughput(
- stream_ids=streams
- )
-
- self.verify(Pps[str(size)] > 0, "No traffic detected")
- Pps[str(size)] *= 1.0 / factor / 1000000
- Pct[str(size)] = (1.0 * Bps[str(size)] * 100) / (2 * 10000000000)
-
- result.append(Pps[str(size)])
- result.append(Pct[str(size)])
-
- self.result_table_add(result)
-
- self.dut.send_expect("^C", "#")
-
- def test_perf_ipfrag_throughtput(self):
- """
- Performance test for 64, 1518, 1519, 2k and 9k.
- """
- sizes = [64, 1518, 1519, 2000, 9000]
-
- tblheader = ["Ports", "S/C/T", "SW threads"]
- for size in sizes:
- tblheader.append("%dB Mpps" % size)
- tblheader.append("%d" % size)
-
- self.result_table_create(tblheader)
-
- lcores = [("1S/1C/1T", 2), ("1S/1C/2T", 2), ("1S/2C/1T", 2), ("2S/1C/1T", 2)]
- index = 1
- for (lcore, numThr) in lcores:
- self.benchmark(index, lcore, numThr, sizes)
- index += 1
-
- self.result_table_print()
-
def tear_down(self):
"""
Run after each test case.
diff --git a/tests/TestSuite_perf_ipfrag.py b/tests/TestSuite_perf_ipfrag.py
new file mode 100644
index 00000000..e1b0f242
--- /dev/null
+++ b/tests/TestSuite_perf_ipfrag.py
@@ -0,0 +1,386 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+#
+
+"""
+DPDK Test suite.
+Test IPv4 fragmentation features in DPDK.
+"""
+
+import os
+import re
+import string
+import time
+
+import framework.utils as utils
+from framework.packet import Packet
+from framework.pktgen import PacketGeneratorHelper
+from framework.settings import HEADER_SIZE
+from framework.test_case import TestCase
+
+lpm_table_ipv6 = [
+ "{{1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}, 48, P1}",
+ "{{2,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}, 48, P1}",
+ "{{3,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}, 48, P0}",
+ "{{4,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}, 48, P0}",
+ "{{5,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}, 48, P1}",
+ "{{6,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}, 48, P1}",
+ "{{7,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}, 48, P0}",
+ "{{8,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}, 48, P0}",
+]
+
+
+class TestIpfrag(TestCase):
+ def portRepl(self, match):
+ """
+ Function to replace P([0123]) pattern in tables
+ """
+
+ portid = match.group(1)
+ self.verify(int(portid) in range(4), "invalid port id")
+ return "%s" % eval("P" + str(portid))
+
+ def set_up_all(self):
+ """
+ ip_fragmentation Prerequisites
+ """
+
+ # Based on h/w type, choose how many ports to use
+ self.ports = self.dut.get_ports()
+
+ # Verify that enough ports are available
+ self.verify(len(self.ports) >= 2, "Insufficient ports for testing")
+
+ self.ports_socket = self.dut.get_numa_id(self.ports[0])
+
+ # Verify that enough threads are available
+ cores = self.dut.get_core_list("1S/1C/1T")
+ self.verify(cores is not None, "Insufficient cores for speed testing")
+
+ global P0, P1
+ P0 = self.ports[0]
+ P1 = self.ports[1]
+
+ # make application
+ out = self.dut.build_dpdk_apps("examples/ip_fragmentation")
+ self.verify("Error" not in out, "compilation error 1")
+ self.verify("No such file" not in out, "compilation error 2")
+
+ self.eal_para = self.dut.create_eal_parameters(
+ cores="1S/1C/2T", socket=self.ports_socket, ports=self.ports
+ )
+ portmask = utils.create_mask([P0, P1])
+ numPortThread = len([P0, P1]) / len(cores)
+
+ # run ipv4_frag
+ self.app_ip_fragmentation_path = self.dut.apps_name["ip_fragmentation"]
+ self.dut.send_expect(
+ "%s %s -- -p %s -q %s"
+ % (
+ self.app_ip_fragmentation_path,
+ self.eal_para,
+ portmask,
+ int(numPortThread),
+ ),
+ "Link [Uu]p",
+ 120,
+ )
+
+ time.sleep(2)
+ self.txItf = self.tester.get_interface(self.tester.get_local_port(P0))
+ self.rxItf = self.tester.get_interface(self.tester.get_local_port(P1))
+ self.dmac = self.dut.get_mac_address(P0)
+
+ # get dts output path
+ if self.logger.log_path.startswith(os.sep):
+ self.output_path = self.logger.log_path
+ else:
+ cur_path = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
+ self.output_path = os.sep.join([cur_path, self.logger.log_path])
+ # create an instance to set stream field setting
+ self.pktgen_helper = PacketGeneratorHelper()
+
+ def functional_check_ipv4(self, pkt_sizes, burst=1, flag=None):
+ """
+ Perform functional fragmentation checks.
+ """
+ for size in pkt_sizes[::burst]:
+ # simulate to set TG properties
+ if flag == "frag":
+ # do fragment, each packet max length 1518 - 18 - 20 = 1480
+ expPkts = int((size - HEADER_SIZE["eth"] - HEADER_SIZE["ip"]) / 1480)
+ if (size - HEADER_SIZE["eth"] - HEADER_SIZE["ip"]) % 1480:
+ expPkts += 1
+ val = 0
+ elif flag == "nofrag":
+ expPkts = 0
+ val = 2
+ else:
+ expPkts = 1
+ val = 2
+
+ inst = self.tester.tcpdump_sniff_packets(intf=self.rxItf)
+ # send packet
+ for times in range(burst):
+ pkt_size = pkt_sizes[pkt_sizes.index(size) + times]
+ pkt = Packet(pkt_type="UDP", pkt_len=pkt_size)
+ pkt.config_layer("ether", {"dst": "%s" % self.dmac})
+ pkt.config_layer(
+ "ipv4", {"dst": "100.20.0.1", "src": "1.2.3.4", "flags": val}
+ )
+ pkt.send_pkt(self.tester, tx_port=self.txItf)
+
+ # verify normal packet just by number, verify fragment packet by all elements
+ pkts = self.tester.load_tcpdump_sniff_packets(inst)
+ self.verify(
+ len(pkts) == expPkts,
+ "in functional_check_ipv4(): failed on forward packet size "
+ + str(size),
+ )
+ if flag == "frag":
+ idx = 1
+ for i in range(len(pkts)):
+ pkt_id = pkts.strip_element_layer3("id", p_index=i)
+ if idx == 1:
+ prev_idx = pkt_id
+ self.verify(
+ prev_idx == pkt_id, "Fragmented packets index not match"
+ )
+ prev_idx = pkt_id
+
+ # last flags should be 0
+ flags = pkts.strip_element_layer3("flags", p_index=i)
+ if idx == expPkts:
+ self.verify(
+ flags == 0, "Fragmented last packet flags not match"
+ )
+ else:
+ self.verify(flags == 1, "Fragmented packets flags not match")
+
+ # fragment offset should be correct
+ frag = pkts.strip_element_layer3("frag", p_index=i)
+ self.verify(
+ (frag == ((idx - 1) * 185)), "Fragment packet frag not match"
+ )
+ idx += 1
+
+ def functional_check_ipv6(self, pkt_sizes, burst=1, flag=None, funtion=None):
+ """
+ Perform functional fragmentation checks.
+ """
+ for size in pkt_sizes[::burst]:
+ # simulate to set TG properties
+ if flag == "frag":
+ # each packet max len: 1518 - 18 (eth) - 40 (ipv6) - 8 (ipv6 ext hdr) = 1452
+ expPkts = int((size - HEADER_SIZE["eth"] - HEADER_SIZE["ipv6"]) / 1452)
+ if (size - HEADER_SIZE["eth"] - HEADER_SIZE["ipv6"]) % 1452:
+ expPkts += 1
+ val = 0
+ else:
+ expPkts = 1
+ val = 2
+
+ inst = self.tester.tcpdump_sniff_packets(intf=self.rxItf)
+ # send packet
+ for times in range(burst):
+ pkt_size = pkt_sizes[pkt_sizes.index(size) + times]
+ pkt = Packet(pkt_type="IPv6_UDP", pkt_len=pkt_size)
+ pkt.config_layer("ether", {"dst": "%s" % self.dmac})
+ pkt.config_layer(
+ "ipv6",
+ {
+ "dst": "201:101:101:101:101:101:101:101",
+ "src": "ee80:ee80:ee80:ee80:ee80:ee80:ee80:ee80",
+ },
+ )
+ pkt.send_pkt(self.tester, tx_port=self.txItf)
+
+ # verify normal packet just by number, verify fragment packet by all elements
+ pkts = self.tester.load_tcpdump_sniff_packets(inst)
+ self.verify(
+ len(pkts) == expPkts,
+ "In functional_check_ipv6(): failed on forward packet size "
+ + str(size),
+ )
+ if flag == "frag":
+ idx = 1
+ for i in range(len(pkts)):
+ pkt_id = pkts.strip_element_layer4("id", p_index=i)
+ if idx == 1:
+ prev_idx = pkt_id
+ self.verify(
+ prev_idx == pkt_id, "Fragmented packets index not match"
+ )
+ prev_idx = pkt_id
+
+ # last flags should be 0
+ flags = pkts.strip_element_layer4("m", p_index=i)
+ if idx == expPkts:
+ self.verify(
+ flags == 0, "Fragmented last packet flags not match"
+ )
+ else:
+ self.verify(flags == 1, "Fragmented packets flags not match")
+
+ # fragment offset should be correct
+ frag = pkts.strip_element_layer4("offset", p_index=i)
+ self.verify(
+ (frag == int((idx - 1) * 181)), "Fragment packet frag not match"
+ )
+ idx += 1
+
+ def set_up(self):
+ """
+ Run before each test case.
+ """
+ self.tester.send_expect(
+ "ifconfig %s mtu 9200"
+ % self.tester.get_interface(self.tester.get_local_port(P0)),
+ "#",
+ )
+ self.tester.send_expect(
+ "ifconfig %s mtu 9200"
+ % self.tester.get_interface(self.tester.get_local_port(P1)),
+ "#",
+ )
+
+ def benchmark(self, index, lcore, num_pthreads, size_list):
+ """
+ Just Test IPv4 Throughput for selected parameters.
+ """
+
+ Bps = dict()
+ Pps = dict()
+ Pct = dict()
+
+ if int(lcore[0]) == 1:
+ eal_param = self.dut.create_eal_parameters(
+ cores=lcore, socket=self.ports_socket, ports=self.ports
+ )
+ else:
+ eal_param = self.dut.create_eal_parameters(cores=lcore, ports=self.ports)
+ portmask = utils.create_mask([P0, P1])
+ self.dut.send_expect("^c", "# ", 120)
+ self.dut.send_expect(
+ "%s %s -- -p %s -q %s"
+ % (self.app_ip_fragmentation_path, eal_param, portmask, num_pthreads),
+ "IP_FRAG:",
+ 120,
+ )
+ result = [2, lcore, num_pthreads]
+ for size in size_list:
+ dmac = self.dut.get_mac_address(P0)
+ flows_p0 = [
+ 'Ether(dst="%s")/IP(src="1.2.3.4", dst="100.10.0.1", flags=0)/("X"*%d)'
+ % (dmac, size - 38),
+ 'Ether(dst="%s")/IP(src="1.2.3.4", dst="100.20.0.1", flags=0)/("X"*%d)'
+ % (dmac, size - 38),
+ 'Ether(dst="%s")/IPv6(dst="101:101:101:101:101:101:101:101",src="ee80:ee80:ee80:ee80:ee80:ee80:ee80:ee80")/Raw(load="X"*%d)'
+ % (dmac, size - 58),
+ 'Ether(dst="%s")/IPv6(dst="201:101:101:101:101:101:101:101",src="ee80:ee80:ee80:ee80:ee80:ee80:ee80:ee80")/Raw(load="X"*%d)'
+ % (dmac, size - 58),
+ ]
+
+ # reserved for rx/tx bidirection test
+ dmac = self.dut.get_mac_address(P1)
+ flows_p1 = [
+ 'Ether(dst="%s")/IP(src="1.2.3.4", dst="100.30.0.1", flags=0)/("X"*%d)'
+ % (dmac, size - 38),
+ 'Ether(dst="%s")/IP(src="1.2.3.4", dst="100.40.0.1", flags=0)/("X"*%d)'
+ % (dmac, size - 38),
+ 'Ether(dst="%s")/IPv6(dst="301:101:101:101:101:101:101:101",src="ee80:ee80:ee80:ee80:ee80:ee80:ee80:ee80")/Raw(load="X"*%d)'
+ % (dmac, size - 58),
+ 'Ether(dst="%s")/IPv6(dst="401:101:101:101:101:101:101:101",src="ee80:ee80:ee80:ee80:ee80:ee80:ee80:ee80")/Raw(load="X"*%d)'
+ % (dmac, size - 58),
+ ]
+ flow_len = len(flows_p0)
+ tgenInput = []
+ for i in range(flow_len):
+
+ pcap0 = os.sep.join([self.output_path, "p0_{}.pcap".format(i)])
+ self.tester.scapy_append('wrpcap("%s", [%s])' % (pcap0, flows_p0[i]))
+ pcap1 = os.sep.join([self.output_path, "p1_{}.pcap".format(i)])
+ self.tester.scapy_append('wrpcap("%s", [%s])' % (pcap1, flows_p1[i]))
+ self.tester.scapy_execute()
+
+ tgenInput.append(
+ (
+ self.tester.get_local_port(P0),
+ self.tester.get_local_port(P1),
+ pcap0,
+ )
+ )
+ tgenInput.append(
+ (
+ self.tester.get_local_port(P1),
+ self.tester.get_local_port(P0),
+ pcap1,
+ )
+ )
+
+ factor = (size + 1517) / 1518
+ # wireSpd = 2 * 10000.0 / ((20 + size) * 8)
+
+ # clear streams before add new streams
+ self.tester.pktgen.clear_streams()
+ # run packet generator
+ streams = self.pktgen_helper.prepare_stream_from_tginput(
+ tgenInput, 100, None, self.tester.pktgen
+ )
+ Bps[str(size)], Pps[str(size)] = self.tester.pktgen.measure_throughput(
+ stream_ids=streams
+ )
+
+ self.verify(Pps[str(size)] > 0, "No traffic detected")
+ Pps[str(size)] *= 1.0 / factor / 1000000
+ Pct[str(size)] = (1.0 * Bps[str(size)] * 100) / (2 * 10000000000)
+
+ result.append(Pps[str(size)])
+ result.append(Pct[str(size)])
+
+ self.result_table_add(result)
+
+ self.dut.send_expect("^C", "#")
+
+ def test_perf_ipfrag_throughtput(self):
+ """
+ Performance test for 64, 1518, 1519, 2k and 9k.
+ """
+ sizes = [64, 1518, 1519, 2000, 9000]
+
+ tblheader = ["Ports", "S/C/T", "SW threads"]
+ for size in sizes:
+ tblheader.append("%dB Mpps" % size)
+ tblheader.append("%d" % size)
+
+ self.result_table_create(tblheader)
+
+ lcores = [("1S/1C/1T", 2), ("1S/1C/2T", 2), ("1S/2C/1T", 2), ("2S/1C/1T", 2)]
+ index = 1
+ for (lcore, numThr) in lcores:
+ self.benchmark(index, lcore, numThr, sizes)
+ index += 1
+
+ self.result_table_print()
+
+ def tear_down(self):
+ """
+ Run after each test case.
+ """
+ self.tester.send_expect(
+ "ifconfig %s mtu 1500"
+ % self.tester.get_interface(self.tester.get_local_port(P0)),
+ "#",
+ )
+ self.tester.send_expect(
+ "ifconfig %s mtu 1500"
+ % self.tester.get_interface(self.tester.get_local_port(P1)),
+ "#",
+ )
+
+ def tear_down_all(self):
+ """
+ Run after each test suite.
+ """
+ self.dut.send_expect("^C", "#")
+ pass
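The result normalization inside `benchmark()` above can be sketched as follows. This is a hypothetical restatement (function name and example numbers assumed, not part of the suite): the generator counts received *fragments*, so received pps is divided by the per-input-frame fragment count ("factor", here written as an integer ceiling over 1518-byte output frames) to report Mpps of original frames, and bits/s is expressed against 2 x 10GbE of aggregate line rate.

```python
# Hypothetical sketch of the benchmark's result normalization.
def normalize(size, bps, pps, n_ports=2, port_speed_bps=10_000_000_000):
    factor = (size + 1517) // 1518        # ceil(size / 1518) fragments/frame
    mpps = pps / factor / 1e6             # Mpps of original input frames
    pct = bps * 100.0 / (n_ports * port_speed_bps)
    return mpps, pct

mpps, pct = normalize(64, bps=2_000_000_000, pps=10_000_000)
print(mpps, pct)   # -> 10.0 10.0
```

Note the suite computes `factor` with true division; the integer ceiling here expresses the apparent intent (whole fragments per frame) under that assumption.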
--
2.25.1
* [PATCH V1 7/7] tests/multiprocess:Separated performance cases
2022-09-22 14:29 [dts][PATCH V1 1/7] tests/efd:Separated performance cases Hongbo Li
` (4 preceding siblings ...)
2022-09-22 14:29 ` [dts][PATCH V1 6/7] tests/ipfrag:Separated " Hongbo Li
@ 2022-09-22 14:29 ` Hongbo Li
5 siblings, 0 replies; 7+ messages in thread
From: Hongbo Li @ 2022-09-22 14:29 UTC (permalink / raw)
To: dts; +Cc: Hongbo Li
Separated performance cases
Signed-off-by: Hongbo Li <hongbox.li@intel.com>
---
test_plans/index.rst | 7 +
test_plans/multiprocess_test_plan.rst | 48 ---
test_plans/perf_multiprocess_test_plan.rst | 194 ++++++++++++
tests/TestSuite_multiprocess.py | 210 -------------
tests/TestSuite_perf_multiprocess.py | 333 +++++++++++++++++++++
5 files changed, 534 insertions(+), 258 deletions(-)
create mode 100644 test_plans/perf_multiprocess_test_plan.rst
create mode 100644 tests/TestSuite_perf_multiprocess.py
diff --git a/test_plans/index.rst b/test_plans/index.rst
index 8e2634bd..a834d767 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -108,6 +108,13 @@ The following are the test plans for the DPDK DTS automated test system.
ntb_test_plan
nvgre_test_plan
perf_virtio_user_loopback_test_plan
+ perf_efd_test_plan
+ perf_ipfrag_test_plan
+ perf_kni_test_plan
+ perf_l2fwd_test_plan
+ perf_multiprocess_test_plan
+ perf_tso_test_plan
+ perf_vxlan_test_plan
pf_smoke_test_plan
pipeline_test_plan
pvp_virtio_user_multi_queues_port_restart_test_plan
diff --git a/test_plans/multiprocess_test_plan.rst b/test_plans/multiprocess_test_plan.rst
index bfef1ca9..699938ed 100644
--- a/test_plans/multiprocess_test_plan.rst
+++ b/test_plans/multiprocess_test_plan.rst
@@ -196,26 +196,6 @@ run should remain the same, except for the ``num-procs`` value, which should be
adjusted appropriately.
-Test Case: Performance Tests
-----------------------------
-
-Run the multiprocess application using standard IP traffic - varying source
-and destination address information to allow RSS to evenly distribute packets
-among RX queues. Record traffic throughput results as below.
-
-+-------------------+-----+-----+-----+-----+-----+-----+
-| Num-procs | 1 | 2 | 2 | 4 | 4 | 8 |
-+-------------------+-----+-----+-----+-----+-----+-----+
-| Cores/Threads | 1/1 | 1/2 | 2/1 | 2/2 | 4/1 | 4/2 |
-+-------------------+-----+-----+-----+-----+-----+-----+
-| Num Ports | 2 | 2 | 2 | 2 | 2 | 2 |
-+-------------------+-----+-----+-----+-----+-----+-----+
-| Packet Size | 64 | 64 | 64 | 64 | 64 | 64 |
-+-------------------+-----+-----+-----+-----+-----+-----+
-| %-age Line Rate | X | X | X | X | X | X |
-+-------------------+-----+-----+-----+-----+-----+-----+
-| Packet Rate(mpps) | X | X | X | X | X | X |
-+-------------------+-----+-----+-----+-----+-----+-----+
Test Case: Function Tests
-------------------------
@@ -294,34 +274,6 @@ An example commands to run 8 client processes is as follows::
root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 40000 --proc-type=secondary -- -n 6 &
root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 100000 --proc-type=secondary -- -n 7 &
-Test Case: Performance Measurement
-----------------------------------
-
-- On the traffic generator set up a traffic flow in both directions specifying
- IP traffic.
-- Run the server and client applications as above.
-- Start the traffic and record the throughput for transmitted and received packets.
-
-An example set of results is shown below.
-
-+----------------------+-----+-----+-----+-----+-----+-----+
-| Server threads | 1 | 1 | 1 | 1 | 1 | 1 |
-+----------------------+-----+-----+-----+-----+-----+-----+
-| Server Cores/Threads | 1/1 | 1/1 | 1/1 | 1/1 | 1/1 | 1/1 |
-+----------------------+-----+-----+-----+-----+-----+-----+
-| Num-clients | 1 | 2 | 2 | 4 | 4 | 8 |
-+----------------------+-----+-----+-----+-----+-----+-----+
-| Client Cores/Threads | 1/1 | 1/2 | 2/1 | 2/2 | 4/1 | 4/2 |
-+----------------------+-----+-----+-----+-----+-----+-----+
-| Num Ports | 2 | 2 | 2 | 2 | 2 | 2 |
-+----------------------+-----+-----+-----+-----+-----+-----+
-| Packet Size | 64 | 64 | 64 | 64 | 64 | 64 |
-+----------------------+-----+-----+-----+-----+-----+-----+
-| %-age Line Rate | X | X | X | X | X | X |
-+----------------------+-----+-----+-----+-----+-----+-----+
-| Packet Rate(mpps) | X | X | X | X | X | X |
-+----------------------+-----+-----+-----+-----+-----+-----+
-
Test Case: Function Tests
-------------------------
start server process and 2 client process, send some packets, the number of packets is a random value between 20 and 256.
diff --git a/test_plans/perf_multiprocess_test_plan.rst b/test_plans/perf_multiprocess_test_plan.rst
new file mode 100644
index 00000000..4cca63de
--- /dev/null
+++ b/test_plans/perf_multiprocess_test_plan.rst
@@ -0,0 +1,194 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2010-2017 Intel Corporation
+
+=======================================
+Sample Application Tests: Multi-Process
+=======================================
+
+Simple MP Application Test
+==========================
+
+Description
+-----------
+
+This test is a basic multi-process test which demonstrates the basics of sharing
+information between DPDK processes. The same application binary is run
+twice - once as a primary instance, and once as a secondary instance. Messages
+are sent from primary to secondary and vice versa, demonstrating the processes
+are sharing memory and can communicate using rte_ring structures.
+
+Prerequisites
+-------------
+
+If using vfio the kernel must be >= 3.6+ and VT-d must be enabled in bios.When
+using vfio, use the following commands to load the vfio driver and bind it
+to the device under test::
+
+ modprobe vfio
+ modprobe vfio-pci
+ usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
+
+Assuming that a DPDK build has been set up and the multi-process sample
+applications have been built.
+
+Symmetric MP Application Test
+=============================
+
+Description
+-----------
+
+This test is a multi-process test which demonstrates how multiple processes can
+work together to perform packet I/O and packet processing in parallel, much as
+other example applications work by using multiple threads. In this example, each
+process reads packets from all network ports being used - though from a different
+RX queue in each case. Those packets are then forwarded by each process which
+sends them out by writing them directly to a suitable TX queue.
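The per-process queue assignment described above can be sketched as follows: process n reads RX queue n on every port, so N processes together cover all N queues of each port (an assumed mapping that matches the --proc-id parameter below):

```python
# Sketch: each symmetric_mp process polls queue proc_id on every port,
# so num_procs processes together drain every RX queue of every port.
def rx_assignment(ports, num_procs):
    # Map each process id to the (port, queue) pairs it polls.
    return {
        proc_id: [(port, proc_id) for port in ports]
        for proc_id in range(num_procs)
    }
```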
+
+Prerequisites
+-------------
+
+Assuming that an Intel® DPDK build has been set up and the multi-process sample
+applications have been built. It is also assumed that a traffic generator has
+been configured and plugged in to the NIC ports 0 and 1.
+
+Test Methodology
+----------------
+
+As with the simple_mp example, the first instance of the symmetric_mp process
+must be run as the primary instance, though with a number of other application
+specific parameters also provided after the EAL arguments. These additional
+parameters are:
+
+* -p <portmask>, where portmask is a hexadecimal bitmask of which ports on the
+  system are to be used. For example: -p 3 to use ports 0 and 1 only.
+* --num-procs <N>, where N is the total number of symmetric_mp instances that
+ will be run side-by-side to perform packet processing. This parameter is used to
+ configure the appropriate number of receive queues on each network port.
+* --proc-id <n>, where n is a numeric value in the range 0 <= n < N (number of
+ processes, specified above). This identifies which symmetric_mp instance is being
+ run, so that each process can read a unique receive queue on each network port.
+
+The secondary symmetric_mp instances must also have these parameters specified,
+and the first two must be the same as those passed to the primary instance, or
+errors will result.
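The three parameters above can be sketched as a small helper that derives each instance's argument string from a port list and a process count (hypothetical helper names; the real applications parse these flags in C):

```python
# Sketch: derive per-instance symmetric_mp arguments. Helper names are
# hypothetical and not part of DPDK.
def portmask(ports):
    # -p takes a hexadecimal bitmask: bit i set <=> port i is used.
    mask = 0
    for p in ports:
        mask |= 1 << p
    return hex(mask)


def instance_args(ports, num_procs):
    # Every instance shares -p and --num-procs; --proc-id is unique per
    # instance and selects a distinct RX queue on each port.
    return [
        "-p {} --num-procs={} --proc-id={}".format(portmask(ports), num_procs, n)
        for n in range(num_procs)
    ]
```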
+
+For example, to run a set of four symmetric_mp instances, running on lcores 1-4, all
+performing level-2 forwarding of packets between ports 0 and 1, the following
+commands can be used (assuming run as root)::
+
+ ./x86_64-native-linuxapp-gcc/examples/dpdk-symmetric_mp -c 2 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=0
+ ./x86_64-native-linuxapp-gcc/examples/dpdk-symmetric_mp -c 4 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=1
+ ./x86_64-native-linuxapp-gcc/examples/dpdk-symmetric_mp -c 8 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=2
+ ./x86_64-native-linuxapp-gcc/examples/dpdk-symmetric_mp -c 10 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=3
+
+To run only 1 or 2 instances, the parameters passed to those instances should
+remain the same, except for the ``num-procs`` value, which should be adjusted
+appropriately.
+
+
+Test Case: Performance Tests
+----------------------------
+
+Run the multiprocess application using standard IP traffic - varying source
+and destination address information to allow RSS to evenly distribute packets
+among RX queues. Record traffic throughput results as below.
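The two result rows in the table can be derived from the measured packet rate as the test suite does: for 64-byte frames, each frame occupies 84 bytes on the wire (frame plus preamble and inter-frame gap), so a 10 Gb/s link carries at most 10e9 / (8 * 84) ≈ 14.88 Mpps (a sketch assuming a 10 Gb/s link):

```python
# Convert a measured packet rate (pps) into the two table rows:
# percentage of 10 Gb/s line rate, and Mpps. A 64-byte frame occupies
# 84 bytes on the wire (64B frame + 20B preamble/IFG), so line rate is
# 10e9 / (8 * 84) ~= 14.88 Mpps.
LINE_RATE_BPS = 10_000_000_000
WIRE_BYTES_64 = 84


def table_rows(pps):
    line_rate_pps = LINE_RATE_BPS / (8 * WIRE_BYTES_64)
    return round(100.0 * pps / line_rate_pps, 2), round(pps / 1e6, 2)
```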
+
++-------------------+-----+-----+-----+-----+-----+-----+
+| Num-procs | 1 | 2 | 2 | 4 | 4 | 8 |
++-------------------+-----+-----+-----+-----+-----+-----+
+| Cores/Threads | 1/1 | 1/2 | 2/1 | 2/2 | 4/1 | 4/2 |
++-------------------+-----+-----+-----+-----+-----+-----+
+| Num Ports | 2 | 2 | 2 | 2 | 2 | 2 |
++-------------------+-----+-----+-----+-----+-----+-----+
+| Packet Size | 64 | 64 | 64 | 64 | 64 | 64 |
++-------------------+-----+-----+-----+-----+-----+-----+
+| %-age Line Rate | X | X | X | X | X | X |
++-------------------+-----+-----+-----+-----+-----+-----+
+| Packet Rate(mpps) | X | X | X | X | X | X |
++-------------------+-----+-----+-----+-----+-----+-----+
+
+Client Server Multiprocess Tests
+================================
+
+Description
+-----------
+
+The client-server sample application demonstrates the ability of Intel® DPDK
+to use multiple processes in which a server process performs packet I/O and one
+or multiple client processes perform packet processing. The server process
+controls load balancing on the traffic received from a number of input ports to
+a user-specified number of clients. The client processes forward the received
+traffic, outputting the packets directly by writing them to the TX rings of the
+outgoing ports.
+
+Prerequisites
+-------------
+
+Assuming that an Intel® DPDK build has been set up and the multi-process
+sample application has been built.
+It is also assumed that a traffic generator is connected to ports 0 and 1.
+
+It is important to run the server application before the client application,
+as the server application manages both the NIC ports with packet transmission
+and reception, as well as shared memory areas and client queues.
+
+Run the Server Application:
+
+- Provide the core mask on which the server process is to run using -c, e.g. -c 3 (bitmask number).
+- Set the number of ports to be engaged using -p, e.g. -p 3 refers to ports 0 & 1.
+- Define the maximum number of clients using -n, e.g. -n 8.
+
+The command line below is an example of how to start the server process on
+logical core 2 to handle a maximum of 8 client processes configured to
+run on socket 0 to handle traffic from NIC ports 0 and 1::
+
+ root@host:mp_server# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_server -c 2 -- -p 3 -n 8
+
+NOTE: If an additional second core is given in the coremask to the server
+process, that second core will be used to print statistics. When benchmarking,
+only a single lcore is needed for the server process.
+
+Run the Client application:
+
+- In another terminal run the client application.
+- Give each client a distinct core mask with -c.
+- Give each client a unique client-id with -n.
+
+Example commands to run 8 client processes are as follows::
+
+ root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 40 --proc-type=secondary -- -n 0 &
+ root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 100 --proc-type=secondary -- -n 1 &
+ root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 400 --proc-type=secondary -- -n 2 &
+ root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 1000 --proc-type=secondary -- -n 3 &
+ root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 4000 --proc-type=secondary -- -n 4 &
+ root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 10000 --proc-type=secondary -- -n 5 &
+ root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 40000 --proc-type=secondary -- -n 6 &
+ root@host:mp_client# ./x86_64-native-linuxapp-gcc/examples/dpdk-mp_client -c 100000 --proc-type=secondary -- -n 7 &
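The coremasks in the example above follow a simple pattern: client n is pinned to lcore 6 + 2*n, and the -c mask is the hexadecimal value with only that lcore's bit set. A sketch of the mask computation (assumed pattern, inferred from the example masks):

```python
# Sketch: build the -c coremask for each mp_client instance. In the
# example above, client n runs on lcore 6 + 2*n; the mask has only
# that lcore's bit set, written in hexadecimal.
def client_coremask(client_id, first_lcore=6, stride=2):
    return format(1 << (first_lcore + stride * client_id), "x")
```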
+
+Test Case: Performance Measurement
+----------------------------------
+
+- On the traffic generator set up a traffic flow in both directions specifying
+ IP traffic.
+- Run the server and client applications as above.
+- Start the traffic and record the throughput for transmitted and received packets.
+
+An example set of results is shown below.
+
++----------------------+-----+-----+-----+-----+-----+-----+
+| Server threads | 1 | 1 | 1 | 1 | 1 | 1 |
++----------------------+-----+-----+-----+-----+-----+-----+
+| Server Cores/Threads | 1/1 | 1/1 | 1/1 | 1/1 | 1/1 | 1/1 |
++----------------------+-----+-----+-----+-----+-----+-----+
+| Num-clients | 1 | 2 | 2 | 4 | 4 | 8 |
++----------------------+-----+-----+-----+-----+-----+-----+
+| Client Cores/Threads | 1/1 | 1/2 | 2/1 | 2/2 | 4/1 | 4/2 |
++----------------------+-----+-----+-----+-----+-----+-----+
+| Num Ports | 2 | 2 | 2 | 2 | 2 | 2 |
++----------------------+-----+-----+-----+-----+-----+-----+
+| Packet Size | 64 | 64 | 64 | 64 | 64 | 64 |
++----------------------+-----+-----+-----+-----+-----+-----+
+| %-age Line Rate | X | X | X | X | X | X |
++----------------------+-----+-----+-----+-----+-----+-----+
+| Packet Rate(mpps) | X | X | X | X | X | X |
++----------------------+-----+-----+-----+-----+-----+-----+
diff --git a/tests/TestSuite_multiprocess.py b/tests/TestSuite_multiprocess.py
index da382a41..ed0933b6 100644
--- a/tests/TestSuite_multiprocess.py
+++ b/tests/TestSuite_multiprocess.py
@@ -1689,216 +1689,6 @@ class TestMultiprocess(TestCase):
}
self.rte_flow(mac_ipv4_symmetric, self.multiprocess_rss_data, **pmd_param)
- def test_perf_multiprocess_performance(self):
- """
- Benchmark Multiprocess performance.
- #"""
- packet_count = 16
- self.dut.send_expect("fg", "# ")
- txPort = self.tester.get_local_port(self.dut_ports[0])
- rxPort = self.tester.get_local_port(self.dut_ports[1])
- mac = self.tester.get_mac(txPort)
- dmac = self.dut.get_mac_address(self.dut_ports[0])
- tgenInput = []
-
- # create mutative src_ip+dst_ip package
- for i in range(packet_count):
- package = (
- r'flows = [Ether(src="%s", dst="%s")/IP(src="192.168.1.%d", dst="192.168.1.%d")/("X"*26)]'
- % (mac, dmac, i + 1, i + 2)
- )
- self.tester.scapy_append(package)
- pcap = os.sep.join([self.output_path, "test_%d.pcap" % i])
- self.tester.scapy_append('wrpcap("%s", flows)' % pcap)
- tgenInput.append([txPort, rxPort, pcap])
- self.tester.scapy_execute()
-
- # run multiple symmetric_mp process
- validExecutions = []
- for execution in executions:
- if len(self.dut.get_core_list(execution["cores"])) == execution["nprocs"]:
- validExecutions.append(execution)
-
- portMask = utils.create_mask(self.dut_ports)
-
- for n in range(len(validExecutions)):
- execution = validExecutions[n]
- # get coreList form execution['cores']
- coreList = self.dut.get_core_list(execution["cores"], socket=self.socket)
- # to run a set of symmetric_mp instances, like test plan
- dutSessionList = []
- for index in range(len(coreList)):
- dut_new_session = self.dut.new_session()
- dutSessionList.append(dut_new_session)
- # add -a option when tester and dut in same server
- dut_new_session.send_expect(
- self.app_symmetric_mp
- + " -c %s --proc-type=auto %s -- -p %s --num-procs=%d --proc-id=%d"
- % (
- utils.create_mask([coreList[index]]),
- self.eal_param,
- portMask,
- execution["nprocs"],
- index,
- ),
- "Finished Process Init",
- )
-
- # clear streams before add new streams
- self.tester.pktgen.clear_streams()
- # run packet generator
- streams = self.pktgen_helper.prepare_stream_from_tginput(
- tgenInput, 100, None, self.tester.pktgen
- )
- _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
-
- execution["pps"] = pps
-
- # close all symmetric_mp process
- self.dut.send_expect("killall symmetric_mp", "# ")
- # close all dut sessions
- for dut_session in dutSessionList:
- self.dut.close_session(dut_session)
-
- # get rate and mpps data
- for n in range(len(executions)):
- self.verify(executions[n]["pps"] is not 0, "No traffic detected")
- self.result_table_create(
- [
- "Num-procs",
- "Sockets/Cores/Threads",
- "Num Ports",
- "Frame Size",
- "%-age Line Rate",
- "Packet Rate(mpps)",
- ]
- )
-
- for execution in validExecutions:
- self.result_table_add(
- [
- execution["nprocs"],
- execution["cores"],
- 2,
- 64,
- execution["pps"] / float(100000000 / (8 * 84)),
- execution["pps"] / float(1000000),
- ]
- )
-
- self.result_table_print()
-
- def test_perf_multiprocess_client_serverperformance(self):
- """
- Benchmark Multiprocess client-server performance.
- """
- self.dut.kill_all()
- self.dut.send_expect("fg", "# ")
- txPort = self.tester.get_local_port(self.dut_ports[0])
- rxPort = self.tester.get_local_port(self.dut_ports[1])
- mac = self.tester.get_mac(txPort)
-
- self.tester.scapy_append(
- 'dmac="%s"' % self.dut.get_mac_address(self.dut_ports[0])
- )
- self.tester.scapy_append('smac="%s"' % mac)
- self.tester.scapy_append(
- 'flows = [Ether(src=smac, dst=dmac)/IP(src="192.168.1.1", dst="192.168.1.1")/("X"*26)]'
- )
-
- pcap = os.sep.join([self.output_path, "test.pcap"])
- self.tester.scapy_append('wrpcap("%s", flows)' % pcap)
- self.tester.scapy_execute()
-
- validExecutions = []
- for execution in executions:
- if len(self.dut.get_core_list(execution["cores"])) == execution["nprocs"]:
- validExecutions.append(execution)
-
- for execution in validExecutions:
- coreList = self.dut.get_core_list(execution["cores"], socket=self.socket)
- # get core with socket parameter to specified which core dut used when tester and dut in same server
- coreMask = utils.create_mask(
- self.dut.get_core_list("1S/1C/1T", socket=self.socket)
- )
- portMask = utils.create_mask(self.dut_ports)
- # specified mp_server core and add -a option when tester and dut in same server
- self.dut.send_expect(
- self.app_mp_server
- + " -n %d -c %s %s -- -p %s -n %d"
- % (
- self.dut.get_memory_channels(),
- coreMask,
- self.eal_param,
- portMask,
- execution["nprocs"],
- ),
- "Finished Process Init",
- 20,
- )
- self.dut.send_expect("^Z", "\r\n")
- self.dut.send_expect("bg", "# ")
-
- for n in range(execution["nprocs"]):
- time.sleep(5)
- # use next core as mp_client core, different from mp_server
- coreMask = utils.create_mask([str(int(coreList[n]) + 1)])
- self.dut.send_expect(
- self.app_mp_client
- + " -n %d -c %s --proc-type=secondary %s -- -n %d"
- % (self.dut.get_memory_channels(), coreMask, self.eal_param, n),
- "Finished Process Init",
- )
- self.dut.send_expect("^Z", "\r\n")
- self.dut.send_expect("bg", "# ")
-
- tgenInput = []
- tgenInput.append([txPort, rxPort, pcap])
-
- # clear streams before add new streams
- self.tester.pktgen.clear_streams()
- # run packet generator
- streams = self.pktgen_helper.prepare_stream_from_tginput(
- tgenInput, 100, None, self.tester.pktgen
- )
- _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
-
- execution["pps"] = pps
- self.dut.kill_all()
- time.sleep(5)
-
- for n in range(len(executions)):
- self.verify(executions[n]["pps"] is not 0, "No traffic detected")
-
- self.result_table_create(
- [
- "Server threads",
- "Server Cores/Threads",
- "Num-procs",
- "Sockets/Cores/Threads",
- "Num Ports",
- "Frame Size",
- "%-age Line Rate",
- "Packet Rate(mpps)",
- ]
- )
-
- for execution in validExecutions:
- self.result_table_add(
- [
- 1,
- "1S/1C/1T",
- execution["nprocs"],
- execution["cores"],
- 2,
- 64,
- execution["pps"] / float(100000000 / (8 * 84)),
- execution["pps"] / float(1000000),
- ]
- )
-
- self.result_table_print()
-
def set_fields(self):
"""set ip protocol field behavior"""
fields_config = {
diff --git a/tests/TestSuite_perf_multiprocess.py b/tests/TestSuite_perf_multiprocess.py
new file mode 100644
index 00000000..d03bb2f6
--- /dev/null
+++ b/tests/TestSuite_perf_multiprocess.py
@@ -0,0 +1,333 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2010-2014 Intel Corporation
+#
+
+"""
+DPDK Test suite.
+Multi-process Test.
+"""
+
+import copy
+import os
+import random
+import re
+import time
+import traceback
+from collections import OrderedDict
+
+import framework.utils as utils
+from framework.exception import VerifyFailure
+from framework.packet import Packet
+from framework.pktgen import PacketGeneratorHelper
+from framework.pmd_output import PmdOutput
+from framework.test_case import TestCase, check_supported_nic
+from framework.utils import GREEN, RED
+
+from .rte_flow_common import FdirProcessing as fdirprocess
+from .rte_flow_common import RssProcessing as rssprocess
+
+executions = []
+
+
+class TestMultiprocess(TestCase):
+
+ support_nic = ["ICE_100G-E810C_QSFP", "ICE_25G-E810C_SFP", "ICE_25G-E810_XXV_SFP"]
+
+ def set_up_all(self):
+ """
+ Run at the start of each test suite.
+
+ Multiprocess prerequisites.
+ Requirements:
+ OS is not freeBSD
+ DUT core number >= 4
+ multi_process build pass
+ """
+ # self.verify('bsdapp' not in self.target, "Multiprocess not support freebsd")
+
+ self.verify(len(self.dut.get_all_cores()) >= 4, "Not enough Cores")
+ self.pkt = Packet()
+ self.dut_ports = self.dut.get_ports()
+ self.socket = self.dut.get_numa_id(self.dut_ports[0])
+ extra_option = "-Dexamples='multi_process/client_server_mp/mp_server,multi_process/client_server_mp/mp_client,multi_process/simple_mp,multi_process/symmetric_mp'"
+ self.dut.build_install_dpdk(target=self.target, extra_options=extra_option)
+ self.app_mp_client = self.dut.apps_name["mp_client"]
+ self.app_mp_server = self.dut.apps_name["mp_server"]
+ self.app_simple_mp = self.dut.apps_name["simple_mp"]
+ self.app_symmetric_mp = self.dut.apps_name["symmetric_mp"]
+
+ executions.append({"nprocs": 1, "cores": "1S/1C/1T", "pps": 0})
+ executions.append({"nprocs": 2, "cores": "1S/1C/2T", "pps": 0})
+ executions.append({"nprocs": 2, "cores": "1S/2C/1T", "pps": 0})
+ executions.append({"nprocs": 4, "cores": "1S/2C/2T", "pps": 0})
+ executions.append({"nprocs": 4, "cores": "1S/4C/1T", "pps": 0})
+ executions.append({"nprocs": 8, "cores": "1S/4C/2T", "pps": 0})
+
+ self.eal_param = ""
+ for i in self.dut_ports:
+ self.eal_param += " -a %s" % self.dut.ports_info[i]["pci"]
+
+ self.eal_para = self.dut.create_eal_parameters(cores="1S/2C/1T")
+ # start new session to run secondary
+ self.session_secondary = self.dut.new_session()
+
+ # get dts output path
+ if self.logger.log_path.startswith(os.sep):
+ self.output_path = self.logger.log_path
+ else:
+ cur_path = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
+ self.output_path = os.sep.join([cur_path, self.logger.log_path])
+ # create an instance to set stream field setting
+ self.pktgen_helper = PacketGeneratorHelper()
+ self.dport_info0 = self.dut.ports_info[self.dut_ports[0]]
+ self.pci0 = self.dport_info0["pci"]
+ self.tester_ifaces = [
+ self.tester.get_interface(self.dut.ports_map[port])
+ for port in self.dut_ports
+ ]
+ rxq = 1
+ self.session_list = []
+ self.logfmt = "*" * 20
+
+ def set_up(self):
+ """
+ Run before each test case.
+ """
+ pass
+
+ def test_perf_multiprocess_performance(self):
+ """
+ Benchmark Multiprocess performance.
+        """
+ packet_count = 16
+ self.dut.send_expect("fg", "# ")
+ txPort = self.tester.get_local_port(self.dut_ports[0])
+ rxPort = self.tester.get_local_port(self.dut_ports[1])
+ mac = self.tester.get_mac(txPort)
+ dmac = self.dut.get_mac_address(self.dut_ports[0])
+ tgenInput = []
+
+        # create packets with varying src_ip and dst_ip
+ for i in range(packet_count):
+ package = (
+ r'flows = [Ether(src="%s", dst="%s")/IP(src="192.168.1.%d", dst="192.168.1.%d")/("X"*26)]'
+ % (mac, dmac, i + 1, i + 2)
+ )
+ self.tester.scapy_append(package)
+ pcap = os.sep.join([self.output_path, "test_%d.pcap" % i])
+ self.tester.scapy_append('wrpcap("%s", flows)' % pcap)
+ tgenInput.append([txPort, rxPort, pcap])
+ self.tester.scapy_execute()
+
+ # run multiple symmetric_mp process
+ validExecutions = []
+ for execution in executions:
+ if len(self.dut.get_core_list(execution["cores"])) == execution["nprocs"]:
+ validExecutions.append(execution)
+
+ portMask = utils.create_mask(self.dut_ports)
+
+ for n in range(len(validExecutions)):
+ execution = validExecutions[n]
+            # get coreList from execution['cores']
+            coreList = self.dut.get_core_list(execution["cores"], socket=self.socket)
+            # run a set of symmetric_mp instances, as in the test plan
+ dutSessionList = []
+ for index in range(len(coreList)):
+ dut_new_session = self.dut.new_session()
+ dutSessionList.append(dut_new_session)
+ # add -a option when tester and dut in same server
+ dut_new_session.send_expect(
+ self.app_symmetric_mp
+ + " -c %s --proc-type=auto %s -- -p %s --num-procs=%d --proc-id=%d"
+ % (
+ utils.create_mask([coreList[index]]),
+ self.eal_param,
+ portMask,
+ execution["nprocs"],
+ index,
+ ),
+ "Finished Process Init",
+ )
+
+        # clear streams before adding new streams
+ self.tester.pktgen.clear_streams()
+ # run packet generator
+ streams = self.pktgen_helper.prepare_stream_from_tginput(
+ tgenInput, 100, None, self.tester.pktgen
+ )
+ _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
+
+ execution["pps"] = pps
+
+ # close all symmetric_mp process
+ self.dut.send_expect("killall symmetric_mp", "# ")
+ # close all dut sessions
+ for dut_session in dutSessionList:
+ self.dut.close_session(dut_session)
+
+ # get rate and mpps data
+ for n in range(len(executions)):
+            self.verify(executions[n]["pps"] != 0, "No traffic detected")
+ self.result_table_create(
+ [
+ "Num-procs",
+ "Sockets/Cores/Threads",
+ "Num Ports",
+ "Frame Size",
+ "%-age Line Rate",
+ "Packet Rate(mpps)",
+ ]
+ )
+
+ for execution in validExecutions:
+ self.result_table_add(
+ [
+ execution["nprocs"],
+ execution["cores"],
+ 2,
+ 64,
+ execution["pps"] / float(100000000 / (8 * 84)),
+ execution["pps"] / float(1000000),
+ ]
+ )
+
+ self.result_table_print()
+
+ def test_perf_multiprocess_client_serverperformance(self):
+ """
+ Benchmark Multiprocess client-server performance.
+ """
+ self.dut.kill_all()
+ self.dut.send_expect("fg", "# ")
+ txPort = self.tester.get_local_port(self.dut_ports[0])
+ rxPort = self.tester.get_local_port(self.dut_ports[1])
+ mac = self.tester.get_mac(txPort)
+
+ self.tester.scapy_append(
+ 'dmac="%s"' % self.dut.get_mac_address(self.dut_ports[0])
+ )
+ self.tester.scapy_append('smac="%s"' % mac)
+ self.tester.scapy_append(
+ 'flows = [Ether(src=smac, dst=dmac)/IP(src="192.168.1.1", dst="192.168.1.1")/("X"*26)]'
+ )
+
+ pcap = os.sep.join([self.output_path, "test.pcap"])
+ self.tester.scapy_append('wrpcap("%s", flows)' % pcap)
+ self.tester.scapy_execute()
+
+ validExecutions = []
+ for execution in executions:
+ if len(self.dut.get_core_list(execution["cores"])) == execution["nprocs"]:
+ validExecutions.append(execution)
+
+ for execution in validExecutions:
+ coreList = self.dut.get_core_list(execution["cores"], socket=self.socket)
+            # get cores with the socket parameter to specify which cores the
+            # DUT uses when tester and DUT are on the same server
+ coreMask = utils.create_mask(
+ self.dut.get_core_list("1S/1C/1T", socket=self.socket)
+ )
+ portMask = utils.create_mask(self.dut_ports)
+            # specify the mp_server core and add the -a option when tester and
+            # DUT are on the same server
+ self.dut.send_expect(
+ self.app_mp_server
+ + " -n %d -c %s %s -- -p %s -n %d"
+ % (
+ self.dut.get_memory_channels(),
+ coreMask,
+ self.eal_param,
+ portMask,
+ execution["nprocs"],
+ ),
+ "Finished Process Init",
+ 20,
+ )
+ self.dut.send_expect("^Z", "\r\n")
+ self.dut.send_expect("bg", "# ")
+
+ for n in range(execution["nprocs"]):
+ time.sleep(5)
+ # use next core as mp_client core, different from mp_server
+ coreMask = utils.create_mask([str(int(coreList[n]) + 1)])
+ self.dut.send_expect(
+ self.app_mp_client
+ + " -n %d -c %s --proc-type=secondary %s -- -n %d"
+ % (self.dut.get_memory_channels(), coreMask, self.eal_param, n),
+ "Finished Process Init",
+ )
+ self.dut.send_expect("^Z", "\r\n")
+ self.dut.send_expect("bg", "# ")
+
+ tgenInput = []
+ tgenInput.append([txPort, rxPort, pcap])
+
+            # clear streams before adding new streams
+ self.tester.pktgen.clear_streams()
+ # run packet generator
+ streams = self.pktgen_helper.prepare_stream_from_tginput(
+ tgenInput, 100, None, self.tester.pktgen
+ )
+ _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams)
+
+ execution["pps"] = pps
+ self.dut.kill_all()
+ time.sleep(5)
+
+ for n in range(len(executions)):
+            self.verify(executions[n]["pps"] != 0, "No traffic detected")
+
+ self.result_table_create(
+ [
+ "Server threads",
+ "Server Cores/Threads",
+ "Num-procs",
+ "Sockets/Cores/Threads",
+ "Num Ports",
+ "Frame Size",
+ "%-age Line Rate",
+ "Packet Rate(mpps)",
+ ]
+ )
+
+ for execution in validExecutions:
+ self.result_table_add(
+ [
+ 1,
+ "1S/1C/1T",
+ execution["nprocs"],
+ execution["cores"],
+ 2,
+ 64,
+ execution["pps"] / float(100000000 / (8 * 84)),
+ execution["pps"] / float(1000000),
+ ]
+ )
+
+ self.result_table_print()
+
+ def set_fields(self):
+ """set ip protocol field behavior"""
+ fields_config = {
+ "ip": {
+ "src": {"range": 64, "action": "inc"},
+ "dst": {"range": 64, "action": "inc"},
+ },
+ }
+
+ return fields_config
+
+ def tear_down(self):
+ """
+ Run after each test case.
+ """
+ if self.session_list:
+ for sess in self.session_list:
+ self.dut.close_session(sess)
+ self.dut.kill_all()
+
+ def tear_down_all(self):
+ """
+ Run after each test suite.
+ """
+ self.dut.kill_all()
--
2.25.1
^ permalink raw reply [flat|nested] 7+ messages in thread