From: Hongbo Li <hongbox.li@intel.com>
To: dts@dpdk.org
Cc: Hongbo Li <hongbox.li@intel.com>
Subject: [dts][PATCH V1 5/7] tests/vxlan: Separated performance cases
Date: Thu, 22 Sep 2022 14:29:48 +0000	[thread overview]
Message-ID: <20220922142950.398902-5-hongbox.li@intel.com> (raw)
In-Reply-To: <20220922142950.398902-1-hongbox.li@intel.com>

Separate the VXLAN performance cases into a dedicated test plan and test suite.

Signed-off-by: Hongbo Li <hongbox.li@intel.com>
---
 test_plans/perf_vxlan_test_plan.rst |  84 ++++
 test_plans/vxlan_test_plan.rst      |  57 ---
 tests/TestSuite_perf_vxlan.py       | 691 ++++++++++++++++++++++++++++
 tests/TestSuite_vxlan.py            | 213 ---------
 4 files changed, 775 insertions(+), 270 deletions(-)
 create mode 100644 test_plans/perf_vxlan_test_plan.rst
 create mode 100644 tests/TestSuite_perf_vxlan.py

diff --git a/test_plans/perf_vxlan_test_plan.rst b/test_plans/perf_vxlan_test_plan.rst
new file mode 100644
index 00000000..18347a94
--- /dev/null
+++ b/test_plans/perf_vxlan_test_plan.rst
@@ -0,0 +1,84 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2014-2017 Intel Corporation
+
+======================================
+Intel® Ethernet 700 Series Vxlan Tests
+======================================
+Cloud providers build virtual network overlays over existing network
+infrastructure to provide tenant isolation and scaling. Tunneling
+layers added to the packets carry the virtual networking frames over
+existing Layer 2 and IP networks. Conceptually, this is similar to
+creating virtual private networks over the Internet. The Intel® Ethernet
+700 Series processes these tunneling layers in hardware.
+
+This document provides the test plan for Intel® Ethernet 700 Series VXLAN
+packet detection, checksum computation and filtering.
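As background, VXLAN (RFC 7348) encapsulates the inner Ethernet frame in an outer UDP datagram (destination port 4789) with an 8-byte VXLAN header carrying the 24-bit VNI. A minimal, illustrative sketch of that header layout (not part of the test plan itself):

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned VXLAN UDP destination port


def pack_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags byte (0x08 = VNI valid),
    3 reserved bytes, 24-bit VNI, 1 reserved byte (RFC 7348)."""
    assert 0 <= vni < (1 << 24)
    return struct.pack("!B3xI", 0x08, vni << 8)


def unpack_vni(header: bytes) -> int:
    """Extract the 24-bit VNI from an 8-byte VXLAN header."""
    (word,) = struct.unpack("!4xI", header)
    return word >> 8


hdr = pack_vxlan_header(vni=1)
assert len(hdr) == 8 and unpack_vni(hdr) == 1
```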
+
+Prerequisites
+=============
+1x Intel® X710 (Intel® Ethernet 700 Series) NIC (2x 40GbE full-duplex
+optical ports per NIC) plugged into an available PCIe Gen3 8-lane slot.
+
+1x Intel® XL710-DA4 (Eagle Fountain) NIC (1x 10GbE full-duplex optical port
+per NIC) plugged into an available PCIe Gen3 8-lane slot.
+
+The DUT board must be a two-socket system, and each CPU must have more than 8 lcores.
+
+
+Test Case: Vxlan Checksum Offload Performance Benchmarking
+==========================================================
+The throughput is measured for each of the following VXLAN TX checksum
+offload configurations: "all by software", "L3 offload by hardware",
+"L4 offload by hardware" and "L3&L4 offload by hardware".
+
+The results are printed in the following table:
+
++----------------+--------+--------+------------+
+| Calculate Type | Queues | Mpps   | % linerate |
++================+========+========+============+
+| SOFTWARE ALL   | Single |        |            |
++----------------+--------+--------+------------+
+| HW L4          | Single |        |            |
++----------------+--------+--------+------------+
+| HW L3&L4       | Single |        |            |
++----------------+--------+--------+------------+
+| SOFTWARE ALL   | Multi  |        |            |
++----------------+--------+--------+------------+
+| HW L4          | Multi  |        |            |
++----------------+--------+--------+------------+
+| HW L3&L4       | Multi  |        |            |
++----------------+--------+--------+------------+
+
+Test Case: Vxlan Tunnel filter Performance Benchmarking
+=======================================================
+The throughput is measured for different Vxlan tunnel filter types.
+"Single" queue means there is only one flow, forwarded to the first queue.
+"Multi" queue means there are two flows, configured to different queues.
+
++--------+------------------+--------+--------+------------+
+| Packet | Filter           | Queue  | Mpps   | % linerate |
++========+==================+========+========+============+
+| Normal | None             | Single |        |            |
++--------+------------------+--------+--------+------------+
+| Vxlan  | None             | Single |        |            |
++--------+------------------+--------+--------+------------+
+| Vxlan  | imac-ivlan       | Single |        |            |
++--------+------------------+--------+--------+------------+
+| Vxlan  | imac-ivlan-tenid | Single |        |            |
++--------+------------------+--------+--------+------------+
+| Vxlan  | imac-tenid       | Single |        |            |
++--------+------------------+--------+--------+------------+
+| Vxlan  | imac             | Single |        |            |
++--------+------------------+--------+--------+------------+
+| Vxlan  | omac-imac-tenid  | Single |        |            |
++--------+------------------+--------+--------+------------+
+| Vxlan  | imac-ivlan       | Multi  |        |            |
++--------+------------------+--------+--------+------------+
+| Vxlan  | imac-ivlan-tenid | Multi  |        |            |
++--------+------------------+--------+--------+------------+
+| Vxlan  | imac-tenid       | Multi  |        |            |
++--------+------------------+--------+--------+------------+
+| Vxlan  | imac             | Multi  |        |            |
++--------+------------------+--------+--------+------------+
+| Vxlan  | omac-imac-tenid  | Multi  |        |            |
++--------+------------------+--------+--------+------------+
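The Mpps and % linerate columns in both tables are derived from the generator's measured packets-per-second and the theoretical wire speed for the configured frame size. A minimal sketch of the conversion (function names here are illustrative):

```python
def to_mpps(pps: float) -> float:
    """Convert measured packets-per-second to millions of packets-per-second."""
    return pps / 1_000_000.0


def linerate_pct(mpps: float, wirespeed_mpps: float) -> float:
    """Express measured throughput as a percentage of theoretical wire speed."""
    return mpps * 100.0 / wirespeed_mpps


# 40GbE wire speed for 128-byte frames: 40e9 / ((128 + 20) * 8) pps,
# where the extra 20 bytes are the 8-byte preamble plus 12-byte inter-frame gap
wirespeed = 40e9 / ((128 + 20) * 8) / 1e6  # ≈ 33.78 Mpps
measured = to_mpps(30_000_000)             # e.g. generator reports 30 Mpps
pct = linerate_pct(measured, wirespeed)    # ≈ 88.8 % of line rate
```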
diff --git a/test_plans/vxlan_test_plan.rst b/test_plans/vxlan_test_plan.rst
index d59cee31..96fdea0e 100644
--- a/test_plans/vxlan_test_plan.rst
+++ b/test_plans/vxlan_test_plan.rst
@@ -307,60 +307,3 @@ Add Cloud filter with invalid vni "16777216" will be failed.
 
 Add Cloud filter with invalid queue id "64" will be failed.
 
-Test Case: Vxlan Checksum Offload Performance Benchmarking
-==========================================================
-The throughput is measured for each of these cases for vxlan tx checksum
-offload of "all by software", "L3 offload by hardware", "L4 offload by
-hardware", "l3&l4 offload by hardware".
-
-The results are printed in the following table:
-
-+----------------+--------+--------+------------+
-| Calculate Type | Queues | Mpps   | % linerate |
-+================+========+========+============+
-| SOFTWARE ALL   | Single |        |            |
-+----------------+--------+--------+------------+
-| HW L4          | Single |        |            |
-+----------------+--------+--------+------------+
-| HW L3&L4       | Single |        |            |
-+----------------+--------+--------+------------+
-| SOFTWARE ALL   | Multi  |        |            |
-+----------------+--------+--------+------------+
-| HW L4          | Multi  |        |            |
-+----------------+--------+--------+------------+
-| HW L3&L4       | Multi  |        |            |
-+----------------+--------+--------+------------+
-
-Test Case: Vxlan Tunnel filter Performance Benchmarking
-=======================================================
-The throughput is measured for different Vxlan tunnel filter types.
-Queue single mean there's only one flow and forwarded to the first queue.
-Queue multi mean there are two flows and configure to different queues.
-
-+--------+------------------+--------+--------+------------+
-| Packet | Filter           | Queue  | Mpps   | % linerate |
-+========+==================+========+========+============+
-| Normal | None             | Single |        |            |
-+--------+------------------+--------+--------+------------+
-| Vxlan  | None             | Single |        |            |
-+--------+------------------+--------+--------+------------+
-| Vxlan  | imac-ivlan       | Single |        |            |
-+--------+------------------+--------+--------+------------+
-| Vxlan  | imac-ivlan-tenid | Single |        |            |
-+--------+------------------+--------+--------+------------+
-| Vxlan  | imac-tenid       | Single |        |            |
-+--------+------------------+--------+--------+------------+
-| Vxlan  | imac             | Single |        |            |
-+--------+------------------+--------+--------+------------+
-| Vxlan  | omac-imac-tenid  | Single |        |            |
-+--------+------------------+--------+--------+------------+
-| Vxlan  | imac-ivlan       | Multi  |        |            |
-+--------+------------------+--------+--------+------------+
-| Vxlan  | imac-ivlan-tenid | Multi  |        |            |
-+--------+------------------+--------+--------+------------+
-| Vxlan  | imac-tenid       | Multi  |        |            |
-+--------+------------------+--------+--------+------------+
-| Vxlan  | imac             | Multi  |        |            |
-+--------+------------------+--------+--------+------------+
-| Vxlan  | omac-imac-tenid  | Multi  |        |            |
-+--------+------------------+--------+--------+------------+
diff --git a/tests/TestSuite_perf_vxlan.py b/tests/TestSuite_perf_vxlan.py
new file mode 100644
index 00000000..4bd50c21
--- /dev/null
+++ b/tests/TestSuite_perf_vxlan.py
@@ -0,0 +1,691 @@
+import os
+import re
+import string
+import time
+from random import randint
+
+from scapy.config import conf
+from scapy.layers.inet import IP, TCP, UDP, Ether
+from scapy.layers.inet6 import IPv6
+from scapy.layers.l2 import Dot1Q
+from scapy.layers.sctp import SCTP, SCTPChunkData
+from scapy.layers.vxlan import VXLAN
+from scapy.route import *
+from scapy.sendrecv import sniff
+from scapy.utils import rdpcap, wrpcap
+
+import framework.packet as packet
+import framework.utils as utils
+from framework.packet import IncreaseIP, IncreaseIPv6
+from framework.pktgen import PacketGeneratorHelper
+from framework.pmd_output import PmdOutput
+from framework.settings import FOLDERS, HEADER_SIZE
+from framework.test_case import TestCase
+
+VXLAN_PORT = 4789
+PACKET_LEN = 128
+MAX_TXQ_RXQ = 4
+BIDIRECT = True
+
+
+class VxlanTestConfig(object):
+
+    """
+    Helper class to configure, create and transmit VXLAN packets
+    """
+
+    def __init__(self, test_case, **kwargs):
+        self.test_case = test_case
+        self.init()
+        for name in kwargs:
+            setattr(self, name, kwargs[name])
+        self.pkt_obj = packet.Packet()
+
+    def init(self):
+        self.packets_config()
+
+    def packets_config(self):
+        """
+        Default vxlan packet format
+        """
+        self.pcap_file = packet.TMP_PATH + "vxlan.pcap"
+        self.capture_file = packet.TMP_PATH + "vxlan_capture.pcap"
+        self.outer_mac_src = "00:00:10:00:00:00"
+        self.outer_mac_dst = "11:22:33:44:55:66"
+        self.outer_vlan = "N/A"
+        self.outer_ip_src = "192.168.1.1"
+        self.outer_ip_dst = "192.168.1.2"
+        self.outer_ip_invalid = 0
+        self.outer_ip6_src = "N/A"
+        self.outer_ip6_dst = "N/A"
+        self.outer_ip6_invalid = 0
+        self.outer_udp_src = 63
+        self.outer_udp_dst = VXLAN_PORT
+        self.outer_udp_invalid = 0
+        self.vni = 1
+        self.inner_mac_src = "00:00:20:00:00:00"
+        self.inner_mac_dst = "00:00:20:00:00:01"
+        self.inner_vlan = "N/A"
+        self.inner_ip_src = "192.168.2.1"
+        self.inner_ip_dst = "192.168.2.2"
+        self.inner_ip_invalid = 0
+        self.inner_ip6_src = "N/A"
+        self.inner_ip6_dst = "N/A"
+        self.inner_ip6_invalid = 0
+        self.payload_size = 18
+        self.inner_l4_type = "UDP"
+        self.inner_l4_invalid = 0
+
+    def packet_type(self):
+        """
+        Return vxlan packet type
+        """
+        if self.outer_udp_dst != VXLAN_PORT:
+            if self.outer_ip6_src != "N/A":
+                return "L3_IPV6_EXT_UNKNOWN"
+            else:
+                return "L3_IPV4_EXT_UNKNOWN"
+        else:
+            if self.inner_ip6_src != "N/A":
+                return "L3_IPV6_EXT_UNKNOWN"
+            else:
+                return "L3_IPV4_EXT_UNKNOWN"
+
+    def create_pcap(self):
+        """
+        Create the pcap file and copy it to the tester if configured.
+        Return the scapy packet object for later use.
+        """
+        if self.inner_l4_type == "SCTP":
+            self.inner_payload = SCTPChunkData(data="X" * 16)
+        else:
+            self.inner_payload = "X" * self.payload_size
+
+        if self.inner_l4_type == "TCP":
+            l4_pro = TCP()
+        elif self.inner_l4_type == "SCTP":
+            l4_pro = SCTP()
+        else:
+            l4_pro = UDP()
+
+        if self.inner_ip6_src != "N/A":
+            inner_l3 = IPv6()
+        else:
+            inner_l3 = IP()
+
+        if self.inner_vlan != "N/A":
+            inner = Ether() / Dot1Q() / inner_l3 / l4_pro / self.inner_payload
+            inner[Dot1Q].vlan = self.inner_vlan
+        else:
+            inner = Ether() / inner_l3 / l4_pro / self.inner_payload
+
+        if self.inner_ip6_src != "N/A":
+            inner[inner_l3.name].src = self.inner_ip6_src
+            inner[inner_l3.name].dst = self.inner_ip6_dst
+        else:
+            inner[inner_l3.name].src = self.inner_ip_src
+            inner[inner_l3.name].dst = self.inner_ip_dst
+
+        if self.inner_ip_invalid == 1:
+            inner[inner_l3.name].chksum = 0
+
+        # a UDP checksum of 0 means "not computed", so use a non-zero bad value instead
+        if self.inner_l4_invalid == 1:
+            if self.inner_l4_type == "SCTP":
+                inner[SCTP].chksum = 0
+            else:
+                inner[self.inner_l4_type].chksum = 1
+
+        inner[Ether].src = self.inner_mac_src
+        inner[Ether].dst = self.inner_mac_dst
+
+        if self.outer_ip6_src != "N/A":
+            outer_l3 = IPv6()
+        else:
+            outer_l3 = IP()
+
+        if self.outer_vlan != "N/A":
+            outer = Ether() / Dot1Q() / outer_l3 / UDP()
+            outer[Dot1Q].vlan = self.outer_vlan
+        else:
+            outer = Ether() / outer_l3 / UDP()
+
+        outer[Ether].src = self.outer_mac_src
+        outer[Ether].dst = self.outer_mac_dst
+
+        if self.outer_ip6_src != "N/A":
+            outer[outer_l3.name].src = self.outer_ip6_src
+            outer[outer_l3.name].dst = self.outer_ip6_dst
+        else:
+            outer[outer_l3.name].src = self.outer_ip_src
+            outer[outer_l3.name].dst = self.outer_ip_dst
+
+        outer[UDP].sport = self.outer_udp_src
+        outer[UDP].dport = self.outer_udp_dst
+
+        if self.outer_ip_invalid == 1:
+            outer[outer_l3.name].chksum = 0
+        # a UDP checksum of 0 means "not computed", so set 1 to force an invalid checksum
+        if self.outer_udp_invalid == 1:
+            outer[UDP].chksum = 1
+
+        if self.outer_udp_dst == VXLAN_PORT:
+            self.pkt = outer / VXLAN(vni=self.vni) / inner
+        else:
+            self.pkt = outer / ("X" * self.payload_size)
+
+        wrpcap(self.pcap_file, self.pkt)
+
+        return self.pkt
+
+    def get_chksums(self, pkt=None):
+        """
+        Get the checksum values of the outer and inner L3 & L4 headers.
+        The outer UDP checksum is skipped since it is calculated by software.
+        """
+        chk_sums = {}
+        if pkt is None:
+            pkt = rdpcap(self.pcap_file)
+        else:
+            pkt = pkt.pktgen.pkt
+
+        time.sleep(1)
+        if pkt[0].guess_payload_class(pkt[0]).name == "802.1Q":
+            payload = pkt[0][Dot1Q]
+        else:
+            payload = pkt[0]
+
+        if payload.guess_payload_class(payload).name == "IP":
+            chk_sums["outer_ip"] = hex(payload[IP].chksum)
+
+        if pkt[0].haslayer("VXLAN") == 1:
+            inner = pkt[0]["VXLAN"]
+            if inner.haslayer(IP) == 1:
+                chk_sums["inner_ip"] = hex(inner[IP].chksum)
+                if inner[IP].proto == 6:
+                    chk_sums["inner_tcp"] = hex(inner[TCP].chksum)
+                if inner[IP].proto == 17:
+                    chk_sums["inner_udp"] = hex(inner[UDP].chksum)
+                if inner[IP].proto == 132:
+                    chk_sums["inner_sctp"] = hex(inner[SCTP].chksum)
+            elif inner.haslayer(IPv6) == 1:
+                if inner[IPv6].nh == 6:
+                    chk_sums["inner_tcp"] = hex(inner[TCP].chksum)
+                if inner[IPv6].nh == 17:
+                    chk_sums["inner_udp"] = hex(inner[UDP].chksum)
+                # scapy cannot parse the SCTP checksum here, so extract it manually
+                if inner[IPv6].nh == 59:
+                    load = bytes(inner[IPv6].payload)
+                    chk_sums["inner_sctp"] = hex(
+                        (load[8] << 24)
+                        | (load[9] << 16)
+                        | (load[10] << 8)
+                        | (load[11])
+                    )
+
+        return chk_sums
+
+    def send_pcap(self, iface=""):
+        """
+        Send vxlan pcap file by iface
+        """
+        del self.pkt_obj.pktgen.pkts[:]
+        self.pkt_obj.pktgen.assign_pkt(self.pkt)
+        self.pkt_obj.pktgen.update_pkts()
+        self.pkt_obj.send_pkt(crb=self.test_case.tester, tx_port=iface)
+
+    def pcap_len(self):
+        """
+        Return the packet length plus the 4-byte CRC
+        """
+        # add four bytes crc
+        return len(self.pkt) + 4
+
+
+class TestVxlan(TestCase):
+    def set_up_all(self):
+        """
+        vxlan Prerequisites
+        """
+        # this feature is currently only enabled on Intel® Ethernet 700 Series NICs
+        if self.nic in [
+            "I40E_10G-SFP_XL710",
+            "I40E_40G-QSFP_A",
+            "I40E_40G-QSFP_B",
+            "I40E_25G-25G_SFP28",
+            "I40E_10G-SFP_X722",
+            "I40E_10G-10G_BASE_T_X722",
+            "I40E_10G-10G_BASE_T_BC",
+        ]:
+            self.compile_switch = "CONFIG_RTE_LIBRTE_I40E_INC_VECTOR"
+        elif self.nic in ["IXGBE_10G-X550T", "IXGBE_10G-X550EM_X_10G_T"]:
+            self.compile_switch = "CONFIG_RTE_IXGBE_INC_VECTOR"
+        elif self.nic in ["ICE_25G-E810C_SFP", "ICE_100G-E810C_QSFP"]:
+            print("Intel® Ethernet 800 Series uses non-vector mode by default")
+        else:
+            self.verify(False, "%s does not support vxlan" % self.nic)
+        # Based on h/w type, choose how many ports to use
+        ports = self.dut.get_ports()
+
+        # Verify that enough ports are available
+        self.verify(len(ports) >= 2, "Insufficient ports for testing")
+        global valports
+        valports = [_ for _ in ports if self.tester.get_local_port(_) != -1]
+
+        self.portMask = utils.create_mask(valports[:2])
+
+        # Verify that enough threads are available
+        netdev = self.dut.ports_info[ports[0]]["port"]
+        self.ports_socket = netdev.socket
+
+        # start testpmd
+        self.pmdout = PmdOutput(self.dut)
+
+        # init port config
+        self.dut_port = valports[0]
+        self.dut_port_mac = self.dut.get_mac_address(self.dut_port)
+        tester_port = self.tester.get_local_port(self.dut_port)
+        self.tester_iface = self.tester.get_interface(tester_port)
+        self.recv_port = valports[1]
+        tester_recv_port = self.tester.get_local_port(self.recv_port)
+        self.recv_iface = self.tester.get_interface(tester_recv_port)
+
+        # invalid parameter
+        self.invalid_mac = "00:00:00:00:01"
+        self.invalid_ip = "192.168.1.256"
+        self.invalid_vlan = 4097
+        self.invalid_queue = 64
+        self.path = self.dut.apps_name["test-pmd"]
+
+        # vxlan payload length for performance test
+        # the inner packet does not contain a CRC, so add four bytes back
+        self.vxlan_payload = (
+            PACKET_LEN
+            - HEADER_SIZE["eth"]
+            - HEADER_SIZE["ip"]
+            - HEADER_SIZE["udp"]
+            - HEADER_SIZE["vxlan"]
+            - HEADER_SIZE["eth"]
+            - HEADER_SIZE["ip"]
+            - HEADER_SIZE["udp"]
+            + 4
+        )
+
+        self.cal_type = [
+            {
+                "Type": "SOFTWARE ALL",
+                "csum": [],
+                "recvqueue": "Single",
+                "Mpps": {},
+                "pct": {},
+            },
+            {
+                "Type": "HW L4",
+                "csum": ["udp"],
+                "recvqueue": "Single",
+                "Mpps": {},
+                "pct": {},
+            },
+            {
+                "Type": "HW L3&L4",
+                "csum": ["ip", "udp", "outer-ip"],
+                "recvqueue": "Single",
+                "Mpps": {},
+                "pct": {},
+            },
+            {
+                "Type": "SOFTWARE ALL",
+                "csum": [],
+                "recvqueue": "Multi",
+                "Mpps": {},
+                "pct": {},
+            },
+            {
+                "Type": "HW L4",
+                "csum": ["udp"],
+                "recvqueue": "Multi",
+                "Mpps": {},
+                "pct": {},
+            },
+            {
+                "Type": "HW L3&L4",
+                "csum": ["ip", "udp", "outer-ip"],
+                "recvqueue": "Multi",
+                "Mpps": {},
+                "pct": {},
+            },
+        ]
+
+        self.chksum_header = ["Calculate Type"]
+        self.chksum_header.append("Queues")
+        self.chksum_header.append("Mpps")
+        self.chksum_header.append("% linerate")
+
+        # tunnel filter performance test
+        self.default_vlan = 1
+        self.tunnel_multiqueue = 2
+        self.tunnel_header = ["Packet", "Filter", "Queue", "Mpps", "% linerate"]
+        self.tunnel_perf = [
+            {
+                "Packet": "Normal",
+                "tunnel_filter": "None",
+                "recvqueue": "Single",
+                "Mpps": {},
+                "pct": {},
+            },
+            {
+                "Packet": "VXLAN",
+                "tunnel_filter": "None",
+                "recvqueue": "Single",
+                "Mpps": {},
+                "pct": {},
+            },
+            {
+                "Packet": "VXLAN",
+                "tunnel_filter": "imac-ivlan",
+                "recvqueue": "Single",
+                "Mpps": {},
+                "pct": {},
+            },
+            {
+                "Packet": "VXLAN",
+                "tunnel_filter": "imac-ivlan-tenid",
+                "recvqueue": "Single",
+                "Mpps": {},
+                "pct": {},
+            },
+            {
+                "Packet": "VXLAN",
+                "tunnel_filter": "imac-tenid",
+                "recvqueue": "Single",
+                "Mpps": {},
+                "pct": {},
+            },
+            {
+                "Packet": "VXLAN",
+                "tunnel_filter": "imac",
+                "recvqueue": "Single",
+                "Mpps": {},
+                "pct": {},
+            },
+            {
+                "Packet": "VXLAN",
+                "tunnel_filter": "omac-imac-tenid",
+                "recvqueue": "Single",
+                "Mpps": {},
+                "pct": {},
+            },
+            {
+                "Packet": "VXLAN",
+                "tunnel_filter": "None",
+                "recvqueue": "Multi",
+                "Mpps": {},
+                "pct": {},
+            },
+            {
+                "Packet": "VXLAN",
+                "tunnel_filter": "imac-ivlan",
+                "recvqueue": "Multi",
+                "Mpps": {},
+                "pct": {},
+            },
+            {
+                "Packet": "VXLAN",
+                "tunnel_filter": "imac-ivlan-tenid",
+                "recvqueue": "Multi",
+                "Mpps": {},
+                "pct": {},
+            },
+            {
+                "Packet": "VXLAN",
+                "tunnel_filter": "imac-tenid",
+                "recvqueue": "Multi",
+                "Mpps": {},
+                "pct": {},
+            },
+            {
+                "Packet": "VXLAN",
+                "tunnel_filter": "imac",
+                "recvqueue": "Multi",
+                "Mpps": {},
+                "pct": {},
+            },
+            {
+                "Packet": "VXLAN",
+                "tunnel_filter": "omac-imac-tenid",
+                "recvqueue": "Multi",
+                "Mpps": {},
+                "pct": {},
+            },
+        ]
+
+        self.pktgen_helper = PacketGeneratorHelper()
+
+    def set_up(self):
+        """
+        Run before each test case.
+        """
+        pass
+
+    def test_perf_vxlan_tunnelfilter_performance_2ports(self):
+        self.result_table_create(self.tunnel_header)
+        core_list = self.dut.get_core_list(
+            "1S/%dC/1T" % (self.tunnel_multiqueue * 2 + 1), socket=self.ports_socket
+        )
+
+        pmd_temp = (
+            "./%s %s -- -i --disable-rss --rxq=2 --txq=2 --nb-cores=4 --portmask=%s"
+        )
+
+        for perf_config in self.tunnel_perf:
+            tun_filter = perf_config["tunnel_filter"]
+            recv_queue = perf_config["recvqueue"]
+            print(
+                (
+                    utils.GREEN(
+                        "Measure tunnel performance of [%s %s %s]"
+                        % (perf_config["Packet"], tun_filter, recv_queue)
+                    )
+                )
+            )
+
+            if tun_filter == "None" and recv_queue == "Multi":
+                pmd_temp = (
+                    "./%s %s -- -i --rss-udp --rxq=2 --txq=2 --nb-cores=4 --portmask=%s"
+                )
+
+            self.eal_para = self.dut.create_eal_parameters(cores=core_list)
+            pmd_cmd = pmd_temp % (self.path, self.eal_para, self.portMask)
+            self.dut.send_expect(pmd_cmd, "testpmd> ", 100)
+
+            # config flow
+            self.config_tunnelfilter(
+                self.dut_port, self.recv_port, perf_config, "flow1.pcap"
+            )
+            # build the traffic generator flow list
+            tgen_input = []
+            tgen_input.append(
+                (
+                    self.tester.get_local_port(self.dut_port),
+                    self.tester.get_local_port(self.recv_port),
+                    "flow1.pcap",
+                )
+            )
+
+            if BIDIRECT:
+                self.config_tunnelfilter(
+                    self.recv_port, self.dut_port, perf_config, "flow2.pcap"
+                )
+                tgen_input.append(
+                    (
+                        self.tester.get_local_port(self.recv_port),
+                        self.tester.get_local_port(self.dut_port),
+                        "flow2.pcap",
+                    )
+                )
+
+            self.dut.send_expect("set fwd io", "testpmd>", 10)
+            self.dut.send_expect("start", "testpmd>", 10)
+            self.pmdout.wait_link_status_up(self.dut_port)
+            if BIDIRECT:
+                wirespeed = self.wirespeed(self.nic, PACKET_LEN, 2)
+            else:
+                wirespeed = self.wirespeed(self.nic, PACKET_LEN, 1)
+
+            # run traffic generator
+            use_vm = True if recv_queue == "Multi" and tun_filter == "None" else False
+            _, pps = self.suite_measure_throughput(tgen_input, use_vm=use_vm)
+
+            pps /= 1000000.0
+            perf_config["Mpps"] = pps
+            perf_config["pct"] = pps * 100 / wirespeed
+
+            out = self.dut.send_expect("stop", "testpmd>", 10)
+            self.dut.send_expect("quit", "# ", 10)
+
+            # verify that every queue saw traffic
+            if recv_queue == "Multi":
+                for queue in range(self.tunnel_multiqueue):
+                    self.verify(
+                        "Queue= %d -> TX Port" % (queue) in out,
+                        "Queue %d no traffic" % queue,
+                    )
+
+            table_row = [
+                perf_config["Packet"],
+                tun_filter,
+                recv_queue,
+                perf_config["Mpps"],
+                perf_config["pct"],
+            ]
+
+            self.result_table_add(table_row)
+
+        self.result_table_print()
+
+    def test_perf_vxlan_checksum_performance_2ports(self):
+        self.result_table_create(self.chksum_header)
+        vxlan = VxlanTestConfig(self, payload_size=self.vxlan_payload)
+        vxlan.outer_mac_dst = self.dut.get_mac_address(self.dut_port)
+        vxlan.pcap_file = "vxlan1.pcap"
+        vxlan.inner_mac_dst = "00:00:20:00:00:01"
+        vxlan.create_pcap()
+
+        vxlan_queue = VxlanTestConfig(self, payload_size=self.vxlan_payload)
+        vxlan_queue.outer_mac_dst = self.dut.get_mac_address(self.dut_port)
+        vxlan_queue.pcap_file = "vxlan1_1.pcap"
+        vxlan_queue.inner_mac_dst = "00:00:20:00:00:02"
+        vxlan_queue.create_pcap()
+
+        # socket/core/thread
+        core_list = self.dut.get_core_list(
+            "1S/%dC/1T" % (self.tunnel_multiqueue * 2 + 1), socket=self.ports_socket
+        )
+        core_mask = utils.create_mask(core_list)
+
+        self.dut_ports = self.dut.get_ports_performance(force_different_nic=False)
+        tx_port = self.tester.get_local_port(self.dut_ports[0])
+        rx_port = self.tester.get_local_port(self.dut_ports[1])
+
+        for cal in self.cal_type:
+            recv_queue = cal["recvqueue"]
+            print(
+                (
+                    utils.GREEN(
+                        "Measure checksum performance of [%s %s %s]"
+                        % (cal["Type"], recv_queue, cal["csum"])
+                    )
+                )
+            )
+
+            # configure flows
+            tgen_input = []
+            if recv_queue == "Multi":
+                tgen_input.append((tx_port, rx_port, "vxlan1.pcap"))
+                tgen_input.append((tx_port, rx_port, "vxlan1_1.pcap"))
+            else:
+                tgen_input.append((tx_port, rx_port, "vxlan1.pcap"))
+
+            # multi-queue and single-queue commands
+            if recv_queue == "Multi":
+                pmd_temp = "./%s %s -- -i --disable-rss --rxq=2 --txq=2 --nb-cores=4 --portmask=%s"
+            else:
+                pmd_temp = "./%s %s -- -i --nb-cores=2 --portmask=%s"
+
+            self.eal_para = self.dut.create_eal_parameters(cores=core_list)
+            pmd_cmd = pmd_temp % (self.path, self.eal_para, self.portMask)
+
+            self.dut.send_expect(pmd_cmd, "testpmd> ", 100)
+            self.dut.send_expect("set fwd csum", "testpmd>", 10)
+            self.enable_vxlan(self.dut_port)
+            self.enable_vxlan(self.recv_port)
+            self.pmdout.wait_link_status_up(self.dut_port)
+
+            # redirect flow to another queue by tunnel filter
+            rule_config = {
+                "dut_port": self.dut_port,
+                "outer_mac_dst": vxlan.outer_mac_dst,
+                "inner_mac_dst": vxlan.inner_mac_dst,
+                "inner_ip_dst": vxlan.inner_ip_dst,
+                "inner_vlan": 0,
+                "tun_filter": "imac",
+                "vni": vxlan.vni,
+                "queue": 0,
+            }
+            self.perf_tunnel_filter_set_rule(rule_config)
+
+            if recv_queue == "Multi":
+                rule_config = {
+                    "dut_port": self.dut_port,
+                    "outer_mac_dst": vxlan_queue.outer_mac_dst,
+                    "inner_mac_dst": vxlan_queue.inner_mac_dst,
+                    "inner_ip_dst": vxlan_queue.inner_ip_dst,
+                    "inner_vlan": 0,
+                    "tun_filter": "imac",
+                    "vni": vxlan.vni,
+                    "queue": 1,
+                }
+                self.perf_tunnel_filter_set_rule(rule_config)
+
+            for pro in cal["csum"]:
+                self.csum_set_type(pro, self.dut_port)
+                self.csum_set_type(pro, self.recv_port)
+
+            self.dut.send_expect("start", "testpmd>", 10)
+
+            wirespeed = self.wirespeed(self.nic, PACKET_LEN, 1)
+
+            # run traffic generator
+            _, pps = self.suite_measure_throughput(tgen_input)
+
+            pps /= 1000000.0
+            cal["Mpps"] = pps
+            cal["pct"] = pps * 100 / wirespeed
+
+            out = self.dut.send_expect("stop", "testpmd>", 10)
+            self.dut.send_expect("quit", "# ", 10)
+
+            # verify that every receive queue saw traffic; in Multi mode the
+            # two tunnel-filter rules above steer flows to queues 0 and 1
+            check_queue = 2
+            if recv_queue == "Multi":
+                for queue in range(check_queue):
+                    self.verify(
+                        "Queue= %d -> TX Port" % (queue) in out,
+                        "Queue %d no traffic" % queue,
+                    )
+
+            table_row = [cal["Type"], recv_queue, cal["Mpps"], cal["pct"]]
+            self.result_table_add(table_row)
+
+        self.result_table_print()
+
+    def tear_down(self):
+        """
+        Run after each test case.
+        """
+        self.dut.kill_all()
+
+    def tear_down_all(self):
+        """
+        Run after each test suite.
+        """
+        pass
diff --git a/tests/TestSuite_vxlan.py b/tests/TestSuite_vxlan.py
index c69d7903..5dd49ecd 100644
--- a/tests/TestSuite_vxlan.py
+++ b/tests/TestSuite_vxlan.py
@@ -1163,219 +1163,6 @@ class TestVxlan(TestCase):
 
         wrpcap(dest_pcap, pkts)
 
-    def test_perf_vxlan_tunnelfilter_performance_2ports(self):
-        self.result_table_create(self.tunnel_header)
-        core_list = self.dut.get_core_list(
-            "1S/%dC/1T" % (self.tunnel_multiqueue * 2 + 1), socket=self.ports_socket
-        )
-
-        pmd_temp = (
-            "./%s %s -- -i --disable-rss --rxq=2 --txq=2 --nb-cores=4 --portmask=%s"
-        )
-
-        for perf_config in self.tunnel_perf:
-            tun_filter = perf_config["tunnel_filter"]
-            recv_queue = perf_config["recvqueue"]
-            print(
-                (
-                    utils.GREEN(
-                        "Measure tunnel performance of [%s %s %s]"
-                        % (perf_config["Packet"], tun_filter, recv_queue)
-                    )
-                )
-            )
-
-            if tun_filter == "None" and recv_queue == "Multi":
-                pmd_temp = (
-                    "./%s %s -- -i --rss-udp --rxq=2 --txq=2 --nb-cores=4 --portmask=%s"
-                )
-
-            self.eal_para = self.dut.create_eal_parameters(cores=core_list)
-            pmd_cmd = pmd_temp % (self.path, self.eal_para, self.portMask)
-            self.dut.send_expect(pmd_cmd, "testpmd> ", 100)
-
-            # config flow
-            self.config_tunnelfilter(
-                self.dut_port, self.recv_port, perf_config, "flow1.pcap"
-            )
-            # config the flows
-            tgen_input = []
-            tgen_input.append(
-                (
-                    self.tester.get_local_port(self.dut_port),
-                    self.tester.get_local_port(self.recv_port),
-                    "flow1.pcap",
-                )
-            )
-
-            if BIDIRECT:
-                self.config_tunnelfilter(
-                    self.recv_port, self.dut_port, perf_config, "flow2.pcap"
-                )
-                tgen_input.append(
-                    (
-                        self.tester.get_local_port(self.recv_port),
-                        self.tester.get_local_port(self.dut_port),
-                        "flow2.pcap",
-                    )
-                )
-
-            self.dut.send_expect("set fwd io", "testpmd>", 10)
-            self.dut.send_expect("start", "testpmd>", 10)
-            self.pmdout.wait_link_status_up(self.dut_port)
-            if BIDIRECT:
-                wirespeed = self.wirespeed(self.nic, PACKET_LEN, 2)
-            else:
-                wirespeed = self.wirespeed(self.nic, PACKET_LEN, 1)
-
-            # run traffic generator
-            use_vm = True if recv_queue == "Multi" and tun_filter == "None" else False
-            _, pps = self.suite_measure_throughput(tgen_input, use_vm=use_vm)
-
-            pps /= 1000000.0
-            perf_config["Mpps"] = pps
-            perf_config["pct"] = pps * 100 / wirespeed
-
-            out = self.dut.send_expect("stop", "testpmd>", 10)
-            self.dut.send_expect("quit", "# ", 10)
-
-            # verify every queue work fine
-            check_queue = 0
-            if recv_queue == "Multi":
-                for queue in range(check_queue):
-                    self.verify(
-                        "Queue= %d -> TX Port" % (queue) in out,
-                        "Queue %d no traffic" % queue,
-                    )
-
-            table_row = [
-                perf_config["Packet"],
-                tun_filter,
-                recv_queue,
-                perf_config["Mpps"],
-                perf_config["pct"],
-            ]
-
-            self.result_table_add(table_row)
-
-        self.result_table_print()
-
-    def test_perf_vxlan_checksum_performance_2ports(self):
-        self.result_table_create(self.chksum_header)
-        vxlan = VxlanTestConfig(self, payload_size=self.vxlan_payload)
-        vxlan.outer_mac_dst = self.dut.get_mac_address(self.dut_port)
-        vxlan.pcap_file = "vxlan1.pcap"
-        vxlan.inner_mac_dst = "00:00:20:00:00:01"
-        vxlan.create_pcap()
-
-        vxlan_queue = VxlanTestConfig(self, payload_size=self.vxlan_payload)
-        vxlan_queue.outer_mac_dst = self.dut.get_mac_address(self.dut_port)
-        vxlan_queue.pcap_file = "vxlan1_1.pcap"
-        vxlan_queue.inner_mac_dst = "00:00:20:00:00:02"
-        vxlan_queue.create_pcap()
-
-        # socket/core/thread
-        core_list = self.dut.get_core_list(
-            "1S/%dC/1T" % (self.tunnel_multiqueue * 2 + 1), socket=self.ports_socket
-        )
-        core_mask = utils.create_mask(core_list)
-
-        self.dut_ports = self.dut.get_ports_performance(force_different_nic=False)
-        tx_port = self.tester.get_local_port(self.dut_ports[0])
-        rx_port = self.tester.get_local_port(self.dut_ports[1])
-
-        for cal in self.cal_type:
-            recv_queue = cal["recvqueue"]
-            print(
-                (
-                    utils.GREEN(
-                        "Measure checksum performance of [%s %s %s]"
-                        % (cal["Type"], recv_queue, cal["csum"])
-                    )
-                )
-            )
-
-            # configure flows
-            tgen_input = []
-            if recv_queue == "Multi":
-                tgen_input.append((tx_port, rx_port, "vxlan1.pcap"))
-                tgen_input.append((tx_port, rx_port, "vxlan1_1.pcap"))
-            else:
-                tgen_input.append((tx_port, rx_port, "vxlan1.pcap"))
-
-            # multi queue and signle queue commands
-            if recv_queue == "Multi":
-                pmd_temp = "./%s %s -- -i --disable-rss --rxq=2 --txq=2 --nb-cores=4 --portmask=%s"
-            else:
-                pmd_temp = "./%s %s -- -i --nb-cores=2 --portmask=%s"
-
-            self.eal_para = self.dut.create_eal_parameters(cores=core_list)
-            pmd_cmd = pmd_temp % (self.path, self.eal_para, self.portMask)
-
-            self.dut.send_expect(pmd_cmd, "testpmd> ", 100)
-            self.dut.send_expect("set fwd csum", "testpmd>", 10)
-            self.enable_vxlan(self.dut_port)
-            self.enable_vxlan(self.recv_port)
-            self.pmdout.wait_link_status_up(self.dut_port)
-
-            # redirect flow to another queue by tunnel filter
-            rule_config = {
-                "dut_port": self.dut_port,
-                "outer_mac_dst": vxlan.outer_mac_dst,
-                "inner_mac_dst": vxlan.inner_mac_dst,
-                "inner_ip_dst": vxlan.inner_ip_dst,
-                "inner_vlan": 0,
-                "tun_filter": "imac",
-                "vni": vxlan.vni,
-                "queue": 0,
-            }
-            self.perf_tunnel_filter_set_rule(rule_config)
-
-            if recv_queue == "Multi":
-                rule_config = {
-                    "dut_port": self.dut_port,
-                    "outer_mac_dst": vxlan_queue.outer_mac_dst,
-                    "inner_mac_dst": vxlan_queue.inner_mac_dst,
-                    "inner_ip_dst": vxlan_queue.inner_ip_dst,
-                    "inner_vlan": 0,
-                    "tun_filter": "imac",
-                    "vni": vxlan.vni,
-                    "queue": 1,
-                }
-                self.perf_tunnel_filter_set_rule(rule_config)
-
-            for pro in cal["csum"]:
-                self.csum_set_type(pro, self.dut_port)
-                self.csum_set_type(pro, self.recv_port)
-
-            self.dut.send_expect("start", "testpmd>", 10)
-
-            wirespeed = self.wirespeed(self.nic, PACKET_LEN, 1)
-
-            # run traffic generator
-            _, pps = self.suite_measure_throughput(tgen_input)
-
-            pps /= 1000000.0
-            cal["Mpps"] = pps
-            cal["pct"] = pps * 100 / wirespeed
-
-            out = self.dut.send_expect("stop", "testpmd>", 10)
-            self.dut.send_expect("quit", "# ", 10)
-
-            # verify every queue work fine
-            check_queue = 1
-            if recv_queue == "Multi":
-                for queue in range(check_queue):
-                    self.verify(
-                        "Queue= %d -> TX Port" % (queue) in out,
-                        "Queue %d no traffic" % queue,
-                    )
-
-            table_row = [cal["Type"], recv_queue, cal["Mpps"], cal["pct"]]
-            self.result_table_add(table_row)
-
-        self.result_table_print()
-
     def enable_vxlan(self, port):
         self.dut.send_expect(
             "rx_vxlan_port add %d %d" % (VXLAN_PORT, port), "testpmd>", 10
-- 
2.25.1
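[Note appended after the patch signature, not part of the diff itself.] The throughput bookkeeping both cases share (`pps /= 1000000.0`, then `pps * 100 / wirespeed`) converts the generator's raw packets-per-second figure to Mpps and expresses it as a percentage of theoretical wirespeed. A minimal standalone sketch of that arithmetic, with purely illustrative numbers (the 59.52 Mpps figure assumes 40GbE at 64-byte frames, not a measured result):

```python
def summarize_throughput(pps: float, wirespeed_mpps: float) -> tuple[float, float]:
    """Convert raw packets-per-second to Mpps and to a percentage of
    the theoretical wirespeed (itself given in Mpps), mirroring the
    per-case bookkeeping in the test loops above."""
    mpps = pps / 1_000_000.0
    pct = mpps * 100.0 / wirespeed_mpps
    return mpps, pct

if __name__ == "__main__":
    # Illustrative only: 40GbE at 64B frames tops out near 59.52 Mpps.
    mpps, pct = summarize_throughput(29_760_000, 59.52)
    print(f"{mpps:.2f} Mpps, {pct:.1f}% of wirespeed")
```

The resulting `(Mpps, pct)` pair is what each case writes into its result-table row.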



Thread overview: 7+ messages
2022-09-22 14:29 [dts][PATCH V1 1/7] tests/efd:Separated " Hongbo Li
2022-09-22 14:29 ` [dts][PATCH V1 2/7] tests/kni:Separated " Hongbo Li
2022-09-22 14:29 ` [dts][PATCH V1 3/7] tests/l2fwd:Separated " Hongbo Li
2022-09-22 14:29 ` [dts][PATCH V1 4/7] tests/tso:Separated " Hongbo Li
2022-09-22 14:29 ` Hongbo Li [this message]
2022-09-22 14:29 ` [dts][PATCH V1 6/7] tests/ipfrag:Separated " Hongbo Li
2022-09-22 14:29 ` [PATCH V1 7/7] tests/multiprocess:Separated " Hongbo Li
