* [dts][PATCH V3 2/3] test_plans/ice_iavf_rx_timestamp: ice iavf support rx timestamp
2022-07-28 16:08 [PATCH V3 0/3] ice iavf support rx timestamp Yaqi Tang
2022-07-28 16:08 ` [dts][PATCH V3 1/3] test_plans/index: add new test plan for ice_iavf_rx_timestamp Yaqi Tang
@ 2022-07-28 16:08 ` Yaqi Tang
2022-07-28 16:09 ` [dts][PATCH V3 3/3] tests/ice_iavf_rx_timestamp: " Yaqi Tang
2 siblings, 0 replies; 6+ messages in thread
From: Yaqi Tang @ 2022-07-28 16:08 UTC (permalink / raw)
To: dts; +Cc: Yaqi Tang
The IAVF driver is able to enable rx timestamp offload.
Signed-off-by: Yaqi Tang <yaqi.tang@intel.com>
---
.../ice_iavf_rx_timestamp_test_plan.rst | 158 ++++++++++++++++++
1 file changed, 158 insertions(+)
create mode 100644 test_plans/ice_iavf_rx_timestamp_test_plan.rst
diff --git a/test_plans/ice_iavf_rx_timestamp_test_plan.rst b/test_plans/ice_iavf_rx_timestamp_test_plan.rst
new file mode 100644
index 00000000..89f1bd8b
--- /dev/null
+++ b/test_plans/ice_iavf_rx_timestamp_test_plan.rst
@@ -0,0 +1,158 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2022 Intel Corporation
+
+=============================
+ICE IAVF Support Rx Timestamp
+=============================
+
+Description
+===========
+The VF driver is able to enable Rx timestamp offload: the 64-bit timestamp can
+be extracted from the flexible Rx descriptor and stored in the mbuf's dynamic
+field. The received packets carry timestamp values, and the values increment
+from packet to packet.
+
+NOTE: Requires kernel support for the Rx timestamp offload function in the VF.
+
+Prerequisites
+=============
+
+Topology
+--------
+DUT port 0 <----> Tester port 0
+
+Hardware
+--------
+Supported NICs: Intel® Ethernet 800 Series E810-XXVDA4/E810-CQ
+
+Software
+--------
+dpdk: http://dpdk.org/git/dpdk
+scapy: http://www.secdev.org/projects/scapy/
+
+General Set Up
+--------------
+1. Compile DPDK::
+
+ # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib --default-library=static <dpdk build dir>
+ # ninja -C <dpdk build dir> -j 110
+
+2. Get the PCI device id and interface of the DUT and tester ports.
+ For example, 0000:3b:00.0 and 0000:3b:00.1 are the PCI device ids, and
+ ens785f0 and ens785f1 are the interfaces::
+
+ <dpdk dir># ./usertools/dpdk-devbind.py -s
+
+ 0000:3b:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci
+ 0000:3b:00.1 'Device 159b' if=ens785f1 drv=ice unused=vfio-pci
+
+3. Generate 1 VF on PF0 and set the MAC address for this VF::
+
+ # echo 1 > /sys/bus/pci/devices/0000:3b:00.0/sriov_numvfs
+ # ip link set ens785f0 vf 0 mac 00:11:22:33:44:55
+
+
+4. Bind the DUT port to dpdk::
+
+ <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <DUT port pci device id>
+
+Test Case
+=========
+Common Steps
+------------
+All the packets in this test plan use the below settings:
+
+- dst mac: 68:05:CA:C1:BA:28
+- ipv4 src: 192.168.0.2
+- ipv4 dst: 192.168.0.3
+- ipv6 src: 2001::2
+- ipv6 dst: 2001::3
+- sport: 1026
+- dport: 1027
+- count: 3
+
+1. Check the driver is IAVF::
+
+ testpmd> show port info all
+
+2. Set fwd engine::
+
+ testpmd> set fwd rxonly
+
+3. Set verbose::
+
+ testpmd> set verbose 1
+
+4. Start testpmd::
+
+ testpmd> start
+
+5. Send ether packets, record the timestamp values and check the timestamp values are incremented::
+
+ >>> sendp([Ether(dst="<dst mac>")/("X"*480)], iface="<tester interface>",count=<count>)
+
+6. Send ipv4 packets, record the timestamp values and check the timestamp values are incremented::
+
+ >>> sendp([Ether(dst="<dst mac>")/IP(src="<ipv4 src>",dst="<ipv4 dst>")/("X"*480)], iface="<tester interface>",count=<count>)
+
+7. Send ipv6 packets, record the timestamp values and check the timestamp values are incremented::
+
+ >>> sendp([Ether(dst="<dst mac>")/IPv6(src="<ipv6 src>",dst="<ipv6 dst>")/("X"*480)], iface="<tester interface>",count=<count>)
+
+8. Send ipv4-udp packets, record the timestamp values and check the timestamp values are incremented::
+
+ >>> sendp([Ether(dst="<dst mac>")/IP(src="<ipv4 src>",dst="<ipv4 dst>")/UDP(sport=<sport>, dport=<dport>)/("X"*480)], iface="<tester interface>",count=<count>)
+
+9. Send ipv6-udp packets, record the timestamp values and check the timestamp values are incremented::
+
+ >>> sendp([Ether(dst="<dst mac>")/IPv6(src="<ipv6 src>",dst="<ipv6 dst>")/UDP(sport=<sport>, dport=<dport>)/("X"*480)], iface="<tester interface>",count=<count>)
+
+10. Send ipv4-tcp packets, record the timestamp values and check the timestamp values are incremented::
+
+ >>> sendp([Ether(dst="<dst mac>")/IP(src="<ipv4 src>",dst="<ipv4 dst>")/TCP(sport=<sport>, dport=<dport>)/("X"*480)], iface="<tester interface>",count=<count>)
+
+11. Send ipv6-tcp packets, record the timestamp values and check the timestamp values are incremented::
+
+ >>> sendp([Ether(dst="<dst mac>")/IPv6(src="<ipv6 src>",dst="<ipv6 dst>")/TCP(sport=<sport>, dport=<dport>)/("X"*480)], iface="<tester interface>",count=<count>)
+
+12. Send ipv4-sctp packets, record the timestamp values and check the timestamp values are incremented::
+
+ >>> sendp([Ether(dst="<dst mac>")/IP(src="<ipv4 src>",dst="<ipv4 dst>")/SCTP(sport=<sport>, dport=<dport>)/("X"*480)], iface="<tester interface>",count=<count>)
+
+13. Send ipv6-sctp packets, record the timestamp values and check the timestamp values are incremented::
+
+ >>> sendp([Ether(dst="<dst mac>")/IPv6(src="<ipv6 src>",dst="<ipv6 dst>")/SCTP(sport=<sport>, dport=<dport>)/("X"*480)], iface="<tester interface>",count=<count>)
+
+Test Case 1: Without timestamp, check no timestamp
+--------------------------------------------------
+This case is designed to check that packets have no timestamp value when testpmd does not enable Rx timestamp.
+
+Test Steps
+~~~~~~~~~~
+1. Start testpmd without enabling Rx timestamp::
+
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c f -n 4 -a 3b:01.0 -- -i --rxq=16 --txq=16
+
+2. Send packets as in the common steps and check that there is no timestamp value.
+
+Test Case 2: Single queue with timestamp, check timestamp
+---------------------------------------------------------
+This case is designed to check that a single queue receives timestamp values and that the values increment.
+
+Test Steps
+~~~~~~~~~~
+1. Start testpmd with Rx timestamp enabled on a single queue::
+
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c f -n 4 -a 3b:01.0 -- -i --enable-rx-timestamp
+
+2. Send packets as in the common steps and check that the single queue receives timestamp values and that the values increment.
+
+Test Case 3: Multi queues with timestamp, check timestamp
+---------------------------------------------------------
+This case is designed to check that multiple queues receive timestamp values and that the values increment.
+
+Test Steps
+~~~~~~~~~~
+1. Start testpmd with Rx timestamp enabled on multiple queues::
+
+ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c f -n 4 -a 3b:01.0 -- -i --rxq=16 --txq=16 --enable-rx-timestamp
+
+2. Send packets as in the common steps and check that multiple queues receive timestamp values and that the values increment.
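
The incrementing-timestamp check that steps 5-13 above repeat can be sketched as a small standalone helper. This is illustrative only; the sample values below are fabricated, and the real verification lives in the companion test suite:

```python
# Sketch: verify that a burst of Rx timestamps is strictly increasing,
# as the test steps above require. The sample values are fabricated.
def timestamps_incrementing(values):
    """Return True when every timestamp is larger than the one before it."""
    return all(later > earlier for earlier, later in zip(values, values[1:]))

burst = [1658998112000001000, 1658998112000002000, 1658998112000003000]
assert timestamps_incrementing(burst)
assert not timestamps_incrementing([7, 7, 8])  # equal values do not count
```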
--
2.25.1
* [dts][PATCH V3 3/3] tests/ice_iavf_rx_timestamp: ice iavf support rx timestamp
2022-07-28 16:08 [PATCH V3 0/3] ice iavf support rx timestamp Yaqi Tang
2022-07-28 16:08 ` [dts][PATCH V3 1/3] test_plans/index: add new test plan for ice_iavf_rx_timestamp Yaqi Tang
2022-07-28 16:08 ` [dts][PATCH V3 2/3] test_plans/ice_iavf_rx_timestamp: ice iavf support rx timestamp Yaqi Tang
@ 2022-07-28 16:09 ` Yaqi Tang
2022-07-29 9:04 ` Lin, Xueqin
2022-08-02 4:18 ` Jiale, SongX
2 siblings, 2 replies; 6+ messages in thread
From: Yaqi Tang @ 2022-07-28 16:09 UTC (permalink / raw)
To: dts; +Cc: Yaqi Tang
The IAVF driver is able to enable rx timestamp offload.
Signed-off-by: Yaqi Tang <yaqi.tang@intel.com>
---
tests/TestSuite_ice_iavf_rx_timestamp.py | 251 +++++++++++++++++++++++
1 file changed, 251 insertions(+)
create mode 100644 tests/TestSuite_ice_iavf_rx_timestamp.py
diff --git a/tests/TestSuite_ice_iavf_rx_timestamp.py b/tests/TestSuite_ice_iavf_rx_timestamp.py
new file mode 100644
index 00000000..2527afc0
--- /dev/null
+++ b/tests/TestSuite_ice_iavf_rx_timestamp.py
@@ -0,0 +1,251 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 Intel Corporation
+#
+
+import copy
+import os
+import re
+import time
+
+from framework.packet import Packet
+from framework.pmd_output import PmdOutput
+from framework.test_case import TestCase
+from framework.utils import GREEN, RED
+
+tv_packets_basic = {
+ "tv_mac": 'Ether(dst="00:11:22:33:44:55")/("X"*480)',
+ "tv_mac_ipv4": 'Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.2",dst="192.168.0.3")/("X"*480)',
+ "tv_mac_ipv6": 'Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::2", dst="2001::3")/("X"*480)',
+ "tv_mac_ipv4_udp": 'Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.2", dst="192.168.0.3")/UDP(sport=1026, dport=1027)/("X"*480)',
+ "tv_mac_ipv6_udp": 'Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::2", dst="2001::3")/UDP(sport=1026, dport=1027)/("X"*480)',
+ "tv_mac_ipv4_tcp": 'Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.2", dst="192.168.0.3")/TCP(sport=1026, dport=1027)/("X"*480)',
+ "tv_mac_ipv6_tcp": 'Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::2", dst="2001::3")/TCP(sport=1026, dport=1027)/("X"*480)',
+ "tv_mac_ipv4_sctp": 'Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.2", dst="192.168.0.3")/SCTP(sport=1026, dport=1027)/("X"*480)',
+ "tv_mac_ipv6_sctp": 'Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::2", dst="2001::3")/SCTP(sport=1026, dport=1027)/("X"*480)',
+}
+
+command_line_option_with_timestamp = {
+ "casename": "command_line_option_with_timestamp",
+ "port_id": 0,
+ "test": [
+ {
+ "send_packet": tv_packets_basic["tv_mac"],
+ "action": {"check_timestamp": "ether"},
+ },
+ {
+ "send_packet": tv_packets_basic["tv_mac_ipv4"],
+ "action": {"check_timestamp": "ipv4"},
+ },
+ {
+ "send_packet": tv_packets_basic["tv_mac_ipv6"],
+ "action": {"check_timestamp": "ipv6"},
+ },
+ {
+ "send_packet": tv_packets_basic["tv_mac_ipv4_udp"],
+ "action": {"check_timestamp": "ipv4-udp"},
+ },
+ {
+ "send_packet": tv_packets_basic["tv_mac_ipv6_udp"],
+ "action": {"check_timestamp": "ipv6-udp"},
+ },
+ {
+ "send_packet": tv_packets_basic["tv_mac_ipv4_tcp"],
+ "action": {"check_timestamp": "ipv4-tcp"},
+ },
+ {
+ "send_packet": tv_packets_basic["tv_mac_ipv6_tcp"],
+ "action": {"check_timestamp": "ipv6-tcp"},
+ },
+ {
+ "send_packet": tv_packets_basic["tv_mac_ipv4_sctp"],
+ "action": {"check_timestamp": "ipv4-sctp"},
+ },
+ {
+ "send_packet": tv_packets_basic["tv_mac_ipv6_sctp"],
+ "action": {"check_timestamp": "ipv6-sctp"},
+ },
+ ],
+}
+
+
+class IAVFTimestampConfigureTest(TestCase):
+ def set_up_all(self):
+ """
+ Run at the start of each test suite.
+ Generic filter prerequisites
+ """
+ self.verify(
+ self.nic in ["ICE_25G-E810C_SFP", "ICE_100G-E810C_QSFP"],
+ "%s nic not support vf timestamp" % self.nic,
+ )
+ self.dut_ports = self.dut.get_ports(self.nic)
+ self.ports_socket = self.dut.get_numa_id(self.dut_ports[0])
+ # Verify that enough ports are available
+ self.verify(len(self.dut_ports) >= 1, "Insufficient ports")
+ self.tester_port0 = self.tester.get_local_port(self.dut_ports[0])
+ self.tester_iface0 = self.tester.get_interface(self.tester_port0)
+ self.pkt = Packet()
+ self.pmdout = PmdOutput(self.dut)
+
+ self.vf_driver = self.get_suite_cfg()["vf_driver"]
+ if self.vf_driver is None:
+ self.vf_driver = "vfio-pci"
+ self.pf0_intf = self.dut.ports_info[self.dut_ports[0]]["intf"]
+ self.create_vf()
+
+ self.current_saved_timestamp = ""
+ self.timestamp_records = {}
+ self.handle_output_methods = {"check_timestamp": self.check_timestamp}
+ self.error_msgs = []
+
+ def set_up(self):
+ """
+ Run before each test case.
+ """
+ pass
+
+ def launch_testpmd(self, line_option=""):
+ """
+ start testpmd
+ """
+ # Prepare testpmd EAL and parameters
+ self.pmdout.start_testpmd(
+ param=line_option,
+ eal_param=f"-a {self.vf0_pci}",
+ socket=self.ports_socket,
+ )
+ # test link status
+ res = self.pmdout.wait_link_status_up("all", timeout=15)
+ self.verify(res is True, "some port link status is down")
+ self.pmdout.execute_cmd("set fwd rxonly")
+ self.pmdout.execute_cmd("set verbose 1")
+ self.pmdout.execute_cmd("start")
+
+ def create_vf(self):
+ self.dut.bind_interfaces_linux("ice")
+ self.dut.generate_sriov_vfs_by_port(self.dut_ports[0], 1)
+ self.sriov_vfs_port = self.dut.ports_info[self.dut_ports[0]]["vfs_port"]
+ self.dut.send_expect("ifconfig %s up" % self.pf0_intf, "# ")
+ self.dut.send_expect(
+ "ip link set %s vf 0 mac 00:11:22:33:44:55" % self.pf0_intf, "#"
+ )
+ self.vf0_pci = self.sriov_vfs_port[0].pci
+ try:
+ for port in self.sriov_vfs_port:
+ port.bind_driver(self.vf_driver)
+ except Exception as e:
+ self.destroy_vf()
+ raise Exception(e)
+
+ def destroy_vf(self):
+ self.dut.send_expect("quit", "# ", 60)
+ time.sleep(2)
+ self.dut.destroy_sriov_vfs_by_port(self.dut_ports[0])
+
+ def check_timestamp(self, out, key="", port_id=0):
+ timestamps = self.get_timestamp(out, port_id)
+ if len(timestamps) == 0:
+ self.logger.info("There is no timestamp value")
+ return
+ # compare numerically: the captured values are strings, and a
+ # lexicographic comparison would give wrong results
+ values = [int(item, 0) for item in timestamps]
+ if key:
+ self.timestamp_records[key] = values
+ # the timestamps within one burst must be strictly increasing
+ previous = values[0]
+ for item in values[1:]:
+ if item <= previous:
+ error_msg = (
+ "timestamp value {} should be larger than "
+ "the previous timestamp {}".format(item, previous)
+ )
+ self.logger.error(error_msg)
+ self.error_msgs.append(error_msg)
+ previous = item
+
+ def get_timestamp(self, out, port_id=0):
+ timestamp_pattern = re.compile(r".*timestamp\s(\w+)")
+ timestamp_infos = timestamp_pattern.findall(out)
+ self.logger.info("timestamp_infos: {}".format(timestamp_infos))
+ return timestamp_infos
+
+ def send_pkt_get_output(self, pkts, port_id=0, count=3):
+ self.logger.info("----------send packet-------------")
+ self.logger.info("{}".format(pkts))
+ self.pkt.update_pkt(pkts)
+ self.pkt.send_pkt(crb=self.tester, tx_port=self.tester_iface0, count=count)
+ out = self.pmdout.get_output(timeout=1)
+ pkt_pattern = (
+ r"port\s%d/queue\s\d+:\sreceived\s(\d+)\spackets.+?\n.*length=\d{2,}\s"
+ % port_id
+ )
+ received_data = re.findall(pkt_pattern, out)
+ # findall with a single group returns the captured counts directly
+ received_pkts = sum(map(int, received_data))
+ self.logger.info("received packets: {}".format(received_pkts))
+ return out
+
+ def send_pkt_get_timestamp(self, pkts, port_id=0, count=3):
+ output = self.send_pkt_get_output(pkts, port_id, count)
+ timestamps = self.get_timestamp(output, port_id)
+ return timestamps
+
+ def handle_actions(self, output, actions, port_id=0):
+ actions = [actions]
+ for action in actions: # [{}]
+ self.logger.info("action: {}\n".format(action))
+ for method in action: # e.g. {'check_timestamp': 'ether'}
+ if method in self.handle_output_methods:
+ self.handle_output_methods[method](
+ output, action[method], port_id=port_id
+ )
+
+ def handle_tests(self, tests, port_id=0):
+ out = ""
+ for test in tests:
+ if "send_packet" in test:
+ out = self.send_pkt_get_output(test["send_packet"], port_id)
+ if "action" in test:
+ self.handle_actions(out, test["action"])
+
+ def handle_timestamp_case(self, case_info):
+ # clear timestamp_records before each case
+ self.timestamp_records = {}
+ self.error_msgs = []
+ self.current_saved_timestamp = ""
+ port_id = case_info.get("port_id", 0)
+ # handle tests
+ tests = case_info["test"]
+ self.logger.info("------------handle test--------------")
+ self.handle_tests(tests, port_id)
+ if self.error_msgs:
+ self.verify(False, str(self.error_msgs[:500]))
+
+ def test_iavf_without_timestamp(self):
+ self.launch_testpmd(line_option="--rxq=16 --txq=16")
+ self.handle_timestamp_case(command_line_option_with_timestamp)
+
+ def test_iavf_single_queue_with_timestamp(self):
+ self.launch_testpmd(line_option="--enable-rx-timestamp")
+ self.handle_timestamp_case(command_line_option_with_timestamp)
+
+ def test_iavf_multi_queues_with_timestamp(self):
+ self.launch_testpmd(line_option="--rxq=16 --txq=16 --enable-rx-timestamp")
+ self.handle_timestamp_case(command_line_option_with_timestamp)
+
+ def tear_down(self):
+ """
+ Run after each test case.
+ """
+ self.pmdout.execute_cmd("quit", "#")
+
+ def tear_down_all(self):
+ """
+ Run after each test suite.
+ """
+ self.destroy_vf()
+ self.dut.kill_all()
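
The timestamp extraction performed by get_timestamp() above can be exercised standalone. The testpmd verbose fragment below is fabricated, and the exact field layout may differ between DPDK versions:

```python
import re

# Fabricated fragment in the shape of testpmd "set verbose 1" output;
# the real layout may vary between DPDK versions.
sample_output = (
    "port 0/queue 5: received 1 packets\n"
    "  src=3C:FD:FE:B5:A8:C8 - dst=00:11:22:33:44:55 - timestamp 1658998112000001000\n"
    "port 0/queue 5: received 1 packets\n"
    "  src=3C:FD:FE:B5:A8:C8 - dst=00:11:22:33:44:55 - timestamp 1658998112000002000\n"
)

# Same capture as get_timestamp(), followed by a numeric (not
# lexicographic) comparison to confirm the burst increments.
timestamps = re.findall(r"timestamp\s+(\w+)", sample_output)
values = [int(t, 0) for t in timestamps]
assert values == [1658998112000001000, 1658998112000002000]
assert all(b > a for a, b in zip(values, values[1:]))
```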
--
2.25.1