* Re: [dts] [PATCH V1] switch_filter: add reload ice code
2021-04-08 8:15 [dts] [PATCH V1] switch_filter: add reload ice code Zhimin Huang
@ 2021-04-08 8:33 ` Huang, ZhiminX
2021-04-08 8:48 ` Fu, Qi
From: Huang, ZhiminX @ 2021-04-08 8:33 UTC (permalink / raw)
To: dts
[-- Attachment #1: Type: text/plain, Size: 307 bytes --]
> -----Original Message-----
> From: Zhimin Huang <zhiminx.huang@intel.com>
> Sent: Thursday, April 8, 2021 4:16 PM
> To: dts@dpdk.org
> Cc: Huang, ZhiminX <zhiminx.huang@intel.com>
> Subject: [dts] [PATCH V1] switch_filter: add reload ice code
>
Tested-by: Huang Zhimin <zhiminx.huang@intel.com>
[-- Attachment #2: CVLDCFSwitchFilterPPPOETest.log --]
[-- Type: application/octet-stream, Size: 17666 bytes --]
08/04/2021 15:35:18 dts:
TEST SUITE : CVLDCFSwitchFilterPPPOETest
08/04/2021 15:35:18 dts: NIC : columbiaville_100g
08/04/2021 15:35:18 dut.10.240.183.133:
08/04/2021 15:35:18 tester:
08/04/2021 15:35:23 dut.10.240.183.133: modprobe vfio-pci
08/04/2021 15:35:23 dut.10.240.183.133:
08/04/2021 15:35:23 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_pppoe_ipcp_pay Begin
08/04/2021 15:35:34 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_pppoe_ipcp_pay Begin
08/04/2021 15:35:34 CVLDCFSwitchFilterPPPOETest: Test Case test_mac_pppoe_ipcp_pay Begin
08/04/2021 15:35:34 dut.10.240.183.133:
08/04/2021 15:35:34 tester:
08/04/2021 15:35:34 dut.10.240.183.133: rmmod ice
08/04/2021 15:35:37 dut.10.240.183.133:
08/04/2021 15:35:37 dut.10.240.183.133: modprobe ice
08/04/2021 15:35:39 dut.10.240.183.133:
08/04/2021 15:35:39 dut.10.240.183.133: ethtool -i ens801f0
08/04/2021 15:35:39 dut.10.240.183.133: driver: ice
version: 1.5.0_rc37_7_gc94639c4
firmware-version: 2.50 0x80006b7a 1.2916.0
expansion-rom-version:
bus-info: 0000:81:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
08/04/2021 15:35:42 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.0/vendor
08/04/2021 15:35:42 dut.10.240.183.133: 0x8086
08/04/2021 15:35:42 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.0/device
08/04/2021 15:35:42 dut.10.240.183.133: 0x1889
08/04/2021 15:35:42 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.0/vendor
08/04/2021 15:35:42 dut.10.240.183.133: 0x8086
08/04/2021 15:35:42 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.0/device
08/04/2021 15:35:42 dut.10.240.183.133: 0x1889
08/04/2021 15:35:42 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.1/vendor
08/04/2021 15:35:42 dut.10.240.183.133: 0x8086
08/04/2021 15:35:42 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.1/device
08/04/2021 15:35:42 dut.10.240.183.133: 0x1889
08/04/2021 15:35:42 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.1/vendor
08/04/2021 15:35:42 dut.10.240.183.133: 0x8086
08/04/2021 15:35:42 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.1/device
08/04/2021 15:35:42 dut.10.240.183.133: 0x1889
08/04/2021 15:35:42 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.2/vendor
08/04/2021 15:35:43 dut.10.240.183.133: 0x8086
08/04/2021 15:35:43 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.2/device
08/04/2021 15:35:43 dut.10.240.183.133: 0x1889
08/04/2021 15:35:43 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.2/vendor
08/04/2021 15:35:43 dut.10.240.183.133: 0x8086
08/04/2021 15:35:43 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.2/device
08/04/2021 15:35:43 dut.10.240.183.133: 0x1889
08/04/2021 15:35:43 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.3/vendor
08/04/2021 15:35:43 dut.10.240.183.133: 0x8086
08/04/2021 15:35:43 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.3/device
08/04/2021 15:35:43 dut.10.240.183.133: 0x1889
08/04/2021 15:35:43 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.3/vendor
08/04/2021 15:35:43 dut.10.240.183.133: 0x8086
08/04/2021 15:35:43 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.3/device
08/04/2021 15:35:43 dut.10.240.183.133: 0x1889
08/04/2021 15:35:43 dut.10.240.183.133: ip link set ens801f0 vf 0 trust on
08/04/2021 15:35:43 dut.10.240.183.133:
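The setup sequence above (reload the ice driver, re-read the VF vendor/device IDs, then trust VF 0 so it can act as the DCF) is the "reload ice" code this patch adds. A minimal sketch of that sequence, in the style of the suite; the helper names and the interface name `ens801f0` are taken from this log, not from the actual DTS API:

```python
import subprocess

def build_reload_ice_cmds(pf_iface="ens801f0", dcf_vf=0):
    """Return the shell commands used above to reload the ice driver
    and re-enable trust on the DCF VF. The interface name is an
    assumption taken from this log; adjust for the machine under test."""
    return [
        "rmmod ice",
        "modprobe ice",
        f"ethtool -i {pf_iface}",
        f"ip link set {pf_iface} vf {dcf_vf} trust on",
    ]

def run_cmds(cmds):
    # Requires root and real hardware; shown for illustration only.
    for cmd in cmds:
        subprocess.run(cmd, shell=True, check=True)
```

On the test bed, `run_cmds(build_reload_ice_cmds())` would reproduce the commands logged above before testpmd is launched.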
08/04/2021 15:35:53 dut.10.240.183.133: x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1,2,3,4 -n 4 -a 0000:81:01.0,cap=dcf -a 0000:81:01.1 --file-prefix=dpdk_5224_20210408153459 -- -i
08/04/2021 15:36:06 dut.10.240.183.133: EAL: Detected 72 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/dpdk_5224_20210408153459/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available 2048 kB hugepages reported
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_iavf (8086:1889) device: 0000:81:01.0 (socket 1)
EAL: Releasing pci mapped resource for 0000:81:01.0
EAL: Calling pci_unmap_resource for 0000:81:01.0 at 0x2200000000
EAL: Calling pci_unmap_resource for 0000:81:01.0 at 0x2200020000
EAL: using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_ice_dcf (8086:1889) device: 0000:81:01.0 (socket 1)
ice_load_pkg_type(): Active package is: 1.3.28.0, ICE COMMS Package (double VLAN mode)
EAL: Probe PCI driver: net_iavf (8086:1889) device: 0000:81:01.1 (socket 1)
Interactive-mode selected
Failed to set MTU to 1500 for port 0
testpmd: create a new mbuf pool <mb_pool_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=171456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
ice_dcf_init_rss(): RSS is enabled by PF by default
ice_dcf_configure_queues(): request RXDID == 16 in Queue[0]
Port 0: FA:FC:91:15:ED:65
Configuring Port 1 (socket 1)
iavf_init_rss(): RSS is enabled by PF by default
iavf_configure_queues(): request RXDID[22] in Queue[0]
Port 1: link state change event
Port 1: link state change event
Port 1: 2A:24:09:A3:49:12
Checking link statuses...
Done
08/04/2021 15:36:06 dut.10.240.183.133: set portlist 1
08/04/2021 15:36:07 dut.10.240.183.133:
previous number of forwarding ports 2 - changed to number of configured ports 1
Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
08/04/2021 15:36:07 dut.10.240.183.133: set fwd rxonly
08/04/2021 15:36:07 dut.10.240.183.133:
Set rxonly packet forwarding mode
08/04/2021 15:36:07 dut.10.240.183.133: set verbose 1
08/04/2021 15:36:07 dut.10.240.183.133:
Change verbose level from 0 to 1
08/04/2021 15:36:07 dut.10.240.183.133: flow validate 0 ingress pattern eth dst is 00:11:22:33:44:55 / pppoes seid is 3 / pppoe_proto_id is 0x8021 / end actions vf id 1 / end
08/04/2021 15:36:07 dut.10.240.183.133:
Flow rule validated
08/04/2021 15:36:07 dut.10.240.183.133: flow create 0 ingress pattern eth dst is 00:11:22:33:44:55 / pppoes seid is 3 / pppoe_proto_id is 0x8021 / end actions vf id 1 / end
08/04/2021 15:36:07 dut.10.240.183.133:
Flow rule #0 created
08/04/2021 15:36:07 dut.10.240.183.133: flow list 0
08/04/2021 15:36:07 dut.10.240.183.133:
ID Group Prio Attr Rule
0 0 0 i-- ETH PPPOES PPPOE_PROTO_ID => VF
08/04/2021 15:36:07 dut.10.240.183.133: start
08/04/2021 15:36:07 dut.10.240.183.133:
rxonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 1 streams:
RX P=1/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
rxonly packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=0 - RX free threshold=32
RX threshold registers: pthresh=8 hthresh=8 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=0 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=512 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=512 - TX free threshold=32
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=32
08/04/2021 15:36:12 dut.10.240.183.133: stop
08/04/2021 15:36:12 dut.10.240.183.133:
Telling cores to stop...
Waiting for lcores to finish...
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
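The suite decides pass/fail from these counters: a packet matching the PPPoE rule is forwarded to VF 1 (RX-packets: 1 in the first `stop` output), while a non-matching packet is not (RX-packets: 0 in the next run). A minimal sketch of that check, assuming the forward-statistics format shown above; the helper name is illustrative, not the real DTS function:

```python
import re

# Matches the counter line inside a testpmd "Forward statistics" block.
STATS_RE = re.compile(
    r"RX-packets:\s*(\d+)\s*RX-dropped:\s*(\d+)\s*RX-total:\s*(\d+)")

def parse_forward_stats(output):
    """Extract (rx_packets, rx_dropped, rx_total) from the first
    forward-statistics block of testpmd's `stop` output."""
    m = STATS_RE.search(output)
    if m is None:
        raise ValueError("no forward statistics found in output")
    return tuple(int(x) for x in m.groups())

sample = """
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
"""
assert parse_forward_stats(sample) == (1, 0, 1)  # matched packet reached VF 1
```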
08/04/2021 15:36:12 dut.10.240.183.133: start
08/04/2021 15:36:13 dut.10.240.183.133:
rxonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 1 streams:
RX P=1/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
rxonly packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=0 - RX free threshold=32
RX threshold registers: pthresh=8 hthresh=8 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=0 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=512 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=512 - TX free threshold=32
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=32
08/04/2021 15:36:18 dut.10.240.183.133: stop
08/04/2021 15:36:18 dut.10.240.183.133:
Telling cores to stop...
Waiting for lcores to finish...
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
08/04/2021 15:36:18 dut.10.240.183.133: flow destroy 0 rule 0
08/04/2021 15:36:18 dut.10.240.183.133:
Flow rule #0 destroyed
08/04/2021 15:36:18 dut.10.240.183.133: flow list 0
08/04/2021 15:36:18 dut.10.240.183.133:
08/04/2021 15:36:18 dut.10.240.183.133: start
08/04/2021 15:36:18 dut.10.240.183.133:
rxonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 1 streams:
RX P=1/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
rxonly packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=0 - RX free threshold=32
RX threshold registers: pthresh=8 hthresh=8 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=0 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=512 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=512 - TX free threshold=32
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=32
08/04/2021 15:36:23 dut.10.240.183.133: stop
08/04/2021 15:36:23 dut.10.240.183.133:
Telling cores to stop...
Waiting for lcores to finish...
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
08/04/2021 15:36:23 CVLDCFSwitchFilterPPPOETest: Test Case test_mac_pppoe_ipcp_pay Result PASSED:
08/04/2021 15:36:23 dut.10.240.183.133: flow flush 0
08/04/2021 15:36:23 dut.10.240.183.133:
08/04/2021 15:36:23 dut.10.240.183.133: clear port stats all
08/04/2021 15:36:23 dut.10.240.183.133:
NIC statistics for port 0 cleared
NIC statistics for port 1 cleared
08/04/2021 15:36:23 dut.10.240.183.133: quit
08/04/2021 15:36:25 dut.10.240.183.133:
Stopping port 0...
Stopping ports...
Done
Stopping port 1...
Stopping ports...
Done
Shutting down port 0...
Closing ports...
Port 0 is closed
Done
Shutting down port 1...
Closing ports...
Port 1 is closed
Done
Bye...
08/04/2021 15:36:25 dut.10.240.183.133: kill_all: called by dut and prefix list has value.
08/04/2021 15:37:33 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_pppoe_ipv4_pay_ip_address Begin
08/04/2021 15:37:33 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_pppoe_ipv4_pay_session_id_proto_id Begin
08/04/2021 15:37:33 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_pppoe_ipv4_tcp_pay Begin
08/04/2021 15:37:33 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_pppoe_ipv4_tcp_pay_non_src_dst_port Begin
08/04/2021 15:37:33 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_pppoe_ipv4_udp_pay Begin
08/04/2021 15:37:33 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_pppoe_ipv4_udp_pay_non_src_dst_port Begin
08/04/2021 15:37:33 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_pppoe_ipv6_pay_ip_address Begin
08/04/2021 15:37:33 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_pppoe_ipv6_pay_session_id_proto_id Begin
08/04/2021 15:37:33 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_pppoe_ipv6_tcp_pay Begin
08/04/2021 15:37:33 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_pppoe_ipv6_tcp_pay_non_src_dst_port Begin
08/04/2021 15:37:33 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_pppoe_ipv6_udp_pay Begin
08/04/2021 15:37:33 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_pppoe_ipv6_udp_pay_non_src_dst_port Begin
08/04/2021 15:37:33 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_pppoe_lcp_pay Begin
08/04/2021 15:37:33 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_vlan_pppoe_ipcp_pay Begin
08/04/2021 15:37:33 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_vlan_pppoe_ipv4_pay_ip_address Begin
08/04/2021 15:37:33 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_vlan_pppoe_ipv4_pay_session_id_proto_id Begin
08/04/2021 15:37:34 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_vlan_pppoe_ipv4_tcp_pay Begin
08/04/2021 15:37:34 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_vlan_pppoe_ipv4_tcp_pay_non_src_dst_port Begin
08/04/2021 15:37:34 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_vlan_pppoe_ipv4_udp_pay Begin
08/04/2021 15:37:34 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_vlan_pppoe_ipv4_udp_pay_non_src_dst_port Begin
08/04/2021 15:37:34 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_vlan_pppoe_ipv6_pay_ip_address Begin
08/04/2021 15:37:34 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_vlan_pppoe_ipv6_pay_session_id_proto_id Begin
08/04/2021 15:37:34 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_vlan_pppoe_ipv6_tcp_pay Begin
08/04/2021 15:37:34 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_vlan_pppoe_ipv6_tcp_pay_non_src_dst_port Begin
08/04/2021 15:37:34 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_vlan_pppoe_ipv6_udp_pay Begin
08/04/2021 15:37:34 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_vlan_pppoe_ipv6_udp_pay_non_src_dst_port Begin
08/04/2021 15:37:34 CVLDCFSwitchFilterPPPOETest: Rerun Test Case test_mac_vlan_pppoe_lcp_pay Begin
08/04/2021 15:37:34 dts:
TEST SUITE ENDED: CVLDCFSwitchFilterPPPOETest
[-- Attachment #3: CVLDCFSwitchFilterTest.log --]
[-- Type: application/octet-stream, Size: 40459 bytes --]
08/04/2021 15:27:12 dts:
TEST SUITE : CVLDCFSwitchFilterTest
08/04/2021 15:27:12 dts: NIC : columbiaville_100g
08/04/2021 15:27:12 dut.10.240.183.133:
08/04/2021 15:27:13 tester:
08/04/2021 15:27:18 dut.10.240.183.133: modprobe vfio-pci
08/04/2021 15:27:18 dut.10.240.183.133:
08/04/2021 15:27:18 CVLDCFSwitchFilterTest: Rerun Test Case test_add_existing_rules_but_with_different_vfs Begin
08/04/2021 15:27:37 CVLDCFSwitchFilterTest: Rerun Test Case test_add_existing_rules_with_the_same_vfs Begin
08/04/2021 15:27:37 CVLDCFSwitchFilterTest: Rerun Test Case test_add_two_rules_with_different_input_set_different_vf_id Begin
08/04/2021 15:27:37 CVLDCFSwitchFilterTest: Rerun Test Case test_add_two_rules_with_different_input_set_same_vf_id Begin
08/04/2021 15:27:37 CVLDCFSwitchFilterTest: Rerun Test Case test_add_two_rules_with_one_rule_input_set_included_in_the_other Begin
08/04/2021 15:27:38 CVLDCFSwitchFilterTest: Rerun Test Case test_dcf_stop_start Begin
08/04/2021 15:27:38 CVLDCFSwitchFilterTest: Rerun Test Case test_ethertype_filter_ipv6 Begin
08/04/2021 15:27:38 CVLDCFSwitchFilterTest: Rerun Test Case test_ethertype_filter_pppod Begin
08/04/2021 15:27:38 CVLDCFSwitchFilterTest: Rerun Test Case test_ethertype_filter_pppoe Begin
08/04/2021 15:27:38 CVLDCFSwitchFilterTest: Rerun Test Case test_fwd_with_multi_vfs Begin
08/04/2021 15:27:38 CVLDCFSwitchFilterTest: Rerun Test Case test_fwd_with_single_vf Begin
08/04/2021 15:27:38 CVLDCFSwitchFilterTest: Rerun Test Case test_ip_multicast Begin
08/04/2021 15:27:38 CVLDCFSwitchFilterTest: Rerun Test Case test_l2_multicast Begin
08/04/2021 15:27:38 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_drop_action Begin
08/04/2021 15:27:41 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_drop_action Begin
08/04/2021 15:27:41 CVLDCFSwitchFilterTest: Test Case test_mac_drop_action Begin
08/04/2021 15:27:41 dut.10.240.183.133:
08/04/2021 15:27:41 tester:
08/04/2021 15:27:41 dut.10.240.183.133: rmmod ice
08/04/2021 15:27:44 dut.10.240.183.133:
08/04/2021 15:27:44 dut.10.240.183.133: modprobe ice
08/04/2021 15:27:46 dut.10.240.183.133:
08/04/2021 15:27:46 dut.10.240.183.133: ethtool -i ens801f0
08/04/2021 15:27:46 dut.10.240.183.133: driver: ice
version: 1.5.0_rc37_7_gc94639c4
firmware-version: 2.50 0x80006b7a 1.2916.0
expansion-rom-version:
bus-info: 0000:81:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
08/04/2021 15:27:48 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.0/vendor
08/04/2021 15:27:49 dut.10.240.183.133: 0x8086
08/04/2021 15:27:49 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.0/device
08/04/2021 15:27:49 dut.10.240.183.133: 0x1889
08/04/2021 15:27:49 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.0/vendor
08/04/2021 15:27:49 dut.10.240.183.133: 0x8086
08/04/2021 15:27:49 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.0/device
08/04/2021 15:27:49 dut.10.240.183.133: 0x1889
08/04/2021 15:27:49 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.1/vendor
08/04/2021 15:27:49 dut.10.240.183.133: 0x8086
08/04/2021 15:27:49 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.1/device
08/04/2021 15:27:49 dut.10.240.183.133: 0x1889
08/04/2021 15:27:49 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.1/vendor
08/04/2021 15:27:49 dut.10.240.183.133: 0x8086
08/04/2021 15:27:49 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.1/device
08/04/2021 15:27:49 dut.10.240.183.133: 0x1889
08/04/2021 15:27:49 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.2/vendor
08/04/2021 15:27:49 dut.10.240.183.133: 0x8086
08/04/2021 15:27:49 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.2/device
08/04/2021 15:27:49 dut.10.240.183.133: 0x1889
08/04/2021 15:27:49 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.2/vendor
08/04/2021 15:27:50 dut.10.240.183.133: 0x8086
08/04/2021 15:27:50 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.2/device
08/04/2021 15:27:50 dut.10.240.183.133: 0x1889
08/04/2021 15:27:50 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.3/vendor
08/04/2021 15:27:50 dut.10.240.183.133: 0x8086
08/04/2021 15:27:50 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.3/device
08/04/2021 15:27:50 dut.10.240.183.133: 0x1889
08/04/2021 15:27:50 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.3/vendor
08/04/2021 15:27:50 dut.10.240.183.133: 0x8086
08/04/2021 15:27:50 dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.3/device
08/04/2021 15:27:50 dut.10.240.183.133: 0x1889
08/04/2021 15:27:50 dut.10.240.183.133: ip link set ens801f0 vf 0 trust on
08/04/2021 15:27:50 dut.10.240.183.133:
08/04/2021 15:28:00 dut.10.240.183.133: ip link set ens801f0 vf 1 mac "00:11:22:33:44:55"
08/04/2021 15:28:00 dut.10.240.183.133:
08/04/2021 15:28:00 dut.10.240.183.133: x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1,2,3,4 -n 4 -a 0000:81:01.0,cap=dcf -a 0000:81:01.1 --file-prefix=dpdk_4171_20210408152654 -- -i
08/04/2021 15:28:13 dut.10.240.183.133: EAL: Detected 72 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/dpdk_4171_20210408152654/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available 2048 kB hugepages reported
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_iavf (8086:1889) device: 0000:81:01.0 (socket 1)
EAL: Releasing pci mapped resource for 0000:81:01.0
EAL: Calling pci_unmap_resource for 0000:81:01.0 at 0x2200000000
EAL: Calling pci_unmap_resource for 0000:81:01.0 at 0x2200020000
EAL: using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_ice_dcf (8086:1889) device: 0000:81:01.0 (socket 1)
ice_load_pkg_type(): Active package is: 1.3.28.0, ICE COMMS Package (double VLAN mode)
EAL: Probe PCI driver: net_iavf (8086:1889) device: 0000:81:01.1 (socket 1)
Interactive-mode selected
Failed to set MTU to 1500 for port 0
testpmd: create a new mbuf pool <mb_pool_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=171456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
ice_dcf_init_rss(): RSS is enabled by PF by default
ice_dcf_configure_queues(): request RXDID == 16 in Queue[0]
Port 0: 7E:FC:D8:9D:41:67
Configuring Port 1 (socket 1)
iavf_init_rss(): RSS is enabled by PF by default
iavf_configure_queues(): request RXDID[22] in Queue[0]
Port 1: link state change event
Port 1: link state change event
Port 1: 00:11:22:33:44:55
Checking link statuses...
Done
08/04/2021 15:28:13 dut.10.240.183.133: set portlist 1
08/04/2021 15:28:13 dut.10.240.183.133:
previous number of forwarding ports 2 - changed to number of configured ports 1
Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
08/04/2021 15:28:13 dut.10.240.183.133: set fwd rxonly
08/04/2021 15:28:14 dut.10.240.183.133:
Set rxonly packet forwarding mode
08/04/2021 15:28:14 dut.10.240.183.133: set verbose 1
08/04/2021 15:28:14 dut.10.240.183.133:
Change verbose level from 0 to 1
08/04/2021 15:28:14 dut.10.240.183.133: flow validate 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.1 / end actions drop / end
08/04/2021 15:28:14 dut.10.240.183.133:
Flow rule validated
08/04/2021 15:28:14 dut.10.240.183.133: flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.1 / end actions drop / end
08/04/2021 15:28:14 dut.10.240.183.133:
Flow rule #0 created
08/04/2021 15:28:14 dut.10.240.183.133: flow list 0
08/04/2021 15:28:14 dut.10.240.183.133:
ID Group Prio Attr Rule
0 0 0 i-- ETH IPV4 => DROP
08/04/2021 15:28:14 dut.10.240.183.133: start
08/04/2021 15:28:14 dut.10.240.183.133:
rxonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 1 streams:
RX P=1/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
rxonly packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=0 - RX free threshold=32
RX threshold registers: pthresh=8 hthresh=8 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=0 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=512 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=512 - TX free threshold=32
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=32
08/04/2021 15:28:19 dut.10.240.183.133: stop
08/04/2021 15:28:19 dut.10.240.183.133:
Telling cores to stop...
Waiting for lcores to finish...
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
08/04/2021 15:28:19 dut.10.240.183.133: start
08/04/2021 15:28:20 dut.10.240.183.133:
rxonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 1 streams:
RX P=1/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
rxonly packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=0 - RX free threshold=32
RX threshold registers: pthresh=8 hthresh=8 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=0 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=512 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=512 - TX free threshold=32
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=32
08/04/2021 15:28:25 dut.10.240.183.133: stop
08/04/2021 15:28:25 dut.10.240.183.133:
Telling cores to stop...
Waiting for lcores to finish...
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
08/04/2021 15:28:25 dut.10.240.183.133: flow destroy 0 rule 0
08/04/2021 15:28:25 dut.10.240.183.133:
Flow rule #0 destroyed
08/04/2021 15:28:25 dut.10.240.183.133: flow list 0
08/04/2021 15:28:25 dut.10.240.183.133:
08/04/2021 15:28:25 dut.10.240.183.133: start
08/04/2021 15:28:25 dut.10.240.183.133:
rxonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 1 streams:
RX P=1/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
rxonly packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=0 - RX free threshold=32
RX threshold registers: pthresh=8 hthresh=8 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=0 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=512 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=512 - TX free threshold=32
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=32
08/04/2021 15:28:30 dut.10.240.183.133: stop
08/04/2021 15:28:30 dut.10.240.183.133:
Telling cores to stop...
Waiting for lcores to finish...
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
08/04/2021 15:28:30 dut.10.240.183.133: flow flush 0
08/04/2021 15:28:30 dut.10.240.183.133:
08/04/2021 15:28:30 dut.10.240.183.133: clear port stats all
08/04/2021 15:28:30 dut.10.240.183.133:
NIC statistics for port 0 cleared
NIC statistics for port 1 cleared
08/04/2021 15:28:30 dut.10.240.183.133: flow validate 0 ingress pattern eth / ipv4 dst spec 224.0.0.0 dst mask 240.0.0.0 / end actions drop / end
08/04/2021 15:28:30 dut.10.240.183.133:
Flow rule validated
08/04/2021 15:28:30 dut.10.240.183.133: flow create 0 ingress pattern eth / ipv4 dst spec 224.0.0.0 dst mask 240.0.0.0 / end actions drop / end
08/04/2021 15:28:30 dut.10.240.183.133:
Flow rule #0 created
08/04/2021 15:28:30 dut.10.240.183.133: flow list 0
08/04/2021 15:28:31 dut.10.240.183.133:
ID Group Prio Attr Rule
0 0 0 i-- ETH IPV4 => DROP
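The drop rule above uses `dst spec 224.0.0.0 dst mask 240.0.0.0`, i.e. it matches any IPv4 multicast destination (224.0.0.0/4), which is why the multicast packet shows up as RX-dropped while the later unicast packet is forwarded. How an rte_flow spec/mask pair classifies an address can be sketched as follows (an illustrative helper, not the DPDK API):

```python
import ipaddress

def matches(addr, spec="224.0.0.0", mask="240.0.0.0"):
    """True if `addr` equals `spec` on the bits selected by `mask`,
    which is how an rte_flow spec/mask pair classifies a field."""
    a = int(ipaddress.IPv4Address(addr))
    s = int(ipaddress.IPv4Address(spec))
    m = int(ipaddress.IPv4Address(mask))
    return (a & m) == (s & m)

assert matches("239.255.255.255")   # multicast -> dropped by the rule
assert not matches("192.168.0.2")   # unicast   -> forwarded
```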
08/04/2021 15:28:31 dut.10.240.183.133: start
08/04/2021 15:28:31 dut.10.240.183.133:
rxonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 1 streams:
RX P=1/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
rxonly packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=0 - RX free threshold=32
RX threshold registers: pthresh=8 hthresh=8 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=0 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=512 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=512 - TX free threshold=32
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=32
08/04/2021 15:28:36 dut.10.240.183.133: stop
08/04/2021 15:28:36 dut.10.240.183.133:
Telling cores to stop...
Waiting for lcores to finish...
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 0 RX-dropped: 1 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 0 RX-dropped: 1 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
08/04/2021 15:28:36 dut.10.240.183.133: start
08/04/2021 15:28:36 dut.10.240.183.133:
rxonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 1 streams:
RX P=1/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
rxonly packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=0 - RX free threshold=32
RX threshold registers: pthresh=8 hthresh=8 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=0 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=512 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=512 - TX free threshold=32
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=32
08/04/2021 15:28:41 dut.10.240.183.133: stop
08/04/2021 15:28:41 dut.10.240.183.133:
Telling cores to ...
Waiting for lcores to finish...
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
08/04/2021 15:28:41 dut.10.240.183.133: flow destroy 0 rule 0
08/04/2021 15:28:41 dut.10.240.183.133:
Flow rule #0 destroyed
08/04/2021 15:28:41 dut.10.240.183.133: flow list 0
08/04/2021 15:28:41 dut.10.240.183.133:
08/04/2021 15:28:41 dut.10.240.183.133: start
08/04/2021 15:28:41 dut.10.240.183.133:
rxonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 1 streams:
RX P=1/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
rxonly packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=0 - RX free threshold=32
RX threshold registers: pthresh=8 hthresh=8 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=0 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=512 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=512 - TX free threshold=32
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=32
08/04/2021 15:28:46 dut.10.240.183.133: stop
08/04/2021 15:28:47 dut.10.240.183.133:
Telling cores to ...
Waiting for lcores to finish...
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
08/04/2021 15:28:47 dut.10.240.183.133: flow flush 0
08/04/2021 15:28:47 dut.10.240.183.133:
08/04/2021 15:28:47 dut.10.240.183.133: clear port stats all
08/04/2021 15:28:47 dut.10.240.183.133:
NIC statistics for port 0 cleared
NIC statistics for port 1 cleared
08/04/2021 15:28:47 dut.10.240.183.133: flow validate 0 ingress pattern eth / ipv4 dst is 192.168.0.1 / nvgre tni is 2 / eth / ipv4 src is 192.168.1.2 dst is 192.168.1.3 / end actions drop / end
08/04/2021 15:28:47 dut.10.240.183.133:
Flow rule validated
08/04/2021 15:28:47 dut.10.240.183.133: flow create 0 ingress pattern eth / ipv4 dst is 192.168.0.1 / nvgre tni is 2 / eth / ipv4 src is 192.168.1.2 dst is 192.168.1.3 / end actions drop / end
08/04/2021 15:28:47 dut.10.240.183.133:
Flow rule #0 created
08/04/2021 15:28:47 dut.10.240.183.133: flow list 0
08/04/2021 15:28:47 dut.10.240.183.133:
ID      Group   Prio    Attr    Rule
0       0       0       i--     ETH IPV4 NVGRE ETH IPV4 => DROP
08/04/2021 15:28:47 dut.10.240.183.133: start
08/04/2021 15:28:47 dut.10.240.183.133:
rxonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 1 streams:
RX P=1/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
rxonly packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=0 - RX free threshold=32
RX threshold registers: pthresh=8 hthresh=8 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=0 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=512 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=512 - TX free threshold=32
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=32
08/04/2021 15:28:52 dut.10.240.183.133: stop
08/04/2021 15:28:52 dut.10.240.183.133:
Telling cores to ...
Waiting for lcores to finish...
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
08/04/2021 15:28:52 dut.10.240.183.133: start
08/04/2021 15:28:52 dut.10.240.183.133:
rxonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 1 streams:
RX P=1/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
rxonly packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=0 - RX free threshold=32
RX threshold registers: pthresh=8 hthresh=8 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=0 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=512 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=512 - TX free threshold=32
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=32
08/04/2021 15:28:57 dut.10.240.183.133: stop
08/04/2021 15:28:58 dut.10.240.183.133:
Telling cores to ...
Waiting for lcores to finish...
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
08/04/2021 15:28:58 dut.10.240.183.133: flow destroy 0 rule 0
08/04/2021 15:28:58 dut.10.240.183.133:
Flow rule #0 destroyed
08/04/2021 15:28:58 dut.10.240.183.133: flow list 0
08/04/2021 15:28:58 dut.10.240.183.133:
08/04/2021 15:28:58 dut.10.240.183.133: start
08/04/2021 15:28:58 dut.10.240.183.133:
rxonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 1 streams:
RX P=1/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
rxonly packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=0 - RX free threshold=32
RX threshold registers: pthresh=8 hthresh=8 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=0 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=512 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=512 - TX free threshold=32
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=32
08/04/2021 15:29:03 dut.10.240.183.133: stop
08/04/2021 15:29:03 dut.10.240.183.133:
Telling cores to ...
Waiting for lcores to finish...
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
08/04/2021 15:29:03 dut.10.240.183.133: flow flush 0
08/04/2021 15:29:03 dut.10.240.183.133:
08/04/2021 15:29:03 dut.10.240.183.133: clear port stats all
08/04/2021 15:29:03 dut.10.240.183.133:
NIC statistics for port 0 cleared
NIC statistics for port 1 cleared
08/04/2021 15:29:03 dut.10.240.183.133: flow validate 0 ingress pattern eth dst is 00:11:22:33:44:55 / vlan tci is 1 / pppoes seid is 3 / pppoe_proto_id is 0x0021 / end actions drop / end
08/04/2021 15:29:03 dut.10.240.183.133:
Flow rule validated
08/04/2021 15:29:03 dut.10.240.183.133: flow create 0 ingress pattern eth dst is 00:11:22:33:44:55 / vlan tci is 1 / pppoes seid is 3 / pppoe_proto_id is 0x0021 / end actions drop / end
08/04/2021 15:29:03 dut.10.240.183.133:
Flow rule #0 created
08/04/2021 15:29:03 dut.10.240.183.133: flow list 0
08/04/2021 15:29:03 dut.10.240.183.133:
ID      Group   Prio    Attr    Rule
0       0       0       i--     ETH VLAN PPPOES PPPOE_PROTO_ID => DROP
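This rule keys on the PPPoE session id (`seid is 3`) and the PPP protocol id (`0x0021`, which is IPv4 over PPP). Both fields sit in the fixed PPPoE session-stage header, so a matching packet can be sanity-checked with a few bytes of parsing. A hand-rolled sketch (the parser below is illustrative, not the DTS implementation):

```python
import struct

def parse_pppoe_session(payload: bytes) -> dict:
    """Decode a PPPoE session-stage header (RFC 2516): version/type byte,
    code byte, 16-bit session id, 16-bit length, then the 2-byte PPP
    protocol id that starts the PPP payload."""
    ver_type, code, seid, length = struct.unpack("!BBHH", payload[:6])
    (ppp_proto,) = struct.unpack("!H", payload[6:8])
    return {"seid": seid, "ppp_proto": ppp_proto}

# A header with session id 3 carrying IPv4 (PPP protocol 0x0021) -- the
# exact combination the rule above drops.
hdr = struct.pack("!BBHHH", 0x11, 0x00, 3, 2, 0x0021)
info = parse_pppoe_session(hdr)
print(info["seid"], hex(info["ppp_proto"]))  # 3 0x21
```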
08/04/2021 15:29:03 dut.10.240.183.133: start
08/04/2021 15:29:04 dut.10.240.183.133:
rxonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 1 streams:
RX P=1/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
rxonly packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=0 - RX free threshold=32
RX threshold registers: pthresh=8 hthresh=8 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=0 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=512 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=512 - TX free threshold=32
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=32
08/04/2021 15:29:09 dut.10.240.183.133: stop
08/04/2021 15:29:09 dut.10.240.183.133:
Telling cores to ...
Waiting for lcores to finish...
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
08/04/2021 15:29:09 dut.10.240.183.133: start
08/04/2021 15:29:09 dut.10.240.183.133:
rxonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 1 streams:
RX P=1/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
rxonly packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=0 - RX free threshold=32
RX threshold registers: pthresh=8 hthresh=8 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=0 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=512 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=512 - TX free threshold=32
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=32
08/04/2021 15:29:14 dut.10.240.183.133: stop
08/04/2021 15:29:14 dut.10.240.183.133:
Telling cores to ...
Waiting for lcores to finish...
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
08/04/2021 15:29:14 CVLDCFSwitchFilterTest: Test Case test_mac_drop_action Result ERROR: Traceback (most recent call last):
  File "/home/autoregression/dts/dts_debug/framework/test_case.py", line 332, in _execute_test_case
    case_obj()
  File "/home/autoregression/dts/dts_debug/framework/test_case.py", line 545, in wrapper
    return func(*args, **kwargs)
  File "tests/TestSuite_cvl_dcf_switch_filter.py", line 1847, in test_mac_drop_action
    self.send_and_check_packets(mismatched_dic)
  File "tests/TestSuite_cvl_dcf_switch_filter.py", line 1119, in send_and_check_packets
    dic["check_func"]["func"](out, dic["check_func"]["param"], dic["expect_results"])
  File "tests/rte_flow_common.py", line 235, in check_vf_rx_packets_number
    verify(pkt_num == expect_pkts, "failed: packets number not correct. expect %s, result %s" % (expect_pkts, pkt_num))
  File "tests/rte_flow_common.py", line 324, in verify
    raise AssertionError(description)
AssertionError: failed: packets number not correct. expect 1, result 0
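Judging from the traceback, the failure path is: `check_vf_rx_packets_number` counts RX packets in testpmd's `stop` summary, and `verify` raises `AssertionError` when the count differs from the expectation (here a mismatched packet was expected to be received once the drop rule no longer applied, but 0 arrived). A hedged reconstruction of those two helpers; the function names come from the traceback, but the regex parsing and simplified parameters are assumptions, not the real `tests/rte_flow_common.py` code:

```python
import re

def verify(passed: bool, description: str) -> None:
    """Mirror of the verify frame in the traceback: raise on failure."""
    if not passed:
        raise AssertionError(description)

def check_vf_rx_packets_number(out: str, port_id: int, expect_pkts: int) -> None:
    """Sketch of the packet-count check: pull RX-packets for the given
    port out of testpmd's 'stop' output and compare with the expectation.
    The parsing and parameter shape are simplified assumptions."""
    m = re.search(
        r"Forward statistics for port %d.*?RX-packets:\s*(\d+)" % port_id,
        out, re.DOTALL)
    pkt_num = int(m.group(1)) if m else 0
    verify(pkt_num == expect_pkts,
           "failed: packets number not correct. expect %s, result %s"
           % (expect_pkts, pkt_num))

# The 'stop' summary just above showed 0 received packets where the test
# expected 1, producing exactly this AssertionError text.
stop_output = """
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
"""
try:
    check_vf_rx_packets_number(stop_output, 1, 1)
except AssertionError as exc:
    print(exc)  # failed: packets number not correct. expect 1, result 0
```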
08/04/2021 15:29:14 dut.10.240.183.133: flow flush 0
08/04/2021 15:29:14 dut.10.240.183.133:
08/04/2021 15:29:14 dut.10.240.183.133: clear port stats all
08/04/2021 15:29:14 dut.10.240.183.133:
NIC statistics for port 0 cleared
NIC statistics for port 1 cleared
08/04/2021 15:29:14 dut.10.240.183.133: quit
08/04/2021 15:29:15 dut.10.240.183.133:
Stopping port 0...
Stopping ports...
Done
Stopping port 1...
Stopping ports...
Done
Shutting down port 0...
Closing ports...
Port 0 is closed
Done
Shutting down port 1...
Closing ports...
Port 1 is closed
Done
Bye...
08/04/2021 15:29:15 dut.10.240.183.133: kill_all: called by dut and prefix list has value.
08/04/2021 15:29:23 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_ipv4_ah Begin
08/04/2021 15:29:24 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_ipv4_esp Begin
08/04/2021 15:29:24 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_ipv4_frag Begin
08/04/2021 15:29:24 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_ipv4_igmp Begin
08/04/2021 15:29:24 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_ipv4_l2tpv3 Begin
08/04/2021 15:29:24 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_ipv4_nat_t_esp Begin
08/04/2021 15:29:24 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_ipv4_nvgre_ipv4_pay Begin
08/04/2021 15:29:24 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_ipv4_nvgre_ipv4_tcp Begin
08/04/2021 15:29:24 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_ipv4_nvgre_ipv4_udp_pay Begin
08/04/2021 15:29:24 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_ipv4_nvgre_mac_ipv4_pay Begin
08/04/2021 15:29:24 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_ipv4_nvgre_mac_ipv4_tcp Begin
08/04/2021 15:29:24 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_ipv4_nvgre_mac_ipv4_udp_pay Begin
08/04/2021 15:29:24 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_ipv4_pay Begin
08/04/2021 15:29:24 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_ipv4_pfcp_node Begin
08/04/2021 15:29:24 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_ipv4_pfcp_session Begin
08/04/2021 15:29:24 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_ipv4_tcp_pay Begin
08/04/2021 15:29:24 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_ipv4_udp_pay Begin
08/04/2021 15:29:24 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_ipv6_ah Begin
08/04/2021 15:29:24 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_ipv6_dstip_tc Begin
08/04/2021 15:29:24 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_ipv6_esp Begin
08/04/2021 15:29:24 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_ipv6_l2tpv3 Begin
08/04/2021 15:29:24 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_ipv6_nat_t_esp Begin
08/04/2021 15:29:25 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_ipv6_pfcp_node Begin
08/04/2021 15:29:25 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_ipv6_pfcp_session Begin
08/04/2021 15:29:25 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_ipv6_srcip_dstip Begin
08/04/2021 15:29:25 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_ipv6_tcp Begin
08/04/2021 15:29:25 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_ipv6_udp_pay Begin
08/04/2021 15:29:25 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_pay Begin
08/04/2021 15:29:25 CVLDCFSwitchFilterTest: Rerun Test Case test_mac_vlan_filter Begin
08/04/2021 15:29:25 CVLDCFSwitchFilterTest: Rerun Test Case test_max_field_vectors Begin
08/04/2021 15:29:25 CVLDCFSwitchFilterTest: Rerun Test Case test_max_vfs Begin
08/04/2021 15:29:25 CVLDCFSwitchFilterTest: Rerun Test Case test_negative_case Begin
08/04/2021 15:29:25 CVLDCFSwitchFilterTest: Rerun Test Case test_udp_port_filter_dhcp_discovery Begin
08/04/2021 15:29:25 CVLDCFSwitchFilterTest: Rerun Test Case test_udp_port_filter_dhcp_offer Begin
08/04/2021 15:29:25 CVLDCFSwitchFilterTest: Rerun Test Case test_udp_port_filter_vxlan Begin
08/04/2021 15:29:25 CVLDCFSwitchFilterTest: Rerun Test Case test_unsupported_pattern_in_os_default Begin
08/04/2021 15:29:25 CVLDCFSwitchFilterTest: Rerun Test Case test_vlan_filter Begin
08/04/2021 15:29:25 dts:
TEST SUITE ENDED: CVLDCFSwitchFilterTest