* Re: [dts] [PATCH V1 5/5] conf/test_case_checklist: add cases not support NIC
2021-09-28 18:05 ` [dts] [PATCH V1 5/5] conf/test_case_checklist: add cases not support NIC Yan Xia
@ 2021-09-28 10:10 ` Chen, LingliX
2021-10-09 7:54 ` Tu, Lijuan
0 siblings, 1 reply; 8+ messages in thread
From: Chen, LingliX @ 2021-09-28 10:10 UTC
To: dts
[-- Attachment #1: Type: text/plain, Size: 409 bytes --]
> -----Original Message-----
> From: dts <dts-bounces@dpdk.org> On Behalf Of Yan Xia
> Sent: Wednesday, September 29, 2021 2:06 AM
> To: dts@dpdk.org
> Cc: Xia, YanX <yanx.xia@intel.com>
> Subject: [dts] [PATCH V1 5/5] conf/test_case_checklist: add cases not support
> NIC
>
> add cases not supported by the NIC
>
> Signed-off-by: Yan Xia <yanx.xia@intel.com>
Tested-by: Yan Xia <yanx.xia@intel.com>
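
For context: conf/test_case_checklist is the skip list that maps test case names to the NICs (and OS/targets) on which DTS should not run them. A minimal Python sketch of that lookup; the entry, field names, and NIC names below are illustrative assumptions, not the entries added by this patch:

# Illustrative sketch only: how a checklist-style entry of the kind this patch
# adds might be consulted. The field names mirror the usual
# conf/test_case_checklist.json layout but are assumptions here.
import json

checklist = json.loads("""
{
    "jumbo_frame_size": [
        {
            "NIC": ["example_nic_a", "example_nic_b"],
            "OS": ["ALL"],
            "Target": ["ALL"],
            "Bug ID": "",
            "Comments": "case not supported on these NICs"
        }
    ]
}
""")

def should_skip(case_name: str, nic: str) -> bool:
    """Return True if the checklist marks case_name as unsupported for nic."""
    for entry in checklist.get(case_name, []):
        if nic in entry["NIC"] or "ALL" in entry["NIC"]:
            return True
    return False

print(should_skip("jumbo_frame_size", "niantic"))        # False -> case runs, as in the log below
print(should_skip("jumbo_frame_size", "example_nic_a"))  # True  -> case skipped on that NIC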
[-- Attachment #2: TestGeneric_flow_api.log --]
[-- Type: application/octet-stream, Size: 75837 bytes --]
29/09/2021 20:11:41 dts:
TEST SUITE : TestGeneric_flow_api
29/09/2021 20:11:41 dts: NIC : niantic
29/09/2021 20:11:41 dut.10.240.183.151:
29/09/2021 20:11:41 tester:
29/09/2021 20:11:45 TestGeneric_flow_api: Test Case test_jumbo_frame_size Begin
29/09/2021 20:11:45 dut.10.240.183.151:
29/09/2021 20:11:46 tester:
29/09/2021 20:11:46 dut.10.240.183.151: kill_all: called by dut and has no prefix list.
29/09/2021 20:11:46 dut.10.240.183.151: x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1,2,3,4,5,6,7,8 -n 4 -a 0000:81:00.0 -a 0000:81:00.1 --file-prefix=dpdk_9655_20210929201128 -- -i --disable-rss --rxq=4 --txq=4 --portmask=0x3 --nb-cores=4 --nb-ports=1 --mbcache=200 --mbuf-size=2048 --max-pkt-len=9600
29/09/2021 20:11:49 dut.10.240.183.151: EAL: Detected CPU lcores: 72
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/dpdk_9655_20210929201128/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Ignore mapping IO port bar(1)
EAL: Ignore mapping IO port bar(2)
EAL: Ignore mapping IO port bar(5)
EAL: Probe PCI driver: net_ixgbe (8086:10fb) device: 0000:81:00.0 (socket 1)
EAL: Ignore mapping IO port bar(1)
EAL: Ignore mapping IO port bar(2)
EAL: Ignore mapping IO port bar(5)
EAL: Probe PCI driver: net_ixgbe (8086:10fb) device: 0000:81:00.1 (socket 1)
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=192256, size=2048, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=192256, size=2048, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
Configuring Port 0 (socket 1)
Port 0: 90:E2:BA:36:99:34
Configuring Port 1 (socket 1)
Port 1: 90:E2:BA:36:99:35
Checking link statuses...
Done
29/09/2021 20:11:59 tester: ifconfig ens224f0 mtu 9200
29/09/2021 20:11:59 tester:
29/09/2021 20:11:59 tester: ifconfig ens224f1 mtu 9200
29/09/2021 20:12:00 tester:
29/09/2021 20:12:00 dut.10.240.183.151: set fwd rxonly
29/09/2021 20:12:00 dut.10.240.183.151:
Set rxonly packet forwarding mode
29/09/2021 20:12:00 dut.10.240.183.151: set verbose 1
29/09/2021 20:12:00 dut.10.240.183.151:
Change verbose level from 0 to 1
29/09/2021 20:12:00 dut.10.240.183.151: start
29/09/2021 20:12:00 dut.10.240.183.151:
rxonly packet forwarding - ports=1 - cores=4 - streams=4 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 1 streams:
RX P=0/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
Logical Core 3 (socket 0) forwards packets on 1 streams:
RX P=0/Q=1 (socket 1) -> TX P=0/Q=1 (socket 1) peer=02:00:00:00:00:00
Logical Core 4 (socket 0) forwards packets on 1 streams:
RX P=0/Q=2 (socket 1) -> TX P=0/Q=2 (socket 1) peer=02:00:00:00:00:00
Logical Core 5 (socket 0) forwards packets on 1 streams:
RX P=0/Q=3 (socket 1) -> TX P=0/Q=3 (socket 1) peer=02:00:00:00:00:00
rxonly packet forwarding packets/burst=32
nb forwarding cores=4 - nb forwarding ports=1
port 0: RX queue number: 4 Tx queue number: 4
Rx offloads=0x800 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x800
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 4 Tx queue number: 4
Rx offloads=0x800 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x800
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
29/09/2021 20:12:02 dut.10.240.183.151: flow validate 0 ingress pattern eth / ipv4 / tcp flags spec 0x02 flags mask 0x02 / end actions queue index 2 / end
29/09/2021 20:12:02 dut.10.240.183.151:
29/09/2021 20:12:02 dut.10.240.183.151: flow create 0 ingress pattern eth / ipv4 / tcp flags spec 0x02 flags mask 0x02 / end actions queue index 2 / end
29/09/2021 20:12:02 dut.10.240.183.151:
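
The rule above steers TCP frames carrying the SYN flag (0x02) to queue 2. A minimal tester-side Scapy sketch of a jumbo SYN frame comparable to the 9016-byte one received below; the IP addresses, TCP ports, and payload size are assumptions, only the destination MAC and the SYN flag come from the log and the rule:

# Assumed tester-side sketch (Scapy), not part of the captured log.
from scapy.all import Ether, IP, TCP, Raw, sendp

pkt = (Ether(dst="90:E2:BA:36:99:34") /        # PF MAC reported by testpmd above
       IP(src="192.168.0.1", dst="192.168.0.2") /
       TCP(sport=1024, dport=80, flags="S") /  # flags 0x02 (SYN) matches the rule
       Raw(b"X" * 8962))                       # 14 + 20 + 20 + 8962 = 9016-byte frame

sendp(pkt, iface="ens224f0")                   # tester port whose MTU was raised to 9200 above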
29/09/2021 20:12:05 dut.10.240.183.151:
testpmd> port 0/queue 2: received 1 packets
src=00:0C:29:B3:0E:82 - dst=90:E2:BA:36:99:34 - type=0x0800 - length=9016 - nb_segs=9 - hw ptype: L2_ETHER L3_IPV4 L4_TCP - sw ptype: L2_ETHER L3_IPV4 L4_TCP - l2_len=14 - l3_len=20 - l4_len=20 - Receive queue=0x2
ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
Port 0: link state change event
Port 1: link state change event
29/09/2021 20:12:05 dut.10.240.183.151: stop
29/09/2021 20:12:05 dut.10.240.183.151:
Telling cores to ...
Waiting for lcores to finish...
------- Forward Stats for RX Port= 0/Queue= 2 -> TX Port= 0/Queue= 2 -------
RX-packets: 1 TX-packets: 0 TX-dropped: 0
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
29/09/2021 20:12:05 TestGeneric_flow_api: pf:
testpmd> port 0/queue 2: received 1 packets
src=00:0C:29:B3:0E:82 - dst=90:E2:BA:36:99:34 - type=0x0800 - length=9016 - nb_segs=9 - hw ptype: L2_ETHER L3_IPV4 L4_TCP - sw ptype: L2_ETHER L3_IPV4 L4_TCP - l2_len=14 - l3_len=20 - l4_len=20 - Receive queue=0x2
ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
Port 0: link state change event
Port 1: link state change event
29/09/2021 20:12:07 dut.10.240.183.151: start
29/09/2021 20:12:07 dut.10.240.183.151:
rxonly packet forwarding - ports=1 - cores=4 - streams=4 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 1 streams:
RX P=0/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
Logical Core 3 (socket 0) forwards packets on 1 streams:
RX P=0/Q=1 (socket 1) -> TX P=0/Q=1 (socket 1) peer=02:00:00:00:00:00
Logical Core 4 (socket 0) forwards packets on 1 streams:
RX P=0/Q=2 (socket 1) -> TX P=0/Q=2 (socket 1) peer=02:00:00:00:00:00
Logical Core 5 (socket 0) forwards packets on 1 streams:
RX P=0/Q=3 (socket 1) -> TX P=0/Q=3 (socket 1) peer=02:00:00:00:00:00
rxonly packet forwarding packets/burst=32
nb forwarding cores=4 - nb forwarding ports=1
port 0: RX queue number: 4 Tx queue number: 4
Rx offloads=0x800 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x800
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 4 Tx queue number: 4
Rx offloads=0x800 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x800
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
29/09/2021 20:12:10 dut.10.240.183.151: port 0/queue 0: received 1 packets
src=00:0C:29:B3:0E:82 - dst=FF:FF:FF:FF:FF:FF - type=0x0806 - length=60 - nb_segs=1 - hw ptype: L2_ETHER - sw ptype: L2_ETHER - l2_len=14 - Receive queue=0x0
ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
29/09/2021 20:12:10 dut.10.240.183.151: stop
29/09/2021 20:12:10 dut.10.240.183.151:
Telling cores to ...
Waiting for lcores to finish...
------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 0/Queue= 0 -------
RX-packets: 1 TX-packets: 0 TX-dropped: 0
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
29/09/2021 20:12:10 TestGeneric_flow_api: pf: port 0/queue 0: received 1 packets
src=00:0C:29:B3:0E:82 - dst=FF:FF:FF:FF:FF:FF - type=0x0806 - length=60 - nb_segs=1 - hw ptype: L2_ETHER - sw ptype: L2_ETHER - l2_len=14 - Receive queue=0x0
ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
29/09/2021 20:12:12 dut.10.240.183.151: start
29/09/2021 20:12:12 dut.10.240.183.151:
rxonly packet forwarding - ports=1 - cores=4 - streams=4 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 1 streams:
RX P=0/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
Logical Core 3 (socket 0) forwards packets on 1 streams:
RX P=0/Q=1 (socket 1) -> TX P=0/Q=1 (socket 1) peer=02:00:00:00:00:00
Logical Core 4 (socket 0) forwards packets on 1 streams:
RX P=0/Q=2 (socket 1) -> TX P=0/Q=2 (socket 1) peer=02:00:00:00:00:00
Logical Core 5 (socket 0) forwards packets on 1 streams:
RX P=0/Q=3 (socket 1) -> TX P=0/Q=3 (socket 1) peer=02:00:00:00:00:00
rxonly packet forwarding packets/burst=32
nb forwarding cores=4 - nb forwarding ports=1
port 0: RX queue number: 4 Tx queue number: 4
Rx offloads=0x800 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x800
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 4 Tx queue number: 4
Rx offloads=0x800 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x800
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
29/09/2021 20:12:12 dut.10.240.183.151: flow destroy 0 rule 0
29/09/2021 20:12:12 dut.10.240.183.151:
Flow rule #0 destroyed
29/09/2021 20:12:15 dut.10.240.183.151: port 0/queue 0: received 1 packets
src=00:0C:29:B3:0E:82 - dst=90:E2:BA:36:99:34 - type=0x0800 - length=9016 - nb_segs=9 - hw ptype: L2_ETHER L3_IPV4 L4_TCP - sw ptype: L2_ETHER L3_IPV4 L4_TCP - l2_len=14 - l3_len=20 - l4_len=20 - Receive queue=0x0
ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
29/09/2021 20:12:15 dut.10.240.183.151: stop
29/09/2021 20:12:16 dut.10.240.183.151:
Telling cores to ...
Waiting for lcores to finish...
------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 0/Queue= 0 -------
RX-packets: 1 TX-packets: 0 TX-dropped: 0
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
29/09/2021 20:12:16 TestGeneric_flow_api: pf: port 0/queue 0: received 1 packets
src=00:0C:29:B3:0E:82 - dst=90:E2:BA:36:99:34 - type=0x0800 - length=9016 - nb_segs=9 - hw ptype: L2_ETHER L3_IPV4 L4_TCP - sw ptype: L2_ETHER L3_IPV4 L4_TCP - l2_len=14 - l3_len=20 - l4_len=20 - Receive queue=0x0
ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
29/09/2021 20:12:18 dut.10.240.183.151: start
29/09/2021 20:12:18 dut.10.240.183.151:
rxonly packet forwarding - ports=1 - cores=4 - streams=4 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 1 streams:
RX P=0/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
Logical Core 3 (socket 0) forwards packets on 1 streams:
RX P=0/Q=1 (socket 1) -> TX P=0/Q=1 (socket 1) peer=02:00:00:00:00:00
Logical Core 4 (socket 0) forwards packets on 1 streams:
RX P=0/Q=2 (socket 1) -> TX P=0/Q=2 (socket 1) peer=02:00:00:00:00:00
Logical Core 5 (socket 0) forwards packets on 1 streams:
RX P=0/Q=3 (socket 1) -> TX P=0/Q=3 (socket 1) peer=02:00:00:00:00:00
rxonly packet forwarding packets/burst=32
nb forwarding cores=4 - nb forwarding ports=1
port 0: RX queue number: 4 Tx queue number: 4
Rx offloads=0x800 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x800
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 4 Tx queue number: 4
Rx offloads=0x800 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x800
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
29/09/2021 20:12:18 dut.10.240.183.151: stop
29/09/2021 20:12:18 dut.10.240.183.151:
Telling cores to ...
Waiting for lcores to finish...
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
29/09/2021 20:12:18 tester: ifconfig ens224f0 mtu 1500
29/09/2021 20:12:18 tester:
29/09/2021 20:12:18 tester: ifconfig ens224f1 mtu 1500
29/09/2021 20:12:18 tester:
29/09/2021 20:12:18 TestGeneric_flow_api: Test Case test_jumbo_frame_size Result PASSED:
29/09/2021 20:12:18 dut.10.240.183.151: quit
29/09/2021 20:12:20 dut.10.240.183.151:
Stopping port 0...
Stopping ports...
Done
Stopping port 1...
Stopping ports...
Done
Shutting down port 0...
Closing ports...
Port 0 is closed
Done
Shutting down port 1...
Closing ports...
Port 1 is closed
Done
Bye...
29/09/2021 20:12:22 dut.10.240.183.151: kill_all: called by dut and prefix list has value.
29/09/2021 20:12:22 TestGeneric_flow_api: Test Case test_multiple_filters_10GB Begin
29/09/2021 20:12:23 dut.10.240.183.151:
29/09/2021 20:12:23 tester:
29/09/2021 20:12:23 dut.10.240.183.151: kill_all: called by dut and has no prefix list.
29/09/2021 20:12:23 dut.10.240.183.151: x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1,2,3,4,5,6,7,8 -n 4 -a 0000:81:00.0 -a 0000:81:00.1 --file-prefix=dpdk_9655_20210929201128 -- -i --disable-rss --rxq=16 --txq=16
29/09/2021 20:12:26 dut.10.240.183.151: EAL: Detected CPU lcores: 72
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/dpdk_9655_20210929201128/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Ignore mapping IO port bar(1)
EAL: Ignore mapping IO port bar(2)
EAL: Ignore mapping IO port bar(5)
EAL: Probe PCI driver: net_ixgbe (8086:10fb) device: 0000:81:00.0 (socket 1)
EAL: Ignore mapping IO port bar(1)
EAL: Ignore mapping IO port bar(2)
EAL: Ignore mapping IO port bar(5)
EAL: Probe PCI driver: net_ixgbe (8086:10fb) device: 0000:81:00.1 (socket 1)
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=203456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
Port 0: 90:E2:BA:36:99:34
Configuring Port 1 (socket 1)
Port 1: 90:E2:BA:36:99:35
Checking link statuses...
Done
29/09/2021 20:12:36 dut.10.240.183.151: set fwd rxonly
29/09/2021 20:12:36 dut.10.240.183.151:
Set rxonly packet forwarding mode
29/09/2021 20:12:36 dut.10.240.183.151: set verbose 1
29/09/2021 20:12:36 dut.10.240.183.151:
Change verbose level from 0 to 1
29/09/2021 20:12:36 dut.10.240.183.151: start
29/09/2021 20:12:36 dut.10.240.183.151:
rxonly packet forwarding - ports=2 - cores=1 - streams=32 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 32 streams:
RX P=0/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=1 (socket 1) -> TX P=1/Q=1 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=1 (socket 1) -> TX P=0/Q=1 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=2 (socket 1) -> TX P=1/Q=2 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=2 (socket 1) -> TX P=0/Q=2 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=3 (socket 1) -> TX P=1/Q=3 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=3 (socket 1) -> TX P=0/Q=3 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=4 (socket 1) -> TX P=1/Q=4 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=4 (socket 1) -> TX P=0/Q=4 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=5 (socket 1) -> TX P=1/Q=5 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=5 (socket 1) -> TX P=0/Q=5 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=6 (socket 1) -> TX P=1/Q=6 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=6 (socket 1) -> TX P=0/Q=6 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=7 (socket 1) -> TX P=1/Q=7 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=7 (socket 1) -> TX P=0/Q=7 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=8 (socket 1) -> TX P=1/Q=8 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=8 (socket 1) -> TX P=0/Q=8 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=9 (socket 1) -> TX P=1/Q=9 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=9 (socket 1) -> TX P=0/Q=9 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=10 (socket 1) -> TX P=1/Q=10 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=10 (socket 1) -> TX P=0/Q=10 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=11 (socket 1) -> TX P=1/Q=11 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=11 (socket 1) -> TX P=0/Q=11 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=12 (socket 1) -> TX P=1/Q=12 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=12 (socket 1) -> TX P=0/Q=12 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=13 (socket 1) -> TX P=1/Q=13 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=13 (socket 1) -> TX P=0/Q=13 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=14 (socket 1) -> TX P=1/Q=14 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=14 (socket 1) -> TX P=0/Q=14 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=15 (socket 1) -> TX P=1/Q=15 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=15 (socket 1) -> TX P=0/Q=15 (socket 1) peer=02:00:00:00:00:00
rxonly packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=2
port 0: RX queue number: 16 Tx queue number: 16
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 16 Tx queue number: 16
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
29/09/2021 20:12:38 dut.10.240.183.151: flow validate 0 ingress pattern eth / ipv4 / tcp flags spec 0x02 flags mask 0x02 / end actions queue index 1 / end
29/09/2021 20:12:38 dut.10.240.183.151:
29/09/2021 20:12:38 dut.10.240.183.151: flow validate 0 ingress pattern eth type is 0x0806 / end actions queue index 2 / end
29/09/2021 20:12:38 dut.10.240.183.151:
29/09/2021 20:12:38 dut.10.240.183.151: flow validate 0 ingress pattern eth / ipv4 dst is 2.2.2.5 src is 2.2.2.4 proto is 17 / udp dst is 1 src is 1 / end actions queue index 3 / end
29/09/2021 20:12:39 dut.10.240.183.151:
29/09/2021 20:12:39 dut.10.240.183.151: flow create 0 ingress pattern eth / ipv4 / tcp flags spec 0x02 flags mask 0x02 / end actions queue index 1 / end
29/09/2021 20:12:39 dut.10.240.183.151:
29/09/2021 20:12:39 dut.10.240.183.151: flow create 0 ingress pattern eth type is 0x0806 / end actions queue index 2 / end
29/09/2021 20:12:39 dut.10.240.183.151:
29/09/2021 20:12:39 dut.10.240.183.151: flow create 0 ingress pattern eth / ipv4 dst is 2.2.2.5 src is 2.2.2.4 proto is 17 / udp dst is 1 src is 1 / end actions queue index 3 / end
29/09/2021 20:12:39 dut.10.240.183.151: flow
29/09/2021 20:12:43 dut.10.240.183.151: 0 ingress pattern eth / ipv4 dst is 2.2.2.5 src is 2.2.2.4 proto is 17 / udp dst is 1 src is 1 / end actions queue index 3 / end
Flow rule #2 created
testpmd> port 0/queue 1: received 1 packets
src=00:0C:29:B3:0E:82 - dst=90:E2:BA:36:99:34 - type=0x0800 - length=74 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4 L4_TCP - sw ptype: L2_ETHER L3_IPV4 L4_TCP - l2_len=14 - l3_len=20 - l4_len=20 - Receive queue=0x1
ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
29/09/2021 20:12:43 dut.10.240.183.151: stop
29/09/2021 20:12:43 dut.10.240.183.151:
Telling cores to ...
Waiting for lcores to finish...
------- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 1/Queue= 1 -------
RX-packets: 1 TX-packets: 0 TX-dropped: 0
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
29/09/2021 20:12:43 TestGeneric_flow_api: pf: 0 ingress pattern eth / ipv4 dst is 2.2.2.5 src is 2.2.2.4 proto is 17 / udp dst is 1 src is 1 / end actions queue index 3 / end
Flow rule #2 created
testpmd> port 0/queue 1: received 1 packets
src=00:0C:29:B3:0E:82 - dst=90:E2:BA:36:99:34 - type=0x0800 - length=74 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4 L4_TCP - sw ptype: L2_ETHER L3_IPV4 L4_TCP - l2_len=14 - l3_len=20 - l4_len=20 - Receive queue=0x1
ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
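
The three rules created above steer TCP SYN traffic to queue 1, ARP (EtherType 0x0806) to queue 2, and the exact UDP flow 2.2.2.4 -> 2.2.2.5 with source/destination port 1 to queue 3, as the captures above and below confirm one packet at a time. A tester-side Scapy sketch of matching probes; fields not fixed by a rule or visible in the log (ARP target, TCP ports, IP addresses of the SYN probe, interface name) are assumptions:

# Assumed tester-side probes (Scapy), not part of the captured log.
from scapy.all import Ether, Dot1Q, ARP, IP, TCP, UDP, sendp

pf_mac = "90:E2:BA:36:99:34"                                      # PF MAC from the log
probes = [
    Ether(dst=pf_mac) / IP(src="192.168.0.1", dst="192.168.0.2")
        / TCP(sport=1024, dport=80, flags="S"),                   # rule 0 -> queue 1 (SYN flag)
    Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst="192.168.0.2"),     # rule 1 -> queue 2 (type 0x0806)
    Ether(dst=pf_mac) / Dot1Q(vlan=0)                             # log shows a tci=0 VLAN tag
        / IP(src="2.2.2.4", dst="2.2.2.5") / UDP(sport=1, dport=1),  # rule 2 -> queue 3
]
sendp(probes, iface="ens224f0")                                   # tester-side port from the log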
29/09/2021 20:12:45 dut.10.240.183.151: start
29/09/2021 20:12:45 dut.10.240.183.151:
rxonly packet forwarding - ports=2 - cores=1 - streams=32 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 32 streams:
RX P=0/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=1 (socket 1) -> TX P=1/Q=1 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=1 (socket 1) -> TX P=0/Q=1 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=2 (socket 1) -> TX P=1/Q=2 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=2 (socket 1) -> TX P=0/Q=2 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=3 (socket 1) -> TX P=1/Q=3 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=3 (socket 1) -> TX P=0/Q=3 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=4 (socket 1) -> TX P=1/Q=4 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=4 (socket 1) -> TX P=0/Q=4 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=5 (socket 1) -> TX P=1/Q=5 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=5 (socket 1) -> TX P=0/Q=5 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=6 (socket 1) -> TX P=1/Q=6 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=6 (socket 1) -> TX P=0/Q=6 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=7 (socket 1) -> TX P=1/Q=7 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=7 (socket 1) -> TX P=0/Q=7 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=8 (socket 1) -> TX P=1/Q=8 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=8 (socket 1) -> TX P=0/Q=8 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=9 (socket 1) -> TX P=1/Q=9 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=9 (socket 1) -> TX P=0/Q=9 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=10 (socket 1) -> TX P=1/Q=10 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=10 (socket 1) -> TX P=0/Q=10 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=11 (socket 1) -> TX P=1/Q=11 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=11 (socket 1) -> TX P=0/Q=11 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=12 (socket 1) -> TX P=1/Q=12 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=12 (socket 1) -> TX P=0/Q=12 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=13 (socket 1) -> TX P=1/Q=13 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=13 (socket 1) -> TX P=0/Q=13 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=14 (socket 1) -> TX P=1/Q=14 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=14 (socket 1) -> TX P=0/Q=14 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=15 (socket 1) -> TX P=1/Q=15 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=15 (socket 1) -> TX P=0/Q=15 (socket 1) peer=02:00:00:00:00:00
rxonly packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=2
port 0: RX queue number: 16 Tx queue number: 16
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 16 Tx queue number: 16
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
29/09/2021 20:12:47 dut.10.240.183.151: port 0/queue 2: received 1 packets
src=00:0C:29:B3:0E:82 - dst=FF:FF:FF:FF:FF:FF - type=0x0806 - length=62 - nb_segs=1 - sw ptype: L2_ETHER - l2_len=14 - Receive queue=0x2
ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
29/09/2021 20:12:47 dut.10.240.183.151: stop
29/09/2021 20:12:47 dut.10.240.183.151:
Telling cores to ...
Waiting for lcores to finish...
------- Forward Stats for RX Port= 0/Queue= 2 -> TX Port= 1/Queue= 2 -------
RX-packets: 1 TX-packets: 0 TX-dropped: 0
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
29/09/2021 20:12:47 TestGeneric_flow_api: pf: port 0/queue 2: received 1 packets
src=00:0C:29:B3:0E:82 - dst=FF:FF:FF:FF:FF:FF - type=0x0806 - length=62 - nb_segs=1 - sw ptype: L2_ETHER - l2_len=14 - Receive queue=0x2
ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
29/09/2021 20:12:49 dut.10.240.183.151: start
29/09/2021 20:12:49 dut.10.240.183.151:
rxonly packet forwarding - ports=2 - cores=1 - streams=32 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 32 streams:
RX P=0/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=1 (socket 1) -> TX P=1/Q=1 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=1 (socket 1) -> TX P=0/Q=1 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=2 (socket 1) -> TX P=1/Q=2 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=2 (socket 1) -> TX P=0/Q=2 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=3 (socket 1) -> TX P=1/Q=3 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=3 (socket 1) -> TX P=0/Q=3 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=4 (socket 1) -> TX P=1/Q=4 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=4 (socket 1) -> TX P=0/Q=4 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=5 (socket 1) -> TX P=1/Q=5 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=5 (socket 1) -> TX P=0/Q=5 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=6 (socket 1) -> TX P=1/Q=6 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=6 (socket 1) -> TX P=0/Q=6 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=7 (socket 1) -> TX P=1/Q=7 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=7 (socket 1) -> TX P=0/Q=7 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=8 (socket 1) -> TX P=1/Q=8 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=8 (socket 1) -> TX P=0/Q=8 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=9 (socket 1) -> TX P=1/Q=9 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=9 (socket 1) -> TX P=0/Q=9 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=10 (socket 1) -> TX P=1/Q=10 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=10 (socket 1) -> TX P=0/Q=10 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=11 (socket 1) -> TX P=1/Q=11 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=11 (socket 1) -> TX P=0/Q=11 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=12 (socket 1) -> TX P=1/Q=12 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=12 (socket 1) -> TX P=0/Q=12 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=13 (socket 1) -> TX P=1/Q=13 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=13 (socket 1) -> TX P=0/Q=13 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=14 (socket 1) -> TX P=1/Q=14 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=14 (socket 1) -> TX P=0/Q=14 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=15 (socket 1) -> TX P=1/Q=15 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=15 (socket 1) -> TX P=0/Q=15 (socket 1) peer=02:00:00:00:00:00
rxonly packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=2
port 0: RX queue number: 16 Tx queue number: 16
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 16 Tx queue number: 16
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
29/09/2021 20:12:51 dut.10.240.183.151: port 0/queue 3: received 1 packets
src=00:0C:29:B3:0E:82 - dst=90:E2:BA:36:99:34 - type=0x8100 - length=60 - nb_segs=1 - VLAN tci=0x0 - hw ptype: L2_ETHER L3_IPV4 L4_UDP - sw ptype: L2_ETHER_VLAN L3_IPV4 L4_UDP - l2_len=18 - l3_len=20 - l4_len=8 - Receive queue=0x3
ol_flags: PKT_RX_VLAN PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
29/09/2021 20:12:51 dut.10.240.183.151: stop
29/09/2021 20:12:51 dut.10.240.183.151:
Telling cores to ...
Waiting for lcores to finish...
------- Forward Stats for RX Port= 0/Queue= 3 -> TX Port= 1/Queue= 3 -------
RX-packets: 1 TX-packets: 0 TX-dropped: 0
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
29/09/2021 20:12:51 TestGeneric_flow_api: pf: port 0/queue 3: received 1 packets
src=00:0C:29:B3:0E:82 - dst=90:E2:BA:36:99:34 - type=0x8100 - length=60 - nb_segs=1 - VLAN tci=0x0 - hw ptype: L2_ETHER L3_IPV4 L4_UDP - sw ptype: L2_ETHER_VLAN L3_IPV4 L4_UDP - l2_len=18 - l3_len=20 - l4_len=8 - Receive queue=0x3
ol_flags: PKT_RX_VLAN PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
29/09/2021 20:12:53 dut.10.240.183.151: start
29/09/2021 20:12:53 dut.10.240.183.151:
rxonly packet forwarding - ports=2 - cores=1 - streams=32 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 32 streams:
RX P=0/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=1 (socket 1) -> TX P=1/Q=1 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=1 (socket 1) -> TX P=0/Q=1 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=2 (socket 1) -> TX P=1/Q=2 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=2 (socket 1) -> TX P=0/Q=2 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=3 (socket 1) -> TX P=1/Q=3 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=3 (socket 1) -> TX P=0/Q=3 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=4 (socket 1) -> TX P=1/Q=4 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=4 (socket 1) -> TX P=0/Q=4 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=5 (socket 1) -> TX P=1/Q=5 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=5 (socket 1) -> TX P=0/Q=5 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=6 (socket 1) -> TX P=1/Q=6 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=6 (socket 1) -> TX P=0/Q=6 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=7 (socket 1) -> TX P=1/Q=7 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=7 (socket 1) -> TX P=0/Q=7 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=8 (socket 1) -> TX P=1/Q=8 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=8 (socket 1) -> TX P=0/Q=8 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=9 (socket 1) -> TX P=1/Q=9 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=9 (socket 1) -> TX P=0/Q=9 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=10 (socket 1) -> TX P=1/Q=10 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=10 (socket 1) -> TX P=0/Q=10 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=11 (socket 1) -> TX P=1/Q=11 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=11 (socket 1) -> TX P=0/Q=11 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=12 (socket 1) -> TX P=1/Q=12 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=12 (socket 1) -> TX P=0/Q=12 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=13 (socket 1) -> TX P=1/Q=13 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=13 (socket 1) -> TX P=0/Q=13 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=14 (socket 1) -> TX P=1/Q=14 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=14 (socket 1) -> TX P=0/Q=14 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=15 (socket 1) -> TX P=1/Q=15 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=15 (socket 1) -> TX P=0/Q=15 (socket 1) peer=02:00:00:00:00:00
rxonly packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=2
port 0: RX queue number: 16 Tx queue number: 16
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 16 Tx queue number: 16
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
29/09/2021 20:12:53 dut.10.240.183.151: flow destroy 0 rule 2
29/09/2021 20:12:53 dut.10.240.183.151:
Flow rule #2 destroyed
29/09/2021 20:12:56 dut.10.240.183.151: port 0/queue 1: received 1 packets
src=00:0C:29:B3:0E:82 - dst=90:E2:BA:36:99:34 - type=0x0800 - length=74 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4 L4_TCP - sw ptype: L2_ETHER L3_IPV4 L4_TCP - l2_len=14 - l3_len=20 - l4_len=20 - Receive queue=0x1
ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
29/09/2021 20:12:56 dut.10.240.183.151: stop
29/09/2021 20:12:56 dut.10.240.183.151:
Telling cores to ...
Waiting for lcores to finish...
------- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 1/Queue= 1 -------
RX-packets: 1 TX-packets: 0 TX-dropped: 0
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
29/09/2021 20:12:56 TestGeneric_flow_api: pf: port 0/queue 1: received 1 packets
src=00:0C:29:B3:0E:82 - dst=90:E2:BA:36:99:34 - type=0x0800 - length=74 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4 L4_TCP - sw ptype: L2_ETHER L3_IPV4 L4_TCP - l2_len=14 - l3_len=20 - l4_len=20 - Receive queue=0x1
ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
29/09/2021 20:12:58 dut.10.240.183.151: start
29/09/2021 20:12:58 dut.10.240.183.151:
rxonly packet forwarding - ports=2 - cores=1 - streams=32 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 32 streams:
RX P=0/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=1 (socket 1) -> TX P=1/Q=1 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=1 (socket 1) -> TX P=0/Q=1 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=2 (socket 1) -> TX P=1/Q=2 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=2 (socket 1) -> TX P=0/Q=2 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=3 (socket 1) -> TX P=1/Q=3 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=3 (socket 1) -> TX P=0/Q=3 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=4 (socket 1) -> TX P=1/Q=4 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=4 (socket 1) -> TX P=0/Q=4 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=5 (socket 1) -> TX P=1/Q=5 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=5 (socket 1) -> TX P=0/Q=5 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=6 (socket 1) -> TX P=1/Q=6 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=6 (socket 1) -> TX P=0/Q=6 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=7 (socket 1) -> TX P=1/Q=7 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=7 (socket 1) -> TX P=0/Q=7 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=8 (socket 1) -> TX P=1/Q=8 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=8 (socket 1) -> TX P=0/Q=8 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=9 (socket 1) -> TX P=1/Q=9 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=9 (socket 1) -> TX P=0/Q=9 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=10 (socket 1) -> TX P=1/Q=10 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=10 (socket 1) -> TX P=0/Q=10 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=11 (socket 1) -> TX P=1/Q=11 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=11 (socket 1) -> TX P=0/Q=11 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=12 (socket 1) -> TX P=1/Q=12 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=12 (socket 1) -> TX P=0/Q=12 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=13 (socket 1) -> TX P=1/Q=13 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=13 (socket 1) -> TX P=0/Q=13 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=14 (socket 1) -> TX P=1/Q=14 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=14 (socket 1) -> TX P=0/Q=14 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=15 (socket 1) -> TX P=1/Q=15 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=15 (socket 1) -> TX P=0/Q=15 (socket 1) peer=02:00:00:00:00:00
rxonly packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=2
port 0: RX queue number: 16 Tx queue number: 16
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 16 Tx queue number: 16
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
29/09/2021 20:13:00 dut.10.240.183.151: port 0/queue 2: received 1 packets
src=00:0C:29:B3:0E:82 - dst=FF:FF:FF:FF:FF:FF - type=0x0806 - length=62 - nb_segs=1 - sw ptype: L2_ETHER - l2_len=14 - Receive queue=0x2
ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
29/09/2021 20:13:00 dut.10.240.183.151: stop
29/09/2021 20:13:00 dut.10.240.183.151:
Telling cores to ...
Waiting for lcores to finish...
------- Forward Stats for RX Port= 0/Queue= 2 -> TX Port= 1/Queue= 2 -------
RX-packets: 1 TX-packets: 0 TX-dropped: 0
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
29/09/2021 20:13:00 TestGeneric_flow_api: pf: port 0/queue 2: received 1 packets
src=00:0C:29:B3:0E:82 - dst=FF:FF:FF:FF:FF:FF - type=0x0806 - length=62 - nb_segs=1 - sw ptype: L2_ETHER - l2_len=14 - Receive queue=0x2
ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
29/09/2021 20:13:02 dut.10.240.183.151: start
29/09/2021 20:13:02 dut.10.240.183.151:
rxonly packet forwarding - ports=2 - cores=1 - streams=32 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 32 streams:
RX P=0/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=1 (socket 1) -> TX P=1/Q=1 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=1 (socket 1) -> TX P=0/Q=1 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=2 (socket 1) -> TX P=1/Q=2 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=2 (socket 1) -> TX P=0/Q=2 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=3 (socket 1) -> TX P=1/Q=3 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=3 (socket 1) -> TX P=0/Q=3 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=4 (socket 1) -> TX P=1/Q=4 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=4 (socket 1) -> TX P=0/Q=4 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=5 (socket 1) -> TX P=1/Q=5 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=5 (socket 1) -> TX P=0/Q=5 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=6 (socket 1) -> TX P=1/Q=6 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=6 (socket 1) -> TX P=0/Q=6 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=7 (socket 1) -> TX P=1/Q=7 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=7 (socket 1) -> TX P=0/Q=7 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=8 (socket 1) -> TX P=1/Q=8 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=8 (socket 1) -> TX P=0/Q=8 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=9 (socket 1) -> TX P=1/Q=9 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=9 (socket 1) -> TX P=0/Q=9 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=10 (socket 1) -> TX P=1/Q=10 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=10 (socket 1) -> TX P=0/Q=10 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=11 (socket 1) -> TX P=1/Q=11 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=11 (socket 1) -> TX P=0/Q=11 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=12 (socket 1) -> TX P=1/Q=12 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=12 (socket 1) -> TX P=0/Q=12 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=13 (socket 1) -> TX P=1/Q=13 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=13 (socket 1) -> TX P=0/Q=13 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=14 (socket 1) -> TX P=1/Q=14 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=14 (socket 1) -> TX P=0/Q=14 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=15 (socket 1) -> TX P=1/Q=15 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=15 (socket 1) -> TX P=0/Q=15 (socket 1) peer=02:00:00:00:00:00
rxonly packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=2
port 0: RX queue number: 16 Tx queue number: 16
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 16 Tx queue number: 16
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
29/09/2021 20:13:04 dut.10.240.183.151: port 0/queue 0: received 1 packets
src=00:0C:29:B3:0E:82 - dst=90:E2:BA:36:99:34 - type=0x8100 - length=60 - nb_segs=1 - VLAN tci=0x0 - hw ptype: L2_ETHER L3_IPV4 L4_UDP - sw ptype: L2_ETHER_VLAN L3_IPV4 L4_UDP - l2_len=18 - l3_len=20 - l4_len=8 - Receive queue=0x0
ol_flags: PKT_RX_VLAN PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
29/09/2021 20:13:04 dut.10.240.183.151: stop
29/09/2021 20:13:04 dut.10.240.183.151:
Telling cores to ...
Waiting for lcores to finish...
------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 1/Queue= 0 -------
RX-packets: 1 TX-packets: 0 TX-dropped: 0
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
29/09/2021 20:13:04 TestGeneric_flow_api: pf: port 0/queue 0: received 1 packets
src=00:0C:29:B3:0E:82 - dst=90:E2:BA:36:99:34 - type=0x8100 - length=60 - nb_segs=1 - VLAN tci=0x0 - hw ptype: L2_ETHER L3_IPV4 L4_UDP - sw ptype: L2_ETHER_VLAN L3_IPV4 L4_UDP - l2_len=18 - l3_len=20 - l4_len=8 - Receive queue=0x0
ol_flags: PKT_RX_VLAN PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
29/09/2021 20:13:06 dut.10.240.183.151: start
29/09/2021 20:13:06 dut.10.240.183.151:
rxonly packet forwarding - ports=2 - cores=1 - streams=32 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 32 streams:
RX P=0/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=1 (socket 1) -> TX P=1/Q=1 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=1 (socket 1) -> TX P=0/Q=1 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=2 (socket 1) -> TX P=1/Q=2 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=2 (socket 1) -> TX P=0/Q=2 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=3 (socket 1) -> TX P=1/Q=3 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=3 (socket 1) -> TX P=0/Q=3 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=4 (socket 1) -> TX P=1/Q=4 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=4 (socket 1) -> TX P=0/Q=4 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=5 (socket 1) -> TX P=1/Q=5 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=5 (socket 1) -> TX P=0/Q=5 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=6 (socket 1) -> TX P=1/Q=6 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=6 (socket 1) -> TX P=0/Q=6 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=7 (socket 1) -> TX P=1/Q=7 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=7 (socket 1) -> TX P=0/Q=7 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=8 (socket 1) -> TX P=1/Q=8 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=8 (socket 1) -> TX P=0/Q=8 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=9 (socket 1) -> TX P=1/Q=9 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=9 (socket 1) -> TX P=0/Q=9 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=10 (socket 1) -> TX P=1/Q=10 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=10 (socket 1) -> TX P=0/Q=10 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=11 (socket 1) -> TX P=1/Q=11 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=11 (socket 1) -> TX P=0/Q=11 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=12 (socket 1) -> TX P=1/Q=12 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=12 (socket 1) -> TX P=0/Q=12 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=13 (socket 1) -> TX P=1/Q=13 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=13 (socket 1) -> TX P=0/Q=13 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=14 (socket 1) -> TX P=1/Q=14 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=14 (socket 1) -> TX P=0/Q=14 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=15 (socket 1) -> TX P=1/Q=15 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=15 (socket 1) -> TX P=0/Q=15 (socket 1) peer=02:00:00:00:00:00
rxonly packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=2
port 0: RX queue number: 16 Tx queue number: 16
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 16 Tx queue number: 16
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
29/09/2021 20:13:06 dut.10.240.183.151: flow destroy 0 rule 1
29/09/2021 20:13:06 dut.10.240.183.151:
Flow rule #1 destroyed
29/09/2021 20:13:08 dut.10.240.183.151: port 0/queue 0: received 1 packets
src=00:0C:29:B3:0E:82 - dst=FF:FF:FF:FF:FF:FF - type=0x0806 - length=62 - nb_segs=1 - hw ptype: L2_ETHER - sw ptype: L2_ETHER - l2_len=14 - Receive queue=0x0
ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
29/09/2021 20:13:08 dut.10.240.183.151: stop
29/09/2021 20:13:08 dut.10.240.183.151:
Telling cores to ...
Waiting for lcores to finish...
------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 1/Queue= 0 -------
RX-packets: 1 TX-packets: 0 TX-dropped: 0
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
29/09/2021 20:13:08 TestGeneric_flow_api: pf: port 0/queue 0: received 1 packets
src=00:0C:29:B3:0E:82 - dst=FF:FF:FF:FF:FF:FF - type=0x0806 - length=62 - nb_segs=1 - hw ptype: L2_ETHER - sw ptype: L2_ETHER - l2_len=14 - Receive queue=0x0
ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
29/09/2021 20:13:10 dut.10.240.183.151: start
29/09/2021 20:13:10 dut.10.240.183.151:
rxonly packet forwarding - ports=2 - cores=1 - streams=32 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 32 streams:
RX P=0/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=1 (socket 1) -> TX P=1/Q=1 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=1 (socket 1) -> TX P=0/Q=1 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=2 (socket 1) -> TX P=1/Q=2 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=2 (socket 1) -> TX P=0/Q=2 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=3 (socket 1) -> TX P=1/Q=3 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=3 (socket 1) -> TX P=0/Q=3 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=4 (socket 1) -> TX P=1/Q=4 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=4 (socket 1) -> TX P=0/Q=4 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=5 (socket 1) -> TX P=1/Q=5 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=5 (socket 1) -> TX P=0/Q=5 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=6 (socket 1) -> TX P=1/Q=6 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=6 (socket 1) -> TX P=0/Q=6 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=7 (socket 1) -> TX P=1/Q=7 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=7 (socket 1) -> TX P=0/Q=7 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=8 (socket 1) -> TX P=1/Q=8 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=8 (socket 1) -> TX P=0/Q=8 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=9 (socket 1) -> TX P=1/Q=9 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=9 (socket 1) -> TX P=0/Q=9 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=10 (socket 1) -> TX P=1/Q=10 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=10 (socket 1) -> TX P=0/Q=10 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=11 (socket 1) -> TX P=1/Q=11 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=11 (socket 1) -> TX P=0/Q=11 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=12 (socket 1) -> TX P=1/Q=12 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=12 (socket 1) -> TX P=0/Q=12 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=13 (socket 1) -> TX P=1/Q=13 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=13 (socket 1) -> TX P=0/Q=13 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=14 (socket 1) -> TX P=1/Q=14 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=14 (socket 1) -> TX P=0/Q=14 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=15 (socket 1) -> TX P=1/Q=15 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=15 (socket 1) -> TX P=0/Q=15 (socket 1) peer=02:00:00:00:00:00
rxonly packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=2
port 0: RX queue number: 16 Tx queue number: 16
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 16 Tx queue number: 16
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
29/09/2021 20:13:13 dut.10.240.183.151: port 0/queue 1: received 1 packets
src=00:0C:29:B3:0E:82 - dst=90:E2:BA:36:99:34 - type=0x0800 - length=74 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4 L4_TCP - sw ptype: L2_ETHER L3_IPV4 L4_TCP - l2_len=14 - l3_len=20 - l4_len=20 - Receive queue=0x1
ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
29/09/2021 20:13:13 dut.10.240.183.151: stop
29/09/2021 20:13:13 dut.10.240.183.151:
Telling cores to ...
Waiting for lcores to finish...
------- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 1/Queue= 1 -------
RX-packets: 1 TX-packets: 0 TX-dropped: 0
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
29/09/2021 20:13:13 TestGeneric_flow_api: pf: port 0/queue 1: received 1 packets
src=00:0C:29:B3:0E:82 - dst=90:E2:BA:36:99:34 - type=0x0800 - length=74 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4 L4_TCP - sw ptype: L2_ETHER L3_IPV4 L4_TCP - l2_len=14 - l3_len=20 - l4_len=20 - Receive queue=0x1
ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
29/09/2021 20:13:15 dut.10.240.183.151: start
29/09/2021 20:13:15 dut.10.240.183.151:
rxonly packet forwarding - ports=2 - cores=1 - streams=32 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 32 streams:
RX P=0/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=1 (socket 1) -> TX P=1/Q=1 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=1 (socket 1) -> TX P=0/Q=1 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=2 (socket 1) -> TX P=1/Q=2 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=2 (socket 1) -> TX P=0/Q=2 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=3 (socket 1) -> TX P=1/Q=3 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=3 (socket 1) -> TX P=0/Q=3 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=4 (socket 1) -> TX P=1/Q=4 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=4 (socket 1) -> TX P=0/Q=4 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=5 (socket 1) -> TX P=1/Q=5 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=5 (socket 1) -> TX P=0/Q=5 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=6 (socket 1) -> TX P=1/Q=6 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=6 (socket 1) -> TX P=0/Q=6 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=7 (socket 1) -> TX P=1/Q=7 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=7 (socket 1) -> TX P=0/Q=7 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=8 (socket 1) -> TX P=1/Q=8 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=8 (socket 1) -> TX P=0/Q=8 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=9 (socket 1) -> TX P=1/Q=9 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=9 (socket 1) -> TX P=0/Q=9 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=10 (socket 1) -> TX P=1/Q=10 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=10 (socket 1) -> TX P=0/Q=10 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=11 (socket 1) -> TX P=1/Q=11 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=11 (socket 1) -> TX P=0/Q=11 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=12 (socket 1) -> TX P=1/Q=12 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=12 (socket 1) -> TX P=0/Q=12 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=13 (socket 1) -> TX P=1/Q=13 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=13 (socket 1) -> TX P=0/Q=13 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=14 (socket 1) -> TX P=1/Q=14 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=14 (socket 1) -> TX P=0/Q=14 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=15 (socket 1) -> TX P=1/Q=15 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=15 (socket 1) -> TX P=0/Q=15 (socket 1) peer=02:00:00:00:00:00
rxonly packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=2
port 0: RX queue number: 16 Tx queue number: 16
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 16 Tx queue number: 16
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
29/09/2021 20:13:15 dut.10.240.183.151: flow destroy 0 rule 0
29/09/2021 20:13:15 dut.10.240.183.151:
Flow rule #0 destroyed
29/09/2021 20:13:17 dut.10.240.183.151: port 0/queue 0: received 1 packets
src=00:0C:29:B3:0E:82 - dst=90:E2:BA:36:99:34 - type=0x0800 - length=74 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4 L4_TCP - sw ptype: L2_ETHER L3_IPV4 L4_TCP - l2_len=14 - l3_len=20 - l4_len=20 - Receive queue=0x0
ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
29/09/2021 20:13:17 dut.10.240.183.151: stop
29/09/2021 20:13:17 dut.10.240.183.151:
Telling cores to ...
Waiting for lcores to finish...
------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 1/Queue= 0 -------
RX-packets: 1 TX-packets: 0 TX-dropped: 0
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
29/09/2021 20:13:17 TestGeneric_flow_api: pf: port 0/queue 0: received 1 packets
src=00:0C:29:B3:0E:82 - dst=90:E2:BA:36:99:34 - type=0x0800 - length=74 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4 L4_TCP - sw ptype: L2_ETHER L3_IPV4 L4_TCP - l2_len=14 - l3_len=20 - l4_len=20 - Receive queue=0x0
ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
29/09/2021 20:13:19 dut.10.240.183.151: start
29/09/2021 20:13:19 dut.10.240.183.151:
rxonly packet forwarding - ports=2 - cores=1 - streams=32 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 32 streams:
RX P=0/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=1 (socket 1) -> TX P=1/Q=1 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=1 (socket 1) -> TX P=0/Q=1 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=2 (socket 1) -> TX P=1/Q=2 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=2 (socket 1) -> TX P=0/Q=2 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=3 (socket 1) -> TX P=1/Q=3 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=3 (socket 1) -> TX P=0/Q=3 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=4 (socket 1) -> TX P=1/Q=4 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=4 (socket 1) -> TX P=0/Q=4 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=5 (socket 1) -> TX P=1/Q=5 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=5 (socket 1) -> TX P=0/Q=5 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=6 (socket 1) -> TX P=1/Q=6 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=6 (socket 1) -> TX P=0/Q=6 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=7 (socket 1) -> TX P=1/Q=7 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=7 (socket 1) -> TX P=0/Q=7 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=8 (socket 1) -> TX P=1/Q=8 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=8 (socket 1) -> TX P=0/Q=8 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=9 (socket 1) -> TX P=1/Q=9 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=9 (socket 1) -> TX P=0/Q=9 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=10 (socket 1) -> TX P=1/Q=10 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=10 (socket 1) -> TX P=0/Q=10 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=11 (socket 1) -> TX P=1/Q=11 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=11 (socket 1) -> TX P=0/Q=11 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=12 (socket 1) -> TX P=1/Q=12 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=12 (socket 1) -> TX P=0/Q=12 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=13 (socket 1) -> TX P=1/Q=13 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=13 (socket 1) -> TX P=0/Q=13 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=14 (socket 1) -> TX P=1/Q=14 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=14 (socket 1) -> TX P=0/Q=14 (socket 1) peer=02:00:00:00:00:00
RX P=0/Q=15 (socket 1) -> TX P=1/Q=15 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=15 (socket 1) -> TX P=0/Q=15 (socket 1) peer=02:00:00:00:00:00
rxonly packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=2
port 0: RX queue number: 16 Tx queue number: 16
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 16 Tx queue number: 16
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
29/09/2021 20:13:19 dut.10.240.183.151: stop
29/09/2021 20:13:19 dut.10.240.183.151:
Telling cores to ...
Waiting for lcores to finish...
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
29/09/2021 20:13:19 TestGeneric_flow_api: Test Case test_multiple_filters_10GB Result PASSED:
29/09/2021 20:13:19 dut.10.240.183.151: quit
29/09/2021 20:13:20 dut.10.240.183.151:
Stopping port 0...
Stopping ports...
Done
Stopping port 1...
Stopping ports...
Done
Shutting down port 0...
Closing ports...
Port 0 is closed
Done
Shutting down port 1...
Closing ports...
Port 1 is closed
Done
Bye...
29/09/2021 20:13:22 dut.10.240.183.151: kill_all: called by dut and prefix list has value.
29/09/2021 20:13:23 dts:
TEST SUITE ENDED: TestGeneric_flow_api
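Each queue check in the log above is made against testpmd's verbose output, e.g. "port 0/queue 1: received 1 packets ... Receive queue=0x1". A minimal sketch of how such a line could be checked, assuming the output has been captured as a string (illustrative only; the DTS verify_result helper may work differently)::

    import re

    def received_on_queue(testpmd_output, expect_queue):
        """Return True if the verbose output reports a packet on expect_queue."""
        # Matches lines such as "port 0/queue 1: received 1 packets",
        # printed by testpmd when "set verbose 1" is active.
        m = re.search(r"port \d+/queue (\d+): received (\d+) packets", testpmd_output)
        return bool(m) and int(m.group(1)) == int(expect_queue) and int(m.group(2)) > 0

    # Example against the capture above:
    sample = "port 0/queue 1: received 1 packets"
    assert received_on_queue(sample, 1)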
* [dts] [PATCH V1 1/5] tests/generic_flow_api: move two cases from generic_filter to generic_flow_api
@ 2021-09-28 18:05 Yan Xia
2021-09-28 18:05 ` [dts] [PATCH V1 3/5] test_plans/generic_flow_api_test_plan: " Yan Xia
` (3 more replies)
0 siblings, 4 replies; 8+ messages in thread
From: Yan Xia @ 2021-09-28 18:05 UTC (permalink / raw)
To: dts; +Cc: Yan Xia
move two cases from generic_filter to generic_flow_api
Signed-off-by: Yan Xia <yanx.xia@intel.com>
---
tests/TestSuite_generic_flow_api.py | 118 ++++++++++++++++++++++++++++
1 file changed, 118 insertions(+)
diff --git a/tests/TestSuite_generic_flow_api.py b/tests/TestSuite_generic_flow_api.py
index 3ea76b64..32805e46 100644
--- a/tests/TestSuite_generic_flow_api.py
+++ b/tests/TestSuite_generic_flow_api.py
@@ -84,6 +84,10 @@ class TestGeneric_flow_api(TestCase):
MAX_QUEUE = 3
# Based on h/w type, choose how many ports to use
self.dut_ports = self.dut.get_ports(self.nic)
+ global valports
+ valports = [_ for _ in self.dut_ports if self.tester.get_local_port(_) != -1]
+ global portMask
+ portMask = utils.create_mask(valports[:2])
# Verify that enough ports are available
self.verify(len(self.dut_ports) >= 1, "Insufficient ports")
self.cores = "1S/8C/1T"
@@ -2520,6 +2524,120 @@ class TestGeneric_flow_api(TestCase):
self.verify(result_rows[2][1] != result_rows[4][1], "The hash values should be different when setting rss to 's-vlan c-vlan' and sending packet with different ivlan.")
self.verify(result_rows[3][1] != result_rows[4][1], "The hash values should be different when setting rss to 's-vlan c-vlan' and sending packet with different ovlan and ivlan")
+ def test_multiple_filters_10GB(self):
+ """
+ only supported by ixgbe and igb
+ """
+ self.verify(self.nic in ["niantic", "kawela_4", "kawela",
+ "twinville", "foxville"], "%s nic not support n-tuple filter" % self.nic)
+ self.pmdout.start_testpmd("%s" % self.cores, "--disable-rss --rxq=%d --txq=%d" % (MAX_QUEUE+1, MAX_QUEUE+1))
+ self.dut.send_expect("set fwd rxonly", "testpmd> ", 120)
+ self.dut.send_expect("set verbose 1", "testpmd> ", 120)
+ self.dut.send_expect("start", "testpmd> ", 120)
+ time.sleep(2)
+
+ self.dut.send_expect(
+ "flow validate 0 ingress pattern eth / ipv4 / tcp flags spec 0x02 flags mask 0x02 / end actions queue index 1 / end", "validated")
+ self.dut.send_expect(
+ "flow validate 0 ingress pattern eth type is 0x0806 / end actions queue index 2 / end", "validated")
+ self.dut.send_expect(
+ "flow validate 0 ingress pattern eth / ipv4 dst is 2.2.2.5 src is 2.2.2.4 proto is 17 / udp dst is 1 src is 1 / end actions queue index 3 / end",
+ "validated")
+ self.dut.send_expect(
+ "flow create 0 ingress pattern eth / ipv4 / tcp flags spec 0x02 flags mask 0x02 / end actions queue index 1 / end", "created")
+ self.dut.send_expect(
+ "flow create 0 ingress pattern eth type is 0x0806 / end actions queue index 2 / end", "created")
+ self.dut.send_expect(
+ "flow create 0 ingress pattern eth / ipv4 dst is 2.2.2.5 src is 2.2.2.4 proto is 17 / udp dst is 1 src is 1 / end actions queue index 3 / end", "create")
+ time.sleep(2)
+ self.sendpkt(pktstr='Ether(dst="%s")/IP(src="192.168.0.1", dst="192.168.0.2")/TCP(dport=80,flags="S")/Raw("x" * 20)' % self.pf_mac)
+ self.verify_result("pf", expect_rxpkts="1", expect_queue="1", verify_mac=self.pf_mac)
+
+ self.sendpkt(pktstr='Ether(dst="ff:ff:ff:ff:ff:ff")/ARP(pdst="192.168.1.1")/Raw("x" * 20)')
+ self.verify_result("pf", expect_rxpkts="1", expect_queue="2", verify_mac="ff:ff:ff:ff:ff:ff")
+
+ self.sendpkt(pktstr='Ether(dst="%s")/Dot1Q(prio=3)/IP(src="2.2.2.4",dst="2.2.2.5")/UDP(sport=1,dport=1)' % self.pf_mac)
+ self.verify_result("pf", expect_rxpkts="1", expect_queue="3", verify_mac=self.pf_mac)
+ # destroy rule 2
+ out = self.dut.send_expect("flow destroy 0 rule 2", "testpmd> ")
+ p = re.compile(r"Flow rule #(\d+) destroyed")
+ m = p.search(out)
+ self.verify(m, "flow rule 2 delete failed" )
+ self.sendpkt(pktstr='Ether(dst="%s")/IP(src="192.168.0.1", dst="192.168.0.2")/TCP(dport=80,flags="S")/Raw("x" * 20)' % self.pf_mac)
+ self.verify_result("pf", expect_rxpkts="1", expect_queue="1", verify_mac=self.pf_mac)
+
+ self.sendpkt(pktstr='Ether(dst="ff:ff:ff:ff:ff:ff")/ARP(pdst="192.168.1.1")/Raw("x" * 20)')
+ self.verify_result("pf", expect_rxpkts="1", expect_queue="2", verify_mac="ff:ff:ff:ff:ff:ff")
+
+ self.sendpkt(pktstr='Ether(dst="%s")/Dot1Q(prio=3)/IP(src="2.2.2.4",dst="2.2.2.5")/UDP(sport=1,dport=1)' % self.pf_mac)
+ self.verify_result("pf", expect_rxpkts="1", expect_queue="0", verify_mac=self.pf_mac)
+ # destroy rule 1
+ out = self.dut.send_expect("flow destroy 0 rule 1", "testpmd> ")
+ p = re.compile(r"Flow rule #(\d+) destroyed")
+ m = p.search(out)
+ self.verify(m, "flow rule 1 delete failed" )
+
+ self.sendpkt(pktstr='Ether(dst="ff:ff:ff:ff:ff:ff")/ARP(pdst="192.168.1.1")/Raw("x" * 20)')
+ self.verify_result("pf", expect_rxpkts="1", expect_queue="0", verify_mac="ff:ff:ff:ff:ff:ff")
+
+ self.sendpkt(pktstr='Ether(dst="%s")/IP(src="192.168.0.1", dst="192.168.0.2")/TCP(dport=80,flags="S")/Raw("x" * 20)' % self.pf_mac)
+ self.verify_result("pf", expect_rxpkts="1", expect_queue="1", verify_mac=self.pf_mac)
+ # destroy rule 0
+ out = self.dut.send_expect("flow destroy 0 rule 0", "testpmd> ")
+ p = re.compile(r"Flow rule #(\d+) destroyed")
+ m = p.search(out)
+ self.verify(m, "flow rule 0 delete failed" )
+
+ self.sendpkt(pktstr='Ether(dst="%s")/IP(src="192.168.0.1", dst="192.168.0.2")/TCP(dport=80,flags="S")/Raw("x" * 20)' % self.pf_mac)
+ self.verify_result("pf", expect_rxpkts="1", expect_queue="0", verify_mac=self.pf_mac)
+ self.dut.send_expect("stop", "testpmd> ")
+
+ def test_jumbo_frame_size(self):
+
+ self.verify(self.nic in ["niantic", "kawela_4", "kawela", "bartonhills", "twinville", "sagepond", "sageville",
+ "powerville", "foxville"], "%s nic not support" % self.nic)
+ if (self.nic in ["cavium_a063", "cavium_a064", "foxville"]):
+ self.pmdout.start_testpmd(
+ "%s" % self.cores, "--disable-rss --rxq=4 --txq=4 --portmask=%s --nb-cores=4 --nb-ports=1 --mbcache=200 --mbuf-size=2048 --max-pkt-len=9200" % portMask)
+ else:
+ self.pmdout.start_testpmd(
+ "%s" % self.cores, "--disable-rss --rxq=4 --txq=4 --portmask=%s --nb-cores=4 --nb-ports=1 --mbcache=200 --mbuf-size=2048 --max-pkt-len=9600" % portMask)
+ port = self.tester.get_local_port(valports[0])
+ txItf = self.tester.get_interface(port)
+
+ port = self.tester.get_local_port(valports[1])
+ rxItf = self.tester.get_interface(port)
+ self.tester.send_expect("ifconfig %s mtu %s" % (txItf, 9200), "# ")
+ self.tester.send_expect("ifconfig %s mtu %s" % (rxItf, 9200), "# ")
+ self.dut.send_expect("set fwd rxonly", "testpmd> ", 120)
+ self.dut.send_expect("set verbose 1", "testpmd> ", 120)
+ self.dut.send_expect("start", "testpmd> ", 120)
+ time.sleep(2)
+
+ self.dut.send_expect(
+ "flow validate 0 ingress pattern eth / ipv4 / tcp flags spec 0x02 flags mask 0x02 / end actions queue index 2 / end",
+ "validated")
+ self.dut.send_expect(
+ "flow create 0 ingress pattern eth / ipv4 / tcp flags spec 0x02 flags mask 0x02 / end actions queue index 2 / end",
+ "created")
+
+ self.sendpkt(pktstr='Ether(dst="%s")/IP(src="2.2.2.5",dst="2.2.2.4")/TCP(dport=80,flags="S")/Raw(load="\x50"*8962)' % self.pf_mac)
+ time.sleep(1)
+ self.verify_result("pf", expect_rxpkts="1", expect_queue="2", verify_mac=self.pf_mac)
+
+ self.sendpkt(pktstr='Ether(dst="ff:ff:ff:ff:ff:ff")/ARP(pdst="192.168.1.1")')
+ time.sleep(1)
+ self.verify_result("pf", expect_rxpkts="1", expect_queue="0", verify_mac="ff:ff:ff:ff:ff:ff")
+ # destroy rule
+ self.dut.send_expect("flow destroy 0 rule 0", "testpmd> ")
+ self.sendpkt(pktstr='Ether(dst="%s")/IP(src="2.2.2.5",dst="2.2.2.4")/TCP(dport=80,flags="S")/Raw(load="\x50"*8962)' % self.pf_mac)
+ time.sleep(1)
+ self.verify_result("pf", expect_rxpkts="1", expect_queue="0", verify_mac=self.pf_mac)
+ self.dut.send_expect("stop", "testpmd> ")
+
+ self.tester.send_expect("ifconfig %s mtu %s" % (txItf, 1500), "# ")
+ self.tester.send_expect("ifconfig %s mtu %s" % (rxItf, 1500), "# ")
+
def tear_down(self):
"""
Run after each test case.
--
2.32.0
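For reference, the three packets matched by the rules in test_multiple_filters_10GB can be generated from the tester with scapy; a minimal sketch, assuming the tester interface is ens224f0 and the DUT PF MAC is the one seen in the log (both are placeholders for the local setup)::

    from scapy.all import ARP, Dot1Q, Ether, IP, Raw, TCP, UDP, sendp

    IFACE = "ens224f0"            # tester port facing the DUT PF (assumed name)
    PF_MAC = "90:e2:ba:36:99:34"  # DUT PF MAC taken from the log above

    # Rule 0: TCP SYN flag -> queue 1
    syn = Ether(dst=PF_MAC)/IP(src="192.168.0.1", dst="192.168.0.2")/TCP(dport=80, flags="S")/Raw("x" * 20)
    # Rule 1: ethertype 0x0806 (ARP) -> queue 2
    arp = Ether(dst="ff:ff:ff:ff:ff:ff")/ARP(pdst="192.168.1.1")/Raw("x" * 20)
    # Rule 2: 5-tuple UDP 2.2.2.4 -> 2.2.2.5 with sport=dport=1 -> queue 3
    udp = Ether(dst=PF_MAC)/Dot1Q(prio=3)/IP(src="2.2.2.4", dst="2.2.2.5")/UDP(sport=1, dport=1)

    for pkt in (syn, arp, udp):
        sendp(pkt, iface=IFACE, count=1, verbose=False)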
* [dts] [PATCH V1 3/5] test_plans/generic_flow_api_test_plan: move two cases from generic_filter to generic_flow_api
2021-09-28 18:05 [dts] [PATCH V1 1/5] tests/generic_flow_api: move two cases from generic_filter to generic_flow_api Yan Xia
@ 2021-09-28 18:05 ` Yan Xia
2021-09-28 18:05 ` [dts] [PATCH V1 2/5] test_plans/generic_filter_test_plan: " Yan Xia
` (2 subsequent siblings)
3 siblings, 0 replies; 8+ messages in thread
From: Yan Xia @ 2021-09-28 18:05 UTC (permalink / raw)
To: dts; +Cc: Yan Xia
move two cases from generic_filter to generic_flow_api
Signed-off-by: Yan Xia <yanx.xia@intel.com>
---
test_plans/generic_flow_api_test_plan.rst | 112 ++++++++++++++++++++++
1 file changed, 112 insertions(+)
diff --git a/test_plans/generic_flow_api_test_plan.rst b/test_plans/generic_flow_api_test_plan.rst
index d1fa33bd..760f8e82 100644
--- a/test_plans/generic_flow_api_test_plan.rst
+++ b/test_plans/generic_flow_api_test_plan.rst
@@ -2042,3 +2042,115 @@ Test case: create different rule after destroy
pkt2 = Ether()/IP()/UDP(dport=32)/Raw('x' * 20)
verify match pkt2 to queue 2, verify mismatch pkt1 to queue 0.
+
+Test Case: 10GB Multiple filters
+======================================
+
+1. Configure testpmd on the DUT
+
+ 1. set up testpmd on the DUT (this case is only supported by ixgbe and igb NICs)::
+
+ ./testpmd -l 1,2,3,4,5,6,7,8 -n 4 -- -i --disable-rss --rxq=16 --txq=16
+
+ 2. verbose configuration::
+
+ testpmd> set verbose 1
+
+ 3. set the PMD to only receive packets::
+
+ testpmd> set fwd rxonly
+
+ 4. start packet reception::
+
+ testpmd> start
+
+ 5. create rules, enabling the ethertype filter, SYN filter and 5-tuple filter on port 0 at the same
+ time, assigning different filters to different queues on port 0::
+
+ testpmd> flow validate 0 ingress pattern eth / ipv4 / tcp flags spec 0x02 flags mask 0x02 / end actions queue index 1 / end
+ testpmd> flow validate 0 ingress pattern eth type is 0x0806 / end actions queue index 2 / end
+ testpmd> flow validate 0 ingress pattern eth / ipv4 dst is 2.2.2.5 src is 2.2.2.4 proto is 17 / udp dst is 1 src is 1 / end actions queue index 3 / end
+ testpmd> flow create 0 ingress pattern eth / ipv4 / tcp flags spec 0x02 flags mask 0x02 / end actions queue index 1 / end
+ testpmd> flow create 0 ingress pattern eth type is 0x0806 / end actions queue index 2 / end
+ testpmd> flow create 0 ingress pattern eth / ipv4 dst is 2.2.2.5 src is 2.2.2.4 proto is 17 / udp dst is 1 src is 1 / end actions queue index 3 / end
+
+2. Configure the traffic generator to send different packets, such as SYN packets, ARP packets, IP packets and
+packets with (`dst_ip` = 2.2.2.5 `src_ip` = 2.2.2.4 `dst_port` = 1 `src_port` = 1 `protocol` = udp)::
+
+ sendp([Ether(dst="90:e2:ba:36:99:34")/IP(src="192.168.0.1", dst="192.168.0.2")/TCP(dport=80,flags="S")/Raw("x" * 20)],iface="ens224f0",count=1,inter=0,verbose=False)
+ sendp([Ether(dst="ff:ff:ff:ff:ff:ff")/ARP(pdst="192.168.1.1")/Raw("x" * 20)],iface="ens224f0",count=1,inter=0,verbose=False)
+ sendp([Ether(dst="90:e2:ba:36:99:34")/Dot1Q(prio=3)/IP(src="2.2.2.4",dst="2.2.2.5")/UDP(sport=1,dport=1)],iface="ens224f0",count=1,inter=0,verbose=False)
+
+3. Verify that all packets are received (RX-packets incremented) on the assigned
+queue, then remove the 5-tuple filter::
+
+ testpmd> stop
+ testpmd> start
+ testpmd> flow destroy 0 rule 2
+4. Send different packets, such as SYN packets, ARP packets, and packets with
+(`dst_ip` = 2.2.2.5 `src_ip` = 2.2.2.4 `dst_port` = 1 `src_port` = 1
+`protocol` = udp)::
+
+ testpmd> stop
+
+5. Verify that the different packets are received (RX-packets incremented) on the
+assigned queue except for the 5-tuple one, then remove the ethertype filter::
+
+ testpmd> start
+ testpmd> flow destroy 0 rule 1
+
+Send different packets, such as SYN packets, ARP packets, and packets with
+(`dst_ip` = 2.2.2.5 `src_ip` = 2.2.2.4 `dst_port` = 1 `src_port` = 1
+`protocol` = udp)::
+
+ testpmd> stop
+
+Verify that only SYN packets are received (RX-packets incremented) on the
+assigned queue, then remove the SYN filter::
+
+ testpmd> start
+ testpmd> flow destroy 0 rule 0
+
+Configure the traffic generator to send SYN packets::
+
+ testpmd> stop
+
+Verify that the packets are not received (RX-packets does not increase) on
+queue 1.
+
+Test Case: jumbo framesize filter
+===================================
+
+This case is designed for NICs such as niantic, I350, 82576 and 82580. Since
+``testpmd`` can transmit packets with a jumbo frame size, it can also
+receive such packets on an assigned queue. Launch the app ``testpmd`` with the
+following arguments::
+
+ testpmd -l 1,2,3,4,5,6,7,8 -n 4 -- -i --disable-rss --rxq=4 --txq=4 --portmask=0x3 --nb-cores=4 --nb-ports=1 --mbcache=200 --mbuf-size=2048 --max-pkt-len=9600
+ testpmd> set fwd rxonly
+ testpmd> set verbose 1
+ testpmd> start
+
+Enable the SYN filter with a large frame size::
+
+ testpmd> flow validate 0 ingress pattern eth / ipv4 / tcp flags spec 0x02 flags mask 0x02 / end actions queue index 2 / end
+ testpmd> flow create 0 ingress pattern eth / ipv4 / tcp flags spec 0x02 flags mask 0x02 / end actions queue index 2 / end
+
+Configure the traffic generator to send SYN packets::
+
+ sendp([Ether(dst="90:e2:ba:36:99:34")/IP(src="2.2.2.5",dst="2.2.2.4")/TCP(dport=80,flags="S")/Raw(load="P"*8962)],iface="ens224f0",count=1,inter=0,verbose=False)
+ testpmd> stop
+
+Then verify that the packet is received on queue 2. Configure the traffic generator to send ARP packets::
+
+ testpmd> start
+ sendp([Ether(dst="ff:ff:ff:ff:ff:ff")/ARP(pdst="192.168.1.1")],iface="ens224f0",count=1,inter=0,verbose=False)
+
+Then verify that the packet is not received on queue 2. Remove the filter::
+
+ testpmd> flow destroy 0 rule 0
+
+Configure the traffic generator to send SYN packets. Then verify that
+the packet is not received on queue 2::
+
+ testpmd> stop
--
2.32.0
* [dts] [PATCH V1 2/5] test_plans/generic_filter_test_plan: move two cases from generic_filter to generic_flow_api
2021-09-28 18:05 [dts] [PATCH V1 1/5] tests/generic_flow_api: move two cases from generic_filter to generic_flow_api Yan Xia
2021-09-28 18:05 ` [dts] [PATCH V1 3/5] test_plans/generic_flow_api_test_plan: " Yan Xia
@ 2021-09-28 18:05 ` Yan Xia
2021-09-28 18:05 ` [dts] [PATCH V1 4/5] tests/generic_filter: " Yan Xia
2021-09-28 18:05 ` [dts] [PATCH V1 5/5] conf/test_case_checklist: add cases not support NIC Yan Xia
3 siblings, 0 replies; 8+ messages in thread
From: Yan Xia @ 2021-09-28 18:05 UTC (permalink / raw)
To: dts; +Cc: Yan Xia
move two cases from generic_filter to generic_flow_api
Signed-off-by: Yan Xia <yanx.xia@intel.com>
---
test_plans/generic_filter_test_plan.rst | 90 -------------------------
1 file changed, 90 deletions(-)
diff --git a/test_plans/generic_filter_test_plan.rst b/test_plans/generic_filter_test_plan.rst
index fda54e0f..10043629 100644
--- a/test_plans/generic_filter_test_plan.rst
+++ b/test_plans/generic_filter_test_plan.rst
@@ -210,60 +210,6 @@ For instance, enable priority filter(just support niantic)::
testpmd> ethertype_filter 0 add ethertype 0x0806 priority enable 1 queue 2
-Test Case 4: 10GB Multiple filters
-======================================
-
-Enable ethertype filter, SYN filter and 5-tuple Filter on the port 0 at same
-time. Assigning different filters to different queues on port 0::
-
- testpmd> syn_filter 0 add priority high queue 1
- testpmd> ethertype_filter 0 add ethertype 0x0806 priority disable 0 queue 3
- testpmd> 5tuple_filter 0 add dst_ip 2.2.2.5 src_ip 2.2.2.4 dst_port 1 src_port 1 protocol tcp mask 0x1f priority 3 queue 3
- testpmd> start
-
-Configure the traffic generator to send different packets. Such as,SYN
-packets, ARP packets, IP packets and packets with(`dst_ip` = 2.2.2.5 `src_ip`
-= 2.2.2.4 `dst_port` = 1 `src_port` = 1 `protocol` = tcp)::
-
- testpmd> stop
-
-Verify that different packets are received (RX-packets incremented)on the
-assigned queue. Remove ethertype filter::
-
- testpmd> ethertype_filter 0 del ethertype 0x0806 priority disable 0 queue 3
- testpmd>start
-
-Send SYN packets, ARP packets and packets with (`dst_ip` = 2.2.2.5 `src_ip` =
-2.2.2.4 `dst_port` = 1 `src_port` = 1 `protocol` = tcp)::
-
- testpmd> stop
-
-Verify that all packets are received (RX-packets incremented)on the assigned
-queue except arp packets, remove 5-tuple filter::
-
- testpmd>5tuple_filter 0 del dst_ip 2.2.2.5 src_ip 2.2.2.4 dst_port 1 src_port 1 protocol tcp mask 0x1f priority 3 queue 3
- testpmd> start
-
-Send different packets such as,SYN packets, ARP packets, packets with
-(`dst_ip` = 2.2.2.5 `src_ip` = 2.2.2.4 `dst_port` = 1 `src_port` = 1
-`protocol` = tcp)::
-
- testpmd>stop
-
-Verify that only SYN packets are received (RX-packets incremented)on the
-assigned queue set off SYN filter::
-
- testpmd>syn_filter 0 del priority high queue 1
- testpmd>start
-
-Configure the traffic generator to send 5 SYN packets::
-
- testpmd>stop
-
-Verify that the packets are not received (RX-packets do not increased)on the
-queue 1.
-
-
Test Case 5: 2-tuple filter
===============================
@@ -451,42 +397,6 @@ Verify that the packet are not received on the queue 1 and queue 3::
testpmd> quit
-Test Case 9: jumbo framesize filter
-===================================
-
-This case is designed for NIC (niantic,I350, 82576 and 82580). Since
-``Testpmd`` could transmits packets with jumbo frame size , it also could
-transmit above packets on assigned queue. Launch the app ``testpmd`` with the
-following arguments::
-
- testpmd -c ffff -n 4 -- -i --disable-rss --rxq=4 --txq=4 --nb-cores=8 --nb-ports=2 --rxd=1024 --txd=1024 --burst=144 --txpt=32 --txht=8 --txwt=0 --txfreet=0 --rxfreet=64 --mbcache=200 --mbuf-size=2048 --max-pkt-len=9600
-
- testpmd>set stat_qmap rx 0 0 0
- testpmd>set stat_qmap rx 0 1 1
- testpmd>set stat_qmap rx 0 2 2
- testpmd>vlan set strip off 0
- testpmd>vlan set strip off 1
- testpmd>vlan set filter off 0
- testpmd>vlan set filter off 1
-
-Enable the syn filters with large size::
-
- testpmd> syn_filter 0 add priority high queue 1
- testpmd> start
-
-Configure the traffic generator to send syn packets(framesize=2000)::
-
- testpmd> stop
-
-Then Verify that the packet are received on the queue 1. Remove the filter::
-
- testpmd> syn_filter 0 del priority high queue 1
-
-Configure the traffic generator to send syn packets and s. Then Verify that
-the packet are not received on the queue 1::
-
- testpmd> quit
-
Test Case 10: 128 queues
========================
--
2.32.0
* [dts] [PATCH V1 4/5] tests/generic_filter: move two cases from generic_filter to generic_flow_api
2021-09-28 18:05 [dts] [PATCH V1 1/5] tests/generic_flow_api: move two cases from generic_filter to generic_flow_api Yan Xia
2021-09-28 18:05 ` [dts] [PATCH V1 3/5] test_plans/generic_flow_api_test_plan: " Yan Xia
2021-09-28 18:05 ` [dts] [PATCH V1 2/5] test_plans/generic_filter_test_plan: " Yan Xia
@ 2021-09-28 18:05 ` Yan Xia
2021-09-29 1:50 ` Peng, Yuan
2021-09-28 18:05 ` [dts] [PATCH V1 5/5] conf/test_case_checklist: add cases not support NIC Yan Xia
3 siblings, 1 reply; 8+ messages in thread
From: Yan Xia @ 2021-09-28 18:05 UTC (permalink / raw)
To: dts; +Cc: Yan Xia
move two cases from generic_filter to generic_flow_api
Signed-off-by: Yan Xia <yanx.xia@intel.com>
---
tests/TestSuite_generic_filter.py | 157 ------------------------------
1 file changed, 157 deletions(-)
diff --git a/tests/TestSuite_generic_filter.py b/tests/TestSuite_generic_filter.py
index 22f85bd0..173e4f72 100644
--- a/tests/TestSuite_generic_filter.py
+++ b/tests/TestSuite_generic_filter.py
@@ -416,89 +416,6 @@ class TestGeneric_filter(TestCase):
self.verify_result(out, tx_pkts="1", expect_queue="0")
- def test_multiple_filters_10GB(self):
- if self.nic in ["niantic", "sagepond", "sageville", "fortville_eagle", "fortville_25g", "fortville_spirit", "columbiaville_25g", "columbiaville_100g"]:
- self.pmdout.start_testpmd(
- "%s" % self.cores, "--disable-rss --rxq=4 --txq=4 --portmask=%s --nb-cores=4 --nb-ports=1" % portMask)
- self.port_config()
- self.dut.send_expect(
- "syn_filter %s add priority high queue 1" % valports[0], "testpmd> ")
- self.dut.send_expect(
- "ethertype_filter %s add mac_ignr 00:00:00:00:00:00 ethertype 0x0806 fwd queue 2" % valports[0], "testpmd> ")
- self.dut.send_expect(
- "5tuple_filter %s add dst_ip 2.2.2.5 src_ip 2.2.2.4 dst_port 1 src_port 1 protocol 0x06 mask 0x1f tcp_flags 0x0 priority 3 queue 3 " % (valports[0]), "testpmd> ")
- self.dut.send_expect("start", "testpmd> ")
-
- self.filter_send_packet("syn")
- out = self.dut.send_expect("stop", "testpmd> ")
- self.dut.send_expect("clear port stats all", "testpmd> ")
- self.verify_result(out, tx_pkts="1", expect_queue="1")
-
- self.ethertype_filter = "on"
- self.dut.send_expect("start", "testpmd> ")
- self.filter_send_packet("arp")
- out = self.dut.send_expect("stop", "testpmd> ")
- self.dut.send_expect("clear port stats all", "testpmd> ")
- self.verify_result(out, tx_pkts="1", expect_queue="2")
-
- self.dut.send_expect("start", "testpmd> ")
- self.filter_send_packet("arp_prio")
- out = self.dut.send_expect("stop", "testpmd> ")
- self.dut.send_expect("clear port stats all", "testpmd> ")
- self.verify_result(out, tx_pkts="1", expect_queue="2")
-
- self.dut.send_expect("start", "testpmd> ")
- self.filter_send_packet("fivetuple")
- out = self.dut.send_expect("stop", "testpmd> ")
- self.dut.send_expect("clear port stats all", "testpmd> ")
- self.verify_result(out, tx_pkts="1", expect_queue="3")
-
- self.dut.send_expect(
- "5tuple_filter %s del dst_ip 2.2.2.5 src_ip 2.2.2.4 dst_port 1 src_port 1 protocol 0x06 mask 0x1f tcp_flags 0x0 priority 3 queue 3 " % (valports[0]), "testpmd> ")
- self.dut.send_expect("start", "testpmd> ")
- self.filter_send_packet("syn")
- out = self.dut.send_expect("stop", "testpmd> ")
- self.dut.send_expect("clear port stats all", "testpmd> ")
- self.verify_result(out, tx_pkts="1", expect_queue="1")
-
- self.dut.send_expect("start", "testpmd> ")
- self.filter_send_packet("arp")
- out = self.dut.send_expect("stop", "testpmd> ")
- self.dut.send_expect("clear port stats all", "testpmd> ")
- self.verify_result(out, tx_pkts="1", expect_queue="2")
-
- self.dut.send_expect("start", "testpmd> ")
- self.filter_send_packet("fivetuple")
- out = self.dut.send_expect("stop", "testpmd> ")
- self.dut.send_expect("clear port stats all", "testpmd> ")
- self.verify_result(out, tx_pkts="1", expect_queue="0")
- self.dut.send_expect(
- "ethertype_filter %s del mac_ignr 00:00:00:00:00:00 ethertype 0x0806 fwd queue 2" % valports[0], "testpmd> ")
-
- self.dut.send_expect("start", "testpmd> ")
- self.filter_send_packet("arp")
- out = self.dut.send_expect("stop", "testpmd> ")
- self.dut.send_expect("clear port stats all", "testpmd> ")
- self.verify_result(out, tx_pkts="1", expect_queue="0")
-
- self.dut.send_expect("start", "testpmd> ")
- self.filter_send_packet("syn")
- out = self.dut.send_expect("stop", "testpmd> ")
- self.dut.send_expect("clear port stats all", "testpmd> ")
- self.verify_result(out, tx_pkts="1", expect_queue="1")
-
- self.dut.send_expect(
- "syn_filter %s del priority high queue 1" % valports[0], "testpmd> ")
-
- self.dut.send_expect("start", "testpmd> ")
- self.filter_send_packet("syn")
- out = self.dut.send_expect("stop", "testpmd> ")
- self.dut.send_expect("clear port stats all", "testpmd> ")
- self.verify_result(out, tx_pkts="1", expect_queue="0")
-
- else:
- self.verify(False, "%s nic not support this test" % self.nic)
-
def test_twotuple_filter(self):
if self.nic in ["powerville", "bartonhills", "cavium_a063", "sagepond", "foxville", "sageville", "fortville_eagle", "fortville_25g", "fortville_spirit", "columbiaville_25g", "columbiaville_100g"]:
@@ -631,80 +548,6 @@ class TestGeneric_filter(TestCase):
self.verify_result(out, tx_pkts="1", expect_queue="0")
else:
self.verify(False, "%s nic not support this test" % self.nic)
- def test_jumbo_frame_size(self):
-
- self.verify(self.nic not in ["fortville_spirit_single", "fortpark_TLV","fortpark_BASE-T", "carlsville"], "%s nic not support this test" % self.nic)
- if (self.nic in ["cavium_a063", "cavium_a064", "foxville"]):
- self.pmdout.start_testpmd(
- "%s" % self.cores, "--disable-rss --rxq=4 --txq=4 --portmask=%s --nb-cores=4 --nb-ports=1 --mbcache=200 --mbuf-size=2048 --max-pkt-len=9200" % portMask)
- else:
- self.pmdout.start_testpmd(
- "%s" % self.cores, "--disable-rss --rxq=4 --txq=4 --portmask=%s --nb-cores=4 --nb-ports=1 --mbcache=200 --mbuf-size=2048 --max-pkt-len=9600" % portMask)
- port = self.tester.get_local_port(valports[0])
- txItf = self.tester.get_interface(port)
-
- port = self.tester.get_local_port(valports[1])
- rxItf = self.tester.get_interface(port)
- self.tester.send_expect("ifconfig %s mtu %s" % (txItf, 9200), "# ")
- self.tester.send_expect("ifconfig %s mtu %s" % (rxItf, 9200), "# ")
-
- self.dut.send_expect(
- "set stat_qmap rx %s 0 0" % valports[0], "testpmd> ")
- self.dut.send_expect(
- "set stat_qmap rx %s 0 0" % valports[1], "testpmd> ")
- self.dut.send_expect(
- "set stat_qmap rx %s 1 1" % valports[0], "testpmd> ")
- self.dut.send_expect(
- "set stat_qmap rx %s 1 1" % valports[1], "testpmd> ")
- self.dut.send_expect(
- "set stat_qmap rx %s 2 2" % valports[0], "testpmd> ")
- self.dut.send_expect(
- "set stat_qmap rx %s 2 2" % valports[1], "testpmd> ")
- self.dut.send_expect(
- "set stat_qmap rx %s 3 3" % valports[0], "testpmd> ")
- self.dut.send_expect(
- "set stat_qmap rx %s 3 3" % valports[1], "testpmd> ")
- self.dut.send_expect(
- "vlan set strip off %s" % valports[0], "testpmd> ")
- self.dut.send_expect(
- "vlan set strip off %s" % valports[1], "testpmd> ")
- self.dut.send_expect(
- "vlan set filter off %s" % valports[0], "testpmd> ")
- self.dut.send_expect(
- "vlan set filter off %s" % valports[1], "testpmd> ")
- self.dut.send_expect(
- "syn_filter %s add priority high queue 2" % valports[0], "testpmd> ")
-
- self.dut.send_expect("start", "testpmd> ", 120)
-
- self.filter_send_packet("jumbo")
- time.sleep(1)
-
- out = self.dut.send_expect("stop", "testpmd> ")
- self.dut.send_expect("clear port stats all", "testpmd>")
-
- self.verify_result(out, tx_pkts="1", expect_queue="2")
-
- self.dut.send_expect("start", "testpmd> ")
-
- self.filter_send_packet("arp")
- time.sleep(1)
- out = self.dut.send_expect("stop", "testpmd> ")
- self.dut.send_expect("clear port stats all", "testpmd>")
-
- self.verify_result(out, tx_pkts="1", expect_queue="0")
-
- self.dut.send_expect(
- "syn_filter %s del priority high queue 2" % valports[0], "testpmd> ")
- self.dut.send_expect("start", "testpmd> ", 120)
- self.filter_send_packet("jumbo")
- time.sleep(1)
- out = self.dut.send_expect("stop", "testpmd> ")
- self.dut.send_expect("clear port stats all", "testpmd>")
-
- self.verify_result(out, tx_pkts="1", expect_queue="0")
- self.tester.send_expect("ifconfig %s mtu %s" % (txItf, 1500), "# ")
- self.tester.send_expect("ifconfig %s mtu %s" % (rxItf, 1500), "# ")
def test_128_queues(self):
# testpmd can't support assign queue to received packet, so can't test
--
2.32.0
* [dts] [PATCH V1 5/5] conf/test_case_checklist: add cases not support NIC
2021-09-28 18:05 [dts] [PATCH V1 1/5] tests/generic_flow_api: move two cases from generic_filter to generic_flow_api Yan Xia
` (2 preceding siblings ...)
2021-09-28 18:05 ` [dts] [PATCH V1 4/5] tests/generic_filter: " Yan Xia
@ 2021-09-28 18:05 ` Yan Xia
2021-09-28 10:10 ` Chen, LingliX
3 siblings, 1 reply; 8+ messages in thread
From: Yan Xia @ 2021-09-28 18:05 UTC (permalink / raw)
To: dts; +Cc: Yan Xia
add cases not support NIC
Signed-off-by: Yan Xia <yanx.xia@intel.com>
---
conf/test_case_checklist.json | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/conf/test_case_checklist.json b/conf/test_case_checklist.json
index 8cd47dd1..b979d04b 100644
--- a/conf/test_case_checklist.json
+++ b/conf/test_case_checklist.json
@@ -114,8 +114,16 @@
"ALL"
],
"NIC": [
+ "fortville_25g",
+ "fortville_eagle",
+ "fortville_spirit",
"fortville_spirit_single",
"springville",
+ "ironpond",
+ "springfountain",
+ "twinpond",
+ "cavium_a034",
+ "cavium_a011",
"fortpark_TLV",
"fortpark_BASE-T",
"carlsville"
@@ -136,6 +144,9 @@
"kawela_4",
"bartonhills",
"powerville",
+ "fortville_25g",
+ "fortville_eagle",
+ "fortville_spirit",
"fortville_spirit_single",
"springville",
"ironpond",
@@ -146,7 +157,9 @@
"fortpark_TLV",
"fortpark_BASE-T",
"carlsville",
- "foxville"
+ "columbiaville_25g",
+ "columbiaville_100g",
+ "foxville"
],
"Target": [
"ALL"
--
2.32.0
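The checklist entries above list the NICs and targets on which a case is not run; a minimal sketch of how such a file could be consulted before executing a case, assuming it maps case names to records with "NIC" and "Target" lists (the actual DTS loader may differ)::

    import json

    def case_unsupported(checklist_path, case_name, nic, target):
        """Return True if the checklist marks case_name as unsupported on nic/target."""
        with open(checklist_path) as f:
            checklist = json.load(f)
        for record in checklist.get(case_name, []):
            nics = record.get("NIC", [])
            targets = record.get("Target", [])
            if ("ALL" in nics or nic in nics) and ("ALL" in targets or target in targets):
                return True
        return False

    # e.g. case_unsupported("conf/test_case_checklist.json",
    #                       "jumbo_frame_size", "fortville_25g",
    #                       "x86_64-native-linuxapp-gcc")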
* Re: [dts] [PATCH V1 4/5] tests/generic_filter: move two cases from generic_filter to generic_flow_api
2021-09-28 18:05 ` [dts] [PATCH V1 4/5] tests/generic_filter: " Yan Xia
@ 2021-09-29 1:50 ` Peng, Yuan
0 siblings, 0 replies; 8+ messages in thread
From: Peng, Yuan @ 2021-09-29 1:50 UTC (permalink / raw)
To: Xia, YanX, dts; +Cc: Xia, YanX
Acked-by: Peng, Yuan <yuan.peng@intel.com>
-----Original Message-----
From: dts <dts-bounces@dpdk.org> On Behalf Of Yan Xia
Sent: Wednesday, September 29, 2021 2:06 AM
To: dts@dpdk.org
Cc: Xia, YanX <yanx.xia@intel.com>
Subject: [dts] [PATCH V1 4/5] tests/generic_filter: move two cases from generic_filter to generic_flow_api
move two cases from generic_filter to generic_flow_api
Signed-off-by: Yan Xia <yanx.xia@intel.com>
* Re: [dts] [PATCH V1 5/5] conf/test_case_checklist: add cases not support NIC
2021-09-28 10:10 ` Chen, LingliX
@ 2021-10-09 7:54 ` Tu, Lijuan
0 siblings, 0 replies; 8+ messages in thread
From: Tu, Lijuan @ 2021-10-09 7:54 UTC (permalink / raw)
To: Chen, LingliX, dts
> -----Original Message-----
> From: dts <dts-bounces@dpdk.org> On Behalf Of Chen, LingliX
> Sent: September 28, 2021 18:10
> To: dts@dpdk.org
> Subject: Re: [dts] [PATCH V1 5/5] conf/test_case_checklist: add cases not
> support NIC
>
>
> > -----Original Message-----
> > From: dts <dts-bounces@dpdk.org> On Behalf Of Yan Xia
> > Sent: Wednesday, September 29, 2021 2:06 AM
> > To: dts@dpdk.org
> > Cc: Xia, YanX <yanx.xia@intel.com>
> > Subject: [dts] [PATCH V1 5/5] conf/test_case_checklist: add cases not
> > support NIC
> >
> > add cases not support NIC
> >
> > Signed-off-by: Yan Xia <yanx.xia@intel.com>
>
> Tested-by: Yan Xia <yanx.xia@intel.com>
Applied, thanks