* Re: [dts] [PATCH V1] tests/generic_flow_api: add two test cases
  2021-07-30 15:10 [dts] [PATCH V1] tests/generic_flow_api: add two test cases Lingli Chen
@ 2021-07-30  6:44 ` Chen, LingliX
  2021-08-02  9:54 ` Lin, Xueqin
  2021-08-10  6:30 ` Tu, Lijuan
  2 siblings, 0 replies; 4+ messages in thread
From: Chen, LingliX @ 2021-07-30  6:44 UTC (permalink / raw)
  To: dts

[-- Attachment #1: Type: text/plain, Size: 310 bytes --]


> -----Original Message-----
> From: Chen, LingliX <linglix.chen@intel.com>
> Sent: Friday, July 30, 2021 11:11 PM
> To: dts@dpdk.org
> Cc: Chen, LingliX <linglix.chen@intel.com>
> Subject: [dts][PATCH V1] tests/generic_flow_api: add two test cases
> 
Tested-by: Lingli Chen <linglix.chen@intel.com>

[-- Attachment #2: TestGeneric_flow_api.log --]
[-- Type: application/octet-stream, Size: 38904 bytes --]

 

08/07/2021 01:16:56                            dts: 
TEST SUITE : TestGeneric_flow_api
08/07/2021 01:16:56                            dts: NIC :        fortville_25g
08/07/2021 01:16:56             dut.10.240.183.197: 
08/07/2021 01:16:56                         tester: 
08/07/2021 01:17:00           TestGeneric_flow_api: Test Case test_create_different_rule_after_destroy Begin
08/07/2021 01:17:00             dut.10.240.183.197: 
08/07/2021 01:17:00                         tester: 
08/07/2021 01:17:00             dut.10.240.183.197: kill_all: called by dut and has no prefix list.
08/07/2021 01:17:01             dut.10.240.183.197: x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1,2,3,4,5,6,7,8 -n 4 -a 0000:86:00.0 -a 0000:86:00.1 --file-prefix=dpdk_15570_20210708011634   -- -i --disable-rss --rxq=16 --txq=16
08/07/2021 01:17:03             dut.10.240.183.197: EAL: Detected 72 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/dpdk_15570_20210708011634/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: 1024 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_i40e (8086:158b) device: 0000:86:00.0 (socket 1)
EAL: Probe PCI driver: net_i40e (8086:158b) device: 0000:86:00.1 (socket 1)
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=203456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
Port 0: 3C:FD:FE:B8:97:64
Configuring Port 1 (socket 1)
Port 1: 3C:FD:FE:B8:97:65
Checking link statuses...
Done
08/07/2021 01:17:13             dut.10.240.183.197: set fwd rxonly
08/07/2021 01:17:13             dut.10.240.183.197: 

Set rxonly packet forwarding mode
08/07/2021 01:17:13             dut.10.240.183.197: set verbose 1
08/07/2021 01:17:13             dut.10.240.183.197: 

Change verbose level from 0 to 1
08/07/2021 01:17:13             dut.10.240.183.197: start
08/07/2021 01:17:13             dut.10.240.183.197: 

rxonly packet forwarding - ports=2 - cores=1 - streams=32 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 32 streams:
  RX P=0/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=1 (socket 1) -> TX P=1/Q=1 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=1 (socket 1) -> TX P=0/Q=1 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=2 (socket 1) -> TX P=1/Q=2 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=2 (socket 1) -> TX P=0/Q=2 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=3 (socket 1) -> TX P=1/Q=3 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=3 (socket 1) -> TX P=0/Q=3 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=4 (socket 1) -> TX P=1/Q=4 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=4 (socket 1) -> TX P=0/Q=4 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=5 (socket 1) -> TX P=1/Q=5 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=5 (socket 1) -> TX P=0/Q=5 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=6 (socket 1) -> TX P=1/Q=6 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=6 (socket 1) -> TX P=0/Q=6 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=7 (socket 1) -> TX P=1/Q=7 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=7 (socket 1) -> TX P=0/Q=7 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=8 (socket 1) -> TX P=1/Q=8 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=8 (socket 1) -> TX P=0/Q=8 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=9 (socket 1) -> TX P=1/Q=9 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=9 (socket 1) -> TX P=0/Q=9 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=10 (socket 1) -> TX P=1/Q=10 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=10 (socket 1) -> TX P=0/Q=10 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=11 (socket 1) -> TX P=1/Q=11 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=11 (socket 1) -> TX P=0/Q=11 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=12 (socket 1) -> TX P=1/Q=12 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=12 (socket 1) -> TX P=0/Q=12 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=13 (socket 1) -> TX P=1/Q=13 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=13 (socket 1) -> TX P=0/Q=13 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=14 (socket 1) -> TX P=1/Q=14 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=14 (socket 1) -> TX P=0/Q=14 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=15 (socket 1) -> TX P=1/Q=15 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=15 (socket 1) -> TX P=0/Q=15 (socket 1) peer=02:00:00:00:00:00

  rxonly packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  port 0: RX queue number: 16 Tx queue number: 16
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
  port 1: RX queue number: 16 Tx queue number: 16
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
08/07/2021 01:17:15             dut.10.240.183.197: flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions queue index 2 / end
08/07/2021 01:17:15             dut.10.240.183.197: 
08/07/2021 01:17:15             dut.10.240.183.197: flow destroy 0 rule 0
08/07/2021 01:17:15             dut.10.240.183.197: 

Flow rule #0 destroyed
08/07/2021 01:17:15             dut.10.240.183.197: flow create 0 ingress pattern eth / ipv4 / udp dst is 32 / end actions queue index 2 / end
08/07/2021 01:17:15             dut.10.240.183.197: 
08/07/2021 01:17:17             dut.10.240.183.197: 
testpmd> port 0/queue 2: received 1 packets
  src=00:00:00:00:00:00 - dst=3C:FD:FE:B8:97:64 - type=0x0800 - length=62 - nb_segs=1 - FDIR matched hash=0x0 ID=0x0  - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP  - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x2
  ol_flags: PKT_RX_FDIR PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN 

08/07/2021 01:17:17             dut.10.240.183.197: stop
08/07/2021 01:17:17             dut.10.240.183.197: 

Telling cores to ...
Waiting for lcores to finish...

  ------- Forward Stats for RX Port= 0/Queue= 2 -> TX Port= 1/Queue= 2 -------
  RX-packets: 1              TX-packets: 0              TX-dropped: 0             

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 1              RX-dropped: 0             RX-total: 1
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 1              RX-dropped: 0             RX-total: 1
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.
08/07/2021 01:17:17           TestGeneric_flow_api: pf: 
testpmd> port 0/queue 2: received 1 packets
  src=00:00:00:00:00:00 - dst=3C:FD:FE:B8:97:64 - type=0x0800 - length=62 - nb_segs=1 - FDIR matched hash=0x0 ID=0x0  - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP  - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x2
  ol_flags: PKT_RX_FDIR PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN 

08/07/2021 01:17:19             dut.10.240.183.197: start
08/07/2021 01:17:19             dut.10.240.183.197: 

rxonly packet forwarding - ports=2 - cores=1 - streams=32 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 32 streams:
  RX P=0/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=1 (socket 1) -> TX P=1/Q=1 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=1 (socket 1) -> TX P=0/Q=1 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=2 (socket 1) -> TX P=1/Q=2 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=2 (socket 1) -> TX P=0/Q=2 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=3 (socket 1) -> TX P=1/Q=3 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=3 (socket 1) -> TX P=0/Q=3 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=4 (socket 1) -> TX P=1/Q=4 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=4 (socket 1) -> TX P=0/Q=4 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=5 (socket 1) -> TX P=1/Q=5 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=5 (socket 1) -> TX P=0/Q=5 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=6 (socket 1) -> TX P=1/Q=6 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=6 (socket 1) -> TX P=0/Q=6 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=7 (socket 1) -> TX P=1/Q=7 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=7 (socket 1) -> TX P=0/Q=7 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=8 (socket 1) -> TX P=1/Q=8 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=8 (socket 1) -> TX P=0/Q=8 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=9 (socket 1) -> TX P=1/Q=9 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=9 (socket 1) -> TX P=0/Q=9 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=10 (socket 1) -> TX P=1/Q=10 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=10 (socket 1) -> TX P=0/Q=10 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=11 (socket 1) -> TX P=1/Q=11 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=11 (socket 1) -> TX P=0/Q=11 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=12 (socket 1) -> TX P=1/Q=12 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=12 (socket 1) -> TX P=0/Q=12 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=13 (socket 1) -> TX P=1/Q=13 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=13 (socket 1) -> TX P=0/Q=13 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=14 (socket 1) -> TX P=1/Q=14 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=14 (socket 1) -> TX P=0/Q=14 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=15 (socket 1) -> TX P=1/Q=15 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=15 (socket 1) -> TX P=0/Q=15 (socket 1) peer=02:00:00:00:00:00

  rxonly packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  port 0: RX queue number: 16 Tx queue number: 16
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
  port 1: RX queue number: 16 Tx queue number: 16
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
08/07/2021 01:17:21             dut.10.240.183.197:  port 0/queue 0: received 1 packets
  src=00:00:00:00:00:00 - dst=3C:FD:FE:B8:97:64 - type=0x0800 - length=62 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP  - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x0
  ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN 

08/07/2021 01:17:21             dut.10.240.183.197: stop
08/07/2021 01:17:22             dut.10.240.183.197: 

Telling cores to ...
Waiting for lcores to finish...

  ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 1/Queue= 0 -------
  RX-packets: 1              TX-packets: 0              TX-dropped: 0             

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 1              RX-dropped: 0             RX-total: 1
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 1              RX-dropped: 0             RX-total: 1
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.
08/07/2021 01:17:22           TestGeneric_flow_api: pf:  port 0/queue 0: received 1 packets
  src=00:00:00:00:00:00 - dst=3C:FD:FE:B8:97:64 - type=0x0800 - length=62 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP  - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x0
  ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN 

08/07/2021 01:17:24             dut.10.240.183.197: start
08/07/2021 01:17:24             dut.10.240.183.197: 

rxonly packet forwarding - ports=2 - cores=1 - streams=32 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 32 streams:
  RX P=0/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=1 (socket 1) -> TX P=1/Q=1 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=1 (socket 1) -> TX P=0/Q=1 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=2 (socket 1) -> TX P=1/Q=2 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=2 (socket 1) -> TX P=0/Q=2 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=3 (socket 1) -> TX P=1/Q=3 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=3 (socket 1) -> TX P=0/Q=3 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=4 (socket 1) -> TX P=1/Q=4 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=4 (socket 1) -> TX P=0/Q=4 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=5 (socket 1) -> TX P=1/Q=5 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=5 (socket 1) -> TX P=0/Q=5 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=6 (socket 1) -> TX P=1/Q=6 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=6 (socket 1) -> TX P=0/Q=6 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=7 (socket 1) -> TX P=1/Q=7 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=7 (socket 1) -> TX P=0/Q=7 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=8 (socket 1) -> TX P=1/Q=8 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=8 (socket 1) -> TX P=0/Q=8 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=9 (socket 1) -> TX P=1/Q=9 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=9 (socket 1) -> TX P=0/Q=9 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=10 (socket 1) -> TX P=1/Q=10 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=10 (socket 1) -> TX P=0/Q=10 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=11 (socket 1) -> TX P=1/Q=11 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=11 (socket 1) -> TX P=0/Q=11 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=12 (socket 1) -> TX P=1/Q=12 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=12 (socket 1) -> TX P=0/Q=12 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=13 (socket 1) -> TX P=1/Q=13 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=13 (socket 1) -> TX P=0/Q=13 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=14 (socket 1) -> TX P=1/Q=14 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=14 (socket 1) -> TX P=0/Q=14 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=15 (socket 1) -> TX P=1/Q=15 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=15 (socket 1) -> TX P=0/Q=15 (socket 1) peer=02:00:00:00:00:00

  rxonly packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  port 0: RX queue number: 16 Tx queue number: 16
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
  port 1: RX queue number: 16 Tx queue number: 16
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
08/07/2021 01:17:24             dut.10.240.183.197: quit
08/07/2021 01:17:25             dut.10.240.183.197: 

Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

Stopping port 0...
Stopping ports...
Done

Stopping port 1...
Stopping ports...

Port 0: link state change event
Done

Shutting down port 0...
Closing ports...
Port 0 is closed
Done

Shutting down port 1...
Closing ports...
Port 1 is closed
Done

Bye...
08/07/2021 01:17:27           TestGeneric_flow_api: Test Case test_create_different_rule_after_destroy Result PASSED:
08/07/2021 01:17:27             dut.10.240.183.197: quit
08/07/2021 01:17:27             dut.10.240.183.197: 
Command '' not found, did you mean:

  command 'luit' from deb x11-utils (7.7+5)
  command 'qgit' from deb qgit (2.9-1build1)
  command 'quiz' from deb bsdgames (2.17-28build1)
  command 'quilt' from deb quilt (0.65-3)

Try: apt install <deb name>

08/07/2021 01:17:29             dut.10.240.183.197: kill_all: called by dut and prefix list has value.
08/07/2021 01:17:30           TestGeneric_flow_api: Test Case test_create_same_rule_after_destroy Begin
08/07/2021 01:17:30             dut.10.240.183.197:  
08/07/2021 01:17:30                         tester: 
08/07/2021 01:17:30             dut.10.240.183.197: kill_all: called by dut and has no prefix list.
08/07/2021 01:17:31             dut.10.240.183.197: x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1,2,3,4,5,6,7,8 -n 4 -a 0000:86:00.0 -a 0000:86:00.1 --file-prefix=dpdk_15570_20210708011634   -- -i --disable-rss --rxq=16 --txq=16
08/07/2021 01:17:33             dut.10.240.183.197: EAL: Detected 72 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/dpdk_15570_20210708011634/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: 1024 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_i40e (8086:158b) device: 0000:86:00.0 (socket 1)
EAL: Probe PCI driver: net_i40e (8086:158b) device: 0000:86:00.1 (socket 1)
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=203456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
Port 0: 3C:FD:FE:B8:97:64
Configuring Port 1 (socket 1)
Port 1: 3C:FD:FE:B8:97:65
Checking link statuses...
Done
08/07/2021 01:17:43             dut.10.240.183.197: set fwd rxonly
08/07/2021 01:17:43             dut.10.240.183.197: 

Set rxonly packet forwarding mode
08/07/2021 01:17:43             dut.10.240.183.197: set verbose 1
08/07/2021 01:17:43             dut.10.240.183.197: 

Change verbose level from 0 to 1
08/07/2021 01:17:43             dut.10.240.183.197: start
08/07/2021 01:17:43             dut.10.240.183.197: 

rxonly packet forwarding - ports=2 - cores=1 - streams=32 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 32 streams:
  RX P=0/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=1 (socket 1) -> TX P=1/Q=1 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=1 (socket 1) -> TX P=0/Q=1 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=2 (socket 1) -> TX P=1/Q=2 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=2 (socket 1) -> TX P=0/Q=2 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=3 (socket 1) -> TX P=1/Q=3 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=3 (socket 1) -> TX P=0/Q=3 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=4 (socket 1) -> TX P=1/Q=4 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=4 (socket 1) -> TX P=0/Q=4 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=5 (socket 1) -> TX P=1/Q=5 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=5 (socket 1) -> TX P=0/Q=5 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=6 (socket 1) -> TX P=1/Q=6 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=6 (socket 1) -> TX P=0/Q=6 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=7 (socket 1) -> TX P=1/Q=7 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=7 (socket 1) -> TX P=0/Q=7 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=8 (socket 1) -> TX P=1/Q=8 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=8 (socket 1) -> TX P=0/Q=8 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=9 (socket 1) -> TX P=1/Q=9 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=9 (socket 1) -> TX P=0/Q=9 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=10 (socket 1) -> TX P=1/Q=10 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=10 (socket 1) -> TX P=0/Q=10 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=11 (socket 1) -> TX P=1/Q=11 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=11 (socket 1) -> TX P=0/Q=11 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=12 (socket 1) -> TX P=1/Q=12 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=12 (socket 1) -> TX P=0/Q=12 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=13 (socket 1) -> TX P=1/Q=13 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=13 (socket 1) -> TX P=0/Q=13 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=14 (socket 1) -> TX P=1/Q=14 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=14 (socket 1) -> TX P=0/Q=14 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=15 (socket 1) -> TX P=1/Q=15 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=15 (socket 1) -> TX P=0/Q=15 (socket 1) peer=02:00:00:00:00:00

  rxonly packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  port 0: RX queue number: 16 Tx queue number: 16
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
  port 1: RX queue number: 16 Tx queue number: 16
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
08/07/2021 01:17:45             dut.10.240.183.197: flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions queue index 2 / end
08/07/2021 01:17:45             dut.10.240.183.197: 
08/07/2021 01:17:45             dut.10.240.183.197: flow destroy 0 rule 0
08/07/2021 01:17:45             dut.10.240.183.197: 

Flow rule #0 destroyed
08/07/2021 01:17:45             dut.10.240.183.197: flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions queue index 2 / end
08/07/2021 01:17:45             dut.10.240.183.197: 
08/07/2021 01:18:27             dut.10.240.183.197: 
testpmd> port 0/queue 2: received 1 packets
  src=00:00:00:00:00:00 - dst=3C:FD:FE:B8:97:64 - type=0x0800 - length=62 - nb_segs=1 - FDIR matched hash=0x0 ID=0x0  - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP  - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x2
  ol_flags: PKT_RX_FDIR PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN 

08/07/2021 01:18:27             dut.10.240.183.197: stop
08/07/2021 01:18:27             dut.10.240.183.197: 

Telling cores to ...
Waiting for lcores to finish...

  ------- Forward Stats for RX Port= 0/Queue= 2 -> TX Port= 1/Queue= 2 -------
  RX-packets: 1              TX-packets: 0              TX-dropped: 0             

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 1              RX-dropped: 0             RX-total: 1
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 1              RX-dropped: 0             RX-total: 1
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.
08/07/2021 01:18:27           TestGeneric_flow_api: pf: 
testpmd> port 0/queue 2: received 1 packets
  src=00:00:00:00:00:00 - dst=3C:FD:FE:B8:97:64 - type=0x0800 - length=62 - nb_segs=1 - FDIR matched hash=0x0 ID=0x0  - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP  - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x2
  ol_flags: PKT_RX_FDIR PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN 

08/07/2021 01:18:29             dut.10.240.183.197: start
08/07/2021 01:18:29             dut.10.240.183.197: 

rxonly packet forwarding - ports=2 - cores=1 - streams=32 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 32 streams:
  RX P=0/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=1 (socket 1) -> TX P=1/Q=1 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=1 (socket 1) -> TX P=0/Q=1 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=2 (socket 1) -> TX P=1/Q=2 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=2 (socket 1) -> TX P=0/Q=2 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=3 (socket 1) -> TX P=1/Q=3 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=3 (socket 1) -> TX P=0/Q=3 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=4 (socket 1) -> TX P=1/Q=4 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=4 (socket 1) -> TX P=0/Q=4 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=5 (socket 1) -> TX P=1/Q=5 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=5 (socket 1) -> TX P=0/Q=5 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=6 (socket 1) -> TX P=1/Q=6 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=6 (socket 1) -> TX P=0/Q=6 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=7 (socket 1) -> TX P=1/Q=7 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=7 (socket 1) -> TX P=0/Q=7 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=8 (socket 1) -> TX P=1/Q=8 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=8 (socket 1) -> TX P=0/Q=8 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=9 (socket 1) -> TX P=1/Q=9 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=9 (socket 1) -> TX P=0/Q=9 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=10 (socket 1) -> TX P=1/Q=10 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=10 (socket 1) -> TX P=0/Q=10 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=11 (socket 1) -> TX P=1/Q=11 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=11 (socket 1) -> TX P=0/Q=11 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=12 (socket 1) -> TX P=1/Q=12 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=12 (socket 1) -> TX P=0/Q=12 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=13 (socket 1) -> TX P=1/Q=13 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=13 (socket 1) -> TX P=0/Q=13 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=14 (socket 1) -> TX P=1/Q=14 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=14 (socket 1) -> TX P=0/Q=14 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=15 (socket 1) -> TX P=1/Q=15 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=15 (socket 1) -> TX P=0/Q=15 (socket 1) peer=02:00:00:00:00:00

  rxonly packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  port 0: RX queue number: 16 Tx queue number: 16
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
  port 1: RX queue number: 16 Tx queue number: 16
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
08/07/2021 01:19:14             dut.10.240.183.197:  port 0/queue 0: received 1 packets
  src=00:00:00:00:00:00 - dst=3C:FD:FE:B8:97:64 - type=0x0800 - length=62 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP  - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x0
  ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN 

08/07/2021 01:19:14             dut.10.240.183.197: stop
08/07/2021 01:19:14             dut.10.240.183.197: 

Telling cores to ...
Waiting for lcores to finish...

  ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 1/Queue= 0 -------
  RX-packets: 1              TX-packets: 0              TX-dropped: 0             

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 1              RX-dropped: 0             RX-total: 1
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 1              RX-dropped: 0             RX-total: 1
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.
08/07/2021 01:19:14           TestGeneric_flow_api: pf:  port 0/queue 0: received 1 packets
  src=00:00:00:00:00:00 - dst=3C:FD:FE:B8:97:64 - type=0x0800 - length=62 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP  - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x0
  ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN 

08/07/2021 01:19:16             dut.10.240.183.197: start
08/07/2021 01:19:16             dut.10.240.183.197: 

rxonly packet forwarding - ports=2 - cores=1 - streams=32 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 32 streams:
  RX P=0/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=1 (socket 1) -> TX P=1/Q=1 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=1 (socket 1) -> TX P=0/Q=1 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=2 (socket 1) -> TX P=1/Q=2 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=2 (socket 1) -> TX P=0/Q=2 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=3 (socket 1) -> TX P=1/Q=3 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=3 (socket 1) -> TX P=0/Q=3 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=4 (socket 1) -> TX P=1/Q=4 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=4 (socket 1) -> TX P=0/Q=4 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=5 (socket 1) -> TX P=1/Q=5 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=5 (socket 1) -> TX P=0/Q=5 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=6 (socket 1) -> TX P=1/Q=6 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=6 (socket 1) -> TX P=0/Q=6 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=7 (socket 1) -> TX P=1/Q=7 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=7 (socket 1) -> TX P=0/Q=7 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=8 (socket 1) -> TX P=1/Q=8 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=8 (socket 1) -> TX P=0/Q=8 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=9 (socket 1) -> TX P=1/Q=9 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=9 (socket 1) -> TX P=0/Q=9 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=10 (socket 1) -> TX P=1/Q=10 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=10 (socket 1) -> TX P=0/Q=10 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=11 (socket 1) -> TX P=1/Q=11 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=11 (socket 1) -> TX P=0/Q=11 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=12 (socket 1) -> TX P=1/Q=12 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=12 (socket 1) -> TX P=0/Q=12 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=13 (socket 1) -> TX P=1/Q=13 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=13 (socket 1) -> TX P=0/Q=13 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=14 (socket 1) -> TX P=1/Q=14 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=14 (socket 1) -> TX P=0/Q=14 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=15 (socket 1) -> TX P=1/Q=15 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=15 (socket 1) -> TX P=0/Q=15 (socket 1) peer=02:00:00:00:00:00

  rxonly packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  port 0: RX queue number: 16 Tx queue number: 16
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
  port 1: RX queue number: 16 Tx queue number: 16
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
08/07/2021 01:19:21             dut.10.240.183.197: quit
08/07/2021 01:19:22             dut.10.240.183.197: 

Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

Stopping port 0...
Stopping ports...
Done

Stopping port 1...
Stopping ports...

Port 0: link state change event
Done

Shutting down port 0...
Closing ports...

Port 1: link state change event
Port 0 is closed
Done

Shutting down port 1...
Closing ports...
Port 1 is closed
Done

Bye...
08/07/2021 01:19:24           TestGeneric_flow_api: Test Case test_create_same_rule_after_destroy Result PASSED:
08/07/2021 01:19:24             dut.10.240.183.197: quit
08/07/2021 01:19:24             dut.10.240.183.197: 
Command '' not found, did you mean:

  command 'quiz' from deb bsdgames (2.17-28build1)
  command 'luit' from deb x11-utils (7.7+5)
  command 'qgit' from deb qgit (2.9-1build1)
  command 'quilt' from deb quilt (0.65-3)

Try: apt install <deb name>

08/07/2021 01:19:26             dut.10.240.183.197: kill_all: called by dut and prefix list has value.
08/07/2021 01:19:27                            dts: 
TEST SUITE ENDED: TestGeneric_flow_api


* [dts] [PATCH V1] tests/generic_flow_api: add two test cases
@ 2021-07-30 15:10 Lingli Chen
  2021-07-30  6:44 ` Chen, LingliX
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Lingli Chen @ 2021-07-30 15:10 UTC (permalink / raw)
  To: dts; +Cc: Lingli Chen

Add two new i40e test cases: test_create_same_rule_after_destroy and test_create_different_rule_after_destroy.

Signed-off-by: Lingli Chen <linglix.chen@intel.com>
---
 tests/TestSuite_generic_flow_api.py | 54 +++++++++++++++++++++++++++++
 1 file changed, 54 insertions(+)

diff --git a/tests/TestSuite_generic_flow_api.py b/tests/TestSuite_generic_flow_api.py
index b97b49f2..546e63a7 100644
--- a/tests/TestSuite_generic_flow_api.py
+++ b/tests/TestSuite_generic_flow_api.py
@@ -2487,6 +2487,60 @@ class TestGeneric_flow_api(TestCase):
             flag = 0
             self.verify(flag, "The packet index %d and %d hash values are same, rss_granularity_config failed!" %(result_rows[3][0],result_rows[4][0]))
 
+    def test_create_same_rule_after_destroy(self):
+
+        self.pmdout.start_testpmd("%s" % self.cores, "--disable-rss --rxq=%d --txq=%d" % (MAX_QUEUE+1, MAX_QUEUE+1))
+        self.dut.send_expect("set fwd rxonly", "testpmd> ", 120)
+        self.dut.send_expect("set verbose 1", "testpmd> ", 120)
+        self.dut.send_expect("start", "testpmd> ", 120)
+        time.sleep(2)
+
+        self.dut.send_expect(
+            "flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions queue index 2 / end", "created")
+
+        out = self.dut.send_expect("flow destroy 0 rule 0", "testpmd> ")
+        p = re.compile(r"Flow rule #(\d+) destroyed")
+        m = p.search(out)
+        self.verify(m, "flow rule 0 delete failed")
+
+        self.dut.send_expect(
+            "flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions queue index 2 / end", "created")
+
+        self.sendpkt(pktstr='Ether(dst="%s")/IP()/UDP(sport=32)/Raw("x" * 20)' % self.pf_mac)
+        self.verify_result("pf", expect_rxpkts="1", expect_queue="2", verify_mac=self.pf_mac)
+        self.sendpkt(pktstr='Ether(dst="%s")/IP()/UDP(dport=32)/Raw("x" * 20)' % self.pf_mac)
+        self.verify_result("pf", expect_rxpkts="1", expect_queue="0", verify_mac=self.pf_mac)
+
+        self.dut.send_expect("quit", "# ")
+        time.sleep(2)
+
+    def test_create_different_rule_after_destroy(self):
+
+        self.pmdout.start_testpmd("%s" % self.cores, "--disable-rss --rxq=%d --txq=%d" % (MAX_QUEUE+1, MAX_QUEUE+1))
+        self.dut.send_expect("set fwd rxonly", "testpmd> ", 120)
+        self.dut.send_expect("set verbose 1", "testpmd> ", 120)
+        self.dut.send_expect("start", "testpmd> ", 120)
+        time.sleep(2)
+
+        self.dut.send_expect(
+            "flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions queue index 2 / end", "created")
+
+        out = self.dut.send_expect("flow destroy 0 rule 0", "testpmd> ")
+        p = re.compile(r"Flow rule #(\d+) destroyed")
+        m = p.search(out)
+        self.verify(m, "flow rule 0 delete failed")
+
+        self.dut.send_expect(
+            "flow create 0 ingress pattern eth / ipv4 / udp dst is 32 / end actions queue index 2 / end", "created")
+
+        self.sendpkt(pktstr='Ether(dst="%s")/IP()/UDP(dport=32)/Raw("x" * 20)' % self.pf_mac)
+        self.verify_result("pf", expect_rxpkts="1", expect_queue="2", verify_mac=self.pf_mac)
+        self.sendpkt(pktstr='Ether(dst="%s")/IP()/UDP(sport=32)/Raw("x" * 20)' % self.pf_mac)
+        self.verify_result("pf", expect_rxpkts="1", expect_queue="0", verify_mac=self.pf_mac)
+
+        self.dut.send_expect("quit", "# ")
+        time.sleep(2)
+
     def tear_down(self):
         """
         Run after each test case.
-- 
2.17.1



* Re: [dts] [PATCH V1] tests/generic_flow_api: add two test cases
  2021-07-30 15:10 [dts] [PATCH V1] tests/generic_flow_api: add two test cases Lingli Chen
  2021-07-30  6:44 ` Chen, LingliX
@ 2021-08-02  9:54 ` Lin, Xueqin
  2021-08-10  6:30 ` Tu, Lijuan
  2 siblings, 0 replies; 4+ messages in thread
From: Lin, Xueqin @ 2021-08-02  9:54 UTC (permalink / raw)
  To: Chen, LingliX, dts; +Cc: Chen, LingliX

> -----Original Message-----
> From: dts <dts-bounces@dpdk.org> On Behalf Of Lingli Chen
> Sent: Friday, July 30, 2021 11:11 PM
> To: dts@dpdk.org
> Cc: Chen, LingliX <linglix.chen@intel.com>
> Subject: [dts] [PATCH V1] tests/generic_flow_api: add two test cases
> 
> add new i40e test cases.
> 
> Signed-off-by: Lingli Chen <linglix.chen@intel.com>
Acked-by: Xueqin Lin <Xueqin.lin@intel.com>
> ---
>  tests/TestSuite_generic_flow_api.py | 54 +++++++++++++++++++++++++++++
>  1 file changed, 54 insertions(+)
> 
> diff --git a/tests/TestSuite_generic_flow_api.py
> b/tests/TestSuite_generic_flow_api.py
> index b97b49f2..546e63a7 100644
> --- a/tests/TestSuite_generic_flow_api.py
> +++ b/tests/TestSuite_generic_flow_api.py
> @@ -2487,6 +2487,60 @@ class TestGeneric_flow_api(TestCase):
>              flag = 0
>              self.verify(flag, "The packet index %d and %d hash values are same,
> rss_granularity_config failed!" %(result_rows[3][0],result_rows[4][0]))
> 
> +     def test_create_same_rule_after_destroy(self):
> +
> +         self.pmdout.start_testpmd("%s" % self.cores, "--disable-rss --rxq=%d --
> txq=%d" % (MAX_QUEUE+1, MAX_QUEUE+1))
> +         self.dut.send_expect("set fwd rxonly", "testpmd> ", 120)
> +         self.dut.send_expect("set verbose 1", "testpmd> ", 120)
> +         self.dut.send_expect("start", "testpmd> ", 120)
> +         time.sleep(2)
> +
> +         self.dut.send_expect(
> +             "flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions
> queue index 2 / end", "created")
> +
> +         out = self.dut.send_expect("flow destroy 0 rule 0", "testpmd> ")
> +         p = re.compile(r"Flow rule #(\d+) destroyed")
> +         m = p.search(out)
> +         self.verify(m, "flow rule 0 delete failed" )
> +
> +         self.dut.send_expect(
> +             "flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions
> queue index 2 / end", "created")
> +
> +         self.sendpkt(pktstr='Ether(dst="%s")/IP()/UDP(sport=32)/Raw("x" * 20)' %
> self.pf_mac)
> +         self.verify_result("pf", expect_rxpkts="1", expect_queue="2",
> verify_mac=self.pf_mac)
> +         self.sendpkt(pktstr='Ether(dst="%s")/IP()/UDP(dport=32)/Raw("x" * 20)' %
> self.pf_mac)
> +         self.verify_result("pf", expect_rxpkts="1", expect_queue="0",
> verify_mac=self.pf_mac)
> +
> +         self.dut.send_expect("quit", "# ")
> +         time.sleep(2)
> +
> +     def test_create_different_rule_after_destroy(self):
> +
> +         self.pmdout.start_testpmd("%s" % self.cores, "--disable-rss --rxq=%d --
> txq=%d" % (MAX_QUEUE+1, MAX_QUEUE+1))
> +         self.dut.send_expect("set fwd rxonly", "testpmd> ", 120)
> +         self.dut.send_expect("set verbose 1", "testpmd> ", 120)
> +         self.dut.send_expect("start", "testpmd> ", 120)
> +         time.sleep(2)
> +
> +         self.dut.send_expect(
> +             "flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions
> queue index 2 / end", "created")
> +
> +         out = self.dut.send_expect("flow destroy 0 rule 0", "testpmd> ")
> +         p = re.compile(r"Flow rule #(\d+) destroyed")
> +         m = p.search(out)
> +         self.verify(m, "flow rule 0 delete failed" )
> +
> +         self.dut.send_expect(
> +             "flow create 0 ingress pattern eth / ipv4 / udp dst is 32 / end actions
> queue index 2 / end", "created")
> +
> +         self.sendpkt(pktstr='Ether(dst="%s")/IP()/UDP(dport=32)/Raw("x" * 20)' %
> self.pf_mac)
> +         self.verify_result("pf", expect_rxpkts="1", expect_queue="2",
> verify_mac=self.pf_mac)
> +         self.sendpkt(pktstr='Ether(dst="%s")/IP()/UDP(sport=32)/Raw("x" * 20)' %
> self.pf_mac)
> +         self.verify_result("pf", expect_rxpkts="1", expect_queue="0",
> verify_mac=self.pf_mac)
> +
> +         self.dut.send_expect("quit", "# ")
> +         time.sleep(2)
> +
>      def tear_down(self):
>          """
>          Run after each test case.
> --
> 2.17.1



* Re: [dts] [PATCH V1] tests/generic_flow_api: add two test cases
  2021-07-30 15:10 [dts] [PATCH V1] tests/generic_flow_api: add two test cases Lingli Chen
  2021-07-30  6:44 ` Chen, LingliX
  2021-08-02  9:54 ` Lin, Xueqin
@ 2021-08-10  6:30 ` Tu, Lijuan
  2 siblings, 0 replies; 4+ messages in thread
From: Tu, Lijuan @ 2021-08-10  6:30 UTC (permalink / raw)
  To: Chen, LingliX, dts; +Cc: Chen, LingliX, Lin, Xueqin



> -----Original Message-----
> From: dts <dts-bounces@dpdk.org> On Behalf Of Lingli Chen
> Sent: 2021年7月30日 23:11
> To: dts@dpdk.org
> Cc: Chen, LingliX <linglix.chen@intel.com>
> Subject: [dts] [PATCH V1] tests/generic_flow_api: add two test cases
> 
> add new i40e test cases.

1. The two cases are i40e-specific, but I do not see any verification based on NIC type; the check list should also be updated.
2. There is no exception handling. If an exception occurs and testpmd does not quit as expected, it will certainly impact later cases; one possible way to guard against this is sketched below.
3. The test plan and the test cases should be sent in the same series.
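
A minimal, hypothetical sketch of how points 1 and 2 could be addressed (the NIC check via self.nic and the exact NIC-name list are assumptions, not part of the submitted patch; the other helpers are the ones the patch already uses):

    def test_create_same_rule_after_destroy(self):
        # Point 1 (assumption): restrict the case to the i40e NICs it was written for.
        self.verify(self.nic in ["fortville_eagle", "fortville_spirit", "fortville_25g"],
                    "%s does not support this case" % self.nic)
        self.pmdout.start_testpmd("%s" % self.cores,
                                  "--disable-rss --rxq=%d --txq=%d" % (MAX_QUEUE + 1, MAX_QUEUE + 1))
        try:
            self.dut.send_expect("set fwd rxonly", "testpmd> ", 120)
            self.dut.send_expect("set verbose 1", "testpmd> ", 120)
            self.dut.send_expect("start", "testpmd> ", 120)
            time.sleep(2)
            self.dut.send_expect(
                "flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions queue index 2 / end",
                "created")
            out = self.dut.send_expect("flow destroy 0 rule 0", "testpmd> ")
            self.verify("Flow rule #0 destroyed" in out, "flow rule 0 delete failed")
            self.dut.send_expect(
                "flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions queue index 2 / end",
                "created")
            self.sendpkt(pktstr='Ether(dst="%s")/IP()/UDP(sport=32)/Raw("x" * 20)' % self.pf_mac)
            self.verify_result("pf", expect_rxpkts="1", expect_queue="2", verify_mac=self.pf_mac)
            self.sendpkt(pktstr='Ether(dst="%s")/IP()/UDP(dport=32)/Raw("x" * 20)' % self.pf_mac)
            self.verify_result("pf", expect_rxpkts="1", expect_queue="0", verify_mac=self.pf_mac)
        finally:
            # Point 2: always quit testpmd, so a failure in this case cannot leak into later cases.
            self.dut.send_expect("quit", "# ")
            time.sleep(2)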

> 
> Signed-off-by: Lingli Chen <linglix.chen@intel.com>
> ---
>  tests/TestSuite_generic_flow_api.py | 54 +++++++++++++++++++++++++++++
>  1 file changed, 54 insertions(+)
> 
> diff --git a/tests/TestSuite_generic_flow_api.py
> b/tests/TestSuite_generic_flow_api.py
> index b97b49f2..546e63a7 100644
> --- a/tests/TestSuite_generic_flow_api.py
> +++ b/tests/TestSuite_generic_flow_api.py
> @@ -2487,6 +2487,60 @@ class TestGeneric_flow_api(TestCase):
>              flag = 0
>              self.verify(flag, "The packet index %d and %d hash values are same,
> rss_granularity_config failed!" %(result_rows[3][0],result_rows[4][0]))
> 
> +     def test_create_same_rule_after_destroy(self):
> +
> +         self.pmdout.start_testpmd("%s" % self.cores, "--disable-rss --rxq=%d --
> txq=%d" % (MAX_QUEUE+1, MAX_QUEUE+1))
> +         self.dut.send_expect("set fwd rxonly", "testpmd> ", 120)
> +         self.dut.send_expect("set verbose 1", "testpmd> ", 120)
> +         self.dut.send_expect("start", "testpmd> ", 120)
> +         time.sleep(2)
> +
> +         self.dut.send_expect(
> +             "flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions
> queue index 2 / end", "created")
> +
> +         out = self.dut.send_expect("flow destroy 0 rule 0", "testpmd> ")
> +         p = re.compile(r"Flow rule #(\d+) destroyed")
> +         m = p.search(out)
> +         self.verify(m, "flow rule 0 delete failed" )
> +
> +         self.dut.send_expect(
> +             "flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions
> queue index 2 / end", "created")
> +
> +         self.sendpkt(pktstr='Ether(dst="%s")/IP()/UDP(sport=32)/Raw("x" * 20)' %
> self.pf_mac)
> +         self.verify_result("pf", expect_rxpkts="1", expect_queue="2",
> verify_mac=self.pf_mac)
> +         self.sendpkt(pktstr='Ether(dst="%s")/IP()/UDP(dport=32)/Raw("x" * 20)' %
> self.pf_mac)
> +         self.verify_result("pf", expect_rxpkts="1", expect_queue="0",
> verify_mac=self.pf_mac)
> +
> +         self.dut.send_expect("quit", "# ")
> +         time.sleep(2)
> +
> +     def test_create_different_rule_after_destroy(self):
> +
> +         self.pmdout.start_testpmd("%s" % self.cores, "--disable-rss --rxq=%d --
> txq=%d" % (MAX_QUEUE+1, MAX_QUEUE+1))
> +         self.dut.send_expect("set fwd rxonly", "testpmd> ", 120)
> +         self.dut.send_expect("set verbose 1", "testpmd> ", 120)
> +         self.dut.send_expect("start", "testpmd> ", 120)
> +         time.sleep(2)
> +
> +         self.dut.send_expect(
> +             "flow create 0 ingress pattern eth / ipv4 / udp src is 32 / end actions
> queue index 2 / end", "created")
> +
> +         out = self.dut.send_expect("flow destroy 0 rule 0", "testpmd> ")
> +         p = re.compile(r"Flow rule #(\d+) destroyed")
> +         m = p.search(out)
> +         self.verify(m, "flow rule 0 delete failed" )
> +
> +         self.dut.send_expect(
> +             "flow create 0 ingress pattern eth / ipv4 / udp dst is 32 / end actions
> queue index 2 / end", "created")
> +
> +         self.sendpkt(pktstr='Ether(dst="%s")/IP()/UDP(dport=32)/Raw("x" * 20)' %
> self.pf_mac)
> +         self.verify_result("pf", expect_rxpkts="1", expect_queue="2",
> verify_mac=self.pf_mac)
> +         self.sendpkt(pktstr='Ether(dst="%s")/IP()/UDP(sport=32)/Raw("x" * 20)' %
> self.pf_mac)
> +         self.verify_result("pf", expect_rxpkts="1", expect_queue="0",
> verify_mac=self.pf_mac)
> +
> +         self.dut.send_expect("quit", "# ")
> +         time.sleep(2)
> +
>      def tear_down(self):
>          """
>          Run after each test case.
> --
> 2.17.1


