test suite reviews and discussions
* Re: [dts] [PATCH V1] tests/TestSuite_iavf_fdir:the error info changes no such device to invaild port
  2020-10-22  9:36 [dts] [PATCH V1] tests/TestSuite_iavf_fdir:the error info changes no such device to invaild port Xie wei
@ 2020-10-22  9:34 ` Xie, WeiX
  2020-10-29  8:56 ` Ma, LihongX
  1 sibling, 0 replies; 4+ messages in thread
From: Xie, WeiX @ 2020-10-22  9:34 UTC (permalink / raw)
  To: dts

[-- Attachment #1: Type: text/plain, Size: 357 bytes --]

Tested-by: Xie, WeiX <weix.xie@intel.com>

Regards,
Xie Wei


> -----Original Message-----
> From: Xie wei [mailto:weix.xie@intel.com]
> Sent: Thursday, October 22, 2020 5:36 PM
> To: dts@dpdk.org
> Cc: Xie, WeiX <weix.xie@intel.com>
> Subject: [dts][PATCH V1] tests/TestSuite_iavf_fdir:the error info changes no
> such device to invaild port

[-- Attachment #2: TestIAVFFdir.log --]
[-- Type: application/octet-stream, Size: 33591 bytes --]

22/10/2020 17:29:57                            dts: 
TEST SUITE : TestIAVFFdir
22/10/2020 17:29:57                            dts: NIC :        columbiaville_100g
22/10/2020 17:29:57             dut.10.240.183.133: 
22/10/2020 17:29:57                         tester: 
22/10/2020 17:30:02             dut.10.240.183.133: modprobe vfio-pci
22/10/2020 17:30:02             dut.10.240.183.133: 
22/10/2020 17:30:02                   TestIAVFFdir: Test Case test_negative_case Begin
22/10/2020 17:30:02             dut.10.240.183.133:  
22/10/2020 17:30:02                         tester: 
22/10/2020 17:30:02             dut.10.240.183.133: rmmod ice
22/10/2020 17:30:06             dut.10.240.183.133: 
22/10/2020 17:30:06             dut.10.240.183.133: insmod /lib/modules/5.4.0-45-generic/updates/drivers/net/ethernet/intel/ice/ice.ko
22/10/2020 17:30:07             dut.10.240.183.133: 
22/10/2020 17:30:07             dut.10.240.183.133: ifconfig ens801f0 up
22/10/2020 17:30:07             dut.10.240.183.133: 
22/10/2020 17:30:07             dut.10.240.183.133: ifconfig ens801f1 up
22/10/2020 17:30:08             dut.10.240.183.133: 
22/10/2020 17:30:11             dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.0/vendor
22/10/2020 17:30:11             dut.10.240.183.133: 0x8086
22/10/2020 17:30:11             dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.0/device
22/10/2020 17:30:11             dut.10.240.183.133: 0x1889
22/10/2020 17:30:11             dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.0/vendor
22/10/2020 17:30:11             dut.10.240.183.133: 0x8086
22/10/2020 17:30:11             dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.0/device
22/10/2020 17:30:11             dut.10.240.183.133: 0x1889
22/10/2020 17:30:11             dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.1/vendor
22/10/2020 17:30:11             dut.10.240.183.133: 0x8086
22/10/2020 17:30:11             dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.1/device
22/10/2020 17:30:12             dut.10.240.183.133: 0x1889
22/10/2020 17:30:12             dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.1/vendor
22/10/2020 17:30:12             dut.10.240.183.133: 0x8086
22/10/2020 17:30:12             dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:01.1/device
22/10/2020 17:30:12             dut.10.240.183.133: 0x1889
22/10/2020 17:30:14             dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:11.0/vendor
22/10/2020 17:30:14             dut.10.240.183.133: 0x8086
22/10/2020 17:30:14             dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:11.0/device
22/10/2020 17:30:14             dut.10.240.183.133: 0x1889
22/10/2020 17:30:14             dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:11.0/vendor
22/10/2020 17:30:14             dut.10.240.183.133: 0x8086
22/10/2020 17:30:14             dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:11.0/device
22/10/2020 17:30:14             dut.10.240.183.133: 0x1889
22/10/2020 17:30:14             dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:11.1/vendor
22/10/2020 17:30:14             dut.10.240.183.133: 0x8086
22/10/2020 17:30:14             dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:11.1/device
22/10/2020 17:30:15             dut.10.240.183.133: 0x1889
22/10/2020 17:30:15             dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:11.1/vendor
22/10/2020 17:30:15             dut.10.240.183.133: 0x8086
22/10/2020 17:30:15             dut.10.240.183.133: cat /sys/bus/pci/devices/0000\:81\:11.1/device
22/10/2020 17:30:15             dut.10.240.183.133: 0x1889
22/10/2020 17:30:15             dut.10.240.183.133: ip link set ens801f0 vf 0 mac 00:11:22:33:44:55
22/10/2020 17:30:15             dut.10.240.183.133: 
22/10/2020 17:30:15             dut.10.240.183.133: ip link set ens801f0 vf 1 mac 00:11:22:33:44:66
22/10/2020 17:30:15             dut.10.240.183.133: 
22/10/2020 17:30:15             dut.10.240.183.133: ip link set ens801f1 vf 0 mac 00:11:22:33:44:77
22/10/2020 17:30:15             dut.10.240.183.133: 
22/10/2020 17:30:15             dut.10.240.183.133: ip link set ens801f1 vf 1 mac 00:11:22:33:44:88
22/10/2020 17:30:15             dut.10.240.183.133: 
22/10/2020 17:30:20             dut.10.240.183.133: ./usertools/dpdk-devbind.py -s
22/10/2020 17:30:21             dut.10.240.183.133: 
Network devices using DPDK-compatible driver
============================================
0000:81:01.0 'Ethernet Adaptive Virtual Function 1889' drv=vfio-pci unused=iavf
0000:81:01.1 'Ethernet Adaptive Virtual Function 1889' drv=vfio-pci unused=iavf
0000:81:11.0 'Ethernet Adaptive Virtual Function 1889' drv=vfio-pci unused=iavf
0000:81:11.1 'Ethernet Adaptive Virtual Function 1889' drv=vfio-pci unused=iavf

Network devices using kernel driver
===================================
0000:01:00.0 'I350 Gigabit Network Connection 1521' if=eno0 drv=igb unused=vfio-pci 
0000:01:00.1 'I350 Gigabit Network Connection 1521' if=enp1s0f1 drv=igb unused=vfio-pci 
0000:81:00.0 'Ethernet Controller E810-C for QSFP 1592' if=ens801f0 drv=ice unused=vfio-pci 
0000:81:00.1 'Ethernet Controller E810-C for QSFP 1592' if=ens801f1 drv=ice unused=vfio-pci 

No 'Baseband' devices detected
==============================

No 'Crypto' devices detected
============================

No 'Eventdev' devices detected
==============================

No 'Mempool' devices detected
=============================

No 'Compress' devices detected
==============================

No 'Misc (rawdev)' devices detected
===================================

No 'Regex' devices detected
===========================
22/10/2020 17:30:22             dut.10.240.183.133: x86_64-native-linuxapp-gcc/app/dpdk-testpmd  -l 32,33,34,35 -n 4 -w 0000:81:01.0 -w 0000:81:01.1  --file-prefix=dpdk_11577_20201022172938    -- -i --rxq=16 --txq=16
22/10/2020 17:30:24             dut.10.240.183.133: EAL: Detected 72 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/dpdk_11577_20201022172938/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL:   using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_iavf (8086:1889) device: 0000:81:01.0 (socket 1)
EAL: Probe PCI driver: net_iavf (8086:1889) device: 0000:81:01.1 (socket 1)
EAL: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_1>: n=171456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
iavf_configure_queues(): request RXDID == 22 in Queue[0]
iavf_configure_queues(): request RXDID == 22 in Queue[1]
iavf_configure_queues(): request RXDID == 22 in Queue[2]
iavf_configure_queues(): request RXDID == 22 in Queue[3]
iavf_configure_queues(): request RXDID == 22 in Queue[4]
iavf_configure_queues(): request RXDID == 22 in Queue[5]
iavf_configure_queues(): request RXDID == 22 in Queue[6]
iavf_configure_queues(): request RXDID == 22 in Queue[7]
iavf_configure_queues(): request RXDID == 22 in Queue[8]
iavf_configure_queues(): request RXDID == 22 in Queue[9]
iavf_configure_queues(): request RXDID == 22 in Queue[10]
iavf_configure_queues(): request RXDID == 22 in Queue[11]
iavf_configure_queues(): request RXDID == 22 in Queue[12]
iavf_configure_queues(): request RXDID == 22 in Queue[13]
iavf_configure_queues(): request RXDID == 22 in Queue[14]
iavf_configure_queues(): request RXDID == 22 in Queue[15]

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event
Port 0: 00:11:22:33:44:55
Configuring Port 1 (socket 1)
iavf_configure_queues(): request RXDID == 22 in Queue[0]
iavf_configure_queues(): request RXDID == 22 in Queue[1]
iavf_configure_queues(): request RXDID == 22 in Queue[2]
iavf_configure_queues(): request RXDID == 22 in Queue[3]
iavf_configure_queues(): request RXDID == 22 in Queue[4]
iavf_configure_queues(): request RXDID == 22 in Queue[5]
iavf_configure_queues(): request RXDID == 22 in Queue[6]
iavf_configure_queues(): request RXDID == 22 in Queue[7]
iavf_configure_queues(): request RXDID == 22 in Queue[8]
iavf_configure_queues(): request RXDID == 22 in Queue[9]
iavf_configure_queues(): request RXDID == 22 in Queue[10]
iavf_configure_queues(): request RXDID == 22 in Queue[11]
iavf_configure_queues(): request RXDID == 22 in Queue[12]
iavf_configure_queues(): request RXDID == 22 in Queue[13]
iavf_configure_queues(): request RXDID == 22 in Queue[14]
iavf_configure_queues(): request RXDID == 22 in Queue[15]

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event

Port 1: link state change event
Port 1: 00:11:22:33:44:66
Checking link statuses...
Done
22/10/2020 17:30:34             dut.10.240.183.133: set fwd rxonly
22/10/2020 17:30:34             dut.10.240.183.133: 
Set rxonly packet forwarding mode
22/10/2020 17:30:34             dut.10.240.183.133: set verbose 1
22/10/2020 17:30:34             dut.10.240.183.133: 
Change verbose level from 0 to 1
22/10/2020 17:30:34             dut.10.240.183.133: port config 0 rss-hash-key ipv4 1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd
22/10/2020 17:30:34             dut.10.240.183.133: 
22/10/2020 17:30:34             dut.10.240.183.133: port config 1 rss-hash-key ipv4 1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd
22/10/2020 17:30:34             dut.10.240.183.133: 
22/10/2020 17:30:34             dut.10.240.183.133: show port info all
22/10/2020 17:30:34             dut.10.240.183.133: 

********************* Infos for port 0  *********************
MAC address: 00:11:22:33:44:55
Device name: 0000:81:01.0
Driver name: net_iavf
Firmware-version: not available
Devargs: 
Connect to socket: 1
memory allocation on the socket: 1
Link status: up
Link speed: 100 Gbps
Link duplex: full-duplex
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 64
Maximum number of MAC addresses of hash filtering: 0
VLAN offload: 
  strip off, filter off, extend off, qinq strip off
Hash key size in bytes: 52
Redirection table size: 64
Supported RSS offload flow types:
  ipv4-frag
  ipv4-tcp
  ipv4-udp
  ipv4-sctp
  ipv4-other
Minimum size of RX buffer: 1024
Maximum configurable length of RX packet: 9728
Maximum configurable size of LRO aggregated packet: 0
Current number of RX queues: 16
Max possible RX queues: 16
Max possible number of RXDs per queue: 4096
Min possible number of RXDs per queue: 64
RXDs number alignment: 32
Current number of TX queues: 16
Max possible TX queues: 16
Max possible number of TXDs per queue: 4096
Min possible number of TXDs per queue: 64
TXDs number alignment: 32
Max segment number per packet: 0
Max segment number per MTU/TSO: 0

********************* Infos for port 1  *********************
MAC address: 00:11:22:33:44:66
Device name: 0000:81:01.1
Driver name: net_iavf
Firmware-version: not available
Devargs: 
Connect to socket: 1
memory allocation on the socket: 1
Link status: up
Link speed: 100 Gbps
Link duplex: full-duplex
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 64
Maximum number of MAC addresses of hash filtering: 0
VLAN offload: 
  strip off, filter off, extend off, qinq strip off
Hash key size in bytes: 52
Redirection table size: 64
Supported RSS offload flow types:
  ipv4-frag
  ipv4-tcp
  ipv4-udp
  ipv4-sctp
  ipv4-other
Minimum size of RX buffer: 1024
Maximum configurable length of RX packet: 9728
Maximum configurable size of LRO aggregated packet: 0
Current number of RX queues: 16
Max possible RX queues: 16
Max possible number of RXDs per queue: 4096
Min possible number of RXDs per queue: 64
RXDs number alignment: 32
Current number of TX queues: 16
Max possible TX queues: 16
Max possible number of TXDs per queue: 4096
Min possible number of TXDs per queue: 64
TXDs number alignment: 32
Max segment number per packet: 0
Max segment number per MTU/TSO: 0
22/10/2020 17:30:34             dut.10.240.183.133: start
22/10/2020 17:30:34             dut.10.240.183.133: 
rxonly packet forwarding - ports=2 - cores=1 - streams=32 - NUMA support enabled, MP allocation mode: native
Logical Core 33 (socket 1) forwards packets on 32 streams:
  RX P=0/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=1 (socket 1) -> TX P=1/Q=1 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=1 (socket 1) -> TX P=0/Q=1 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=2 (socket 1) -> TX P=1/Q=2 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=2 (socket 1) -> TX P=0/Q=2 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=3 (socket 1) -> TX P=1/Q=3 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=3 (socket 1) -> TX P=0/Q=3 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=4 (socket 1) -> TX P=1/Q=4 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=4 (socket 1) -> TX P=0/Q=4 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=5 (socket 1) -> TX P=1/Q=5 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=5 (socket 1) -> TX P=0/Q=5 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=6 (socket 1) -> TX P=1/Q=6 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=6 (socket 1) -> TX P=0/Q=6 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=7 (socket 1) -> TX P=1/Q=7 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=7 (socket 1) -> TX P=0/Q=7 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=8 (socket 1) -> TX P=1/Q=8 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=8 (socket 1) -> TX P=0/Q=8 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=9 (socket 1) -> TX P=1/Q=9 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=9 (socket 1) -> TX P=0/Q=9 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=10 (socket 1) -> TX P=1/Q=10 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=10 (socket 1) -> TX P=0/Q=10 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=11 (socket 1) -> TX P=1/Q=11 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=11 (socket 1) -> TX P=0/Q=11 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=12 (socket 1) -> TX P=1/Q=12 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=12 (socket 1) -> TX P=0/Q=12 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=13 (socket 1) -> TX P=1/Q=13 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=13 (socket 1) -> TX P=0/Q=13 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=14 (socket 1) -> TX P=1/Q=14 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=14 (socket 1) -> TX P=0/Q=14 (socket 1) peer=02:00:00:00:00:00
  RX P=0/Q=15 (socket 1) -> TX P=1/Q=15 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=15 (socket 1) -> TX P=0/Q=15 (socket 1) peer=02:00:00:00:00:00

  rxonly packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  port 0: RX queue number: 16 Tx queue number: 16
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=512 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=512 - TX free threshold=32
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=32
  port 1: RX queue number: 16 Tx queue number: 16
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=512 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=512 - TX free threshold=32
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=32
22/10/2020 17:30:34             dut.10.240.183.133: flow validate 0 ingress pattern eth / ipv4 src is 192.168.0.20 dst is 192.168.0.21 proto is 255 ttl is 2 tos is 4 / end actions queue index 16 / end
22/10/2020 17:30:34             dut.10.240.183.133: 
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
22/10/2020 17:30:34             dut.10.240.183.133: flow create 0 ingress pattern eth / ipv4 src is 192.168.0.20 dst is 192.168.0.21 proto is 255 ttl is 2 tos is 4 / end actions queue index 16 / end
22/10/2020 17:30:34             dut.10.240.183.133: 
iavf_flow_create(): Failed to create flow
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
22/10/2020 17:30:34             dut.10.240.183.133: flow validate 0 ingress pattern eth / ipv4 src is 192.168.0.20 dst is 192.168.0.21 proto is 255 ttl is 2 tos is 4 / end actions rss queues 1 2 3 end / end
22/10/2020 17:30:34             dut.10.240.183.133: 
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
22/10/2020 17:30:34             dut.10.240.183.133: flow validate 0 ingress pattern eth / ipv4 src is 192.168.0.20 dst is 192.168.0.21 proto is 255 ttl is 2 tos is 4 / end actions rss queues 0 end / end
22/10/2020 17:30:34             dut.10.240.183.133: 
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
22/10/2020 17:30:34             dut.10.240.183.133: flow validate 0 ingress pattern eth / ipv4 src is 192.168.0.20 dst is 192.168.0.21 proto is 255 ttl is 2 tos is 4 / end actions rss queues end / end
22/10/2020 17:30:34             dut.10.240.183.133: 
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
22/10/2020 17:30:34             dut.10.240.183.133: flow validate 0 ingress pattern eth / ipv4 src is 192.168.0.20 dst is 192.168.0.21 proto is 255 ttl is 2 tos is 4 / end actions rss queues 1 2 3 5 end / end
22/10/2020 17:30:34             dut.10.240.183.133: 
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
22/10/2020 17:30:34             dut.10.240.183.133: flow validate 0 ingress pattern eth / ipv4 src is 192.168.0.20 dst is 192.168.0.21 proto is 255 ttl is 2 tos is 4 / end actions rss queues 15 16 end / end
22/10/2020 17:30:35             dut.10.240.183.133: 
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
22/10/2020 17:30:35             dut.10.240.183.133: flow validate 0 ingress pattern eth / ipv4 src is 192.168.0.20 dst is 192.168.0.21 ttl is 2 tos is 4 / end actions rss queues 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 end / end
22/10/2020 17:30:35             dut.10.240.183.133: 
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
22/10/2020 17:30:35             dut.10.240.183.133: flow create 0 ingress pattern eth / ipv4 src is 192.168.0.20 dst is 192.168.0.21 proto is 255 ttl is 2 tos is 4 / end actions rss queues 1 2 3 end / end
22/10/2020 17:30:35             dut.10.240.183.133: 
iavf_flow_create(): Failed to create flow
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
22/10/2020 17:30:35             dut.10.240.183.133: flow create 0 ingress pattern eth / ipv4 src is 192.168.0.20 dst is 192.168.0.21 proto is 255 ttl is 2 tos is 4 / end actions rss queues 0 end / end
22/10/2020 17:30:35             dut.10.240.183.133: 
iavf_flow_create(): Failed to create flow
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
22/10/2020 17:30:35             dut.10.240.183.133: flow create 0 ingress pattern eth / ipv4 src is 192.168.0.20 dst is 192.168.0.21 proto is 255 ttl is 2 tos is 4 / end actions rss queues end / end
22/10/2020 17:30:35             dut.10.240.183.133: 
iavf_flow_create(): Failed to create flow
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
22/10/2020 17:30:35             dut.10.240.183.133: flow create 0 ingress pattern eth / ipv4 src is 192.168.0.20 dst is 192.168.0.21 proto is 255 ttl is 2 tos is 4 / end actions rss queues 1 2 3 5 end / end
22/10/2020 17:30:35             dut.10.240.183.133: 
iavf_flow_create(): Failed to create flow
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
22/10/2020 17:30:35             dut.10.240.183.133: flow create 0 ingress pattern eth / ipv4 src is 192.168.0.20 dst is 192.168.0.21 proto is 255 ttl is 2 tos is 4 / end actions rss queues 15 16 end / end
22/10/2020 17:30:35             dut.10.240.183.133: 
iavf_flow_create(): Failed to create flow
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
22/10/2020 17:30:35             dut.10.240.183.133: flow create 0 ingress pattern eth / ipv4 src is 192.168.0.20 dst is 192.168.0.21 ttl is 2 tos is 4 / end actions rss queues 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 end / end
22/10/2020 17:30:35             dut.10.240.183.133: 
iavf_flow_create(): Failed to create flow
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
22/10/2020 17:30:35             dut.10.240.183.133: flow validate 0 ingress pattern eth / ipv4 / udp / gtpu teid is 0x12345678 / gtp_psc qfi is 0x100 / end actions queue index 1 / end
22/10/2020 17:30:35             dut.10.240.183.133: 
Bad arguments
22/10/2020 17:30:35             dut.10.240.183.133: flow validate 0 ingress pattern eth / ipv4 / udp / gtpu teid is 0x100000000 / gtp_psc qfi is 0x5 / end actions queue index 2 / end
22/10/2020 17:30:35             dut.10.240.183.133: 
Bad arguments
22/10/2020 17:30:35             dut.10.240.183.133: flow validate 0 ingress pattern eth / ipv4 / udp / gtpu teid is 0x100000000 / end actions queue index 1 / end
22/10/2020 17:30:35             dut.10.240.183.133: 
Bad arguments
22/10/2020 17:30:35             dut.10.240.183.133: flow create 0 ingress pattern eth / ipv4 / udp / gtpu teid is 0x12345678 / gtp_psc qfi is 0x100 / end actions queue index 1 / end
22/10/2020 17:30:35             dut.10.240.183.133: 
Bad arguments
22/10/2020 17:30:35             dut.10.240.183.133: flow create 0 ingress pattern eth / ipv4 / udp / gtpu teid is 0x100000000 / gtp_psc qfi is 0x5 / end actions queue index 2 / end
22/10/2020 17:30:35             dut.10.240.183.133: 
Bad arguments
22/10/2020 17:30:35             dut.10.240.183.133: flow create 0 ingress pattern eth / ipv4 / udp / gtpu teid is 0x100000000 / end actions queue index 1 / end
22/10/2020 17:30:35             dut.10.240.183.133: 
Bad arguments
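The "Bad arguments" rejections above are field-width overflows caught by testpmd's command parser before any rule reaches the PMD: the GTP-U TEID is a 32-bit field and the `gtp_psc` QFI is carried in an 8-bit field of the pattern item, so 0x100000000 and 0x100 cannot be encoded. A minimal range check illustrating the constraint (this is an illustration of the width limits, not testpmd's actual parser code):

```python
def fits(value: int, bits: int) -> bool:
    """Return True if value can be encoded in an unsigned field of `bits` bits."""
    return 0 <= value < (1 << bits)

# teid is a 32-bit field; qfi is stored in an 8-bit field of the item.
print(fits(0x12345678, 32))   # valid TEID
print(fits(0x100000000, 32))  # overflows 32 bits -> rejected
print(fits(0x100, 8))         # overflows 8 bits -> rejected
```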
22/10/2020 17:30:35             dut.10.240.183.133: flow validate 0 ingress pattern eth type is 0x0800 / end actions queue index 1 / end
22/10/2020 17:30:35             dut.10.240.183.133: 
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
22/10/2020 17:30:35             dut.10.240.183.133: flow validate 0 ingress pattern eth type is 0x86dd / end actions queue index 1 / end
22/10/2020 17:30:35             dut.10.240.183.133: 
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
22/10/2020 17:30:35             dut.10.240.183.133: flow create 0 ingress pattern eth type is 0x0800 / end actions queue index 1 / end
22/10/2020 17:30:36             dut.10.240.183.133: 
iavf_flow_create(): Failed to create flow
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
22/10/2020 17:30:36             dut.10.240.183.133: flow create 0 ingress pattern eth type is 0x86dd / end actions queue index 1 / end
22/10/2020 17:30:36             dut.10.240.183.133: 
iavf_flow_create(): Failed to create flow
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
22/10/2020 17:30:36             dut.10.240.183.133: flow validate 0 ingress pattern eth / ipv4 src is 192.168.0.20 dst is 192.168.0.21 ttl is 2 tos is 4 / end actions queue index 1 / rss queues 2 3 end / end
22/10/2020 17:30:36             dut.10.240.183.133: 
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
22/10/2020 17:30:36             dut.10.240.183.133: flow create 0 ingress pattern eth / ipv4 src is 192.168.0.20 dst is 192.168.0.21 ttl is 2 tos is 4 / end actions queue index 1 / rss queues 2 3 end / end
22/10/2020 17:30:36             dut.10.240.183.133: 
iavf_flow_create(): Failed to create flow
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
22/10/2020 17:30:36             dut.10.240.183.133: flow validate 0 ingress pattern eth / ipv4 / udp / gtpu teid is 0x12345678 / gtp_psc qfi is 0x34 / end actions end
22/10/2020 17:30:36             dut.10.240.183.133: 
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
22/10/2020 17:30:36             dut.10.240.183.133: flow create 0 ingress pattern eth / ipv4 / udp / gtpu teid is 0x12345678 / gtp_psc qfi is 0x34 / end actions end
22/10/2020 17:30:36             dut.10.240.183.133: 
iavf_flow_create(): Failed to create flow
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
22/10/2020 17:30:36             dut.10.240.183.133: flow validate 0 ingress pattern eth / ipv4 src is 192.168.0.20 dst is 192.168.0.21 tc is 2 / end actions queue index 1 / end
22/10/2020 17:30:36             dut.10.240.183.133: 
Bad arguments
22/10/2020 17:30:36             dut.10.240.183.133: flow create 0 ingress pattern eth / ipv4 src is 192.168.0.20 dst is 192.168.0.21 tc is 2 / end actions queue index 1 / end
22/10/2020 17:30:36             dut.10.240.183.133: 
Bad arguments
22/10/2020 17:30:36             dut.10.240.183.133: flow validate 0 ingress pattern eth / ipv4 / end actions queue index 1 / end
22/10/2020 17:30:36             dut.10.240.183.133: 
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
22/10/2020 17:30:36             dut.10.240.183.133: flow create 0 ingress pattern eth / ipv4 / end actions queue index 1 / end
22/10/2020 17:30:36             dut.10.240.183.133: 
iavf_flow_create(): Failed to create flow
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
22/10/2020 17:30:36             dut.10.240.183.133: flow validate 2 ingress pattern eth / ipv4 src is 192.168.0.20 dst is 192.168.0.21 ttl is 2 tos is 4 / end actions queue index 1 / end
22/10/2020 17:30:36             dut.10.240.183.133: 
port_flow_complain(): Caught PMD error type 1 (cause unspecified): No such device: No such device
22/10/2020 17:30:36             dut.10.240.183.133: flow create 2 ingress pattern eth / ipv4 src is 192.168.0.20 dst is 192.168.0.21 ttl is 2 tos is 4 / end actions queue index 1 / end
22/10/2020 17:30:36             dut.10.240.183.133: 
port_flow_complain(): Caught PMD error type 1 (cause unspecified): No such device: No such device
22/10/2020 17:30:36             dut.10.240.183.133: flow list 0
22/10/2020 17:30:36             dut.10.240.183.133: 
22/10/2020 17:30:36             dut.10.240.183.133: flow list 1
22/10/2020 17:30:36             dut.10.240.183.133: 
22/10/2020 17:30:36             dut.10.240.183.133: flow create 0 ingress pattern eth / ipv4 src is 192.168.0.20 dst is 192.168.0.21 ttl is 2 tos is 4 / end actions queue index 1 / end
22/10/2020 17:30:36             dut.10.240.183.133: 
Flow rule #0 created
22/10/2020 17:30:36             dut.10.240.183.133: flow create 0 ingress pattern eth / ipv4 src is 192.168.0.20 dst is 192.168.0.21 ttl is 2 tos is 4 / end actions queue index 1 / end
22/10/2020 17:30:37             dut.10.240.183.133: 
iavf_fdir_add(): Failed to add rule request due to the rule is already existed
iavf_flow_create(): Failed to create flow
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
22/10/2020 17:30:37             dut.10.240.183.133: flow destroy 0 rule 0
22/10/2020 17:30:37             dut.10.240.183.133: 
Flow rule #0 destroyed
22/10/2020 17:30:37             dut.10.240.183.133: flow create 0 ingress pattern eth / ipv4 src is 192.168.0.20 dst is 192.168.0.21 ttl is 2 tos is 4 / end actions queue index 1 / end
22/10/2020 17:30:37             dut.10.240.183.133: 
Flow rule #0 created
22/10/2020 17:30:37             dut.10.240.183.133: flow create 0 ingress pattern eth / ipv4 src is 192.168.0.20 dst is 192.168.0.21 ttl is 2 tos is 4 / end actions queue index 2 / end
22/10/2020 17:30:37             dut.10.240.183.133: 
iavf_fdir_add(): Failed to add rule request due to the rule is already existed
iavf_flow_create(): Failed to create flow
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
22/10/2020 17:30:37             dut.10.240.183.133: flow create 0 ingress pattern eth / ipv4 src is 192.168.0.20 dst is 192.168.0.21 ttl is 2 tos is 4 / end actions drop / end
22/10/2020 17:30:37             dut.10.240.183.133: 
iavf_fdir_add(): Failed to add rule request due to the rule is already existed
iavf_flow_create(): Failed to create flow
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
22/10/2020 17:30:37             dut.10.240.183.133: flow destroy 0 rule 0
22/10/2020 17:30:37             dut.10.240.183.133: 
Flow rule #0 destroyed
22/10/2020 17:30:37             dut.10.240.183.133: flow destroy 0 rule 0
22/10/2020 17:30:37             dut.10.240.183.133: 
22/10/2020 17:30:37             dut.10.240.183.133: flow destroy 2 rule 0
22/10/2020 17:30:37             dut.10.240.183.133: 
Invalid port 2
22/10/2020 17:30:37             dut.10.240.183.133: flow flush 2
22/10/2020 17:30:37             dut.10.240.183.133: 
Invalid port 2
22/10/2020 17:30:37             dut.10.240.183.133: flow list 2
22/10/2020 17:30:37             dut.10.240.183.133: 
Invalid port 2
22/10/2020 17:30:37             dut.10.240.183.133: flow list 0
22/10/2020 17:30:37             dut.10.240.183.133: 
22/10/2020 17:30:37             dut.10.240.183.133: flow list 1
22/10/2020 17:30:37             dut.10.240.183.133: 
22/10/2020 17:30:37                   TestIAVFFdir: Test Case test_negative_case Result PASSED:
22/10/2020 17:30:37             dut.10.240.183.133: kill_all: called by dut and prefix list has value.
22/10/2020 17:30:40             dut.10.240.183.133: Killed
[PEXPECT]# 
22/10/2020 17:30:40             dut.10.240.183.133: quit
22/10/2020 17:30:40             dut.10.240.183.133: 
Command '' not found, did you mean:

  command 'quilt' from deb quilt (0.65-3)
  command 'qgit' from deb qgit (2.9-1build1)
  command 'luit' from deb x11-utils (7.7+5)
  command 'quiz' from deb bsdgames (2.17-28build1)

Try: apt install <deb name>

22/10/2020 17:30:48                            dts: 
TEST SUITE ENDED: TestIAVFFdir

^ permalink raw reply	[flat|nested] 4+ messages in thread

* [dts] [PATCH V1] tests/TestSuite_iavf_fdir:the error info changes no such device to invaild port
@ 2020-10-22  9:36 Xie wei
  2020-10-22  9:34 ` Xie, WeiX
  2020-10-29  8:56 ` Ma, LihongX
  0 siblings, 2 replies; 4+ messages in thread
From: Xie wei @ 2020-10-22  9:36 UTC (permalink / raw)
  To: dts; +Cc: Xie wei

Running a flow command with an invalid port, e.g. 'flow flush 2', now reports 'Invalid port' instead of 'No such device'; update the expected error message accordingly.

Signed-off-by: Xie wei <weix.xie@intel.com>
---
 tests/TestSuite_iavf_fdir.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/TestSuite_iavf_fdir.py b/tests/TestSuite_iavf_fdir.py
index 05ec2d52..d1186471 100755
--- a/tests/TestSuite_iavf_fdir.py
+++ b/tests/TestSuite_iavf_fdir.py
@@ -2496,7 +2496,7 @@ class TestIAVFFdir(TestCase):
         out2 = self.pmd_output.execute_cmd("flow destroy 2 rule 0")
         self.verify("Invalid port" in out2, "there should report error message")
         out3 = self.pmd_output.execute_cmd("flow flush 2")
-        self.verify("No such device" in out3, "port 2 doesn't exist.")
+        self.verify("Invalid port" in out3, "port 2 doesn't exist.")
         out4 = self.pmd_output.execute_cmd("flow list 2")
         self.verify("Invalid port" in out4, "port 2 doesn't exist.")
 
-- 
2.17.1
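The one-line change above tightens the expected testpmd reply for flow commands issued against a nonexistent port. A minimal, self-contained sketch of the check being updated (FakePmdOutput and check_invalid_port are hypothetical stand-ins for the DTS pmd_output session object, not part of the patch):

```python
class FakePmdOutput:
    """Hypothetical stand-in for DTS's pmd_output session; returns a
    canned testpmd reply for a flow command on an unknown port id."""
    def execute_cmd(self, cmd):
        # Newer testpmd answers with "Invalid port <id>" rather than
        # the earlier "No such device" text, which this patch tracks.
        return "Invalid port 2"

def check_invalid_port(pmd_output, port_id):
    """Run flow commands against a nonexistent port and verify each
    reply carries the new 'Invalid port' error text."""
    for cmd in ("flow destroy %d rule 0" % port_id,
                "flow flush %d" % port_id,
                "flow list %d" % port_id):
        out = pmd_output.execute_cmd(cmd)
        assert "Invalid port" in out, \
            "port %d doesn't exist, got: %r" % (port_id, out)
    return True
```

Grouping the three commands in one loop is only an illustration; the suite keeps them as separate verify() calls so each failure message names the exact command.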



* Re: [dts] [PATCH V1] tests/TestSuite_iavf_fdir:the error info changes no such device to invaild port
  2020-10-22  9:36 [dts] [PATCH V1] tests/TestSuite_iavf_fdir:the error info changes no such device to invaild port Xie wei
  2020-10-22  9:34 ` Xie, WeiX
@ 2020-10-29  8:56 ` Ma, LihongX
  2020-11-03  2:07   ` Tu, Lijuan
  1 sibling, 1 reply; 4+ messages in thread
From: Ma, LihongX @ 2020-10-29  8:56 UTC (permalink / raw)
  To: Xie, WeiX, dts; +Cc: Xie, WeiX

Acked-by: Lihong Ma <lihongx.ma@intel.com>

Regards,
Ma,lihong

> -----Original Message-----
> From: dts <dts-bounces@dpdk.org> On Behalf Of Xie wei
> Sent: Thursday, October 22, 2020 5:36 PM
> To: dts@dpdk.org
> Cc: Xie, WeiX <weix.xie@intel.com>
> Subject: [dts] [PATCH V1] tests/TestSuite_iavf_fdir:the error info changes
> no such device to invaild port
> 
> Running a flow command with an invalid port, e.g. 'flow flush 2', now
> reports 'Invalid port' instead of 'No such device'.
> 
> Signed-off-by: Xie wei <weix.xie@intel.com>
> ---



* Re: [dts] [PATCH V1] tests/TestSuite_iavf_fdir:the error info changes no such device to invaild port
  2020-10-29  8:56 ` Ma, LihongX
@ 2020-11-03  2:07   ` Tu, Lijuan
  0 siblings, 0 replies; 4+ messages in thread
From: Tu, Lijuan @ 2020-11-03  2:07 UTC (permalink / raw)
  To: Ma, LihongX, Xie, WeiX, dts; +Cc: Xie, WeiX

> > Running a flow command with an invalid port, e.g. 'flow flush 2',
> > now reports 'Invalid port' instead of 'No such device'.
> >
> > Signed-off-by: Xie wei <weix.xie@intel.com>
> Acked-by: Lihong Ma <lihongx.ma@intel.com>

Applied


end of thread, other threads:[~2020-11-03  2:07 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-10-22  9:36 [dts] [PATCH V1] tests/TestSuite_iavf_fdir:the error info changes no such device to invaild port Xie wei
2020-10-22  9:34 ` Xie, WeiX
2020-10-29  8:56 ` Ma, LihongX
2020-11-03  2:07   ` Tu, Lijuan
