DPDK patches and discussions
* RE: Testpmd/l3fwd port shutdown failure on Arm Altra systems
@ 2023-01-20 12:07 ` Juraj Linkeš
  2023-02-06  8:52   ` Juraj Linkeš
  0 siblings, 1 reply; 9+ messages in thread
From: Juraj Linkeš @ 2023-01-20 12:07 UTC (permalink / raw)
  To: aman.deep.singh, yuying.zhang, Xing, Beilei
  Cc: dev, Ruifeng Wang, Lijian Zhang, Honnappa Nagarahalli


[-- Attachment #1.1: Type: text/plain, Size: 4785 bytes --]

Adding the logfile.

One thing that's in the logs but that I didn't explicitly mention is the DPDK version we've tried this with:
EAL: RTE Version: 'DPDK 22.07.0'

We also tried earlier versions going back to 21.08, with no luck. A quick check on 22.11 gave the same result.

Juraj

From: Juraj Linkeš
Sent: Friday, January 20, 2023 12:56 PM
To: 'aman.deep.singh@intel.com' <aman.deep.singh@intel.com>; 'yuying.zhang@intel.com' <yuying.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>
Cc: dev@dpdk.org; Ruifeng Wang <Ruifeng.Wang@arm.com>; 'Lijian Zhang' <Lijian.Zhang@arm.com>; 'Honnappa Nagarahalli' <Honnappa.Nagarahalli@arm.com>
Subject: Testpmd/l3fwd port shutdown failure on Arm Altra systems

Hello i40e and testpmd maintainers,

We're hitting an issue with DPDK testpmd on Ampere Altra servers in FD.io lab.

A bit of background: along with VPP performance tests (VPP uses DPDK internally), we're running a small number of basic DPDK testpmd and l3fwd tests in FD.io as well. This is to catch any performance differences caused by VPP updating its DPDK version.

We're running both l3fwd tests and testpmd tests. The Altra servers are two-socket and the topology is TG -> DUT1 -> DUT2 -> TG; traffic flows in both directions, but nothing gets forwarded (with a slight caveat - put a pin in this). There's nothing special in the tests, just forwarding traffic. The NIC we're testing is the xl710-QDA2.

The same tests are passing on all other testbeds - we have various two-node (1 DUT, 1 TG) and three-node (2 DUT, 1 TG) Intel and Arm testbeds with various NICs (Intel 700 and 800 series; the Intel testbeds use some Mellanox NICs as well). We don't have another three-node topology with this same NIC, though, so it looks like something specific to testpmd/l3fwd with the xl710-QDA2 on Altra servers.

VPP performance tests are passing, but l3fwd and testpmd fail. This leads us to believe it's a software issue, but there could be something wrong with the hardware. I'll talk about testpmd from now on, but as far as we can tell, the behavior is the same for testpmd and l3fwd.

Getting back to the caveat mentioned earlier, there seems to be something wrong with port shutdown. When running testpmd on a testbed that hasn't been used for a while it seems that all ports are up right away (we don't see any "Port 0|1: link state change event") and the setup works fine (forwarding works). After restarting testpmd (restarting on one server is sufficient), the ports between DUT1 and DUT2 (but not between DUTs and TG) go down and are not usable in DPDK, VPP or in Linux (with i40e kernel driver) for a while (measured in minutes, sometimes dozens of minutes; the duration is seemingly random). The ports eventually recover and can be used again, but there's nothing in syslog suggesting what happened.
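To put a number on the recovery time, we've been watching the kernel's link state by hand; a small polling loop like the sketch below does the same thing (the operstate path and interface name are placeholders, not our actual setup):

```shell
# wait_link_up: poll an operstate file once a second until it reads "up",
# then print how long the wait took. On a real system the argument would be
# /sys/class/net/<iface>/operstate with the i40e kernel driver bound
# (interface name is a placeholder).
wait_link_up() {
    state_file=$1
    start=$(date +%s)
    while [ "$(cat "$state_file" 2>/dev/null)" != "up" ]; do
        sleep 1
    done
    echo "link up after $(( $(date +%s) - start ))s"
}
```

Called as e.g. `wait_link_up /sys/class/net/enp4s0f0/operstate` right after restarting testpmd, it reports how long the port stayed down.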

What seems to be happening is that testpmd puts the ports into some faulty state. This only happens on the DUT1 -> DUT2 link, though (the ports between the two testpmds), not on the TG -> DUT1 link (the TG port is left alone).

Some more info:
We've come across the issue with this configuration:
OS: Ubuntu 20.04 with kernel 5.4.0-65-generic.
Old NIC firmware, never upgraded: 6.01 0x800035da 1.1747.0.
Driver versions: i40e 2.17.15 and iavf 4.3.19.

As well as with this configuration:
OS: Ubuntu 22.04 with kernel 5.15.0-46-generic.
Updated firmware: 8.30 0x8000a4ae 1.2926.0.
Drivers: i40e 2.19.3 and iavf 4.5.3.

Unsafe noiommu mode is disabled:
cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
N

We used DPDK 22.07 in manual testing and built it on the DUTs, using the generic build:
meson -Dexamples=l3fwd -Dc_args=-DRTE_LIBRTE_I40E_16BYTE_RX_DESC=y -Dplatform=generic build

We're running testpmd with this command:
sudo build/app/dpdk-testpmd -v -l 1,2 -a 0004:04:00.1 -a 0004:04:00.0 --in-memory -- -i --forward-mode=io --burst=64 --txq=1 --rxq=1 --tx-offloads=0x0 --numa --auto-start --total-num-mbufs=32768 --nb-ports=2 --portmask=0x3 --max-pkt-len=1518 --mbuf-size=16384 --nb-cores=1

And l3fwd (with different macs on the other server):
sudo /tmp/openvpp-testing/dpdk/build/examples/dpdk-l3fwd -v -l 1,2 -a 0004:04:00.0 -a 0004:04:00.1 --in-memory -- --parse-ptype --eth-dest="0,40:a6:b7:85:e7:79" --eth-dest="1,3c:fd:fe:c3:e7:a1" --config="(0, 0, 2),(1, 0, 2)" -P -L -p 0x3

We tried adding logs with --log-level=pmd,debug and --no-lsc-interrupt, but that didn't reveal anything helpful, as far as we can tell - please have a look at the attached log. The faulty port is port 0 (it starts out as down; we waited around 25 minutes for it to go up and then shut down testpmd).

We'd like to ask for pointers on what could be the cause or how to debug this issue further.

Thanks,
Juraj

[-- Attachment #1.2: Type: text/html, Size: 11737 bytes --]

[-- Attachment #2: testpmd.log --]
[-- Type: application/octet-stream, Size: 71498 bytes --]

sudo /tmp/openvpp-testing/dpdk/build/app/dpdk-testpmd -v -l 1,2 -a 0004:04:00.1 -a 0004:04:00.0 --in-memory --log-level=pmd,debug -- -i --forward-mode=io --burst=64 --txq=1 --rxq=1 --tx-offloads=0x0 --numa --auto-start --total-num-mbufs=32768 --nb-ports=2 --portmask=0x3 --max-pkt-len=1518 --mbuf-size=16384 --nb-cores=1 --no-lsc-interrupt
EAL: Detected CPU lcores: 160
EAL: Detected NUMA nodes: 2
EAL: RTE Version: 'DPDK 22.07.0'
EAL: Detected static linkage of DPDK
EAL: Selected IOVA mode 'VA'
EAL: No free 32768 kB hugepages reported on node 0
EAL: No free 32768 kB hugepages reported on node 1
EAL: No free 64 kB hugepages reported on node 0
EAL: No free 64 kB hugepages reported on node 1
EAL: 32 hugepages of size 1073741824 reserved, but no mounted hugetlbfs found for that size
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_i40e (8086:1583) device: 0004:04:00.0 (socket 0)
eth_i40e_dev_init():  >>
i40e_pf_reset(): Core and Global modules ready 0
i40e_init_shared_code(): i40e_init_shared_code
i40e_set_mac_type(): i40e_set_mac_type

i40e_set_mac_type(): i40e_set_mac_type found mac: 1, returns: 0
i40e_init_nvm(): i40e_init_nvm
i40e_allocate_dma_mem_d(): memzone i40e_dma_0 allocated with physical address: 283605835776
i40e_allocate_dma_mem_d(): memzone i40e_dma_1 allocated with physical address: 283605827584
i40e_allocate_dma_mem_d(): memzone i40e_dma_2 allocated with physical address: 283605819392
i40e_allocate_dma_mem_d(): memzone i40e_dma_3 allocated with physical address: 283605811200
i40e_allocate_dma_mem_d(): memzone i40e_dma_4 allocated with physical address: 283605803008
i40e_allocate_dma_mem_d(): memzone i40e_dma_5 allocated with physical address: 283605794816
i40e_allocate_dma_mem_d(): memzone i40e_dma_6 allocated with physical address: 283605786624
i40e_allocate_dma_mem_d(): memzone i40e_dma_7 allocated with physical address: 283605778432
i40e_allocate_dma_mem_d(): memzone i40e_dma_8 allocated with physical address: 283605770240
i40e_allocate_dma_mem_d(): memzone i40e_dma_9 allocated with physical address: 283605762048
i40e_allocate_dma_mem_d(): memzone i40e_dma_10 allocated with physical address: 283605753856
i40e_allocate_dma_mem_d(): memzone i40e_dma_11 allocated with physical address: 283605745664
i40e_allocate_dma_mem_d(): memzone i40e_dma_12 allocated with physical address: 283605737472
i40e_allocate_dma_mem_d(): memzone i40e_dma_13 allocated with physical address: 283605729280
i40e_allocate_dma_mem_d(): memzone i40e_dma_14 allocated with physical address: 283605721088
i40e_allocate_dma_mem_d(): memzone i40e_dma_15 allocated with physical address: 283605712896
i40e_allocate_dma_mem_d(): memzone i40e_dma_16 allocated with physical address: 283605704704
i40e_allocate_dma_mem_d(): memzone i40e_dma_17 allocated with physical address: 283605696512
i40e_allocate_dma_mem_d(): memzone i40e_dma_18 allocated with physical address: 283605688320
i40e_allocate_dma_mem_d(): memzone i40e_dma_19 allocated with physical address: 283605680128
i40e_allocate_dma_mem_d(): memzone i40e_dma_20 allocated with physical address: 283605671936
i40e_allocate_dma_mem_d(): memzone i40e_dma_21 allocated with physical address: 283605663744
i40e_allocate_dma_mem_d(): memzone i40e_dma_22 allocated with physical address: 283605655552
i40e_allocate_dma_mem_d(): memzone i40e_dma_23 allocated with physical address: 283605647360
i40e_allocate_dma_mem_d(): memzone i40e_dma_24 allocated with physical address: 283605639168
i40e_allocate_dma_mem_d(): memzone i40e_dma_25 allocated with physical address: 283605630976
i40e_allocate_dma_mem_d(): memzone i40e_dma_26 allocated with physical address: 283605622784
i40e_allocate_dma_mem_d(): memzone i40e_dma_27 allocated with physical address: 283605614592
i40e_allocate_dma_mem_d(): memzone i40e_dma_28 allocated with physical address: 283605606400
i40e_allocate_dma_mem_d(): memzone i40e_dma_29 allocated with physical address: 283605598208
i40e_allocate_dma_mem_d(): memzone i40e_dma_30 allocated with physical address: 283605590016
i40e_allocate_dma_mem_d(): memzone i40e_dma_31 allocated with physical address: 283605581824
i40e_allocate_dma_mem_d(): memzone i40e_dma_32 allocated with physical address: 283605573632
i40e_allocate_dma_mem_d(): memzone i40e_dma_33 allocated with physical address: 283605569536
i40e_allocate_dma_mem_d(): memzone i40e_dma_34 allocated with physical address: 283605561344
i40e_allocate_dma_mem_d(): memzone i40e_dma_35 allocated with physical address: 283605553152
i40e_allocate_dma_mem_d(): memzone i40e_dma_36 allocated with physical address: 283605544960
i40e_allocate_dma_mem_d(): memzone i40e_dma_37 allocated with physical address: 283605536768
i40e_allocate_dma_mem_d(): memzone i40e_dma_38 allocated with physical address: 283605528576
i40e_allocate_dma_mem_d(): memzone i40e_dma_39 allocated with physical address: 283605520384
i40e_allocate_dma_mem_d(): memzone i40e_dma_40 allocated with physical address: 283605512192
i40e_allocate_dma_mem_d(): memzone i40e_dma_41 allocated with physical address: 283605504000
i40e_allocate_dma_mem_d(): memzone i40e_dma_42 allocated with physical address: 283605495808
i40e_allocate_dma_mem_d(): memzone i40e_dma_43 allocated with physical address: 283605487616
i40e_allocate_dma_mem_d(): memzone i40e_dma_44 allocated with physical address: 283605479424
i40e_allocate_dma_mem_d(): memzone i40e_dma_45 allocated with physical address: 283605471232
i40e_allocate_dma_mem_d(): memzone i40e_dma_46 allocated with physical address: 283605463040
i40e_allocate_dma_mem_d(): memzone i40e_dma_47 allocated with physical address: 283605454848
i40e_allocate_dma_mem_d(): memzone i40e_dma_48 allocated with physical address: 283605446656
i40e_allocate_dma_mem_d(): memzone i40e_dma_49 allocated with physical address: 283605438464
i40e_allocate_dma_mem_d(): memzone i40e_dma_50 allocated with physical address: 283605430272
i40e_allocate_dma_mem_d(): memzone i40e_dma_51 allocated with physical address: 283605422080
i40e_allocate_dma_mem_d(): memzone i40e_dma_52 allocated with physical address: 283605413888
i40e_allocate_dma_mem_d(): memzone i40e_dma_53 allocated with physical address: 283605405696
i40e_allocate_dma_mem_d(): memzone i40e_dma_54 allocated with physical address: 283605397504
i40e_allocate_dma_mem_d(): memzone i40e_dma_55 allocated with physical address: 283605389312
i40e_allocate_dma_mem_d(): memzone i40e_dma_56 allocated with physical address: 283605381120
i40e_allocate_dma_mem_d(): memzone i40e_dma_57 allocated with physical address: 283605372928
i40e_allocate_dma_mem_d(): memzone i40e_dma_58 allocated with physical address: 283605364736
i40e_allocate_dma_mem_d(): memzone i40e_dma_59 allocated with physical address: 283605356544
i40e_allocate_dma_mem_d(): memzone i40e_dma_60 allocated with physical address: 283605348352
i40e_allocate_dma_mem_d(): memzone i40e_dma_61 allocated with physical address: 283605340160
i40e_allocate_dma_mem_d(): memzone i40e_dma_62 allocated with physical address: 283605331968
i40e_allocate_dma_mem_d(): memzone i40e_dma_63 allocated with physical address: 283605323776
i40e_allocate_dma_mem_d(): memzone i40e_dma_64 allocated with physical address: 283605315584
i40e_allocate_dma_mem_d(): memzone i40e_dma_65 allocated with physical address: 283605307392
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_aq_release_resource(): i40e_aq_release_resource
eth_i40e_dev_init(): FW 8.3 API 1.13 NVM 08.03.00 eetrack 8000a4ae
i40e_enable_extended_tag(): Extended Tag has already been enabled
i40e_check_write_reg(): [0x002507c0] original: 0x00000000
i40e_check_write_reg(): [0x002507c0] after: 0x00000000
i40e_check_write_reg(): [0x002507e0] original: 0x0001801e
i40e_check_write_reg(): [0x002507e0] after: 0x0001801e
i40e_check_write_reg(): [0x00250840] original: 0x00000000
i40e_check_write_reg(): [0x00250840] after: 0x00000000
i40e_check_write_reg(): [0x00250860] original: 0x0001801e
i40e_check_write_reg(): [0x00250860] after: 0x0001801e
i40e_check_write_reg(): [0x00250880] original: 0x80000000
i40e_check_write_reg(): [0x00250880] after: 0x80000000
i40e_check_write_reg(): [0x002508a0] original: 0x0001801f
i40e_check_write_reg(): [0x002508a0] after: 0x0001801f
i40e_check_write_reg(): [0x002508c0] original: 0x00000000
i40e_check_write_reg(): [0x002508c0] after: 0x00000000
i40e_check_write_reg(): [0x002508e0] original: 0x00018018
i40e_check_write_reg(): [0x002508e0] after: 0x00018018
i40e_check_write_reg(): [0x00250900] original: 0x00000000
i40e_check_write_reg(): [0x00250900] after: 0x00000000
i40e_check_write_reg(): [0x00250920] original: 0x00018018
i40e_check_write_reg(): [0x00250920] after: 0x00018018
i40e_check_write_reg(): [0x00250a40] original: 0x00000000
i40e_check_write_reg(): [0x00250a40] after: 0x00000000
i40e_check_write_reg(): [0x00250a60] original: 0x0007fffe
i40e_check_write_reg(): [0x00250a60] after: 0x0007fffe
i40e_check_write_reg(): [0x00250ac0] original: 0x00000000
i40e_check_write_reg(): [0x00250ac0] after: 0x00000000
i40e_check_write_reg(): [0x00250ae0] original: 0x0007fffe
i40e_check_write_reg(): [0x00250ae0] after: 0x0007fffe
i40e_check_write_reg(): [0x00250b00] original: 0x80000000
i40e_check_write_reg(): [0x00250b00] after: 0x80000000
i40e_check_write_reg(): [0x00250b20] original: 0x0007ffff
i40e_check_write_reg(): [0x00250b20] after: 0x0007ffff
i40e_check_write_reg(): [0x00250b40] original: 0x00000000
i40e_check_write_reg(): [0x00250b40] after: 0x00000000
i40e_check_write_reg(): [0x00250b60] original: 0x0007fff8
i40e_check_write_reg(): [0x00250b60] after: 0x0007fff8
i40e_check_write_reg(): [0x00250b80] original: 0x00000000
i40e_check_write_reg(): [0x00250b80] after: 0x00000000
i40e_check_write_reg(): [0x00250ba0] original: 0x0007fff8
i40e_check_write_reg(): [0x00250ba0] after: 0x0007fff8
i40e_check_write_reg(): [0x00250fc0] original: 0x00004000
i40e_check_write_reg(): [0x00250fc0] after: 0x00004000
i40e_check_write_reg(): [0x00250fe0] original: 0x00000000
i40e_check_write_reg(): [0x00250fe0] after: 0x00000000
eth_i40e_dev_init(): Global register 0x0026c7a0 is changed with 0x28
i40e_configure_registers(): Read from 0x26ce00: 0x203f0200
i40e_configure_registers(): Read from 0x26ce08: 0x11f0200
i40e_get_swr_pm_cfg(): Device 0x1583 with GL_SWR_PM_UP_THR value - 0x06060606
i40e_configure_registers(): Read from 0x269fbc: 0x6060606
i40e_pf_parameter_init(): 64 VMDQ VSIs, 4 queues per VMDQ VSI, in total 256 queues
i40e_allocate_dma_mem_d(): memzone i40e_dma_66 allocated with physical address: 283605180416
i40e_validate_mac_addr(): i40e_validate_mac_addr
i40e_update_default_filter_setting(): Cannot remove the default macvlan filter
i40e_vsi_get_bw_config(): VSI bw limit:0
i40e_vsi_get_bw_config(): VSI max_bw:0
i40e_vsi_get_bw_config():       VSI TC0:share credits 1
i40e_vsi_get_bw_config():       VSI TC0:credits 0
i40e_vsi_get_bw_config():       VSI TC0: max credits: 0
i40e_vsi_get_bw_config():       VSI TC1:share credits 0
i40e_vsi_get_bw_config():       VSI TC1:credits 0
i40e_vsi_get_bw_config():       VSI TC1: max credits: 0
i40e_vsi_get_bw_config():       VSI TC2:share credits 0
i40e_vsi_get_bw_config():       VSI TC2:credits 0
i40e_vsi_get_bw_config():       VSI TC2: max credits: 0
i40e_vsi_get_bw_config():       VSI TC3:share credits 0
i40e_vsi_get_bw_config():       VSI TC3:credits 0
i40e_vsi_get_bw_config():       VSI TC3: max credits: 0
i40e_vsi_get_bw_config():       VSI TC4:share credits 0
i40e_vsi_get_bw_config():       VSI TC4:credits 0
i40e_vsi_get_bw_config():       VSI TC4: max credits: 0
i40e_vsi_get_bw_config():       VSI TC5:share credits 0
i40e_vsi_get_bw_config():       VSI TC5:credits 0
i40e_vsi_get_bw_config():       VSI TC5: max credits: 0
i40e_vsi_get_bw_config():       VSI TC6:share credits 0
i40e_vsi_get_bw_config():       VSI TC6:credits 0
i40e_vsi_get_bw_config():       VSI TC6: max credits: 0
i40e_vsi_get_bw_config():       VSI TC7:share credits 0
i40e_vsi_get_bw_config():       VSI TC7:credits 0
i40e_vsi_get_bw_config():       VSI TC7: max credits: 0
i40e_pf_setup(): Hardware capability of hash lookup table size: 512
i40e_update_flow_control(): Link auto negotiation not completed
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_read_nvm_buffer_srctl(): i40e_read_nvm_buffer_srctl
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_pf_host_init():  >>
i40e_init_filter_invalidation(): FDIR INVALPRIO set to guaranteed first
i40e_init_fdir_filter_list(): FDIR guarantee space: 512, best_effort space 7168.
i40e_update_vsi_stats(): ***************** VSI[6] stats start *******************
i40e_update_vsi_stats(): rx_bytes:            0
i40e_update_vsi_stats(): rx_unicast:          0
i40e_update_vsi_stats(): rx_multicast:        0
i40e_update_vsi_stats(): rx_broadcast:        0
i40e_update_vsi_stats(): rx_discards:         0
i40e_update_vsi_stats(): rx_unknown_protocol: 0
i40e_update_vsi_stats(): tx_bytes:            0
i40e_update_vsi_stats(): tx_unicast:          0
i40e_update_vsi_stats(): tx_multicast:        0
i40e_update_vsi_stats(): tx_broadcast:        0
i40e_update_vsi_stats(): tx_discards:         0
i40e_update_vsi_stats(): tx_errors:           0
i40e_update_vsi_stats(): ***************** VSI[6] stats end *******************
EAL: Probe PCI driver: net_i40e (8086:1583) device: 0004:04:00.1 (socket 0)
eth_i40e_dev_init():  >>
i40e_pf_reset(): Core and Global modules ready 0
i40e_init_shared_code(): i40e_init_shared_code
i40e_set_mac_type(): i40e_set_mac_type

i40e_set_mac_type(): i40e_set_mac_type found mac: 1, returns: 0
i40e_init_nvm(): i40e_init_nvm
i40e_allocate_dma_mem_d(): memzone i40e_dma_67 allocated with physical address: 283608416256
i40e_allocate_dma_mem_d(): memzone i40e_dma_68 allocated with physical address: 283608408064
i40e_allocate_dma_mem_d(): memzone i40e_dma_69 allocated with physical address: 283608399872
i40e_allocate_dma_mem_d(): memzone i40e_dma_70 allocated with physical address: 283608391680
i40e_allocate_dma_mem_d(): memzone i40e_dma_71 allocated with physical address: 283608383488
i40e_allocate_dma_mem_d(): memzone i40e_dma_72 allocated with physical address: 283608375296
i40e_allocate_dma_mem_d(): memzone i40e_dma_73 allocated with physical address: 283608367104
i40e_allocate_dma_mem_d(): memzone i40e_dma_74 allocated with physical address: 283608358912
i40e_allocate_dma_mem_d(): memzone i40e_dma_75 allocated with physical address: 283606835200
i40e_allocate_dma_mem_d(): memzone i40e_dma_76 allocated with physical address: 283606827008
i40e_allocate_dma_mem_d(): memzone i40e_dma_77 allocated with physical address: 283606818816
i40e_allocate_dma_mem_d(): memzone i40e_dma_78 allocated with physical address: 283606810624
i40e_allocate_dma_mem_d(): memzone i40e_dma_79 allocated with physical address: 283606802432
i40e_allocate_dma_mem_d(): memzone i40e_dma_80 allocated with physical address: 283606794240
i40e_allocate_dma_mem_d(): memzone i40e_dma_81 allocated with physical address: 283606786048
i40e_allocate_dma_mem_d(): memzone i40e_dma_82 allocated with physical address: 283606777856
i40e_allocate_dma_mem_d(): memzone i40e_dma_83 allocated with physical address: 283606769664
i40e_allocate_dma_mem_d(): memzone i40e_dma_84 allocated with physical address: 283606761472
i40e_allocate_dma_mem_d(): memzone i40e_dma_85 allocated with physical address: 283606753280
i40e_allocate_dma_mem_d(): memzone i40e_dma_86 allocated with physical address: 283606745088
i40e_allocate_dma_mem_d(): memzone i40e_dma_87 allocated with physical address: 283606736896
i40e_allocate_dma_mem_d(): memzone i40e_dma_88 allocated with physical address: 283606728704
i40e_allocate_dma_mem_d(): memzone i40e_dma_89 allocated with physical address: 283606720512
i40e_allocate_dma_mem_d(): memzone i40e_dma_90 allocated with physical address: 283606712320
i40e_allocate_dma_mem_d(): memzone i40e_dma_91 allocated with physical address: 283606704128
i40e_allocate_dma_mem_d(): memzone i40e_dma_92 allocated with physical address: 283606695936
i40e_allocate_dma_mem_d(): memzone i40e_dma_93 allocated with physical address: 283606687744
i40e_allocate_dma_mem_d(): memzone i40e_dma_94 allocated with physical address: 283606679552
i40e_allocate_dma_mem_d(): memzone i40e_dma_95 allocated with physical address: 283606671360
i40e_allocate_dma_mem_d(): memzone i40e_dma_96 allocated with physical address: 283606663168
i40e_allocate_dma_mem_d(): memzone i40e_dma_97 allocated with physical address: 283606654976
i40e_allocate_dma_mem_d(): memzone i40e_dma_98 allocated with physical address: 283606646784
i40e_allocate_dma_mem_d(): memzone i40e_dma_99 allocated with physical address: 283606638592
i40e_allocate_dma_mem_d(): memzone i40e_dma_100 allocated with physical address: 283608354816
i40e_allocate_dma_mem_d(): memzone i40e_dma_101 allocated with physical address: 283606630400
i40e_allocate_dma_mem_d(): memzone i40e_dma_102 allocated with physical address: 283606622208
i40e_allocate_dma_mem_d(): memzone i40e_dma_103 allocated with physical address: 283606614016
i40e_allocate_dma_mem_d(): memzone i40e_dma_104 allocated with physical address: 283606605824
i40e_allocate_dma_mem_d(): memzone i40e_dma_105 allocated with physical address: 283606597632
i40e_allocate_dma_mem_d(): memzone i40e_dma_106 allocated with physical address: 283606589440
i40e_allocate_dma_mem_d(): memzone i40e_dma_107 allocated with physical address: 283606581248
i40e_allocate_dma_mem_d(): memzone i40e_dma_108 allocated with physical address: 283606573056
i40e_allocate_dma_mem_d(): memzone i40e_dma_109 allocated with physical address: 283606564864
i40e_allocate_dma_mem_d(): memzone i40e_dma_110 allocated with physical address: 283606556672
i40e_allocate_dma_mem_d(): memzone i40e_dma_111 allocated with physical address: 283606548480
i40e_allocate_dma_mem_d(): memzone i40e_dma_112 allocated with physical address: 283606540288
i40e_allocate_dma_mem_d(): memzone i40e_dma_113 allocated with physical address: 283606532096
i40e_allocate_dma_mem_d(): memzone i40e_dma_114 allocated with physical address: 283606523904
i40e_allocate_dma_mem_d(): memzone i40e_dma_115 allocated with physical address: 283606515712
i40e_allocate_dma_mem_d(): memzone i40e_dma_116 allocated with physical address: 283606507520
i40e_allocate_dma_mem_d(): memzone i40e_dma_117 allocated with physical address: 283606499328
i40e_allocate_dma_mem_d(): memzone i40e_dma_118 allocated with physical address: 283606491136
i40e_allocate_dma_mem_d(): memzone i40e_dma_119 allocated with physical address: 283606482944
i40e_allocate_dma_mem_d(): memzone i40e_dma_120 allocated with physical address: 283606474752
i40e_allocate_dma_mem_d(): memzone i40e_dma_121 allocated with physical address: 283606466560
i40e_allocate_dma_mem_d(): memzone i40e_dma_122 allocated with physical address: 283606458368
i40e_allocate_dma_mem_d(): memzone i40e_dma_123 allocated with physical address: 283606450176
i40e_allocate_dma_mem_d(): memzone i40e_dma_124 allocated with physical address: 283606441984
i40e_allocate_dma_mem_d(): memzone i40e_dma_125 allocated with physical address: 283606433792
i40e_allocate_dma_mem_d(): memzone i40e_dma_126 allocated with physical address: 283606425600
i40e_allocate_dma_mem_d(): memzone i40e_dma_127 allocated with physical address: 283606417408
i40e_allocate_dma_mem_d(): memzone i40e_dma_128 allocated with physical address: 283606409216
i40e_allocate_dma_mem_d(): memzone i40e_dma_129 allocated with physical address: 283606401024
i40e_allocate_dma_mem_d(): memzone i40e_dma_130 allocated with physical address: 283606392832
i40e_allocate_dma_mem_d(): memzone i40e_dma_131 allocated with physical address: 283606384640
i40e_allocate_dma_mem_d(): memzone i40e_dma_132 allocated with physical address: 283606376448
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_aq_release_resource(): i40e_aq_release_resource
eth_i40e_dev_init(): FW 8.3 API 1.13 NVM 08.03.00 eetrack 8000a4ae
i40e_enable_extended_tag(): Extended Tag has already been enabled
i40e_check_write_reg(): [0x002507c0] original: 0x00000000
i40e_check_write_reg(): [0x002507c0] after: 0x00000000
i40e_check_write_reg(): [0x002507e0] original: 0x0001801e
i40e_check_write_reg(): [0x002507e0] after: 0x0001801e
i40e_check_write_reg(): [0x00250840] original: 0x00000000
i40e_check_write_reg(): [0x00250840] after: 0x00000000
i40e_check_write_reg(): [0x00250860] original: 0x0001801e
i40e_check_write_reg(): [0x00250860] after: 0x0001801e
i40e_check_write_reg(): [0x00250880] original: 0x80000000
i40e_check_write_reg(): [0x00250880] after: 0x80000000
i40e_check_write_reg(): [0x002508a0] original: 0x0001801f
i40e_check_write_reg(): [0x002508a0] after: 0x0001801f
i40e_check_write_reg(): [0x002508c0] original: 0x00000000
i40e_check_write_reg(): [0x002508c0] after: 0x00000000
i40e_check_write_reg(): [0x002508e0] original: 0x00018018
i40e_check_write_reg(): [0x002508e0] after: 0x00018018
i40e_check_write_reg(): [0x00250900] original: 0x00000000
i40e_check_write_reg(): [0x00250900] after: 0x00000000
i40e_check_write_reg(): [0x00250920] original: 0x00018018
i40e_check_write_reg(): [0x00250920] after: 0x00018018
i40e_check_write_reg(): [0x00250a40] original: 0x00000000
i40e_check_write_reg(): [0x00250a40] after: 0x00000000
i40e_check_write_reg(): [0x00250a60] original: 0x0007fffe
i40e_check_write_reg(): [0x00250a60] after: 0x0007fffe
i40e_check_write_reg(): [0x00250ac0] original: 0x00000000
i40e_check_write_reg(): [0x00250ac0] after: 0x00000000
i40e_check_write_reg(): [0x00250ae0] original: 0x0007fffe
i40e_check_write_reg(): [0x00250ae0] after: 0x0007fffe
i40e_check_write_reg(): [0x00250b00] original: 0x80000000
i40e_check_write_reg(): [0x00250b00] after: 0x80000000
i40e_check_write_reg(): [0x00250b20] original: 0x0007ffff
i40e_check_write_reg(): [0x00250b20] after: 0x0007ffff
i40e_check_write_reg(): [0x00250b40] original: 0x00000000
i40e_check_write_reg(): [0x00250b40] after: 0x00000000
i40e_check_write_reg(): [0x00250b60] original: 0x0007fff8
i40e_check_write_reg(): [0x00250b60] after: 0x0007fff8
i40e_check_write_reg(): [0x00250b80] original: 0x00000000
i40e_check_write_reg(): [0x00250b80] after: 0x00000000
i40e_check_write_reg(): [0x00250ba0] original: 0x0007fff8
i40e_check_write_reg(): [0x00250ba0] after: 0x0007fff8
i40e_check_write_reg(): [0x00250fc0] original: 0x00004000
i40e_check_write_reg(): [0x00250fc0] after: 0x00004000
i40e_check_write_reg(): [0x00250fe0] original: 0x00000000
i40e_check_write_reg(): [0x00250fe0] after: 0x00000000
eth_i40e_dev_init(): Global register 0x0026c7a0 is changed with 0x28
i40e_configure_registers(): Read from 0x26ce00: 0x203f0200
i40e_configure_registers(): Read from 0x26ce08: 0x11f0200
i40e_get_swr_pm_cfg(): Device 0x1583 with GL_SWR_PM_UP_THR value - 0x06060606
i40e_configure_registers(): Read from 0x269fbc: 0x6060606
i40e_pf_parameter_init(): 64 VMDQ VSIs, 4 queues per VMDQ VSI, in total 256 queues
i40e_allocate_dma_mem_d(): memzone i40e_dma_133 allocated with physical address: 283604762624
i40e_validate_mac_addr(): i40e_validate_mac_addr
i40e_update_default_filter_setting(): Cannot remove the default macvlan filter
i40e_vsi_get_bw_config(): VSI bw limit:0
i40e_vsi_get_bw_config(): VSI max_bw:0
i40e_vsi_get_bw_config():       VSI TC0:share credits 1
i40e_vsi_get_bw_config():       VSI TC0:credits 0
i40e_vsi_get_bw_config():       VSI TC0: max credits: 0
i40e_vsi_get_bw_config():       VSI TC1:share credits 0
i40e_vsi_get_bw_config():       VSI TC1:credits 0
i40e_vsi_get_bw_config():       VSI TC1: max credits: 0
i40e_vsi_get_bw_config():       VSI TC2:share credits 0
i40e_vsi_get_bw_config():       VSI TC2:credits 0
i40e_vsi_get_bw_config():       VSI TC2: max credits: 0
i40e_vsi_get_bw_config():       VSI TC3:share credits 0
i40e_vsi_get_bw_config():       VSI TC3:credits 0
i40e_vsi_get_bw_config():       VSI TC3: max credits: 0
i40e_vsi_get_bw_config():       VSI TC4:share credits 0
i40e_vsi_get_bw_config():       VSI TC4:credits 0
i40e_vsi_get_bw_config():       VSI TC4: max credits: 0
i40e_vsi_get_bw_config():       VSI TC5:share credits 0
i40e_vsi_get_bw_config():       VSI TC5:credits 0
i40e_vsi_get_bw_config():       VSI TC5: max credits: 0
i40e_vsi_get_bw_config():       VSI TC6:share credits 0
i40e_vsi_get_bw_config():       VSI TC6:credits 0
i40e_vsi_get_bw_config():       VSI TC6: max credits: 0
i40e_vsi_get_bw_config():       VSI TC7:share credits 0
i40e_vsi_get_bw_config():       VSI TC7:credits 0
i40e_vsi_get_bw_config():       VSI TC7: max credits: 0
i40e_pf_setup(): Hardware capability of hash lookup table size: 512
i40e_update_flow_control(): Link auto negotiation not completed
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_read_nvm_buffer_srctl(): i40e_read_nvm_buffer_srctl
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_pf_host_init():  >>
i40e_init_filter_invalidation(): FDIR INVALPRIO set to guaranteed first
i40e_init_fdir_filter_list(): FDIR guarantee space: 512, best_effort space 7168.
i40e_update_vsi_stats(): ***************** VSI[7] stats start *******************
i40e_update_vsi_stats(): rx_bytes:            0
i40e_update_vsi_stats(): rx_unicast:          0
i40e_update_vsi_stats(): rx_multicast:        0
i40e_update_vsi_stats(): rx_broadcast:        0
i40e_update_vsi_stats(): rx_discards:         0
i40e_update_vsi_stats(): rx_unknown_protocol: 0
i40e_update_vsi_stats(): tx_bytes:            0
i40e_update_vsi_stats(): tx_unicast:          0
i40e_update_vsi_stats(): tx_multicast:        0
i40e_update_vsi_stats(): tx_broadcast:        0
i40e_update_vsi_stats(): tx_discards:         0
i40e_update_vsi_stats(): tx_errors:           0
i40e_update_vsi_stats(): ***************** VSI[7] stats end *******************
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
Set io packet forwarding mode
Auto-start selected
testpmd: create a new mbuf pool <mb_pool_0>: n=32768, size=16384, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
i40e_set_tx_function_flag(): Vector Tx can be enabled on Tx queue 0.
i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
i40e_set_tx_function(): Using Vector Tx (port 0).
i40e_set_rx_function(): Using Vector Rx (port 0).
i40e_dev_rx_queue_start():  >>
i40e_dev_tx_queue_start():  >>
i40e_phy_conf_link():   Current: abilities 20, link_speed 10
i40e_phy_conf_link():   Config:  abilities 38, link_speed 7e
i40e_ethertype_filter_restore(): Ethertype filter: mac_etype_used = 58016, etype_used = 65535, mac_etype_free = 65535, etype_free = 0
i40e_fdir_filter_restore(): FDIR: Guarant count: 0,  Best count: 0
i40e_dev_alarm_handler(): ICR0: adminq event
i40e_dev_handle_aq_msg(): Request 2561 is not supported yet
i40e_dev_handle_aq_msg(): Request 2561 is not supported yet
i40e_dev_handle_aq_msg(): Request 2561 is not supported yet
i40e_set_mac_max_frame(): Set max frame size at port level not applicable on link down
Port 0: 40:A6:B7:85:E7:80
Configuring Port 1 (socket 0)
i40e_set_tx_function_flag(): Vector Tx can be enabled on Tx queue 0.
i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=1, queue=0.
i40e_set_tx_function(): Using Vector Tx (port 1).
i40e_set_rx_function(): Using Vector Rx (port 1).
i40e_dev_rx_queue_start():  >>
i40e_dev_tx_queue_start():  >>
i40e_phy_conf_link():   Current: abilities 20, link_speed 10
i40e_phy_conf_link():   Config:  abilities 38, link_speed 7e
i40e_ethertype_filter_restore(): Ethertype filter: mac_etype_used = 58016, etype_used = 65535, mac_etype_free = 65535, etype_free = 0
i40e_fdir_filter_restore(): FDIR: Guarant count: 0,  Best count: 0
i40e_dev_alarm_handler(): ICR0: adminq event
i40e_dev_handle_aq_msg(): Request 2561 is not supported yet
i40e_dev_handle_aq_msg(): Request 2561 is not supported yet
i40e_dev_handle_aq_msg(): Request 2561 is not supported yet
Port 1: 40:A6:B7:85:E7:81
Checking link statuses...
i40e_dev_alarm_handler(): ICR0: adminq event

Port 0 Link down
Port 1 Link up at 40 Gbps FDX Autoneg
Done
Start automatic packet forwarding
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 2 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=64
  nb forwarding cores=1 - nb forwarding ports=2
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=2048 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=1024 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=32
  port 1: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=2048 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=1024 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=32
i40e_update_vsi_stats(): ***************** VSI[6] stats start *******************
i40e_update_vsi_stats(): rx_bytes:            0
i40e_update_vsi_stats(): rx_unicast:          0
i40e_update_vsi_stats(): rx_multicast:        0
i40e_update_vsi_stats(): rx_broadcast:        0
i40e_update_vsi_stats(): rx_discards:         0
i40e_update_vsi_stats(): rx_unknown_protocol: 0
i40e_update_vsi_stats(): tx_bytes:            0
i40e_update_vsi_stats(): tx_unicast:          0
i40e_update_vsi_stats(): tx_multicast:        0
i40e_update_vsi_stats(): tx_broadcast:        0
i40e_update_vsi_stats(): tx_discards:         0
i40e_update_vsi_stats(): tx_errors:           0
i40e_update_vsi_stats(): ***************** VSI[6] stats end *******************
i40e_dev_stats_get(): ***************** PF stats start *******************
i40e_dev_stats_get(): rx_bytes:            0
i40e_dev_stats_get(): rx_unicast:          0
i40e_dev_stats_get(): rx_multicast:        0
i40e_dev_stats_get(): rx_broadcast:        0
i40e_dev_stats_get(): rx_discards:         0
i40e_dev_stats_get(): rx_unknown_protocol: 0
i40e_dev_stats_get(): tx_bytes:            0
i40e_dev_stats_get(): tx_unicast:          0
i40e_dev_stats_get(): tx_multicast:        0
i40e_dev_stats_get(): tx_broadcast:        0
i40e_dev_stats_get(): tx_discards:         0
i40e_dev_stats_get(): tx_errors:           0
i40e_dev_stats_get(): tx_dropped_link_down:     0
i40e_dev_stats_get(): crc_errors:               0
i40e_dev_stats_get(): illegal_bytes:            0
i40e_dev_stats_get(): error_bytes:              0
i40e_dev_stats_get(): mac_local_faults:         0
i40e_dev_stats_get(): mac_remote_faults:        0
i40e_dev_stats_get(): rx_length_errors:         0
i40e_dev_stats_get(): link_xon_rx:              0
i40e_dev_stats_get(): link_xoff_rx:             0
i40e_dev_stats_get(): priority_xon_rx[0]:      0
i40e_dev_stats_get(): priority_xoff_rx[0]:     0
i40e_dev_stats_get(): priority_xon_rx[1]:      0
i40e_dev_stats_get(): priority_xoff_rx[1]:     0
i40e_dev_stats_get(): priority_xon_rx[2]:      0
i40e_dev_stats_get(): priority_xoff_rx[2]:     0
i40e_dev_stats_get(): priority_xon_rx[3]:      0
i40e_dev_stats_get(): priority_xoff_rx[3]:     0
i40e_dev_stats_get(): priority_xon_rx[4]:      0
i40e_dev_stats_get(): priority_xoff_rx[4]:     0
i40e_dev_stats_get(): priority_xon_rx[5]:      0
i40e_dev_stats_get(): priority_xoff_rx[5]:     0
i40e_dev_stats_get(): priority_xon_rx[6]:      0
i40e_dev_stats_get(): priority_xoff_rx[6]:     0
i40e_dev_stats_get(): priority_xon_rx[7]:      0
i40e_dev_stats_get(): priority_xoff_rx[7]:     0
i40e_dev_stats_get(): link_xon_tx:              0
i40e_dev_stats_get(): link_xoff_tx:             0
i40e_dev_stats_get(): priority_xon_tx[0]:      0
i40e_dev_stats_get(): priority_xoff_tx[0]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[0]:  0
i40e_dev_stats_get(): priority_xon_tx[1]:      0
i40e_dev_stats_get(): priority_xoff_tx[1]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[1]:  0
i40e_dev_stats_get(): priority_xon_tx[2]:      0
i40e_dev_stats_get(): priority_xoff_tx[2]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[2]:  0
i40e_dev_stats_get(): priority_xon_tx[3]:      0
i40e_dev_stats_get(): priority_xoff_tx[3]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[3]:  0
i40e_dev_stats_get(): priority_xon_tx[4]:      0
i40e_dev_stats_get(): priority_xoff_tx[4]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[4]:  0
i40e_dev_stats_get(): priority_xon_tx[5]:      0
i40e_dev_stats_get(): priority_xoff_tx[5]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[5]:  0
i40e_dev_stats_get(): priority_xon_tx[6]:      0
i40e_dev_stats_get(): priority_xoff_tx[6]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[6]:  0
i40e_dev_stats_get(): priority_xon_tx[7]:      0
i40e_dev_stats_get(): priority_xoff_tx[7]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[7]:  0
i40e_dev_stats_get(): rx_size_64:               0
i40e_dev_stats_get(): rx_size_127:              0
i40e_dev_stats_get(): rx_size_255:              0
i40e_dev_stats_get(): rx_size_511:              0
i40e_dev_stats_get(): rx_size_1023:             0
i40e_dev_stats_get(): rx_size_1522:             0
i40e_dev_stats_get(): rx_size_big:              0
i40e_dev_stats_get(): rx_undersize:             0
i40e_dev_stats_get(): rx_fragments:             0
i40e_dev_stats_get(): rx_oversize:              0
i40e_dev_stats_get(): rx_jabber:                0
i40e_dev_stats_get(): tx_size_64:               0
i40e_dev_stats_get(): tx_size_127:              0
i40e_dev_stats_get(): tx_size_255:              0
i40e_dev_stats_get(): tx_size_511:              0
i40e_dev_stats_get(): tx_size_1023:             0
i40e_dev_stats_get(): tx_size_1522:             0
i40e_dev_stats_get(): tx_size_big:              0
i40e_dev_stats_get(): mac_short_packet_dropped: 0
i40e_dev_stats_get(): checksum_error:           0
i40e_dev_stats_get(): fdir_match:               0
i40e_dev_stats_get(): ***************** PF stats end ********************
i40e_update_vsi_stats(): ***************** VSI[7] stats start *******************
i40e_update_vsi_stats(): rx_bytes:            0
i40e_update_vsi_stats(): rx_unicast:          0
i40e_update_vsi_stats(): rx_multicast:        0
i40e_update_vsi_stats(): rx_broadcast:        0
i40e_update_vsi_stats(): rx_discards:         0
i40e_update_vsi_stats(): rx_unknown_protocol: 0
i40e_update_vsi_stats(): tx_bytes:            0
i40e_update_vsi_stats(): tx_unicast:          0
i40e_update_vsi_stats(): tx_multicast:        0
i40e_update_vsi_stats(): tx_broadcast:        0
i40e_update_vsi_stats(): tx_discards:         0
i40e_update_vsi_stats(): tx_errors:           0
i40e_update_vsi_stats(): ***************** VSI[7] stats end *******************
i40e_dev_stats_get(): ***************** PF stats start *******************
i40e_dev_stats_get(): rx_bytes:            0
i40e_dev_stats_get(): rx_unicast:          0
i40e_dev_stats_get(): rx_multicast:        0
i40e_dev_stats_get(): rx_broadcast:        0
i40e_dev_stats_get(): rx_discards:         0
i40e_dev_stats_get(): rx_unknown_protocol: 0
i40e_dev_stats_get(): tx_bytes:            0
i40e_dev_stats_get(): tx_unicast:          0
i40e_dev_stats_get(): tx_multicast:        0
i40e_dev_stats_get(): tx_broadcast:        0
i40e_dev_stats_get(): tx_discards:         0
i40e_dev_stats_get(): tx_errors:           0
i40e_dev_stats_get(): tx_dropped_link_down:     0
i40e_dev_stats_get(): crc_errors:               0
i40e_dev_stats_get(): illegal_bytes:            0
i40e_dev_stats_get(): error_bytes:              0
i40e_dev_stats_get(): mac_local_faults:         0
i40e_dev_stats_get(): mac_remote_faults:        0
i40e_dev_stats_get(): rx_length_errors:         0
i40e_dev_stats_get(): link_xon_rx:              0
i40e_dev_stats_get(): link_xoff_rx:             0
i40e_dev_stats_get(): priority_xon_rx[0]:      0
i40e_dev_stats_get(): priority_xoff_rx[0]:     0
i40e_dev_stats_get(): priority_xon_rx[1]:      0
i40e_dev_stats_get(): priority_xoff_rx[1]:     0
i40e_dev_stats_get(): priority_xon_rx[2]:      0
i40e_dev_stats_get(): priority_xoff_rx[2]:     0
i40e_dev_stats_get(): priority_xon_rx[3]:      0
i40e_dev_stats_get(): priority_xoff_rx[3]:     0
i40e_dev_stats_get(): priority_xon_rx[4]:      0
i40e_dev_stats_get(): priority_xoff_rx[4]:     0
i40e_dev_stats_get(): priority_xon_rx[5]:      0
i40e_dev_stats_get(): priority_xoff_rx[5]:     0
i40e_dev_stats_get(): priority_xon_rx[6]:      0
i40e_dev_stats_get(): priority_xoff_rx[6]:     0
i40e_dev_stats_get(): priority_xon_rx[7]:      0
i40e_dev_stats_get(): priority_xoff_rx[7]:     0
i40e_dev_stats_get(): link_xon_tx:              0
i40e_dev_stats_get(): link_xoff_tx:             0
i40e_dev_stats_get(): priority_xon_tx[0]:      0
i40e_dev_stats_get(): priority_xoff_tx[0]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[0]:  0
i40e_dev_stats_get(): priority_xon_tx[1]:      0
i40e_dev_stats_get(): priority_xoff_tx[1]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[1]:  0
i40e_dev_stats_get(): priority_xon_tx[2]:      0
i40e_dev_stats_get(): priority_xoff_tx[2]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[2]:  0
i40e_dev_stats_get(): priority_xon_tx[3]:      0
i40e_dev_stats_get(): priority_xoff_tx[3]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[3]:  0
i40e_dev_stats_get(): priority_xon_tx[4]:      0
i40e_dev_stats_get(): priority_xoff_tx[4]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[4]:  0
i40e_dev_stats_get(): priority_xon_tx[5]:      0
i40e_dev_stats_get(): priority_xoff_tx[5]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[5]:  0
i40e_dev_stats_get(): priority_xon_tx[6]:      0
i40e_dev_stats_get(): priority_xoff_tx[6]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[6]:  0
i40e_dev_stats_get(): priority_xon_tx[7]:      0
i40e_dev_stats_get(): priority_xoff_tx[7]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[7]:  0
i40e_dev_stats_get(): rx_size_64:               0
i40e_dev_stats_get(): rx_size_127:              0
i40e_dev_stats_get(): rx_size_255:              0
i40e_dev_stats_get(): rx_size_511:              0
i40e_dev_stats_get(): rx_size_1023:             0
i40e_dev_stats_get(): rx_size_1522:             0
i40e_dev_stats_get(): rx_size_big:              0
i40e_dev_stats_get(): rx_undersize:             0
i40e_dev_stats_get(): rx_fragments:             0
i40e_dev_stats_get(): rx_oversize:              0
i40e_dev_stats_get(): rx_jabber:                0
i40e_dev_stats_get(): tx_size_64:               0
i40e_dev_stats_get(): tx_size_127:              0
i40e_dev_stats_get(): tx_size_255:              0
i40e_dev_stats_get(): tx_size_511:              0
i40e_dev_stats_get(): tx_size_1023:             0
i40e_dev_stats_get(): tx_size_1522:             0
i40e_dev_stats_get(): tx_size_big:              0
i40e_dev_stats_get(): mac_short_packet_dropped: 0
i40e_dev_stats_get(): checksum_error:           0
i40e_dev_stats_get(): fdir_match:               0
i40e_dev_stats_get(): ***************** PF stats end ********************
testpmd> 
testpmd> 
testpmd> show port info 0

********************* Infos for port 0  *********************
MAC address: 40:A6:B7:85:E7:80
Device name: 0004:04:00.0
Driver name: net_i40e
Firmware-version: 8.30 0x8000a4ae 1.2926.0
Devargs: 
Connect to socket: 0
memory allocation on the socket: 0
Link status: down
Link speed: None
Link duplex: full-duplex
Autoneg status: On
MTU: 1492
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 64
Maximum number of MAC addresses of hash filtering: 0
VLAN offload: 
  strip off, filter off, extend off, qinq strip off
Hash key size in bytes: 52
Redirection table size: 512
Supported RSS offload flow types:
  ipv4-frag  ipv4-tcp  ipv4-udp  ipv4-sctp  ipv4-other
  ipv6-frag  ipv6-tcp  ipv6-udp  ipv6-sctp  ipv6-other
  l2-payload
Minimum size of RX buffer: 1024
Maximum configurable length of RX packet: 9728
Maximum configurable size of LRO aggregated packet: 0
Maximum number of VMDq pools: 64
Current number of RX queues: 1
Max possible RX queues: 320
Max possible number of RXDs per queue: 4096
Min possible number of RXDs per queue: 64
RXDs number alignment: 32
Current number of TX queues: 1
Max possible TX queues: 320
Max possible number of TXDs per queue: 4096
Min possible number of TXDs per queue: 64
TXDs number alignment: 32
Max segment number per packet: 255
Max segment number per MTU/TSO: 8
Device capabilities: 0x3( RUNTIME_RX_QUEUE_SETUP RUNTIME_TX_QUEUE_SETUP )
testpmd> quit
Telling cores to stop...
Waiting for lcores to finish...
i40e_update_vsi_stats(): ***************** VSI[6] stats start *******************
i40e_update_vsi_stats(): rx_bytes:            0
i40e_update_vsi_stats(): rx_unicast:          0
i40e_update_vsi_stats(): rx_multicast:        0
i40e_update_vsi_stats(): rx_broadcast:        0
i40e_update_vsi_stats(): rx_discards:         0
i40e_update_vsi_stats(): rx_unknown_protocol: 0
i40e_update_vsi_stats(): tx_bytes:            0
i40e_update_vsi_stats(): tx_unicast:          0
i40e_update_vsi_stats(): tx_multicast:        0
i40e_update_vsi_stats(): tx_broadcast:        0
i40e_update_vsi_stats(): tx_discards:         0
i40e_update_vsi_stats(): tx_errors:           0
i40e_update_vsi_stats(): ***************** VSI[6] stats end *******************
i40e_dev_stats_get(): ***************** PF stats start *******************
i40e_dev_stats_get(): rx_bytes:            0
i40e_dev_stats_get(): rx_unicast:          0
i40e_dev_stats_get(): rx_multicast:        0
i40e_dev_stats_get(): rx_broadcast:        0
i40e_dev_stats_get(): rx_discards:         0
i40e_dev_stats_get(): rx_unknown_protocol: 0
i40e_dev_stats_get(): tx_bytes:            0
i40e_dev_stats_get(): tx_unicast:          0
i40e_dev_stats_get(): tx_multicast:        0
i40e_dev_stats_get(): tx_broadcast:        0
i40e_dev_stats_get(): tx_discards:         0
i40e_dev_stats_get(): tx_errors:           0
i40e_dev_stats_get(): tx_dropped_link_down:     0
i40e_dev_stats_get(): crc_errors:               0
i40e_dev_stats_get(): illegal_bytes:            0
i40e_dev_stats_get(): error_bytes:              0
i40e_dev_stats_get(): mac_local_faults:         0
i40e_dev_stats_get(): mac_remote_faults:        0
i40e_dev_stats_get(): rx_length_errors:         0
i40e_dev_stats_get(): link_xon_rx:              0
i40e_dev_stats_get(): link_xoff_rx:             0
i40e_dev_stats_get(): priority_xon_rx[0]:      0
i40e_dev_stats_get(): priority_xoff_rx[0]:     0
i40e_dev_stats_get(): priority_xon_rx[1]:      0
i40e_dev_stats_get(): priority_xoff_rx[1]:     0
i40e_dev_stats_get(): priority_xon_rx[2]:      0
i40e_dev_stats_get(): priority_xoff_rx[2]:     0
i40e_dev_stats_get(): priority_xon_rx[3]:      0
i40e_dev_stats_get(): priority_xoff_rx[3]:     0
i40e_dev_stats_get(): priority_xon_rx[4]:      0
i40e_dev_stats_get(): priority_xoff_rx[4]:     0
i40e_dev_stats_get(): priority_xon_rx[5]:      0
i40e_dev_stats_get(): priority_xoff_rx[5]:     0
i40e_dev_stats_get(): priority_xon_rx[6]:      0
i40e_dev_stats_get(): priority_xoff_rx[6]:     0
i40e_dev_stats_get(): priority_xon_rx[7]:      0
i40e_dev_stats_get(): priority_xoff_rx[7]:     0
i40e_dev_stats_get(): link_xon_tx:              0
i40e_dev_stats_get(): link_xoff_tx:             0
i40e_dev_stats_get(): priority_xon_tx[0]:      0
i40e_dev_stats_get(): priority_xoff_tx[0]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[0]:  0
i40e_dev_stats_get(): priority_xon_tx[1]:      0
i40e_dev_stats_get(): priority_xoff_tx[1]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[1]:  0
i40e_dev_stats_get(): priority_xon_tx[2]:      0
i40e_dev_stats_get(): priority_xoff_tx[2]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[2]:  0
i40e_dev_stats_get(): priority_xon_tx[3]:      0
i40e_dev_stats_get(): priority_xoff_tx[3]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[3]:  0
i40e_dev_stats_get(): priority_xon_tx[4]:      0
i40e_dev_stats_get(): priority_xoff_tx[4]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[4]:  0
i40e_dev_stats_get(): priority_xon_tx[5]:      0
i40e_dev_stats_get(): priority_xoff_tx[5]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[5]:  0
i40e_dev_stats_get(): priority_xon_tx[6]:      0
i40e_dev_stats_get(): priority_xoff_tx[6]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[6]:  0
i40e_dev_stats_get(): priority_xon_tx[7]:      0
i40e_dev_stats_get(): priority_xoff_tx[7]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[7]:  0
i40e_dev_stats_get(): rx_size_64:               0
i40e_dev_stats_get(): rx_size_127:              0
i40e_dev_stats_get(): rx_size_255:              0
i40e_dev_stats_get(): rx_size_511:              0
i40e_dev_stats_get(): rx_size_1023:             0
i40e_dev_stats_get(): rx_size_1522:             0
i40e_dev_stats_get(): rx_size_big:              0
i40e_dev_stats_get(): rx_undersize:             0
i40e_dev_stats_get(): rx_fragments:             0
i40e_dev_stats_get(): rx_oversize:              0
i40e_dev_stats_get(): rx_jabber:                0
i40e_dev_stats_get(): tx_size_64:               0
i40e_dev_stats_get(): tx_size_127:              0
i40e_dev_stats_get(): tx_size_255:              0
i40e_dev_stats_get(): tx_size_511:              0
i40e_dev_stats_get(): tx_size_1023:             0
i40e_dev_stats_get(): tx_size_1522:             0
i40e_dev_stats_get(): tx_size_big:              0
i40e_dev_stats_get(): mac_short_packet_dropped: 0
i40e_dev_stats_get(): checksum_error:           0
i40e_dev_stats_get(): fdir_match:               0
i40e_dev_stats_get(): ***************** PF stats end ********************

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------
i40e_update_vsi_stats(): ***************** VSI[7] stats start *******************
i40e_update_vsi_stats(): rx_bytes:            0
i40e_update_vsi_stats(): rx_unicast:          0
i40e_update_vsi_stats(): rx_multicast:        0
i40e_update_vsi_stats(): rx_broadcast:        0
i40e_update_vsi_stats(): rx_discards:         0
i40e_update_vsi_stats(): rx_unknown_protocol: 0
i40e_update_vsi_stats(): tx_bytes:            0
i40e_update_vsi_stats(): tx_unicast:          0
i40e_update_vsi_stats(): tx_multicast:        0
i40e_update_vsi_stats(): tx_broadcast:        0
i40e_update_vsi_stats(): tx_discards:         0
i40e_update_vsi_stats(): tx_errors:           0
i40e_update_vsi_stats(): ***************** VSI[7] stats end *******************
i40e_dev_stats_get(): ***************** PF stats start *******************
i40e_dev_stats_get(): rx_bytes:            0
i40e_dev_stats_get(): rx_unicast:          0
i40e_dev_stats_get(): rx_multicast:        0
i40e_dev_stats_get(): rx_broadcast:        0
i40e_dev_stats_get(): rx_discards:         0
i40e_dev_stats_get(): rx_unknown_protocol: 0
i40e_dev_stats_get(): tx_bytes:            0
i40e_dev_stats_get(): tx_unicast:          0
i40e_dev_stats_get(): tx_multicast:        0
i40e_dev_stats_get(): tx_broadcast:        0
i40e_dev_stats_get(): tx_discards:         0
i40e_dev_stats_get(): tx_errors:           0
i40e_dev_stats_get(): tx_dropped_link_down:     0
i40e_dev_stats_get(): crc_errors:               0
i40e_dev_stats_get(): illegal_bytes:            0
i40e_dev_stats_get(): error_bytes:              0
i40e_dev_stats_get(): mac_local_faults:         0
i40e_dev_stats_get(): mac_remote_faults:        0
i40e_dev_stats_get(): rx_length_errors:         0
i40e_dev_stats_get(): link_xon_rx:              0
i40e_dev_stats_get(): link_xoff_rx:             0
i40e_dev_stats_get(): priority_xon_rx[0]:      0
i40e_dev_stats_get(): priority_xoff_rx[0]:     0
i40e_dev_stats_get(): priority_xon_rx[1]:      0
i40e_dev_stats_get(): priority_xoff_rx[1]:     0
i40e_dev_stats_get(): priority_xon_rx[2]:      0
i40e_dev_stats_get(): priority_xoff_rx[2]:     0
i40e_dev_stats_get(): priority_xon_rx[3]:      0
i40e_dev_stats_get(): priority_xoff_rx[3]:     0
i40e_dev_stats_get(): priority_xon_rx[4]:      0
i40e_dev_stats_get(): priority_xoff_rx[4]:     0
i40e_dev_stats_get(): priority_xon_rx[5]:      0
i40e_dev_stats_get(): priority_xoff_rx[5]:     0
i40e_dev_stats_get(): priority_xon_rx[6]:      0
i40e_dev_stats_get(): priority_xoff_rx[6]:     0
i40e_dev_stats_get(): priority_xon_rx[7]:      0
i40e_dev_stats_get(): priority_xoff_rx[7]:     0
i40e_dev_stats_get(): link_xon_tx:              0
i40e_dev_stats_get(): link_xoff_tx:             0
i40e_dev_stats_get(): priority_xon_tx[0]:      0
i40e_dev_stats_get(): priority_xoff_tx[0]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[0]:  0
i40e_dev_stats_get(): priority_xon_tx[1]:      0
i40e_dev_stats_get(): priority_xoff_tx[1]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[1]:  0
i40e_dev_stats_get(): priority_xon_tx[2]:      0
i40e_dev_stats_get(): priority_xoff_tx[2]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[2]:  0
i40e_dev_stats_get(): priority_xon_tx[3]:      0
i40e_dev_stats_get(): priority_xoff_tx[3]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[3]:  0
i40e_dev_stats_get(): priority_xon_tx[4]:      0
i40e_dev_stats_get(): priority_xoff_tx[4]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[4]:  0
i40e_dev_stats_get(): priority_xon_tx[5]:      0
i40e_dev_stats_get(): priority_xoff_tx[5]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[5]:  0
i40e_dev_stats_get(): priority_xon_tx[6]:      0
i40e_dev_stats_get(): priority_xoff_tx[6]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[6]:  0
i40e_dev_stats_get(): priority_xon_tx[7]:      0
i40e_dev_stats_get(): priority_xoff_tx[7]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[7]:  0
i40e_dev_stats_get(): rx_size_64:               0
i40e_dev_stats_get(): rx_size_127:              0
i40e_dev_stats_get(): rx_size_255:              0
i40e_dev_stats_get(): rx_size_511:              0
i40e_dev_stats_get(): rx_size_1023:             0
i40e_dev_stats_get(): rx_size_1522:             0
i40e_dev_stats_get(): rx_size_big:              0
i40e_dev_stats_get(): rx_undersize:             0
i40e_dev_stats_get(): rx_fragments:             0
i40e_dev_stats_get(): rx_oversize:              0
i40e_dev_stats_get(): rx_jabber:                0
i40e_dev_stats_get(): tx_size_64:               0
i40e_dev_stats_get(): tx_size_127:              0
i40e_dev_stats_get(): tx_size_255:              0
i40e_dev_stats_get(): tx_size_511:              0
i40e_dev_stats_get(): tx_size_1023:             0
i40e_dev_stats_get(): tx_size_1522:             0
i40e_dev_stats_get(): tx_size_big:              0
i40e_dev_stats_get(): mac_short_packet_dropped: 0
i40e_dev_stats_get(): checksum_error:           0
i40e_dev_stats_get(): fdir_match:               0
i40e_dev_stats_get(): ***************** PF stats end ********************

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

Stopping port 0...
Stopping ports...
i40e_dev_clear_queues():  >>
i40e_phy_conf_link():   Current: abilities 38, link_speed 10
i40e_phy_conf_link():   Config:  abilities 20, link_speed 10
Done

Stopping port 1...
Stopping ports...
i40e_dev_clear_queues():  >>
i40e_phy_conf_link():   Current: abilities 38, link_speed 10
i40e_phy_conf_link():   Config:  abilities 20, link_speed 10
Done

Shutting down port 0...
Closing ports...
i40e_dev_close():  >>
i40e_dev_free_queues():  >>
i40e_free_dma_mem_d(): memzone i40e_dma_66 to be freed with physical address: 283605180416
i40e_dev_interrupt_handler(): ICR0: adminq event

Port 1: link state change event
i40e_free_dma_mem_d(): memzone i40e_dma_1 to be freed with physical address: 283605827584
i40e_free_dma_mem_d(): memzone i40e_dma_2 to be freed with physical address: 283605819392
i40e_free_dma_mem_d(): memzone i40e_dma_3 to be freed with physical address: 283605811200
i40e_free_dma_mem_d(): memzone i40e_dma_4 to be freed with physical address: 283605803008
i40e_free_dma_mem_d(): memzone i40e_dma_5 to be freed with physical address: 283605794816
i40e_free_dma_mem_d(): memzone i40e_dma_6 to be freed with physical address: 283605786624
i40e_free_dma_mem_d(): memzone i40e_dma_7 to be freed with physical address: 283605778432
i40e_free_dma_mem_d(): memzone i40e_dma_8 to be freed with physical address: 283605770240
i40e_free_dma_mem_d(): memzone i40e_dma_9 to be freed with physical address: 283605762048
i40e_free_dma_mem_d(): memzone i40e_dma_10 to be freed with physical address: 283605753856
i40e_free_dma_mem_d(): memzone i40e_dma_11 to be freed with physical address: 283605745664
i40e_free_dma_mem_d(): memzone i40e_dma_12 to be freed with physical address: 283605737472
i40e_free_dma_mem_d(): memzone i40e_dma_13 to be freed with physical address: 283605729280
i40e_free_dma_mem_d(): memzone i40e_dma_14 to be freed with physical address: 283605721088
i40e_free_dma_mem_d(): memzone i40e_dma_15 to be freed with physical address: 283605712896
i40e_free_dma_mem_d(): memzone i40e_dma_16 to be freed with physical address: 283605704704
i40e_free_dma_mem_d(): memzone i40e_dma_17 to be freed with physical address: 283605696512
i40e_free_dma_mem_d(): memzone i40e_dma_18 to be freed with physical address: 283605688320
i40e_free_dma_mem_d(): memzone i40e_dma_19 to be freed with physical address: 283605680128
i40e_free_dma_mem_d(): memzone i40e_dma_20 to be freed with physical address: 283605671936
i40e_free_dma_mem_d(): memzone i40e_dma_21 to be freed with physical address: 283605663744
i40e_free_dma_mem_d(): memzone i40e_dma_22 to be freed with physical address: 283605655552
i40e_free_dma_mem_d(): memzone i40e_dma_23 to be freed with physical address: 283605647360
i40e_free_dma_mem_d(): memzone i40e_dma_24 to be freed with physical address: 283605639168
i40e_free_dma_mem_d(): memzone i40e_dma_25 to be freed with physical address: 283605630976
i40e_free_dma_mem_d(): memzone i40e_dma_26 to be freed with physical address: 283605622784
i40e_free_dma_mem_d(): memzone i40e_dma_27 to be freed with physical address: 283605614592
i40e_free_dma_mem_d(): memzone i40e_dma_28 to be freed with physical address: 283605606400
i40e_free_dma_mem_d(): memzone i40e_dma_29 to be freed with physical address: 283605598208
i40e_free_dma_mem_d(): memzone i40e_dma_30 to be freed with physical address: 283605590016
i40e_free_dma_mem_d(): memzone i40e_dma_31 to be freed with physical address: 283605581824
i40e_free_dma_mem_d(): memzone i40e_dma_32 to be freed with physical address: 283605573632
i40e_free_dma_mem_d(): memzone i40e_dma_0 to be freed with physical address: 283605835776
i40e_free_dma_mem_d(): memzone i40e_dma_34 to be freed with physical address: 283605561344
i40e_free_dma_mem_d(): memzone i40e_dma_35 to be freed with physical address: 283605553152
i40e_free_dma_mem_d(): memzone i40e_dma_36 to be freed with physical address: 283605544960
i40e_free_dma_mem_d(): memzone i40e_dma_37 to be freed with physical address: 283605536768
i40e_free_dma_mem_d(): memzone i40e_dma_38 to be freed with physical address: 283605528576
i40e_free_dma_mem_d(): memzone i40e_dma_39 to be freed with physical address: 283605520384
i40e_free_dma_mem_d(): memzone i40e_dma_40 to be freed with physical address: 283605512192
i40e_free_dma_mem_d(): memzone i40e_dma_41 to be freed with physical address: 283605504000
i40e_free_dma_mem_d(): memzone i40e_dma_42 to be freed with physical address: 283605495808
i40e_free_dma_mem_d(): memzone i40e_dma_43 to be freed with physical address: 283605487616
i40e_free_dma_mem_d(): memzone i40e_dma_44 to be freed with physical address: 283605479424
i40e_free_dma_mem_d(): memzone i40e_dma_45 to be freed with physical address: 283605471232
i40e_free_dma_mem_d(): memzone i40e_dma_46 to be freed with physical address: 283605463040
i40e_free_dma_mem_d(): memzone i40e_dma_47 to be freed with physical address: 283605454848
i40e_free_dma_mem_d(): memzone i40e_dma_48 to be freed with physical address: 283605446656
i40e_free_dma_mem_d(): memzone i40e_dma_49 to be freed with physical address: 283605438464
i40e_free_dma_mem_d(): memzone i40e_dma_50 to be freed with physical address: 283605430272
i40e_free_dma_mem_d(): memzone i40e_dma_51 to be freed with physical address: 283605422080
i40e_free_dma_mem_d(): memzone i40e_dma_52 to be freed with physical address: 283605413888
i40e_free_dma_mem_d(): memzone i40e_dma_53 to be freed with physical address: 283605405696
i40e_free_dma_mem_d(): memzone i40e_dma_54 to be freed with physical address: 283605397504
i40e_free_dma_mem_d(): memzone i40e_dma_55 to be freed with physical address: 283605389312
i40e_free_dma_mem_d(): memzone i40e_dma_56 to be freed with physical address: 283605381120
i40e_free_dma_mem_d(): memzone i40e_dma_57 to be freed with physical address: 283605372928
i40e_free_dma_mem_d(): memzone i40e_dma_58 to be freed with physical address: 283605364736
i40e_free_dma_mem_d(): memzone i40e_dma_59 to be freed with physical address: 283605356544
i40e_free_dma_mem_d(): memzone i40e_dma_60 to be freed with physical address: 283605348352
i40e_free_dma_mem_d(): memzone i40e_dma_61 to be freed with physical address: 283605340160
i40e_free_dma_mem_d(): memzone i40e_dma_62 to be freed with physical address: 283605331968
i40e_free_dma_mem_d(): memzone i40e_dma_63 to be freed with physical address: 283605323776
i40e_free_dma_mem_d(): memzone i40e_dma_64 to be freed with physical address: 283605315584
i40e_free_dma_mem_d(): memzone i40e_dma_65 to be freed with physical address: 283605307392
i40e_free_dma_mem_d(): memzone i40e_dma_33 to be freed with physical address: 283605569536
i40e_pf_host_uninit():  >>
Port 0 is closed
Done

Shutting down port 1...
Closing ports...
i40e_dev_close():  >>
i40e_dev_free_queues():  >>
i40e_free_dma_mem_d(): memzone i40e_dma_133 to be freed with physical address: 283604762624
i40e_free_dma_mem_d(): memzone i40e_dma_68 to be freed with physical address: 283608408064
i40e_free_dma_mem_d(): memzone i40e_dma_69 to be freed with physical address: 283608399872
i40e_free_dma_mem_d(): memzone i40e_dma_70 to be freed with physical address: 283608391680
i40e_free_dma_mem_d(): memzone i40e_dma_71 to be freed with physical address: 283608383488
i40e_free_dma_mem_d(): memzone i40e_dma_72 to be freed with physical address: 283608375296
i40e_free_dma_mem_d(): memzone i40e_dma_73 to be freed with physical address: 283608367104
i40e_free_dma_mem_d(): memzone i40e_dma_74 to be freed with physical address: 283608358912
i40e_free_dma_mem_d(): memzone i40e_dma_75 to be freed with physical address: 283606835200
i40e_free_dma_mem_d(): memzone i40e_dma_76 to be freed with physical address: 283606827008
i40e_free_dma_mem_d(): memzone i40e_dma_77 to be freed with physical address: 283606818816
i40e_free_dma_mem_d(): memzone i40e_dma_78 to be freed with physical address: 283606810624
i40e_free_dma_mem_d(): memzone i40e_dma_79 to be freed with physical address: 283606802432
i40e_free_dma_mem_d(): memzone i40e_dma_80 to be freed with physical address: 283606794240
i40e_free_dma_mem_d(): memzone i40e_dma_81 to be freed with physical address: 283606786048
i40e_free_dma_mem_d(): memzone i40e_dma_82 to be freed with physical address: 283606777856
i40e_free_dma_mem_d(): memzone i40e_dma_83 to be freed with physical address: 283606769664
i40e_free_dma_mem_d(): memzone i40e_dma_84 to be freed with physical address: 283606761472
i40e_free_dma_mem_d(): memzone i40e_dma_85 to be freed with physical address: 283606753280
i40e_free_dma_mem_d(): memzone i40e_dma_86 to be freed with physical address: 283606745088
i40e_free_dma_mem_d(): memzone i40e_dma_87 to be freed with physical address: 283606736896
i40e_free_dma_mem_d(): memzone i40e_dma_88 to be freed with physical address: 283606728704
i40e_free_dma_mem_d(): memzone i40e_dma_89 to be freed with physical address: 283606720512
i40e_free_dma_mem_d(): memzone i40e_dma_90 to be freed with physical address: 283606712320
i40e_free_dma_mem_d(): memzone i40e_dma_91 to be freed with physical address: 283606704128
i40e_free_dma_mem_d(): memzone i40e_dma_92 to be freed with physical address: 283606695936
i40e_free_dma_mem_d(): memzone i40e_dma_93 to be freed with physical address: 283606687744
i40e_free_dma_mem_d(): memzone i40e_dma_94 to be freed with physical address: 283606679552
i40e_free_dma_mem_d(): memzone i40e_dma_95 to be freed with physical address: 283606671360
i40e_free_dma_mem_d(): memzone i40e_dma_96 to be freed with physical address: 283606663168
i40e_free_dma_mem_d(): memzone i40e_dma_97 to be freed with physical address: 283606654976
i40e_free_dma_mem_d(): memzone i40e_dma_98 to be freed with physical address: 283606646784
i40e_free_dma_mem_d(): memzone i40e_dma_99 to be freed with physical address: 283606638592
i40e_free_dma_mem_d(): memzone i40e_dma_67 to be freed with physical address: 283608416256
i40e_free_dma_mem_d(): memzone i40e_dma_101 to be freed with physical address: 283606630400
i40e_free_dma_mem_d(): memzone i40e_dma_102 to be freed with physical address: 283606622208
i40e_free_dma_mem_d(): memzone i40e_dma_103 to be freed with physical address: 283606614016
i40e_free_dma_mem_d(): memzone i40e_dma_104 to be freed with physical address: 283606605824
i40e_free_dma_mem_d(): memzone i40e_dma_105 to be freed with physical address: 283606597632
i40e_free_dma_mem_d(): memzone i40e_dma_106 to be freed with physical address: 283606589440
i40e_free_dma_mem_d(): memzone i40e_dma_107 to be freed with physical address: 283606581248
i40e_free_dma_mem_d(): memzone i40e_dma_108 to be freed with physical address: 283606573056
i40e_free_dma_mem_d(): memzone i40e_dma_109 to be freed with physical address: 283606564864
i40e_free_dma_mem_d(): memzone i40e_dma_110 to be freed with physical address: 283606556672
i40e_free_dma_mem_d(): memzone i40e_dma_111 to be freed with physical address: 283606548480
i40e_free_dma_mem_d(): memzone i40e_dma_112 to be freed with physical address: 283606540288
i40e_free_dma_mem_d(): memzone i40e_dma_113 to be freed with physical address: 283606532096
i40e_free_dma_mem_d(): memzone i40e_dma_114 to be freed with physical address: 283606523904
i40e_free_dma_mem_d(): memzone i40e_dma_115 to be freed with physical address: 283606515712
i40e_free_dma_mem_d(): memzone i40e_dma_116 to be freed with physical address: 283606507520
i40e_free_dma_mem_d(): memzone i40e_dma_117 to be freed with physical address: 283606499328
i40e_free_dma_mem_d(): memzone i40e_dma_118 to be freed with physical address: 283606491136
i40e_free_dma_mem_d(): memzone i40e_dma_119 to be freed with physical address: 283606482944
i40e_free_dma_mem_d(): memzone i40e_dma_120 to be freed with physical address: 283606474752
i40e_free_dma_mem_d(): memzone i40e_dma_121 to be freed with physical address: 283606466560
i40e_free_dma_mem_d(): memzone i40e_dma_122 to be freed with physical address: 283606458368
i40e_free_dma_mem_d(): memzone i40e_dma_123 to be freed with physical address: 283606450176
i40e_free_dma_mem_d(): memzone i40e_dma_124 to be freed with physical address: 283606441984
i40e_free_dma_mem_d(): memzone i40e_dma_125 to be freed with physical address: 283606433792
i40e_free_dma_mem_d(): memzone i40e_dma_126 to be freed with physical address: 283606425600
i40e_free_dma_mem_d(): memzone i40e_dma_127 to be freed with physical address: 283606417408
i40e_free_dma_mem_d(): memzone i40e_dma_128 to be freed with physical address: 283606409216
i40e_free_dma_mem_d(): memzone i40e_dma_129 to be freed with physical address: 283606401024
i40e_free_dma_mem_d(): memzone i40e_dma_130 to be freed with physical address: 283606392832
i40e_free_dma_mem_d(): memzone i40e_dma_131 to be freed with physical address: 283606384640
i40e_free_dma_mem_d(): memzone i40e_dma_132 to be freed with physical address: 283606376448
i40e_free_dma_mem_d(): memzone i40e_dma_100 to be freed with physical address: 283608354816
i40e_pf_host_uninit():  >>
Port 1 is closed
Done

Bye...
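
A quick sanity check worth doing on the close-path logging above: every i40e_dma_N memzone that was allocated at init should show up again as freed at close. The following is a minimal, hypothetical log-grepping sketch (not a DPDK API, just string matching on the message formats shown above) that diffs the two sets of memzone names from a saved log:

```python
import re

def memzone_balance(log_text):
    """Diff allocated vs. freed i40e DMA memzone names in a driver debug log."""
    allocated = set(re.findall(r"memzone (i40e_dma_\d+) allocated", log_text))
    freed = set(re.findall(r"memzone (i40e_dma_\d+) to be freed", log_text))
    # First set: allocated but never freed; second: freed but never allocated.
    return allocated - freed, freed - allocated

# Tiny excerpt in the same shape as the log above:
sample = (
    "i40e_allocate_dma_mem_d(): memzone i40e_dma_0 allocated with physical address: 283605835776\n"
    "i40e_allocate_dma_mem_d(): memzone i40e_dma_1 allocated with physical address: 283605827584\n"
    "i40e_free_dma_mem_d(): memzone i40e_dma_0 to be freed with physical address: 283605835776\n"
)
never_freed, never_allocated = memzone_balance(sample)
print(never_freed, never_allocated)  # -> {'i40e_dma_1'} set()
```

Running it over the full attached log shows the shutdown path in this run did free everything it allocated, which suggests the fault is in link handling rather than in queue/DMA teardown.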

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: Testpmd/l3fwd port shutdown failure on Arm Altra systems
  2023-01-20 12:07 ` Testpmd/l3fwd port shutdown failure on Arm Altra systems Juraj Linkeš
@ 2023-02-06  8:52   ` Juraj Linkeš
  2023-02-07  2:09     ` Xing, Beilei
  0 siblings, 1 reply; 9+ messages in thread
From: Juraj Linkeš @ 2023-02-06  8:52 UTC (permalink / raw)
  To: aman.deep.singh, yuying.zhang, Xing, Beilei
  Cc: dev, Ruifeng Wang, Lijian Zhang, Honnappa Nagarahalli

Hello i40e and testpmd maintainers,

A gentle reminder - would you please advise how to debug the issue
described below?

Thanks,
Juraj

On Fri, Jan 20, 2023 at 1:07 PM Juraj Linkeš <juraj.linkes@pantheon.tech> wrote:
>
> Adding the logfile.
>
>
>
> One thing that's in the logs but that I didn't explicitly mention is the DPDK version we've tried this with:
>
> EAL: RTE Version: 'DPDK 22.07.0'
>
>
>
> We also tried earlier versions going back to 21.08, with no luck. I also did a quick check on 22.11, also with no luck.
>
>
>
> Juraj
>
>
>
> From: Juraj Linkeš
> Sent: Friday, January 20, 2023 12:56 PM
> To: 'aman.deep.singh@intel.com' <aman.deep.singh@intel.com>; 'yuying.zhang@intel.com' <yuying.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> Cc: dev@dpdk.org; Ruifeng Wang <Ruifeng.Wang@arm.com>; 'Lijian Zhang' <Lijian.Zhang@arm.com>; 'Honnappa Nagarahalli' <Honnappa.Nagarahalli@arm.com>
> Subject: Testpmd/l3fwd port shutdown failure on Arm Altra systems
>
>
>
> Hello i40e and testpmd maintainers,
>
>
>
> We're hitting an issue with DPDK testpmd on Ampere Altra servers in FD.io lab.
>
>
>
> A bit of background: along with VPP performance tests (which uses DPDK), we're running a small number of basic DPDK testpmd and l3fwd tests in FD.io as well. This is to catch any performance differences due to VPP updating its DPDK version.
>
>
>
> We're running both l3fwd tests and testpmd tests. The Altra servers are two-socket and the topology is TG -> DUT1 -> DUT2 -> TG, traffic flows in both directions, but nothing gets forwarded (with a slight caveat - put a pin in this). There's nothing special in the tests, just forwarding traffic. The NIC we're testing is xl710-QDA2.
>
>
>
> The same tests are passing on all other testbeds - we have various two node (1 DUT, 1 TG) and three node (2 DUT, 1 TG) Intel and Arm testbeds and with various NICs (Intel 700 and 800 series and the Intel testbeds use some Mellanox NICs as well). We don't have quite the same combination of another three node topology with the same NIC though, so it looks like something with testpmd/l3fwd and xl710-QDA2 on Altra servers.
>
>
>
> VPP performance tests are passing, but l3fwd and testpmd fail. This leads us to believe it's a software issue, but there could be something wrong with the hardware. I'll talk about testpmd from now on, but as far as we can tell, the behavior is the same for testpmd and l3fwd.
>
>
>
> Getting back to the caveat mentioned earlier, there seems to be something wrong with port shutdown. When running testpmd on a testbed that hasn't been used for a while it seems that all ports are up right away (we don't see any "Port 0|1: link state change event") and the setup works fine (forwarding works). After restarting testpmd (restarting on one server is sufficient), the ports between DUT1 and DUT2 (but not between DUTs and TG) go down and are not usable in DPDK, VPP or in Linux (with i40e kernel driver) for a while (measured in minutes, sometimes dozens of minutes; the duration is seemingly random). The ports eventually recover and can be used again, but there's nothing in syslog suggesting what happened.
>
>
>
> What seems to be happening is testpmd puts the ports into some faulty state. This only happens on the DUT1 -> DUT2 link though (the ports between the two testpmds), not on the TG -> DUT1 link (the TG port is left alone).
>
>
>
> Some more info:
>
> We've come across the issue with this configuration:
>
> OS: Ubuntu20.04 with kernel 5.4.0-65-generic.
>
> Old NIC firmware, never upgraded: 6.01 0x800035da 1.1747.0.
>
> Drivers versions: i40e 2.17.15 and iavf 4.3.19.
>
>
>
> As well as with this configuration:
>
> OS: Ubuntu22.04 with kernel 5.15.0-46-generic.
>
> Updated firmware: 8.30 0x8000a4ae 1.2926.0.
>
> Drivers: i40e 2.19.3 and iavf 4.5.3.
>
>
>
> Unsafe noiommu mode is disabled:
>
> cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
>
> N
>
>
>
> We used DPDK 22.07 in manual testing and built it on DUTs, using generic build:
>
> meson -Dexamples=l3fwd -Dc_args=-DRTE_LIBRTE_I40E_16BYTE_RX_DESC=y -Dplatform=generic build
>
>
>
> We're running testpmd with this command:
>
> sudo build/app/dpdk-testpmd -v -l 1,2 -a 0004:04:00.1 -a 0004:04:00.0 --in-memory -- -i --forward-mode=io --burst=64 --txq=1 --rxq=1 --tx-offloads=0x0 --numa --auto-start --total-num-mbufs=32768 --nb-ports=2 --portmask=0x3 --max-pkt-len=1518 --mbuf-size=16384 --nb-cores=1
>
>
>
> And l3fwd (with different macs on the other server):
>
> sudo /tmp/openvpp-testing/dpdk/build/examples/dpdk-l3fwd -v -l 1,2 -a 0004:04:00.0 -a 0004:04:00.1 --in-memory -- --parse-ptype --eth-dest="0,40:a6:b7:85:e7:79" --eth-dest="1,3c:fd:fe:c3:e7:a1" --config="(0, 0, 2),(1, 0, 2)" -P -L -p 0x3
>
>
>
> We tried adding logs with  --log-level=pmd,debug and --no-lsc-interrupt, but that didn't reveal anything helpful, as far as we can tell - please have a look at the attached log. The faulty port is port0 (starts out as down, then we waited for around 25 minutes for it to go up and then we shut down testpmd).
>
>
>
> We'd like to ask for pointers on what could be the cause or how to debug this issue further.
>
>
>
> Thanks,
> Juraj
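
For anyone triaging a similar report from a saved testpmd debug log: the "Port N: link state change event" lines are the main breadcrumb for the flapping described above. A small, hypothetical helper (not part of DPDK or testpmd, just pattern matching on the log format) that tallies those events per port:

```python
import re

def link_events(log_text):
    """Tally 'Port N: link state change event' lines per port in a testpmd log."""
    counts = {}
    for match in re.finditer(r"Port (\d+): link state change event", log_text):
        port = int(match.group(1))
        counts[port] = counts.get(port, 0) + 1
    return counts

# Tiny excerpt in the same shape as the attached testpmd.log:
sample = (
    "i40e_dev_interrupt_handler(): ICR0: adminq event\n"
    "\n"
    "Port 1: link state change event\n"
    "Port 0: link state change event\n"
    "Port 1: link state change event\n"
)
print(link_events(sample))  # -> {1: 2, 0: 1}
```

A port that never shows such an event after startup (as port 0 does in the failing runs) never came up at all, which is a different signature from a port that bounces repeatedly.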

^ permalink raw reply	[flat|nested] 9+ messages in thread

* RE: Testpmd/l3fwd port shutdown failure on Arm Altra systems
  2023-02-06  8:52   ` Juraj Linkeš
@ 2023-02-07  2:09     ` Xing, Beilei
  2023-02-21 11:18       ` Juraj Linkeš
  0 siblings, 1 reply; 9+ messages in thread
From: Xing, Beilei @ 2023-02-07  2:09 UTC (permalink / raw)
  To: Juraj Linkeš, Singh, Aman Deep, Zhang, Yuying, Yang, Qiming
  Cc: dev, Ruifeng Wang, Zhang, Lijian, Honnappa Nagarahalli

Hi Qiming,

Could you please help on this? Thanks.

BR,
Beilei

> -----Original Message-----
> From: Juraj Linkeš <juraj.linkes@pantheon.tech>
> Sent: Monday, February 6, 2023 4:53 PM
> To: Singh, Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
> <yuying.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> Cc: dev@dpdk.org; Ruifeng Wang <Ruifeng.Wang@arm.com>; Zhang, Lijian
> <Lijian.Zhang@arm.com>; Honnappa Nagarahalli
> <Honnappa.Nagarahalli@arm.com>
> Subject: Re: Testpmd/l3fwd port shutdown failure on Arm Altra systems
> 
> Hello i40e and testpmd maintainers,
> 
> A gentle reminder - would you please advise how to debug the issue described
> below?
> 
> Thanks,
> Juraj
> 
> On Fri, Jan 20, 2023 at 1:07 PM Juraj Linkeš <juraj.linkes@pantheon.tech>
> wrote:
> >
> > Adding the logfile.
> >
> >
> >
> > One thing that's in the logs but that I didn't explicitly mention is the DPDK
> version we've tried this with:
> >
> > EAL: RTE Version: 'DPDK 22.07.0'
> >
> >
> >
> > We also tried earlier versions going back to 21.08, with no luck. I also did a
> quick check on 22.11, also with no luck.
> >
> >
> >
> > Juraj
> >
> >
> >
> > From: Juraj Linkeš
> > Sent: Friday, January 20, 2023 12:56 PM
> > To: 'aman.deep.singh@intel.com' <aman.deep.singh@intel.com>;
> > 'yuying.zhang@intel.com' <yuying.zhang@intel.com>; Xing, Beilei
> > <beilei.xing@intel.com>
> > Cc: dev@dpdk.org; Ruifeng Wang <Ruifeng.Wang@arm.com>; 'Lijian Zhang'
> > <Lijian.Zhang@arm.com>; 'Honnappa Nagarahalli'
> > <Honnappa.Nagarahalli@arm.com>
> > Subject: Testpmd/l3fwd port shutdown failure on Arm Altra systems
> >
> >
> >
> > Hello i40e and testpmd maintainers,
> >
> >
> >
> > We're hitting an issue with DPDK testpmd on Ampere Altra servers in FD.io
> lab.
> >
> >
> >
> > A bit of background: along with VPP performance tests (which uses DPDK),
> we're running a small number of basic DPDK testpmd and l3fwd tests in FD.io
> as well. This is to catch any performance differences due to VPP updating its
> DPDK version.
> >
> >
> >
> > We're running both l3fwd tests and testpmd tests. The Altra servers are
> two-socket and the topology is TG -> DUT1 -> DUT2 -> TG, traffic flows in both
> directions, but nothing gets forwarded (with a slight caveat - put a pin in this).
> There's nothing special in the tests, just forwarding traffic. The NIC we're
> testing is xl710-QDA2.
> >
> >
> >
> > The same tests are passing on all other testbeds - we have various two node
> (1 DUT, 1 TG) and three node (2 DUT, 1 TG) Intel and Arm testbeds and with
> various NICs (Intel 700 and 800 series and the Intel testbeds use some
> Mellanox NICs as well). We don't have quite the same combination of another
> three node topology with the same NIC though, so it looks like something with
> testpmd/l3fwd and xl710-QDA2 on Altra servers.
> >
> >
> >
> > VPP performance tests are passing, but l3fwd and testpmd fail. This leads us
> to believe it's a software issue, but there could be something wrong with the
> hardware. I'll talk about testpmd from now on, but as far as we can tell, the
> behavior is the same for testpmd and l3fwd.
> >
> >
> >
> > Getting back to the caveat mentioned earlier, there seems to be something
> wrong with port shutdown. When running testpmd on a testbed that hasn't
> been used for a while it seems that all ports are up right away (we don't see
> any "Port 0|1: link state change event") and the setup works fine (forwarding
> works). After restarting testpmd (restarting on one server is sufficient), the
> ports between DUT1 and DUT2 (but not between DUTs and TG) go down and
> are not usable in DPDK, VPP or in Linux (with i40e kernel driver) for a while
> (measured in minutes, sometimes dozens of minutes; the duration is seemingly
> random). The ports eventually recover and can be used again, but there's
> nothing in syslog suggesting what happened.
> >
> >
> >
> > What seems to be happening is testpmd puts the ports into some faulty state.
> This only happens on the DUT1 -> DUT2 link though (the ports between the
> two testpmds), not on TG -> DUT1 link (the TG port is left alone).
> >
> >
> >
> > Some more info:
> >
> > We've come across the issue with this configuration:
> >
> > OS: Ubuntu20.04 with kernel 5.4.0-65-generic.
> >
> > Old NIC firmware, never upgraded: 6.01 0x800035da 1.1747.0.
> >
> > Drivers versions: i40e 2.17.15 and iavf 4.3.19.
> >
> >
> >
> > As well as with this configuration:
> >
> > OS: Ubuntu22.04 with kernel 5.15.0-46-generic.
> >
> > Updated firmware: 8.30 0x8000a4ae 1.2926.0.
> >
> > Drivers: i40e 2.19.3 and iavf 4.5.3.
> >
> >
> >
> > Unsafe noiommu mode is disabled:
> >
> > cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
> >
> > N
> >
> >
> >
> > We used DPDK 22.07 in manual testing and built it on DUTs, using generic
> build:
> >
> > meson -Dexamples=l3fwd -Dc_args=-DRTE_LIBRTE_I40E_16BYTE_RX_DESC=y
> > -Dplatform=generic build
> >
> >
> >
> > We're running testpmd with this command:
> >
> > sudo build/app/dpdk-testpmd -v -l 1,2 -a 0004:04:00.1 -a 0004:04:00.0
> > --in-memory -- -i --forward-mode=io --burst=64 --txq=1 --rxq=1
> > --tx-offloads=0x0 --numa --auto-start --total-num-mbufs=32768
> > --nb-ports=2 --portmask=0x3 --max-pkt-len=1518 --mbuf-size=16384
> > --nb-cores=1
> >
> >
> >
> > And l3fwd (with different macs on the other server):
> >
> > sudo /tmp/openvpp-testing/dpdk/build/examples/dpdk-l3fwd -v -l 1,2 -a
> > 0004:04:00.0 -a 0004:04:00.1 --in-memory -- --parse-ptype
> > --eth-dest="0,40:a6:b7:85:e7:79" --eth-dest="1,3c:fd:fe:c3:e7:a1"
> > --config="(0, 0, 2),(1, 0, 2)" -P -L -p 0x3
> >
> >
> >
> > We tried adding logs with  --log-level=pmd,debug and --no-lsc-interrupt, but
> that didn't reveal anything helpful, as far as we can tell - please have a look at
> the attached log. The faulty port is port0 (starts out as down, then we waited
> for around 25 minutes for it to go up and then we shut down testpmd).
> >
> >
> >
> > We'd like to ask for pointers on what could be the cause or how to debug
> this issue further.
> >
> >
> >
> > Thanks,
> > Juraj

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: Testpmd/l3fwd port shutdown failure on Arm Altra systems
  2023-02-07  2:09     ` Xing, Beilei
@ 2023-02-21 11:18       ` Juraj Linkeš
  2023-03-08  6:25         ` Juraj Linkeš
  0 siblings, 1 reply; 9+ messages in thread
From: Juraj Linkeš @ 2023-02-21 11:18 UTC (permalink / raw)
  To: Xing, Beilei
  Cc: Singh, Aman Deep, Zhang, Yuying, Yang, Qiming, dev, Ruifeng Wang,
	Zhang, Lijian, Honnappa Nagarahalli

[-- Attachment #1: Type: text/plain, Size: 6804 bytes --]

Hi Qiming,

Just a friendly reminder, would you please take a look?

Thanks,
Juraj


On Tue, Feb 7, 2023 at 3:10 AM Xing, Beilei <beilei.xing@intel.com> wrote:
>
> Hi Qiming,
>
> Could you please help on this? Thanks.
>
> BR,
> Beilei
>
> > -----Original Message-----
> > From: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > Sent: Monday, February 6, 2023 4:53 PM
> > To: Singh, Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
> > <yuying.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> > Cc: dev@dpdk.org; Ruifeng Wang <Ruifeng.Wang@arm.com>; Zhang, Lijian
> > <Lijian.Zhang@arm.com>; Honnappa Nagarahalli
> > <Honnappa.Nagarahalli@arm.com>
> > Subject: Re: Testpmd/l3fwd port shutdown failure on Arm Altra systems
> >
> > Hello i40e and testpmd maintainers,
> >
> > A gentle reminder - would you please advise how to debug the issue described
> > below?
> >
> > Thanks,
> > Juraj
> >
> > On Fri, Jan 20, 2023 at 1:07 PM Juraj Linkeš <juraj.linkes@pantheon.tech>
> > wrote:
> > >
> > > Adding the logfile.
> > >
> > >
> > >
> > > One thing that's in the logs but that I didn't explicitly mention is the DPDK
> > version we've tried this with:
> > >
> > > EAL: RTE Version: 'DPDK 22.07.0'
> > >
> > >
> > >
> > > We also tried earlier versions going back to 21.08, with no luck. I also did a
> > quick check on 22.11, also with no luck.
> > >
> > >
> > >
> > > Juraj
> > >
> > >
> > >
> > > From: Juraj Linkeš
> > > Sent: Friday, January 20, 2023 12:56 PM
> > > To: 'aman.deep.singh@intel.com' <aman.deep.singh@intel.com>;
> > > 'yuying.zhang@intel.com' <yuying.zhang@intel.com>; Xing, Beilei
> > > <beilei.xing@intel.com>
> > > Cc: dev@dpdk.org; Ruifeng Wang <Ruifeng.Wang@arm.com>; 'Lijian Zhang'
> > > <Lijian.Zhang@arm.com>; 'Honnappa Nagarahalli'
> > > <Honnappa.Nagarahalli@arm.com>
> > > Subject: Testpmd/l3fwd port shutdown failure on Arm Altra systems
> > >
> > >
> > >
> > > Hello i40e and testpmd maintainers,
> > >
> > >
> > >
> > > We're hitting an issue with DPDK testpmd on Ampere Altra servers in FD.io
> > lab.
> > >
> > >
> > >
> > > A bit of background: along with VPP performance tests (which uses DPDK),
> > we're running a small number of basic DPDK testpmd and l3fwd tests in FD.io
> > as well. This is to catch any performance differences due to VPP updating its
> > DPDK version.
> > >
> > >
> > >
> > > We're running both l3fwd tests and testpmd tests. The Altra servers are
> > two-socket and the topology is TG -> DUT1 -> DUT2 -> TG, traffic flows in both
> > directions, but nothing gets forwarded (with a slight caveat - put a pin in this).
> > There's nothing special in the tests, just forwarding traffic. The NIC we're
> > testing is xl710-QDA2.
> > >
> > >
> > >
> > > The same tests are passing on all other testbeds - we have various two node
> > (1 DUT, 1 TG) and three node (2 DUT, 1 TG) Intel and Arm testbeds and with
> > various NICs (Intel 700 and 800 series and the Intel testbeds use some
> > Mellanox NICs as well). We don't have quite the same combination of another
> > three node topology with the same NIC though, so it looks like something with
> > testpmd/l3fwd and xl710-QDA2 on Altra servers.
> > >
> > >
> > >
> > > VPP performance tests are passing, but l3fwd and testpmd fail. This leads us
> > to believe it's a software issue, but there could be something wrong with the
> > hardware. I'll talk about testpmd from now on, but as far as we can tell, the
> > behavior is the same for testpmd and l3fwd.
> > >
> > >
> > >
> > > Getting back to the caveat mentioned earlier, there seems to be something
> > wrong with port shutdown. When running testpmd on a testbed that hasn't
> > been used for a while it seems that all ports are up right away (we don't see
> > any "Port 0|1: link state change event") and the setup works fine (forwarding
> > works). After restarting testpmd (restarting on one server is sufficient), the
> > ports between DUT1 and DUT2 (but not between DUTs and TG) go down and
> > are not usable in DPDK, VPP or in Linux (with i40e kernel driver) for a while
> > (measured in minutes, sometimes dozens of minutes; the duration is seemingly
> > random). The ports eventually recover and can be used again, but there's
> > nothing in syslog suggesting what happened.
> > >
> > >
> > >
> > > What seems to be happening is testpmd puts the ports into some faulty state.
> > This only happens on the DUT1 -> DUT2 link though (the ports between the
> > two testpmds), not on TG -> DUT1 link (the TG port is left alone).
> > >
> > >
> > >
> > > Some more info:
> > >
> > > We've come across the issue with this configuration:
> > >
> > > OS: Ubuntu20.04 with kernel 5.4.0-65-generic.
> > >
> > > Old NIC firmware, never upgraded: 6.01 0x800035da 1.1747.0.
> > >
> > > Drivers versions: i40e 2.17.15 and iavf 4.3.19.
> > >
> > >
> > >
> > > As well as with this configuration:
> > >
> > > OS: Ubuntu22.04 with kernel 5.15.0-46-generic.
> > >
> > > Updated firmware: 8.30 0x8000a4ae 1.2926.0.
> > >
> > > Drivers: i40e 2.19.3 and iavf 4.5.3.
> > >
> > >
> > >
> > > Unsafe noiommu mode is disabled:
> > >
> > > cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
> > >
> > > N
> > >
> > >
> > >
> > > We used DPDK 22.07 in manual testing and built it on DUTs, using generic
> > build:
> > >
> > > meson -Dexamples=l3fwd -Dc_args=-DRTE_LIBRTE_I40E_16BYTE_RX_DESC=y
> > > -Dplatform=generic build
> > >
> > >
> > >
> > > We're running testpmd with this command:
> > >
> > > sudo build/app/dpdk-testpmd -v -l 1,2 -a 0004:04:00.1 -a 0004:04:00.0
> > > --in-memory -- -i --forward-mode=io --burst=64 --txq=1 --rxq=1
> > > --tx-offloads=0x0 --numa --auto-start --total-num-mbufs=32768
> > > --nb-ports=2 --portmask=0x3 --max-pkt-len=1518 --mbuf-size=16384
> > > --nb-cores=1
> > >
> > >
> > >
> > > And l3fwd (with different macs on the other server):
> > >
> > > sudo /tmp/openvpp-testing/dpdk/build/examples/dpdk-l3fwd -v -l 1,2 -a
> > > 0004:04:00.0 -a 0004:04:00.1 --in-memory -- --parse-ptype
> > > --eth-dest="0,40:a6:b7:85:e7:79" --eth-dest="1,3c:fd:fe:c3:e7:a1"
> > > --config="(0, 0, 2),(1, 0, 2)" -P -L -p 0x3
> > >
> > >
> > >
> > > We tried adding logs with  --log-level=pmd,debug and --no-lsc-interrupt, but
> > that didn't reveal anything helpful, as far as we can tell - please have a look at
> > the attached log. The faulty port is port0 (starts out as down, then we waited
> > for around 25 minutes for it to go up and then we shut down testpmd).
> > >
> > >
> > >
> > > We'd like to ask for pointers on what could be the cause or how to debug
> > this issue further.
> > >
> > >
> > >
> > > Thanks,
> > > Juraj

[-- Attachment #2: testpmd.log --]
[-- Type: application/octet-stream, Size: 71498 bytes --]

sudo /tmp/openvpp-testing/dpdk/build/app/dpdk-testpmd -v -l 1,2 -a 0004:04:00.1 -a 0004:04:00.0 --in-memory --log-level=pmd,debug -- -i --forward-mode=io --burst=64 --txq=1 --rxq=1 --tx-offloads=0x0 --numa --auto-start --total-num-mbufs=32768 --nb-ports=2 --portmask=0x3 --max-pkt-len=1518 --mbuf-size=16384 --nb-cores=1 --no-lsc-interrupt
EAL: Detected CPU lcores: 160
EAL: Detected NUMA nodes: 2
EAL: RTE Version: 'DPDK 22.07.0'
EAL: Detected static linkage of DPDK
EAL: Selected IOVA mode 'VA'
EAL: No free 32768 kB hugepages reported on node 0
EAL: No free 32768 kB hugepages reported on node 1
EAL: No free 64 kB hugepages reported on node 0
EAL: No free 64 kB hugepages reported on node 1
EAL: 32 hugepages of size 1073741824 reserved, but no mounted hugetlbfs found for that size
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_i40e (8086:1583) device: 0004:04:00.0 (socket 0)
eth_i40e_dev_init():  >>
i40e_pf_reset(): Core and Global modules ready 0
i40e_init_shared_code(): i40e_init_shared_code
i40e_set_mac_type(): i40e_set_mac_type

i40e_set_mac_type(): i40e_set_mac_type found mac: 1, returns: 0
i40e_init_nvm(): i40e_init_nvm
i40e_allocate_dma_mem_d(): memzone i40e_dma_0 allocated with physical address: 283605835776
i40e_allocate_dma_mem_d(): memzone i40e_dma_1 allocated with physical address: 283605827584
i40e_allocate_dma_mem_d(): memzone i40e_dma_2 allocated with physical address: 283605819392
i40e_allocate_dma_mem_d(): memzone i40e_dma_3 allocated with physical address: 283605811200
i40e_allocate_dma_mem_d(): memzone i40e_dma_4 allocated with physical address: 283605803008
i40e_allocate_dma_mem_d(): memzone i40e_dma_5 allocated with physical address: 283605794816
i40e_allocate_dma_mem_d(): memzone i40e_dma_6 allocated with physical address: 283605786624
i40e_allocate_dma_mem_d(): memzone i40e_dma_7 allocated with physical address: 283605778432
i40e_allocate_dma_mem_d(): memzone i40e_dma_8 allocated with physical address: 283605770240
i40e_allocate_dma_mem_d(): memzone i40e_dma_9 allocated with physical address: 283605762048
i40e_allocate_dma_mem_d(): memzone i40e_dma_10 allocated with physical address: 283605753856
i40e_allocate_dma_mem_d(): memzone i40e_dma_11 allocated with physical address: 283605745664
i40e_allocate_dma_mem_d(): memzone i40e_dma_12 allocated with physical address: 283605737472
i40e_allocate_dma_mem_d(): memzone i40e_dma_13 allocated with physical address: 283605729280
i40e_allocate_dma_mem_d(): memzone i40e_dma_14 allocated with physical address: 283605721088
i40e_allocate_dma_mem_d(): memzone i40e_dma_15 allocated with physical address: 283605712896
i40e_allocate_dma_mem_d(): memzone i40e_dma_16 allocated with physical address: 283605704704
i40e_allocate_dma_mem_d(): memzone i40e_dma_17 allocated with physical address: 283605696512
i40e_allocate_dma_mem_d(): memzone i40e_dma_18 allocated with physical address: 283605688320
i40e_allocate_dma_mem_d(): memzone i40e_dma_19 allocated with physical address: 283605680128
i40e_allocate_dma_mem_d(): memzone i40e_dma_20 allocated with physical address: 283605671936
i40e_allocate_dma_mem_d(): memzone i40e_dma_21 allocated with physical address: 283605663744
i40e_allocate_dma_mem_d(): memzone i40e_dma_22 allocated with physical address: 283605655552
i40e_allocate_dma_mem_d(): memzone i40e_dma_23 allocated with physical address: 283605647360
i40e_allocate_dma_mem_d(): memzone i40e_dma_24 allocated with physical address: 283605639168
i40e_allocate_dma_mem_d(): memzone i40e_dma_25 allocated with physical address: 283605630976
i40e_allocate_dma_mem_d(): memzone i40e_dma_26 allocated with physical address: 283605622784
i40e_allocate_dma_mem_d(): memzone i40e_dma_27 allocated with physical address: 283605614592
i40e_allocate_dma_mem_d(): memzone i40e_dma_28 allocated with physical address: 283605606400
i40e_allocate_dma_mem_d(): memzone i40e_dma_29 allocated with physical address: 283605598208
i40e_allocate_dma_mem_d(): memzone i40e_dma_30 allocated with physical address: 283605590016
i40e_allocate_dma_mem_d(): memzone i40e_dma_31 allocated with physical address: 283605581824
i40e_allocate_dma_mem_d(): memzone i40e_dma_32 allocated with physical address: 283605573632
i40e_allocate_dma_mem_d(): memzone i40e_dma_33 allocated with physical address: 283605569536
i40e_allocate_dma_mem_d(): memzone i40e_dma_34 allocated with physical address: 283605561344
i40e_allocate_dma_mem_d(): memzone i40e_dma_35 allocated with physical address: 283605553152
i40e_allocate_dma_mem_d(): memzone i40e_dma_36 allocated with physical address: 283605544960
i40e_allocate_dma_mem_d(): memzone i40e_dma_37 allocated with physical address: 283605536768
i40e_allocate_dma_mem_d(): memzone i40e_dma_38 allocated with physical address: 283605528576
i40e_allocate_dma_mem_d(): memzone i40e_dma_39 allocated with physical address: 283605520384
i40e_allocate_dma_mem_d(): memzone i40e_dma_40 allocated with physical address: 283605512192
i40e_allocate_dma_mem_d(): memzone i40e_dma_41 allocated with physical address: 283605504000
i40e_allocate_dma_mem_d(): memzone i40e_dma_42 allocated with physical address: 283605495808
i40e_allocate_dma_mem_d(): memzone i40e_dma_43 allocated with physical address: 283605487616
i40e_allocate_dma_mem_d(): memzone i40e_dma_44 allocated with physical address: 283605479424
i40e_allocate_dma_mem_d(): memzone i40e_dma_45 allocated with physical address: 283605471232
i40e_allocate_dma_mem_d(): memzone i40e_dma_46 allocated with physical address: 283605463040
i40e_allocate_dma_mem_d(): memzone i40e_dma_47 allocated with physical address: 283605454848
i40e_allocate_dma_mem_d(): memzone i40e_dma_48 allocated with physical address: 283605446656
i40e_allocate_dma_mem_d(): memzone i40e_dma_49 allocated with physical address: 283605438464
i40e_allocate_dma_mem_d(): memzone i40e_dma_50 allocated with physical address: 283605430272
i40e_allocate_dma_mem_d(): memzone i40e_dma_51 allocated with physical address: 283605422080
i40e_allocate_dma_mem_d(): memzone i40e_dma_52 allocated with physical address: 283605413888
i40e_allocate_dma_mem_d(): memzone i40e_dma_53 allocated with physical address: 283605405696
i40e_allocate_dma_mem_d(): memzone i40e_dma_54 allocated with physical address: 283605397504
i40e_allocate_dma_mem_d(): memzone i40e_dma_55 allocated with physical address: 283605389312
i40e_allocate_dma_mem_d(): memzone i40e_dma_56 allocated with physical address: 283605381120
i40e_allocate_dma_mem_d(): memzone i40e_dma_57 allocated with physical address: 283605372928
i40e_allocate_dma_mem_d(): memzone i40e_dma_58 allocated with physical address: 283605364736
i40e_allocate_dma_mem_d(): memzone i40e_dma_59 allocated with physical address: 283605356544
i40e_allocate_dma_mem_d(): memzone i40e_dma_60 allocated with physical address: 283605348352
i40e_allocate_dma_mem_d(): memzone i40e_dma_61 allocated with physical address: 283605340160
i40e_allocate_dma_mem_d(): memzone i40e_dma_62 allocated with physical address: 283605331968
i40e_allocate_dma_mem_d(): memzone i40e_dma_63 allocated with physical address: 283605323776
i40e_allocate_dma_mem_d(): memzone i40e_dma_64 allocated with physical address: 283605315584
i40e_allocate_dma_mem_d(): memzone i40e_dma_65 allocated with physical address: 283605307392
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_aq_release_resource(): i40e_aq_release_resource
eth_i40e_dev_init(): FW 8.3 API 1.13 NVM 08.03.00 eetrack 8000a4ae
i40e_enable_extended_tag(): Extended Tag has already been enabled
i40e_check_write_reg(): [0x002507c0] original: 0x00000000
i40e_check_write_reg(): [0x002507c0] after: 0x00000000
i40e_check_write_reg(): [0x002507e0] original: 0x0001801e
i40e_check_write_reg(): [0x002507e0] after: 0x0001801e
i40e_check_write_reg(): [0x00250840] original: 0x00000000
i40e_check_write_reg(): [0x00250840] after: 0x00000000
i40e_check_write_reg(): [0x00250860] original: 0x0001801e
i40e_check_write_reg(): [0x00250860] after: 0x0001801e
i40e_check_write_reg(): [0x00250880] original: 0x80000000
i40e_check_write_reg(): [0x00250880] after: 0x80000000
i40e_check_write_reg(): [0x002508a0] original: 0x0001801f
i40e_check_write_reg(): [0x002508a0] after: 0x0001801f
i40e_check_write_reg(): [0x002508c0] original: 0x00000000
i40e_check_write_reg(): [0x002508c0] after: 0x00000000
i40e_check_write_reg(): [0x002508e0] original: 0x00018018
i40e_check_write_reg(): [0x002508e0] after: 0x00018018
i40e_check_write_reg(): [0x00250900] original: 0x00000000
i40e_check_write_reg(): [0x00250900] after: 0x00000000
i40e_check_write_reg(): [0x00250920] original: 0x00018018
i40e_check_write_reg(): [0x00250920] after: 0x00018018
i40e_check_write_reg(): [0x00250a40] original: 0x00000000
i40e_check_write_reg(): [0x00250a40] after: 0x00000000
i40e_check_write_reg(): [0x00250a60] original: 0x0007fffe
i40e_check_write_reg(): [0x00250a60] after: 0x0007fffe
i40e_check_write_reg(): [0x00250ac0] original: 0x00000000
i40e_check_write_reg(): [0x00250ac0] after: 0x00000000
i40e_check_write_reg(): [0x00250ae0] original: 0x0007fffe
i40e_check_write_reg(): [0x00250ae0] after: 0x0007fffe
i40e_check_write_reg(): [0x00250b00] original: 0x80000000
i40e_check_write_reg(): [0x00250b00] after: 0x80000000
i40e_check_write_reg(): [0x00250b20] original: 0x0007ffff
i40e_check_write_reg(): [0x00250b20] after: 0x0007ffff
i40e_check_write_reg(): [0x00250b40] original: 0x00000000
i40e_check_write_reg(): [0x00250b40] after: 0x00000000
i40e_check_write_reg(): [0x00250b60] original: 0x0007fff8
i40e_check_write_reg(): [0x00250b60] after: 0x0007fff8
i40e_check_write_reg(): [0x00250b80] original: 0x00000000
i40e_check_write_reg(): [0x00250b80] after: 0x00000000
i40e_check_write_reg(): [0x00250ba0] original: 0x0007fff8
i40e_check_write_reg(): [0x00250ba0] after: 0x0007fff8
i40e_check_write_reg(): [0x00250fc0] original: 0x00004000
i40e_check_write_reg(): [0x00250fc0] after: 0x00004000
i40e_check_write_reg(): [0x00250fe0] original: 0x00000000
i40e_check_write_reg(): [0x00250fe0] after: 0x00000000
eth_i40e_dev_init(): Global register 0x0026c7a0 is changed with 0x28
i40e_configure_registers(): Read from 0x26ce00: 0x203f0200
i40e_configure_registers(): Read from 0x26ce08: 0x11f0200
i40e_get_swr_pm_cfg(): Device 0x1583 with GL_SWR_PM_UP_THR value - 0x06060606
i40e_configure_registers(): Read from 0x269fbc: 0x6060606
i40e_pf_parameter_init(): 64 VMDQ VSIs, 4 queues per VMDQ VSI, in total 256 queues
i40e_allocate_dma_mem_d(): memzone i40e_dma_66 allocated with physical address: 283605180416
i40e_validate_mac_addr(): i40e_validate_mac_addr
i40e_update_default_filter_setting(): Cannot remove the default macvlan filter
i40e_vsi_get_bw_config(): VSI bw limit:0
i40e_vsi_get_bw_config(): VSI max_bw:0
i40e_vsi_get_bw_config():       VSI TC0:share credits 1
i40e_vsi_get_bw_config():       VSI TC0:credits 0
i40e_vsi_get_bw_config():       VSI TC0: max credits: 0
i40e_vsi_get_bw_config():       VSI TC1:share credits 0
i40e_vsi_get_bw_config():       VSI TC1:credits 0
i40e_vsi_get_bw_config():       VSI TC1: max credits: 0
i40e_vsi_get_bw_config():       VSI TC2:share credits 0
i40e_vsi_get_bw_config():       VSI TC2:credits 0
i40e_vsi_get_bw_config():       VSI TC2: max credits: 0
i40e_vsi_get_bw_config():       VSI TC3:share credits 0
i40e_vsi_get_bw_config():       VSI TC3:credits 0
i40e_vsi_get_bw_config():       VSI TC3: max credits: 0
i40e_vsi_get_bw_config():       VSI TC4:share credits 0
i40e_vsi_get_bw_config():       VSI TC4:credits 0
i40e_vsi_get_bw_config():       VSI TC4: max credits: 0
i40e_vsi_get_bw_config():       VSI TC5:share credits 0
i40e_vsi_get_bw_config():       VSI TC5:credits 0
i40e_vsi_get_bw_config():       VSI TC5: max credits: 0
i40e_vsi_get_bw_config():       VSI TC6:share credits 0
i40e_vsi_get_bw_config():       VSI TC6:credits 0
i40e_vsi_get_bw_config():       VSI TC6: max credits: 0
i40e_vsi_get_bw_config():       VSI TC7:share credits 0
i40e_vsi_get_bw_config():       VSI TC7:credits 0
i40e_vsi_get_bw_config():       VSI TC7: max credits: 0
i40e_pf_setup(): Hardware capability of hash lookup table size: 512
i40e_update_flow_control(): Link auto negotiation not completed
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_read_nvm_buffer_srctl(): i40e_read_nvm_buffer_srctl
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_pf_host_init():  >>
i40e_init_filter_invalidation(): FDIR INVALPRIO set to guaranteed first
i40e_init_fdir_filter_list(): FDIR guarantee space: 512, best_effort space 7168.
i40e_update_vsi_stats(): ***************** VSI[6] stats start *******************
i40e_update_vsi_stats(): rx_bytes:            0
i40e_update_vsi_stats(): rx_unicast:          0
i40e_update_vsi_stats(): rx_multicast:        0
i40e_update_vsi_stats(): rx_broadcast:        0
i40e_update_vsi_stats(): rx_discards:         0
i40e_update_vsi_stats(): rx_unknown_protocol: 0
i40e_update_vsi_stats(): tx_bytes:            0
i40e_update_vsi_stats(): tx_unicast:          0
i40e_update_vsi_stats(): tx_multicast:        0
i40e_update_vsi_stats(): tx_broadcast:        0
i40e_update_vsi_stats(): tx_discards:         0
i40e_update_vsi_stats(): tx_errors:           0
i40e_update_vsi_stats(): ***************** VSI[6] stats end *******************
EAL: Probe PCI driver: net_i40e (8086:1583) device: 0004:04:00.1 (socket 0)
eth_i40e_dev_init():  >>
i40e_pf_reset(): Core and Global modules ready 0
i40e_init_shared_code(): i40e_init_shared_code
i40e_set_mac_type(): i40e_set_mac_type

i40e_set_mac_type(): i40e_set_mac_type found mac: 1, returns: 0
i40e_init_nvm(): i40e_init_nvm
i40e_allocate_dma_mem_d(): memzone i40e_dma_67 allocated with physical address: 283608416256
i40e_allocate_dma_mem_d(): memzone i40e_dma_68 allocated with physical address: 283608408064
i40e_allocate_dma_mem_d(): memzone i40e_dma_69 allocated with physical address: 283608399872
i40e_allocate_dma_mem_d(): memzone i40e_dma_70 allocated with physical address: 283608391680
i40e_allocate_dma_mem_d(): memzone i40e_dma_71 allocated with physical address: 283608383488
i40e_allocate_dma_mem_d(): memzone i40e_dma_72 allocated with physical address: 283608375296
i40e_allocate_dma_mem_d(): memzone i40e_dma_73 allocated with physical address: 283608367104
i40e_allocate_dma_mem_d(): memzone i40e_dma_74 allocated with physical address: 283608358912
i40e_allocate_dma_mem_d(): memzone i40e_dma_75 allocated with physical address: 283606835200
i40e_allocate_dma_mem_d(): memzone i40e_dma_76 allocated with physical address: 283606827008
i40e_allocate_dma_mem_d(): memzone i40e_dma_77 allocated with physical address: 283606818816
i40e_allocate_dma_mem_d(): memzone i40e_dma_78 allocated with physical address: 283606810624
i40e_allocate_dma_mem_d(): memzone i40e_dma_79 allocated with physical address: 283606802432
i40e_allocate_dma_mem_d(): memzone i40e_dma_80 allocated with physical address: 283606794240
i40e_allocate_dma_mem_d(): memzone i40e_dma_81 allocated with physical address: 283606786048
i40e_allocate_dma_mem_d(): memzone i40e_dma_82 allocated with physical address: 283606777856
i40e_allocate_dma_mem_d(): memzone i40e_dma_83 allocated with physical address: 283606769664
i40e_allocate_dma_mem_d(): memzone i40e_dma_84 allocated with physical address: 283606761472
i40e_allocate_dma_mem_d(): memzone i40e_dma_85 allocated with physical address: 283606753280
i40e_allocate_dma_mem_d(): memzone i40e_dma_86 allocated with physical address: 283606745088
i40e_allocate_dma_mem_d(): memzone i40e_dma_87 allocated with physical address: 283606736896
i40e_allocate_dma_mem_d(): memzone i40e_dma_88 allocated with physical address: 283606728704
i40e_allocate_dma_mem_d(): memzone i40e_dma_89 allocated with physical address: 283606720512
i40e_allocate_dma_mem_d(): memzone i40e_dma_90 allocated with physical address: 283606712320
i40e_allocate_dma_mem_d(): memzone i40e_dma_91 allocated with physical address: 283606704128
i40e_allocate_dma_mem_d(): memzone i40e_dma_92 allocated with physical address: 283606695936
i40e_allocate_dma_mem_d(): memzone i40e_dma_93 allocated with physical address: 283606687744
i40e_allocate_dma_mem_d(): memzone i40e_dma_94 allocated with physical address: 283606679552
i40e_allocate_dma_mem_d(): memzone i40e_dma_95 allocated with physical address: 283606671360
i40e_allocate_dma_mem_d(): memzone i40e_dma_96 allocated with physical address: 283606663168
i40e_allocate_dma_mem_d(): memzone i40e_dma_97 allocated with physical address: 283606654976
i40e_allocate_dma_mem_d(): memzone i40e_dma_98 allocated with physical address: 283606646784
i40e_allocate_dma_mem_d(): memzone i40e_dma_99 allocated with physical address: 283606638592
i40e_allocate_dma_mem_d(): memzone i40e_dma_100 allocated with physical address: 283608354816
i40e_allocate_dma_mem_d(): memzone i40e_dma_101 allocated with physical address: 283606630400
i40e_allocate_dma_mem_d(): memzone i40e_dma_102 allocated with physical address: 283606622208
i40e_allocate_dma_mem_d(): memzone i40e_dma_103 allocated with physical address: 283606614016
i40e_allocate_dma_mem_d(): memzone i40e_dma_104 allocated with physical address: 283606605824
i40e_allocate_dma_mem_d(): memzone i40e_dma_105 allocated with physical address: 283606597632
i40e_allocate_dma_mem_d(): memzone i40e_dma_106 allocated with physical address: 283606589440
i40e_allocate_dma_mem_d(): memzone i40e_dma_107 allocated with physical address: 283606581248
i40e_allocate_dma_mem_d(): memzone i40e_dma_108 allocated with physical address: 283606573056
i40e_allocate_dma_mem_d(): memzone i40e_dma_109 allocated with physical address: 283606564864
i40e_allocate_dma_mem_d(): memzone i40e_dma_110 allocated with physical address: 283606556672
i40e_allocate_dma_mem_d(): memzone i40e_dma_111 allocated with physical address: 283606548480
i40e_allocate_dma_mem_d(): memzone i40e_dma_112 allocated with physical address: 283606540288
i40e_allocate_dma_mem_d(): memzone i40e_dma_113 allocated with physical address: 283606532096
i40e_allocate_dma_mem_d(): memzone i40e_dma_114 allocated with physical address: 283606523904
i40e_allocate_dma_mem_d(): memzone i40e_dma_115 allocated with physical address: 283606515712
i40e_allocate_dma_mem_d(): memzone i40e_dma_116 allocated with physical address: 283606507520
i40e_allocate_dma_mem_d(): memzone i40e_dma_117 allocated with physical address: 283606499328
i40e_allocate_dma_mem_d(): memzone i40e_dma_118 allocated with physical address: 283606491136
i40e_allocate_dma_mem_d(): memzone i40e_dma_119 allocated with physical address: 283606482944
i40e_allocate_dma_mem_d(): memzone i40e_dma_120 allocated with physical address: 283606474752
i40e_allocate_dma_mem_d(): memzone i40e_dma_121 allocated with physical address: 283606466560
i40e_allocate_dma_mem_d(): memzone i40e_dma_122 allocated with physical address: 283606458368
i40e_allocate_dma_mem_d(): memzone i40e_dma_123 allocated with physical address: 283606450176
i40e_allocate_dma_mem_d(): memzone i40e_dma_124 allocated with physical address: 283606441984
i40e_allocate_dma_mem_d(): memzone i40e_dma_125 allocated with physical address: 283606433792
i40e_allocate_dma_mem_d(): memzone i40e_dma_126 allocated with physical address: 283606425600
i40e_allocate_dma_mem_d(): memzone i40e_dma_127 allocated with physical address: 283606417408
i40e_allocate_dma_mem_d(): memzone i40e_dma_128 allocated with physical address: 283606409216
i40e_allocate_dma_mem_d(): memzone i40e_dma_129 allocated with physical address: 283606401024
i40e_allocate_dma_mem_d(): memzone i40e_dma_130 allocated with physical address: 283606392832
i40e_allocate_dma_mem_d(): memzone i40e_dma_131 allocated with physical address: 283606384640
i40e_allocate_dma_mem_d(): memzone i40e_dma_132 allocated with physical address: 283606376448
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_aq_release_resource(): i40e_aq_release_resource
eth_i40e_dev_init(): FW 8.3 API 1.13 NVM 08.03.00 eetrack 8000a4ae
i40e_enable_extended_tag(): Extended Tag has already been enabled
i40e_check_write_reg(): [0x002507c0] original: 0x00000000
i40e_check_write_reg(): [0x002507c0] after: 0x00000000
i40e_check_write_reg(): [0x002507e0] original: 0x0001801e
i40e_check_write_reg(): [0x002507e0] after: 0x0001801e
i40e_check_write_reg(): [0x00250840] original: 0x00000000
i40e_check_write_reg(): [0x00250840] after: 0x00000000
i40e_check_write_reg(): [0x00250860] original: 0x0001801e
i40e_check_write_reg(): [0x00250860] after: 0x0001801e
i40e_check_write_reg(): [0x00250880] original: 0x80000000
i40e_check_write_reg(): [0x00250880] after: 0x80000000
i40e_check_write_reg(): [0x002508a0] original: 0x0001801f
i40e_check_write_reg(): [0x002508a0] after: 0x0001801f
i40e_check_write_reg(): [0x002508c0] original: 0x00000000
i40e_check_write_reg(): [0x002508c0] after: 0x00000000
i40e_check_write_reg(): [0x002508e0] original: 0x00018018
i40e_check_write_reg(): [0x002508e0] after: 0x00018018
i40e_check_write_reg(): [0x00250900] original: 0x00000000
i40e_check_write_reg(): [0x00250900] after: 0x00000000
i40e_check_write_reg(): [0x00250920] original: 0x00018018
i40e_check_write_reg(): [0x00250920] after: 0x00018018
i40e_check_write_reg(): [0x00250a40] original: 0x00000000
i40e_check_write_reg(): [0x00250a40] after: 0x00000000
i40e_check_write_reg(): [0x00250a60] original: 0x0007fffe
i40e_check_write_reg(): [0x00250a60] after: 0x0007fffe
i40e_check_write_reg(): [0x00250ac0] original: 0x00000000
i40e_check_write_reg(): [0x00250ac0] after: 0x00000000
i40e_check_write_reg(): [0x00250ae0] original: 0x0007fffe
i40e_check_write_reg(): [0x00250ae0] after: 0x0007fffe
i40e_check_write_reg(): [0x00250b00] original: 0x80000000
i40e_check_write_reg(): [0x00250b00] after: 0x80000000
i40e_check_write_reg(): [0x00250b20] original: 0x0007ffff
i40e_check_write_reg(): [0x00250b20] after: 0x0007ffff
i40e_check_write_reg(): [0x00250b40] original: 0x00000000
i40e_check_write_reg(): [0x00250b40] after: 0x00000000
i40e_check_write_reg(): [0x00250b60] original: 0x0007fff8
i40e_check_write_reg(): [0x00250b60] after: 0x0007fff8
i40e_check_write_reg(): [0x00250b80] original: 0x00000000
i40e_check_write_reg(): [0x00250b80] after: 0x00000000
i40e_check_write_reg(): [0x00250ba0] original: 0x0007fff8
i40e_check_write_reg(): [0x00250ba0] after: 0x0007fff8
i40e_check_write_reg(): [0x00250fc0] original: 0x00004000
i40e_check_write_reg(): [0x00250fc0] after: 0x00004000
i40e_check_write_reg(): [0x00250fe0] original: 0x00000000
i40e_check_write_reg(): [0x00250fe0] after: 0x00000000
eth_i40e_dev_init(): Global register 0x0026c7a0 is changed with 0x28
i40e_configure_registers(): Read from 0x26ce00: 0x203f0200
i40e_configure_registers(): Read from 0x26ce08: 0x11f0200
i40e_get_swr_pm_cfg(): Device 0x1583 with GL_SWR_PM_UP_THR value - 0x06060606
i40e_configure_registers(): Read from 0x269fbc: 0x6060606
i40e_pf_parameter_init(): 64 VMDQ VSIs, 4 queues per VMDQ VSI, in total 256 queues
i40e_allocate_dma_mem_d(): memzone i40e_dma_133 allocated with physical address: 283604762624
i40e_validate_mac_addr(): i40e_validate_mac_addr
i40e_update_default_filter_setting(): Cannot remove the default macvlan filter
i40e_vsi_get_bw_config(): VSI bw limit:0
i40e_vsi_get_bw_config(): VSI max_bw:0
i40e_vsi_get_bw_config():       VSI TC0:share credits 1
i40e_vsi_get_bw_config():       VSI TC0:credits 0
i40e_vsi_get_bw_config():       VSI TC0: max credits: 0
i40e_vsi_get_bw_config():       VSI TC1:share credits 0
i40e_vsi_get_bw_config():       VSI TC1:credits 0
i40e_vsi_get_bw_config():       VSI TC1: max credits: 0
i40e_vsi_get_bw_config():       VSI TC2:share credits 0
i40e_vsi_get_bw_config():       VSI TC2:credits 0
i40e_vsi_get_bw_config():       VSI TC2: max credits: 0
i40e_vsi_get_bw_config():       VSI TC3:share credits 0
i40e_vsi_get_bw_config():       VSI TC3:credits 0
i40e_vsi_get_bw_config():       VSI TC3: max credits: 0
i40e_vsi_get_bw_config():       VSI TC4:share credits 0
i40e_vsi_get_bw_config():       VSI TC4:credits 0
i40e_vsi_get_bw_config():       VSI TC4: max credits: 0
i40e_vsi_get_bw_config():       VSI TC5:share credits 0
i40e_vsi_get_bw_config():       VSI TC5:credits 0
i40e_vsi_get_bw_config():       VSI TC5: max credits: 0
i40e_vsi_get_bw_config():       VSI TC6:share credits 0
i40e_vsi_get_bw_config():       VSI TC6:credits 0
i40e_vsi_get_bw_config():       VSI TC6: max credits: 0
i40e_vsi_get_bw_config():       VSI TC7:share credits 0
i40e_vsi_get_bw_config():       VSI TC7:credits 0
i40e_vsi_get_bw_config():       VSI TC7: max credits: 0
i40e_pf_setup(): Hardware capability of hash lookup table size: 512
i40e_update_flow_control(): Link auto negotiation not completed
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_acquire_nvm(): i40e_acquire_nvm
i40e_aq_request_resource(): i40e_aq_request_resource
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_release_nvm(): i40e_release_nvm
i40e_aq_release_resource(): i40e_aq_release_resource
i40e_read_nvm_buffer_srctl(): i40e_read_nvm_buffer_srctl
i40e_read_nvm_word_srctl(): i40e_read_nvm_word_srctl
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_poll_sr_srctl_done_bit(): i40e_poll_sr_srctl_done_bit
i40e_pf_host_init():  >>
i40e_init_filter_invalidation(): FDIR INVALPRIO set to guaranteed first
i40e_init_fdir_filter_list(): FDIR guarantee space: 512, best_effort space 7168.
i40e_update_vsi_stats(): ***************** VSI[7] stats start *******************
i40e_update_vsi_stats(): rx_bytes:            0
i40e_update_vsi_stats(): rx_unicast:          0
i40e_update_vsi_stats(): rx_multicast:        0
i40e_update_vsi_stats(): rx_broadcast:        0
i40e_update_vsi_stats(): rx_discards:         0
i40e_update_vsi_stats(): rx_unknown_protocol: 0
i40e_update_vsi_stats(): tx_bytes:            0
i40e_update_vsi_stats(): tx_unicast:          0
i40e_update_vsi_stats(): tx_multicast:        0
i40e_update_vsi_stats(): tx_broadcast:        0
i40e_update_vsi_stats(): tx_discards:         0
i40e_update_vsi_stats(): tx_errors:           0
i40e_update_vsi_stats(): ***************** VSI[7] stats end *******************
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
Set io packet forwarding mode
Auto-start selected
testpmd: create a new mbuf pool <mb_pool_0>: n=32768, size=16384, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
i40e_set_tx_function_flag(): Vector Tx can be enabled on Tx queue 0.
i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
i40e_set_tx_function(): Using Vector Tx (port 0).
i40e_set_rx_function(): Using Vector Rx (port 0).
i40e_dev_rx_queue_start():  >>
i40e_dev_tx_queue_start():  >>
i40e_phy_conf_link():   Current: abilities 20, link_speed 10
i40e_phy_conf_link():   Config:  abilities 38, link_speed 7e
i40e_ethertype_filter_restore(): Ethertype filter: mac_etype_used = 58016, etype_used = 65535, mac_etype_free = 65535, etype_free = 0
i40e_fdir_filter_restore(): FDIR: Guarant count: 0,  Best count: 0
i40e_dev_alarm_handler(): ICR0: adminq event
i40e_dev_handle_aq_msg(): Request 2561 is not supported yet
i40e_dev_handle_aq_msg(): Request 2561 is not supported yet
i40e_dev_handle_aq_msg(): Request 2561 is not supported yet
i40e_set_mac_max_frame(): Set max frame size at port level not applicable on link down
Port 0: 40:A6:B7:85:E7:80
Configuring Port 1 (socket 0)
i40e_set_tx_function_flag(): Vector Tx can be enabled on Tx queue 0.
i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=1, queue=0.
i40e_set_tx_function(): Using Vector Tx (port 1).
i40e_set_rx_function(): Using Vector Rx (port 1).
i40e_dev_rx_queue_start():  >>
i40e_dev_tx_queue_start():  >>
i40e_phy_conf_link():   Current: abilities 20, link_speed 10
i40e_phy_conf_link():   Config:  abilities 38, link_speed 7e
i40e_ethertype_filter_restore(): Ethertype filter: mac_etype_used = 58016, etype_used = 65535, mac_etype_free = 65535, etype_free = 0
i40e_fdir_filter_restore(): FDIR: Guarant count: 0,  Best count: 0
i40e_dev_alarm_handler(): ICR0: adminq event
i40e_dev_handle_aq_msg(): Request 2561 is not supported yet
i40e_dev_handle_aq_msg(): Request 2561 is not supported yet
i40e_dev_handle_aq_msg(): Request 2561 is not supported yet
Port 1: 40:A6:B7:85:E7:81
Checking link statuses...
i40e_dev_alarm_handler(): ICR0: adminq event

Port 0 Link down
Port 1 Link up at 40 Gbps FDX Autoneg
Done
Start automatic packet forwarding
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 2 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=64
  nb forwarding cores=1 - nb forwarding ports=2
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=2048 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=1024 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=32
  port 1: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=2048 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=1024 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=32
i40e_update_vsi_stats(): ***************** VSI[6] stats start *******************
i40e_update_vsi_stats(): rx_bytes:            0
i40e_update_vsi_stats(): rx_unicast:          0
i40e_update_vsi_stats(): rx_multicast:        0
i40e_update_vsi_stats(): rx_broadcast:        0
i40e_update_vsi_stats(): rx_discards:         0
i40e_update_vsi_stats(): rx_unknown_protocol: 0
i40e_update_vsi_stats(): tx_bytes:            0
i40e_update_vsi_stats(): tx_unicast:          0
i40e_update_vsi_stats(): tx_multicast:        0
i40e_update_vsi_stats(): tx_broadcast:        0
i40e_update_vsi_stats(): tx_discards:         0
i40e_update_vsi_stats(): tx_errors:           0
i40e_update_vsi_stats(): ***************** VSI[6] stats end *******************
i40e_dev_stats_get(): ***************** PF stats start *******************
i40e_dev_stats_get(): rx_bytes:            0
i40e_dev_stats_get(): rx_unicast:          0
i40e_dev_stats_get(): rx_multicast:        0
i40e_dev_stats_get(): rx_broadcast:        0
i40e_dev_stats_get(): rx_discards:         0
i40e_dev_stats_get(): rx_unknown_protocol: 0
i40e_dev_stats_get(): tx_bytes:            0
i40e_dev_stats_get(): tx_unicast:          0
i40e_dev_stats_get(): tx_multicast:        0
i40e_dev_stats_get(): tx_broadcast:        0
i40e_dev_stats_get(): tx_discards:         0
i40e_dev_stats_get(): tx_errors:           0
i40e_dev_stats_get(): tx_dropped_link_down:     0
i40e_dev_stats_get(): crc_errors:               0
i40e_dev_stats_get(): illegal_bytes:            0
i40e_dev_stats_get(): error_bytes:              0
i40e_dev_stats_get(): mac_local_faults:         0
i40e_dev_stats_get(): mac_remote_faults:        0
i40e_dev_stats_get(): rx_length_errors:         0
i40e_dev_stats_get(): link_xon_rx:              0
i40e_dev_stats_get(): link_xoff_rx:             0
i40e_dev_stats_get(): priority_xon_rx[0]:      0
i40e_dev_stats_get(): priority_xoff_rx[0]:     0
i40e_dev_stats_get(): priority_xon_rx[1]:      0
i40e_dev_stats_get(): priority_xoff_rx[1]:     0
i40e_dev_stats_get(): priority_xon_rx[2]:      0
i40e_dev_stats_get(): priority_xoff_rx[2]:     0
i40e_dev_stats_get(): priority_xon_rx[3]:      0
i40e_dev_stats_get(): priority_xoff_rx[3]:     0
i40e_dev_stats_get(): priority_xon_rx[4]:      0
i40e_dev_stats_get(): priority_xoff_rx[4]:     0
i40e_dev_stats_get(): priority_xon_rx[5]:      0
i40e_dev_stats_get(): priority_xoff_rx[5]:     0
i40e_dev_stats_get(): priority_xon_rx[6]:      0
i40e_dev_stats_get(): priority_xoff_rx[6]:     0
i40e_dev_stats_get(): priority_xon_rx[7]:      0
i40e_dev_stats_get(): priority_xoff_rx[7]:     0
i40e_dev_stats_get(): link_xon_tx:              0
i40e_dev_stats_get(): link_xoff_tx:             0
i40e_dev_stats_get(): priority_xon_tx[0]:      0
i40e_dev_stats_get(): priority_xoff_tx[0]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[0]:  0
i40e_dev_stats_get(): priority_xon_tx[1]:      0
i40e_dev_stats_get(): priority_xoff_tx[1]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[1]:  0
i40e_dev_stats_get(): priority_xon_tx[2]:      0
i40e_dev_stats_get(): priority_xoff_tx[2]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[2]:  0
i40e_dev_stats_get(): priority_xon_tx[3]:      0
i40e_dev_stats_get(): priority_xoff_tx[3]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[3]:  0
i40e_dev_stats_get(): priority_xon_tx[4]:      0
i40e_dev_stats_get(): priority_xoff_tx[4]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[4]:  0
i40e_dev_stats_get(): priority_xon_tx[5]:      0
i40e_dev_stats_get(): priority_xoff_tx[5]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[5]:  0
i40e_dev_stats_get(): priority_xon_tx[6]:      0
i40e_dev_stats_get(): priority_xoff_tx[6]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[6]:  0
i40e_dev_stats_get(): priority_xon_tx[7]:      0
i40e_dev_stats_get(): priority_xoff_tx[7]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[7]:  0
i40e_dev_stats_get(): rx_size_64:               0
i40e_dev_stats_get(): rx_size_127:              0
i40e_dev_stats_get(): rx_size_255:              0
i40e_dev_stats_get(): rx_size_511:              0
i40e_dev_stats_get(): rx_size_1023:             0
i40e_dev_stats_get(): rx_size_1522:             0
i40e_dev_stats_get(): rx_size_big:              0
i40e_dev_stats_get(): rx_undersize:             0
i40e_dev_stats_get(): rx_fragments:             0
i40e_dev_stats_get(): rx_oversize:              0
i40e_dev_stats_get(): rx_jabber:                0
i40e_dev_stats_get(): tx_size_64:               0
i40e_dev_stats_get(): tx_size_127:              0
i40e_dev_stats_get(): tx_size_255:              0
i40e_dev_stats_get(): tx_size_511:              0
i40e_dev_stats_get(): tx_size_1023:             0
i40e_dev_stats_get(): tx_size_1522:             0
i40e_dev_stats_get(): tx_size_big:              0
i40e_dev_stats_get(): mac_short_packet_dropped: 0
i40e_dev_stats_get(): checksum_error:           0
i40e_dev_stats_get(): fdir_match:               0
i40e_dev_stats_get(): ***************** PF stats end ********************
i40e_update_vsi_stats(): ***************** VSI[7] stats start *******************
i40e_update_vsi_stats(): rx_bytes:            0
i40e_update_vsi_stats(): rx_unicast:          0
i40e_update_vsi_stats(): rx_multicast:        0
i40e_update_vsi_stats(): rx_broadcast:        0
i40e_update_vsi_stats(): rx_discards:         0
i40e_update_vsi_stats(): rx_unknown_protocol: 0
i40e_update_vsi_stats(): tx_bytes:            0
i40e_update_vsi_stats(): tx_unicast:          0
i40e_update_vsi_stats(): tx_multicast:        0
i40e_update_vsi_stats(): tx_broadcast:        0
i40e_update_vsi_stats(): tx_discards:         0
i40e_update_vsi_stats(): tx_errors:           0
i40e_update_vsi_stats(): ***************** VSI[7] stats end *******************
i40e_dev_stats_get(): ***************** PF stats start *******************
i40e_dev_stats_get(): rx_bytes:            0
i40e_dev_stats_get(): rx_unicast:          0
i40e_dev_stats_get(): rx_multicast:        0
i40e_dev_stats_get(): rx_broadcast:        0
i40e_dev_stats_get(): rx_discards:         0
i40e_dev_stats_get(): rx_unknown_protocol: 0
i40e_dev_stats_get(): tx_bytes:            0
i40e_dev_stats_get(): tx_unicast:          0
i40e_dev_stats_get(): tx_multicast:        0
i40e_dev_stats_get(): tx_broadcast:        0
i40e_dev_stats_get(): tx_discards:         0
i40e_dev_stats_get(): tx_errors:           0
i40e_dev_stats_get(): tx_dropped_link_down:     0
i40e_dev_stats_get(): crc_errors:               0
i40e_dev_stats_get(): illegal_bytes:            0
i40e_dev_stats_get(): error_bytes:              0
i40e_dev_stats_get(): mac_local_faults:         0
i40e_dev_stats_get(): mac_remote_faults:        0
i40e_dev_stats_get(): rx_length_errors:         0
i40e_dev_stats_get(): link_xon_rx:              0
i40e_dev_stats_get(): link_xoff_rx:             0
i40e_dev_stats_get(): priority_xon_rx[0]:      0
i40e_dev_stats_get(): priority_xoff_rx[0]:     0
i40e_dev_stats_get(): priority_xon_rx[1]:      0
i40e_dev_stats_get(): priority_xoff_rx[1]:     0
i40e_dev_stats_get(): priority_xon_rx[2]:      0
i40e_dev_stats_get(): priority_xoff_rx[2]:     0
i40e_dev_stats_get(): priority_xon_rx[3]:      0
i40e_dev_stats_get(): priority_xoff_rx[3]:     0
i40e_dev_stats_get(): priority_xon_rx[4]:      0
i40e_dev_stats_get(): priority_xoff_rx[4]:     0
i40e_dev_stats_get(): priority_xon_rx[5]:      0
i40e_dev_stats_get(): priority_xoff_rx[5]:     0
i40e_dev_stats_get(): priority_xon_rx[6]:      0
i40e_dev_stats_get(): priority_xoff_rx[6]:     0
i40e_dev_stats_get(): priority_xon_rx[7]:      0
i40e_dev_stats_get(): priority_xoff_rx[7]:     0
i40e_dev_stats_get(): link_xon_tx:              0
i40e_dev_stats_get(): link_xoff_tx:             0
i40e_dev_stats_get(): priority_xon_tx[0]:      0
i40e_dev_stats_get(): priority_xoff_tx[0]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[0]:  0
i40e_dev_stats_get(): priority_xon_tx[1]:      0
i40e_dev_stats_get(): priority_xoff_tx[1]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[1]:  0
i40e_dev_stats_get(): priority_xon_tx[2]:      0
i40e_dev_stats_get(): priority_xoff_tx[2]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[2]:  0
i40e_dev_stats_get(): priority_xon_tx[3]:      0
i40e_dev_stats_get(): priority_xoff_tx[3]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[3]:  0
i40e_dev_stats_get(): priority_xon_tx[4]:      0
i40e_dev_stats_get(): priority_xoff_tx[4]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[4]:  0
i40e_dev_stats_get(): priority_xon_tx[5]:      0
i40e_dev_stats_get(): priority_xoff_tx[5]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[5]:  0
i40e_dev_stats_get(): priority_xon_tx[6]:      0
i40e_dev_stats_get(): priority_xoff_tx[6]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[6]:  0
i40e_dev_stats_get(): priority_xon_tx[7]:      0
i40e_dev_stats_get(): priority_xoff_tx[7]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[7]:  0
i40e_dev_stats_get(): rx_size_64:               0
i40e_dev_stats_get(): rx_size_127:              0
i40e_dev_stats_get(): rx_size_255:              0
i40e_dev_stats_get(): rx_size_511:              0
i40e_dev_stats_get(): rx_size_1023:             0
i40e_dev_stats_get(): rx_size_1522:             0
i40e_dev_stats_get(): rx_size_big:              0
i40e_dev_stats_get(): rx_undersize:             0
i40e_dev_stats_get(): rx_fragments:             0
i40e_dev_stats_get(): rx_oversize:              0
i40e_dev_stats_get(): rx_jabber:                0
i40e_dev_stats_get(): tx_size_64:               0
i40e_dev_stats_get(): tx_size_127:              0
i40e_dev_stats_get(): tx_size_255:              0
i40e_dev_stats_get(): tx_size_511:              0
i40e_dev_stats_get(): tx_size_1023:             0
i40e_dev_stats_get(): tx_size_1522:             0
i40e_dev_stats_get(): tx_size_big:              0
i40e_dev_stats_get(): mac_short_packet_dropped: 0
i40e_dev_stats_get(): checksum_error:           0
i40e_dev_stats_get(): fdir_match:               0
i40e_dev_stats_get(): ***************** PF stats end ********************
testpmd> 
testpmd> 
testpmd> show port info 0

********************* Infos for port 0  *********************
MAC address: 40:A6:B7:85:E7:80
Device name: 0004:04:00.0
Driver name: net_i40e
Firmware-version: 8.30 0x8000a4ae 1.2926.0
Devargs: 
Connect to socket: 0
memory allocation on the socket: 0
Link status: down
Link speed: None
Link duplex: full-duplex
Autoneg status: On
MTU: 1492
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 64
Maximum number of MAC addresses of hash filtering: 0
VLAN offload: 
  strip off, filter off, extend off, qinq strip off
Hash key size in bytes: 52
Redirection table size: 512
Supported RSS offload flow types:
  ipv4-frag  ipv4-tcp  ipv4-udp  ipv4-sctp  ipv4-other
  ipv6-frag  ipv6-tcp  ipv6-udp  ipv6-sctp  ipv6-other
  l2-payload
Minimum size of RX buffer: 1024
Maximum configurable length of RX packet: 9728
Maximum configurable size of LRO aggregated packet: 0
Maximum number of VMDq pools: 64
Current number of RX queues: 1
Max possible RX queues: 320
Max possible number of RXDs per queue: 4096
Min possible number of RXDs per queue: 64
RXDs number alignment: 32
Current number of TX queues: 1
Max possible TX queues: 320
Max possible number of TXDs per queue: 4096
Min possible number of TXDs per queue: 64
TXDs number alignment: 32
Max segment number per packet: 255
Max segment number per MTU/TSO: 8
Device capabilities: 0x3( RUNTIME_RX_QUEUE_SETUP RUNTIME_TX_QUEUE_SETUP )
testpmd> quit
Telling cores to stop...
Waiting for lcores to finish...
i40e_update_vsi_stats(): ***************** VSI[6] stats start *******************
i40e_update_vsi_stats(): rx_bytes:            0
i40e_update_vsi_stats(): rx_unicast:          0
i40e_update_vsi_stats(): rx_multicast:        0
i40e_update_vsi_stats(): rx_broadcast:        0
i40e_update_vsi_stats(): rx_discards:         0
i40e_update_vsi_stats(): rx_unknown_protocol: 0
i40e_update_vsi_stats(): tx_bytes:            0
i40e_update_vsi_stats(): tx_unicast:          0
i40e_update_vsi_stats(): tx_multicast:        0
i40e_update_vsi_stats(): tx_broadcast:        0
i40e_update_vsi_stats(): tx_discards:         0
i40e_update_vsi_stats(): tx_errors:           0
i40e_update_vsi_stats(): ***************** VSI[6] stats end *******************
i40e_dev_stats_get(): ***************** PF stats start *******************
i40e_dev_stats_get(): rx_bytes:            0
i40e_dev_stats_get(): rx_unicast:          0
i40e_dev_stats_get(): rx_multicast:        0
i40e_dev_stats_get(): rx_broadcast:        0
i40e_dev_stats_get(): rx_discards:         0
i40e_dev_stats_get(): rx_unknown_protocol: 0
i40e_dev_stats_get(): tx_bytes:            0
i40e_dev_stats_get(): tx_unicast:          0
i40e_dev_stats_get(): tx_multicast:        0
i40e_dev_stats_get(): tx_broadcast:        0
i40e_dev_stats_get(): tx_discards:         0
i40e_dev_stats_get(): tx_errors:           0
i40e_dev_stats_get(): tx_dropped_link_down:     0
i40e_dev_stats_get(): crc_errors:               0
i40e_dev_stats_get(): illegal_bytes:            0
i40e_dev_stats_get(): error_bytes:              0
i40e_dev_stats_get(): mac_local_faults:         0
i40e_dev_stats_get(): mac_remote_faults:        0
i40e_dev_stats_get(): rx_length_errors:         0
i40e_dev_stats_get(): link_xon_rx:              0
i40e_dev_stats_get(): link_xoff_rx:             0
i40e_dev_stats_get(): priority_xon_rx[0]:      0
i40e_dev_stats_get(): priority_xoff_rx[0]:     0
i40e_dev_stats_get(): priority_xon_rx[1]:      0
i40e_dev_stats_get(): priority_xoff_rx[1]:     0
i40e_dev_stats_get(): priority_xon_rx[2]:      0
i40e_dev_stats_get(): priority_xoff_rx[2]:     0
i40e_dev_stats_get(): priority_xon_rx[3]:      0
i40e_dev_stats_get(): priority_xoff_rx[3]:     0
i40e_dev_stats_get(): priority_xon_rx[4]:      0
i40e_dev_stats_get(): priority_xoff_rx[4]:     0
i40e_dev_stats_get(): priority_xon_rx[5]:      0
i40e_dev_stats_get(): priority_xoff_rx[5]:     0
i40e_dev_stats_get(): priority_xon_rx[6]:      0
i40e_dev_stats_get(): priority_xoff_rx[6]:     0
i40e_dev_stats_get(): priority_xon_rx[7]:      0
i40e_dev_stats_get(): priority_xoff_rx[7]:     0
i40e_dev_stats_get(): link_xon_tx:              0
i40e_dev_stats_get(): link_xoff_tx:             0
i40e_dev_stats_get(): priority_xon_tx[0]:      0
i40e_dev_stats_get(): priority_xoff_tx[0]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[0]:  0
i40e_dev_stats_get(): priority_xon_tx[1]:      0
i40e_dev_stats_get(): priority_xoff_tx[1]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[1]:  0
i40e_dev_stats_get(): priority_xon_tx[2]:      0
i40e_dev_stats_get(): priority_xoff_tx[2]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[2]:  0
i40e_dev_stats_get(): priority_xon_tx[3]:      0
i40e_dev_stats_get(): priority_xoff_tx[3]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[3]:  0
i40e_dev_stats_get(): priority_xon_tx[4]:      0
i40e_dev_stats_get(): priority_xoff_tx[4]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[4]:  0
i40e_dev_stats_get(): priority_xon_tx[5]:      0
i40e_dev_stats_get(): priority_xoff_tx[5]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[5]:  0
i40e_dev_stats_get(): priority_xon_tx[6]:      0
i40e_dev_stats_get(): priority_xoff_tx[6]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[6]:  0
i40e_dev_stats_get(): priority_xon_tx[7]:      0
i40e_dev_stats_get(): priority_xoff_tx[7]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[7]:  0
i40e_dev_stats_get(): rx_size_64:               0
i40e_dev_stats_get(): rx_size_127:              0
i40e_dev_stats_get(): rx_size_255:              0
i40e_dev_stats_get(): rx_size_511:              0
i40e_dev_stats_get(): rx_size_1023:             0
i40e_dev_stats_get(): rx_size_1522:             0
i40e_dev_stats_get(): rx_size_big:              0
i40e_dev_stats_get(): rx_undersize:             0
i40e_dev_stats_get(): rx_fragments:             0
i40e_dev_stats_get(): rx_oversize:              0
i40e_dev_stats_get(): rx_jabber:                0
i40e_dev_stats_get(): tx_size_64:               0
i40e_dev_stats_get(): tx_size_127:              0
i40e_dev_stats_get(): tx_size_255:              0
i40e_dev_stats_get(): tx_size_511:              0
i40e_dev_stats_get(): tx_size_1023:             0
i40e_dev_stats_get(): tx_size_1522:             0
i40e_dev_stats_get(): tx_size_big:              0
i40e_dev_stats_get(): mac_short_packet_dropped: 0
i40e_dev_stats_get(): checksum_error:           0
i40e_dev_stats_get(): fdir_match:               0
i40e_dev_stats_get(): ***************** PF stats end ********************

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------
i40e_update_vsi_stats(): ***************** VSI[7] stats start *******************
i40e_update_vsi_stats(): rx_bytes:            0
i40e_update_vsi_stats(): rx_unicast:          0
i40e_update_vsi_stats(): rx_multicast:        0
i40e_update_vsi_stats(): rx_broadcast:        0
i40e_update_vsi_stats(): rx_discards:         0
i40e_update_vsi_stats(): rx_unknown_protocol: 0
i40e_update_vsi_stats(): tx_bytes:            0
i40e_update_vsi_stats(): tx_unicast:          0
i40e_update_vsi_stats(): tx_multicast:        0
i40e_update_vsi_stats(): tx_broadcast:        0
i40e_update_vsi_stats(): tx_discards:         0
i40e_update_vsi_stats(): tx_errors:           0
i40e_update_vsi_stats(): ***************** VSI[7] stats end *******************
i40e_dev_stats_get(): ***************** PF stats start *******************
i40e_dev_stats_get(): rx_bytes:            0
i40e_dev_stats_get(): rx_unicast:          0
i40e_dev_stats_get(): rx_multicast:        0
i40e_dev_stats_get(): rx_broadcast:        0
i40e_dev_stats_get(): rx_discards:         0
i40e_dev_stats_get(): rx_unknown_protocol: 0
i40e_dev_stats_get(): tx_bytes:            0
i40e_dev_stats_get(): tx_unicast:          0
i40e_dev_stats_get(): tx_multicast:        0
i40e_dev_stats_get(): tx_broadcast:        0
i40e_dev_stats_get(): tx_discards:         0
i40e_dev_stats_get(): tx_errors:           0
i40e_dev_stats_get(): tx_dropped_link_down:     0
i40e_dev_stats_get(): crc_errors:               0
i40e_dev_stats_get(): illegal_bytes:            0
i40e_dev_stats_get(): error_bytes:              0
i40e_dev_stats_get(): mac_local_faults:         0
i40e_dev_stats_get(): mac_remote_faults:        0
i40e_dev_stats_get(): rx_length_errors:         0
i40e_dev_stats_get(): link_xon_rx:              0
i40e_dev_stats_get(): link_xoff_rx:             0
i40e_dev_stats_get(): priority_xon_rx[0]:      0
i40e_dev_stats_get(): priority_xoff_rx[0]:     0
i40e_dev_stats_get(): priority_xon_rx[1]:      0
i40e_dev_stats_get(): priority_xoff_rx[1]:     0
i40e_dev_stats_get(): priority_xon_rx[2]:      0
i40e_dev_stats_get(): priority_xoff_rx[2]:     0
i40e_dev_stats_get(): priority_xon_rx[3]:      0
i40e_dev_stats_get(): priority_xoff_rx[3]:     0
i40e_dev_stats_get(): priority_xon_rx[4]:      0
i40e_dev_stats_get(): priority_xoff_rx[4]:     0
i40e_dev_stats_get(): priority_xon_rx[5]:      0
i40e_dev_stats_get(): priority_xoff_rx[5]:     0
i40e_dev_stats_get(): priority_xon_rx[6]:      0
i40e_dev_stats_get(): priority_xoff_rx[6]:     0
i40e_dev_stats_get(): priority_xon_rx[7]:      0
i40e_dev_stats_get(): priority_xoff_rx[7]:     0
i40e_dev_stats_get(): link_xon_tx:              0
i40e_dev_stats_get(): link_xoff_tx:             0
i40e_dev_stats_get(): priority_xon_tx[0]:      0
i40e_dev_stats_get(): priority_xoff_tx[0]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[0]:  0
i40e_dev_stats_get(): priority_xon_tx[1]:      0
i40e_dev_stats_get(): priority_xoff_tx[1]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[1]:  0
i40e_dev_stats_get(): priority_xon_tx[2]:      0
i40e_dev_stats_get(): priority_xoff_tx[2]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[2]:  0
i40e_dev_stats_get(): priority_xon_tx[3]:      0
i40e_dev_stats_get(): priority_xoff_tx[3]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[3]:  0
i40e_dev_stats_get(): priority_xon_tx[4]:      0
i40e_dev_stats_get(): priority_xoff_tx[4]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[4]:  0
i40e_dev_stats_get(): priority_xon_tx[5]:      0
i40e_dev_stats_get(): priority_xoff_tx[5]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[5]:  0
i40e_dev_stats_get(): priority_xon_tx[6]:      0
i40e_dev_stats_get(): priority_xoff_tx[6]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[6]:  0
i40e_dev_stats_get(): priority_xon_tx[7]:      0
i40e_dev_stats_get(): priority_xoff_tx[7]:     0
i40e_dev_stats_get(): priority_xon_2_xoff[7]:  0
i40e_dev_stats_get(): rx_size_64:               0
i40e_dev_stats_get(): rx_size_127:              0
i40e_dev_stats_get(): rx_size_255:              0
i40e_dev_stats_get(): rx_size_511:              0
i40e_dev_stats_get(): rx_size_1023:             0
i40e_dev_stats_get(): rx_size_1522:             0
i40e_dev_stats_get(): rx_size_big:              0
i40e_dev_stats_get(): rx_undersize:             0
i40e_dev_stats_get(): rx_fragments:             0
i40e_dev_stats_get(): rx_oversize:              0
i40e_dev_stats_get(): rx_jabber:                0
i40e_dev_stats_get(): tx_size_64:               0
i40e_dev_stats_get(): tx_size_127:              0
i40e_dev_stats_get(): tx_size_255:              0
i40e_dev_stats_get(): tx_size_511:              0
i40e_dev_stats_get(): tx_size_1023:             0
i40e_dev_stats_get(): tx_size_1522:             0
i40e_dev_stats_get(): tx_size_big:              0
i40e_dev_stats_get(): mac_short_packet_dropped: 0
i40e_dev_stats_get(): checksum_error:           0
i40e_dev_stats_get(): fdir_match:               0
i40e_dev_stats_get(): ***************** PF stats end ********************

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

Stopping port 0...
Stopping ports...
i40e_dev_clear_queues():  >>
i40e_phy_conf_link():   Current: abilities 38, link_speed 10
i40e_phy_conf_link():   Config:  abilities 20, link_speed 10
Done

Stopping port 1...
Stopping ports...
i40e_dev_clear_queues():  >>
i40e_phy_conf_link():   Current: abilities 38, link_speed 10
i40e_phy_conf_link():   Config:  abilities 20, link_speed 10
Done

Shutting down port 0...
Closing ports...
i40e_dev_close():  >>
i40e_dev_free_queues():  >>
i40e_free_dma_mem_d(): memzone i40e_dma_66 to be freed with physical address: 283605180416
i40e_dev_interrupt_handler(): ICR0: adminq event

Port 1: link state change event
i40e_free_dma_mem_d(): memzone i40e_dma_1 to be freed with physical address: 283605827584
i40e_free_dma_mem_d(): memzone i40e_dma_2 to be freed with physical address: 283605819392
i40e_free_dma_mem_d(): memzone i40e_dma_3 to be freed with physical address: 283605811200
i40e_free_dma_mem_d(): memzone i40e_dma_4 to be freed with physical address: 283605803008
i40e_free_dma_mem_d(): memzone i40e_dma_5 to be freed with physical address: 283605794816
i40e_free_dma_mem_d(): memzone i40e_dma_6 to be freed with physical address: 283605786624
i40e_free_dma_mem_d(): memzone i40e_dma_7 to be freed with physical address: 283605778432
i40e_free_dma_mem_d(): memzone i40e_dma_8 to be freed with physical address: 283605770240
i40e_free_dma_mem_d(): memzone i40e_dma_9 to be freed with physical address: 283605762048
i40e_free_dma_mem_d(): memzone i40e_dma_10 to be freed with physical address: 283605753856
i40e_free_dma_mem_d(): memzone i40e_dma_11 to be freed with physical address: 283605745664
i40e_free_dma_mem_d(): memzone i40e_dma_12 to be freed with physical address: 283605737472
i40e_free_dma_mem_d(): memzone i40e_dma_13 to be freed with physical address: 283605729280
i40e_free_dma_mem_d(): memzone i40e_dma_14 to be freed with physical address: 283605721088
i40e_free_dma_mem_d(): memzone i40e_dma_15 to be freed with physical address: 283605712896
i40e_free_dma_mem_d(): memzone i40e_dma_16 to be freed with physical address: 283605704704
i40e_free_dma_mem_d(): memzone i40e_dma_17 to be freed with physical address: 283605696512
i40e_free_dma_mem_d(): memzone i40e_dma_18 to be freed with physical address: 283605688320
i40e_free_dma_mem_d(): memzone i40e_dma_19 to be freed with physical address: 283605680128
i40e_free_dma_mem_d(): memzone i40e_dma_20 to be freed with physical address: 283605671936
i40e_free_dma_mem_d(): memzone i40e_dma_21 to be freed with physical address: 283605663744
i40e_free_dma_mem_d(): memzone i40e_dma_22 to be freed with physical address: 283605655552
i40e_free_dma_mem_d(): memzone i40e_dma_23 to be freed with physical address: 283605647360
i40e_free_dma_mem_d(): memzone i40e_dma_24 to be freed with physical address: 283605639168
i40e_free_dma_mem_d(): memzone i40e_dma_25 to be freed with physical address: 283605630976
i40e_free_dma_mem_d(): memzone i40e_dma_26 to be freed with physical address: 283605622784
i40e_free_dma_mem_d(): memzone i40e_dma_27 to be freed with physical address: 283605614592
i40e_free_dma_mem_d(): memzone i40e_dma_28 to be freed with physical address: 283605606400
i40e_free_dma_mem_d(): memzone i40e_dma_29 to be freed with physical address: 283605598208
i40e_free_dma_mem_d(): memzone i40e_dma_30 to be freed with physical address: 283605590016
i40e_free_dma_mem_d(): memzone i40e_dma_31 to be freed with physical address: 283605581824
i40e_free_dma_mem_d(): memzone i40e_dma_32 to be freed with physical address: 283605573632
i40e_free_dma_mem_d(): memzone i40e_dma_0 to be freed with physical address: 283605835776
i40e_free_dma_mem_d(): memzone i40e_dma_34 to be freed with physical address: 283605561344
i40e_free_dma_mem_d(): memzone i40e_dma_35 to be freed with physical address: 283605553152
i40e_free_dma_mem_d(): memzone i40e_dma_36 to be freed with physical address: 283605544960
i40e_free_dma_mem_d(): memzone i40e_dma_37 to be freed with physical address: 283605536768
i40e_free_dma_mem_d(): memzone i40e_dma_38 to be freed with physical address: 283605528576
i40e_free_dma_mem_d(): memzone i40e_dma_39 to be freed with physical address: 283605520384
i40e_free_dma_mem_d(): memzone i40e_dma_40 to be freed with physical address: 283605512192
i40e_free_dma_mem_d(): memzone i40e_dma_41 to be freed with physical address: 283605504000
i40e_free_dma_mem_d(): memzone i40e_dma_42 to be freed with physical address: 283605495808
i40e_free_dma_mem_d(): memzone i40e_dma_43 to be freed with physical address: 283605487616
i40e_free_dma_mem_d(): memzone i40e_dma_44 to be freed with physical address: 283605479424
i40e_free_dma_mem_d(): memzone i40e_dma_45 to be freed with physical address: 283605471232
i40e_free_dma_mem_d(): memzone i40e_dma_46 to be freed with physical address: 283605463040
i40e_free_dma_mem_d(): memzone i40e_dma_47 to be freed with physical address: 283605454848
i40e_free_dma_mem_d(): memzone i40e_dma_48 to be freed with physical address: 283605446656
i40e_free_dma_mem_d(): memzone i40e_dma_49 to be freed with physical address: 283605438464
i40e_free_dma_mem_d(): memzone i40e_dma_50 to be freed with physical address: 283605430272
i40e_free_dma_mem_d(): memzone i40e_dma_51 to be freed with physical address: 283605422080
i40e_free_dma_mem_d(): memzone i40e_dma_52 to be freed with physical address: 283605413888
i40e_free_dma_mem_d(): memzone i40e_dma_53 to be freed with physical address: 283605405696
i40e_free_dma_mem_d(): memzone i40e_dma_54 to be freed with physical address: 283605397504
i40e_free_dma_mem_d(): memzone i40e_dma_55 to be freed with physical address: 283605389312
i40e_free_dma_mem_d(): memzone i40e_dma_56 to be freed with physical address: 283605381120
i40e_free_dma_mem_d(): memzone i40e_dma_57 to be freed with physical address: 283605372928
i40e_free_dma_mem_d(): memzone i40e_dma_58 to be freed with physical address: 283605364736
i40e_free_dma_mem_d(): memzone i40e_dma_59 to be freed with physical address: 283605356544
i40e_free_dma_mem_d(): memzone i40e_dma_60 to be freed with physical address: 283605348352
i40e_free_dma_mem_d(): memzone i40e_dma_61 to be freed with physical address: 283605340160
i40e_free_dma_mem_d(): memzone i40e_dma_62 to be freed with physical address: 283605331968
i40e_free_dma_mem_d(): memzone i40e_dma_63 to be freed with physical address: 283605323776
i40e_free_dma_mem_d(): memzone i40e_dma_64 to be freed with physical address: 283605315584
i40e_free_dma_mem_d(): memzone i40e_dma_65 to be freed with physical address: 283605307392
i40e_free_dma_mem_d(): memzone i40e_dma_33 to be freed with physical address: 283605569536
i40e_pf_host_uninit():  >>
Port 0 is closed
Done

Shutting down port 1...
Closing ports...
i40e_dev_close():  >>
i40e_dev_free_queues():  >>
i40e_free_dma_mem_d(): memzone i40e_dma_133 to be freed with physical address: 283604762624
i40e_free_dma_mem_d(): memzone i40e_dma_68 to be freed with physical address: 283608408064
i40e_free_dma_mem_d(): memzone i40e_dma_69 to be freed with physical address: 283608399872
i40e_free_dma_mem_d(): memzone i40e_dma_70 to be freed with physical address: 283608391680
i40e_free_dma_mem_d(): memzone i40e_dma_71 to be freed with physical address: 283608383488
i40e_free_dma_mem_d(): memzone i40e_dma_72 to be freed with physical address: 283608375296
i40e_free_dma_mem_d(): memzone i40e_dma_73 to be freed with physical address: 283608367104
i40e_free_dma_mem_d(): memzone i40e_dma_74 to be freed with physical address: 283608358912
i40e_free_dma_mem_d(): memzone i40e_dma_75 to be freed with physical address: 283606835200
i40e_free_dma_mem_d(): memzone i40e_dma_76 to be freed with physical address: 283606827008
i40e_free_dma_mem_d(): memzone i40e_dma_77 to be freed with physical address: 283606818816
i40e_free_dma_mem_d(): memzone i40e_dma_78 to be freed with physical address: 283606810624
i40e_free_dma_mem_d(): memzone i40e_dma_79 to be freed with physical address: 283606802432
i40e_free_dma_mem_d(): memzone i40e_dma_80 to be freed with physical address: 283606794240
i40e_free_dma_mem_d(): memzone i40e_dma_81 to be freed with physical address: 283606786048
i40e_free_dma_mem_d(): memzone i40e_dma_82 to be freed with physical address: 283606777856
i40e_free_dma_mem_d(): memzone i40e_dma_83 to be freed with physical address: 283606769664
i40e_free_dma_mem_d(): memzone i40e_dma_84 to be freed with physical address: 283606761472
i40e_free_dma_mem_d(): memzone i40e_dma_85 to be freed with physical address: 283606753280
i40e_free_dma_mem_d(): memzone i40e_dma_86 to be freed with physical address: 283606745088
i40e_free_dma_mem_d(): memzone i40e_dma_87 to be freed with physical address: 283606736896
i40e_free_dma_mem_d(): memzone i40e_dma_88 to be freed with physical address: 283606728704
i40e_free_dma_mem_d(): memzone i40e_dma_89 to be freed with physical address: 283606720512
i40e_free_dma_mem_d(): memzone i40e_dma_90 to be freed with physical address: 283606712320
i40e_free_dma_mem_d(): memzone i40e_dma_91 to be freed with physical address: 283606704128
i40e_free_dma_mem_d(): memzone i40e_dma_92 to be freed with physical address: 283606695936
i40e_free_dma_mem_d(): memzone i40e_dma_93 to be freed with physical address: 283606687744
i40e_free_dma_mem_d(): memzone i40e_dma_94 to be freed with physical address: 283606679552
i40e_free_dma_mem_d(): memzone i40e_dma_95 to be freed with physical address: 283606671360
i40e_free_dma_mem_d(): memzone i40e_dma_96 to be freed with physical address: 283606663168
i40e_free_dma_mem_d(): memzone i40e_dma_97 to be freed with physical address: 283606654976
i40e_free_dma_mem_d(): memzone i40e_dma_98 to be freed with physical address: 283606646784
i40e_free_dma_mem_d(): memzone i40e_dma_99 to be freed with physical address: 283606638592
i40e_free_dma_mem_d(): memzone i40e_dma_67 to be freed with physical address: 283608416256
i40e_free_dma_mem_d(): memzone i40e_dma_101 to be freed with physical address: 283606630400
i40e_free_dma_mem_d(): memzone i40e_dma_102 to be freed with physical address: 283606622208
i40e_free_dma_mem_d(): memzone i40e_dma_103 to be freed with physical address: 283606614016
i40e_free_dma_mem_d(): memzone i40e_dma_104 to be freed with physical address: 283606605824
i40e_free_dma_mem_d(): memzone i40e_dma_105 to be freed with physical address: 283606597632
i40e_free_dma_mem_d(): memzone i40e_dma_106 to be freed with physical address: 283606589440
i40e_free_dma_mem_d(): memzone i40e_dma_107 to be freed with physical address: 283606581248
i40e_free_dma_mem_d(): memzone i40e_dma_108 to be freed with physical address: 283606573056
i40e_free_dma_mem_d(): memzone i40e_dma_109 to be freed with physical address: 283606564864
i40e_free_dma_mem_d(): memzone i40e_dma_110 to be freed with physical address: 283606556672
i40e_free_dma_mem_d(): memzone i40e_dma_111 to be freed with physical address: 283606548480
i40e_free_dma_mem_d(): memzone i40e_dma_112 to be freed with physical address: 283606540288
i40e_free_dma_mem_d(): memzone i40e_dma_113 to be freed with physical address: 283606532096
i40e_free_dma_mem_d(): memzone i40e_dma_114 to be freed with physical address: 283606523904
i40e_free_dma_mem_d(): memzone i40e_dma_115 to be freed with physical address: 283606515712
i40e_free_dma_mem_d(): memzone i40e_dma_116 to be freed with physical address: 283606507520
i40e_free_dma_mem_d(): memzone i40e_dma_117 to be freed with physical address: 283606499328
i40e_free_dma_mem_d(): memzone i40e_dma_118 to be freed with physical address: 283606491136
i40e_free_dma_mem_d(): memzone i40e_dma_119 to be freed with physical address: 283606482944
i40e_free_dma_mem_d(): memzone i40e_dma_120 to be freed with physical address: 283606474752
i40e_free_dma_mem_d(): memzone i40e_dma_121 to be freed with physical address: 283606466560
i40e_free_dma_mem_d(): memzone i40e_dma_122 to be freed with physical address: 283606458368
i40e_free_dma_mem_d(): memzone i40e_dma_123 to be freed with physical address: 283606450176
i40e_free_dma_mem_d(): memzone i40e_dma_124 to be freed with physical address: 283606441984
i40e_free_dma_mem_d(): memzone i40e_dma_125 to be freed with physical address: 283606433792
i40e_free_dma_mem_d(): memzone i40e_dma_126 to be freed with physical address: 283606425600
i40e_free_dma_mem_d(): memzone i40e_dma_127 to be freed with physical address: 283606417408
i40e_free_dma_mem_d(): memzone i40e_dma_128 to be freed with physical address: 283606409216
i40e_free_dma_mem_d(): memzone i40e_dma_129 to be freed with physical address: 283606401024
i40e_free_dma_mem_d(): memzone i40e_dma_130 to be freed with physical address: 283606392832
i40e_free_dma_mem_d(): memzone i40e_dma_131 to be freed with physical address: 283606384640
i40e_free_dma_mem_d(): memzone i40e_dma_132 to be freed with physical address: 283606376448
i40e_free_dma_mem_d(): memzone i40e_dma_100 to be freed with physical address: 283608354816
i40e_pf_host_uninit():  >>
Port 1 is closed
Done

Bye...

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: Testpmd/l3fwd port shutdown failure on Arm Altra systems
  2023-02-21 11:18       ` Juraj Linkeš
@ 2023-03-08  6:25         ` Juraj Linkeš
  2023-04-03  9:27           ` Juraj Linkeš
  0 siblings, 1 reply; 9+ messages in thread
From: Juraj Linkeš @ 2023-03-08  6:25 UTC (permalink / raw)
  To: Xing, Beilei
  Cc: Singh, Aman Deep, Zhang, Yuying, Yang, Qiming, dev, Ruifeng Wang,
	Zhang, Lijian, Honnappa Nagarahalli

Hello Qiming, Beilei,

Another reminder - are you looking at this by any chance?

The high level short description is that testpmd/l3fwd breaks a link
between two servers while VPP (using DPDK) doesn't. This leads us to
believe there's a problem with testpmd/l3fwd/i40e driver in DPDK.
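
In case it helps anyone reproduce or measure this, here is a minimal sketch of how the link recovery time could be measured from the Linux side by polling the sysfs carrier file. This is a hypothetical helper, not part of our test suite; the interface name and timings in the usage note are assumptions on my side:

```python
import time

def wait_for_link(carrier_path, timeout=3600.0, interval=5.0):
    """Poll a sysfs carrier file (e.g. /sys/class/net/<iface>/carrier)
    until it reads '1' (link up). Returns the elapsed seconds once the
    link comes up, or None if the timeout expires first."""
    start = time.monotonic()
    deadline = start + timeout
    while time.monotonic() < deadline:
        try:
            with open(carrier_path) as f:
                if f.read().strip() == "1":
                    return time.monotonic() - start
        except OSError:
            # Reading 'carrier' fails (EINVAL) while the interface is
            # administratively down; treat that the same as link down.
            pass
        time.sleep(interval)
    return None
```

On a DUT this could be called as, say, wait_for_link("/sys/class/net/enP4p4s0f0/carrier") right after testpmd exits (interface name hypothetical - substitute the XL710 port bound to the kernel driver) to put a number on the "minutes to dozens of minutes" recovery we observe.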

Thanks,
Juraj

On Tue, Feb 21, 2023 at 12:18 PM Juraj Linkeš
<juraj.linkes@pantheon.tech> wrote:
>
> Hi Qiming,
>
> Just a friendly reminder, would you please take a look?
>
> Thanks,
> Juraj
>
>
> On Tue, Feb 7, 2023 at 3:10 AM Xing, Beilei <beilei.xing@intel.com> wrote:
> >
> > Hi Qiming,
> >
> > Could you please help on this? Thanks.
> >
> > BR,
> > Beilei
> >
> > > -----Original Message-----
> > > From: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > > Sent: Monday, February 6, 2023 4:53 PM
> > > To: Singh, Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
> > > <yuying.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> > > Cc: dev@dpdk.org; Ruifeng Wang <Ruifeng.Wang@arm.com>; Zhang, Lijian
> > > <Lijian.Zhang@arm.com>; Honnappa Nagarahalli
> > > <Honnappa.Nagarahalli@arm.com>
> > > Subject: Re: Testpmd/l3fwd port shutdown failure on Arm Altra systems
> > >
> > > Hello i40e and testpmd maintainers,
> > >
> > > A gentle reminder - would you please advise how to debug the issue described
> > > below?
> > >
> > > Thanks,
> > > Juraj
> > >
> > > On Fri, Jan 20, 2023 at 1:07 PM Juraj Linkeš <juraj.linkes@pantheon.tech>
> > > wrote:
> > > >
> > > > Adding the logfile.
> > > >
> > > >
> > > >
> > > > One thing that's in the logs but that I didn't explicitly mention is the DPDK version
> > > we've tried this with:
> > > >
> > > > EAL: RTE Version: 'DPDK 22.07.0'
> > > >
> > > >
> > > >
> > > > We also tried earlier versions going back to 21.08, with no luck. I also did a
> > > quick check on 22.11, also with no luck.
> > > >
> > > >
> > > >
> > > > Juraj
> > > >
> > > >
> > > >
> > > > From: Juraj Linkeš
> > > > Sent: Friday, January 20, 2023 12:56 PM
> > > > To: 'aman.deep.singh@intel.com' <aman.deep.singh@intel.com>;
> > > > 'yuying.zhang@intel.com' <yuying.zhang@intel.com>; Xing, Beilei
> > > > <beilei.xing@intel.com>
> > > > Cc: dev@dpdk.org; Ruifeng Wang <Ruifeng.Wang@arm.com>; 'Lijian Zhang'
> > > > <Lijian.Zhang@arm.com>; 'Honnappa Nagarahalli'
> > > > <Honnappa.Nagarahalli@arm.com>
> > > > Subject: Testpmd/l3fwd port shutdown failure on Arm Altra systems
> > > >
> > > >
> > > >
> > > > Hello i40e and testpmd maintainers,
> > > >
> > > >
> > > >
> > > > We're hitting an issue with DPDK testpmd on Ampere Altra servers in FD.io
> > > lab.
> > > >
> > > >
> > > >
> > > > A bit of background: along with VPP performance tests (which uses DPDK),
> > > we're running a small number of basic DPDK testpmd and l3fwd tests in FD.io
> > > as well. This is to catch any performance differences due to VPP updating its
> > > DPDK version.
> > > >
> > > >
> > > >
> > > > We're running both l3fwd tests and testpmd tests. The Altra servers are two
> > > socket and the topology is TG -> DUT1 -> DUT2 -> TG, traffic flows in both
> > > directions, but nothing gets forwarded (with a slight caveat - put a pin in this).
> > > There's nothing special in the tests, just forwarding traffic. The NIC we're
> > > testing is xl710-QDA2.
> > > >
> > > >
> > > >
> > > > The same tests are passing on all other testbeds - we have various two node
> > > (1 DUT, 1 TG) and three node (2 DUT, 1 TG) Intel and Arm testbeds and with
> > > various NICs (Intel 700 and 800 series and the Intel testbeds use some
> > > Mellanox NICs as well). We don't have quite the same combination of another
> > > three node topology with the same NIC though, so it looks like something with
> > > testpmd/l3fwd and xl710-QDA2 on Altra servers.
> > > >
> > > >
> > > >
> > > > VPP performance tests are passing, but l3fwd and testpmd fail. This leads us
> > > to believe it's a software issue, but there could be something wrong with the
> > > hardware. I'll talk about testpmd from now on, but as far as we can tell, the
> > > behavior is the same for testpmd and l3fwd.
> > > >
> > > >
> > > >
> > > > Getting back to the caveat mentioned earlier, there seems to be something
> > > wrong with port shutdown. When running testpmd on a testbed that hasn't
> > > been used for a while it seems that all ports are up right away (we don't see
> > > any "Port 0|1: link state change event") and the setup works fine (forwarding
> > > works). After restarting testpmd (restarting on one server is sufficient), the
> > > ports between DUT1 and DUT2 (but not between DUTs and TG) go down and
> > > are not usable in DPDK, VPP or in Linux (with i40e kernel driver) for a while
> > > (measured in minutes, sometimes dozens of minutes; the duration is seemingly
> > > random). The ports eventually recover and can be used again, but there's
> > > nothing in syslog suggesting what happened.
> > > >
> > > >
> > > >
> > > > What seems to be happening is that testpmd puts the ports into some faulty state.
> > > This only happens on the DUT1 -> DUT2 link though (the ports between the
> > > two testpmds), not on TG -> DUT1 link (the TG port is left alone).
> > > >
> > > >
> > > >
> > > > Some more info:
> > > >
> > > > We've come across the issue with this configuration:
> > > >
> > > > OS: Ubuntu20.04 with kernel 5.4.0-65-generic.
> > > >
> > > > Old NIC firmware, never upgraded: 6.01 0x800035da 1.1747.0.
> > > >
> > > > Drivers versions: i40e 2.17.15 and iavf 4.3.19.
> > > >
> > > >
> > > >
> > > > As well as with this configuration:
> > > >
> > > > OS: Ubuntu22.04 with kernel 5.15.0-46-generic.
> > > >
> > > > Updated firmware: 8.30 0x8000a4ae 1.2926.0.
> > > >
> > > > Drivers: i40e 2.19.3 and iavf 4.5.3.
> > > >
> > > >
> > > >
> > > > Unsafe noiommu mode is disabled:
> > > >
> > > > cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
> > > >
> > > > N
> > > >
> > > >
> > > >
> > > > We used DPDK 22.07 in manual testing and built it on DUTs, using generic
> > > build:
> > > >
> > > > meson -Dexamples=l3fwd -Dc_args=-DRTE_LIBRTE_I40E_16BYTE_RX_DESC=y
> > > > -Dplatform=generic build
> > > >
> > > >
> > > >
> > > > We're running testpmd with this command:
> > > >
> > > > sudo build/app/dpdk-testpmd -v -l 1,2 -a 0004:04:00.1 -a 0004:04:00.0
> > > > --in-memory -- -i --forward-mode=io --burst=64 --txq=1 --rxq=1
> > > > --tx-offloads=0x0 --numa --auto-start --total-num-mbufs=32768
> > > > --nb-ports=2 --portmask=0x3 --max-pkt-len=1518 --mbuf-size=16384
> > > > --nb-cores=1
> > > >
> > > >
> > > >
> > > > And l3fwd (with different macs on the other server):
> > > >
> > > > sudo /tmp/openvpp-testing/dpdk/build/examples/dpdk-l3fwd -v -l 1,2 -a
> > > > 0004:04:00.0 -a 0004:04:00.1 --in-memory -- --parse-ptype
> > > > --eth-dest="0,40:a6:b7:85:e7:79" --eth-dest="1,3c:fd:fe:c3:e7:a1"
> > > > --config="(0, 0, 2),(1, 0, 2)" -P -L -p 0x3
> > > >
> > > >
> > > >
> > > > We tried adding logs with --log-level=pmd,debug and --no-lsc-interrupt, but
> > > that didn't reveal anything helpful, as far as we can tell - please have a look at
> > > the attached log. The faulty port is port0 (starts out as down, then we waited
> > > for around 25 minutes for it to go up and then we shut down testpmd).
> > > >
> > > >
> > > >
> > > > We'd like to ask for pointers on what could be the cause or how to debug
> > > this issue further.
> > > >
> > > >
> > > >
> > > > Thanks,
> > > > Juraj

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: Testpmd/l3fwd port shutdown failure on Arm Altra systems
  2023-03-08  6:25         ` Juraj Linkeš
@ 2023-04-03  9:27           ` Juraj Linkeš
  2023-04-04  1:46             ` Yang, Qiming
  0 siblings, 1 reply; 9+ messages in thread
From: Juraj Linkeš @ 2023-04-03  9:27 UTC (permalink / raw)
  To: Xing, Beilei
  Cc: Singh, Aman Deep, Zhang, Yuying, Yang, Qiming, dev, Ruifeng Wang,
	Zhang, Lijian, Honnappa Nagarahalli

[-- Attachment #1: Type: text/plain, Size: 8553 bytes --]

Hello Qiming, Beilei,

Could you please help us debug this issue? Anything that would help with
getting to the bottom of anything that could go wrong during port
init/cleanup would be appreciated - extra eal/testpmd options or even code
changes (such as where we could add extra debug messages).
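
For example (an assumption on our side - we haven't confirmed these are the right components for the link-state path), per-component log levels could be raised on top of the original invocation. The component names follow the DPDK EAL guide and the PCI addresses are from the earlier mail; the sketch below only echoes the command rather than running it:

```shell
# Sketch only: raise EAL and i40e PMD log levels on the original testpmd
# invocation (component names per the DPDK EAL docs; unverified for this
# exact DPDK version). We echo the command instead of executing it.
LOG_OPTS="--log-level=lib.eal,debug --log-level=pmd.net.i40e,debug"
CMD="sudo build/app/dpdk-testpmd -v -l 1,2 -a 0004:04:00.1 -a 0004:04:00.0 $LOG_OPTS --in-memory -- -i --forward-mode=io --auto-start"
echo "$CMD"
```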

Thanks,
Juraj

On Wed, Mar 8, 2023 at 7:25 AM Juraj Linkeš <juraj.linkes@pantheon.tech>
wrote:

> Hello Qiming, Beilei,
>
> Another reminder - are you looking at this by any chance?
>
> The high level short description is that testpmd/l3fwd breaks a link
> between two servers while VPP (using DPDK) doesn't. This leads us to
> believe there's a problem with testpmd/l3fwd/i40e driver in DPDK.
>
> Thanks,
> Juraj
>
> On Tue, Feb 21, 2023 at 12:18 PM Juraj Linkeš
> <juraj.linkes@pantheon.tech> wrote:
> >
> > Hi Qiming,
> >
> > Just a friendly reminder, would you please take a look?
> >
> > Thanks,
> > Juraj
> >
> >
> > On Tue, Feb 7, 2023 at 3:10 AM Xing, Beilei <beilei.xing@intel.com>
> wrote:
> > >
> > > Hi Qiming,
> > >
> > > Could you please help on this? Thanks.
> > >
> > > BR,
> > > Beilei
> > >
> > > > -----Original Message-----
> > > > From: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > > > Sent: Monday, February 6, 2023 4:53 PM
> > > > To: Singh, Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
> > > > <yuying.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> > > > Cc: dev@dpdk.org; Ruifeng Wang <Ruifeng.Wang@arm.com>; Zhang, Lijian
> > > > <Lijian.Zhang@arm.com>; Honnappa Nagarahalli
> > > > <Honnappa.Nagarahalli@arm.com>
> > > > Subject: Re: Testpmd/l3fwd port shutdown failure on Arm Altra systems
> > > >
> > > > Hello i40e and testpmd maintainers,
> > > >
> > > > A gentle reminder - would you please advise how to debug the issue
> described
> > > > below?
> > > >
> > > > Thanks,
> > > > Juraj
> > > >
> > > > On Fri, Jan 20, 2023 at 1:07 PM Juraj Linkeš
> <juraj.linkes@pantheon.tech>
> > > > wrote:
> > > > >
> > > > > Adding the logfile.
> > > > >
> > > > >
> > > > >
> > > > > One thing that's in the logs but that I didn't explicitly mention is the
> DPDK version
> > > > we've tried this with:
> > > > >
> > > > > EAL: RTE Version: 'DPDK 22.07.0'
> > > > >
> > > > >
> > > > >
> > > > > We also tried earlier versions going back to 21.08, with no luck.
> I also did a
> > > > quick check on 22.11, also with no luck.
> > > > >
> > > > >
> > > > >
> > > > > Juraj
> > > > >
> > > > >
> > > > >
> > > > > From: Juraj Linkeš
> > > > > Sent: Friday, January 20, 2023 12:56 PM
> > > > > To: 'aman.deep.singh@intel.com' <aman.deep.singh@intel.com>;
> > > > > 'yuying.zhang@intel.com' <yuying.zhang@intel.com>; Xing, Beilei
> > > > > <beilei.xing@intel.com>
> > > > > Cc: dev@dpdk.org; Ruifeng Wang <Ruifeng.Wang@arm.com>; 'Lijian
> Zhang'
> > > > > <Lijian.Zhang@arm.com>; 'Honnappa Nagarahalli'
> > > > > <Honnappa.Nagarahalli@arm.com>
> > > > > Subject: Testpmd/l3fwd port shutdown failure on Arm Altra systems
> > > > >
> > > > >
> > > > >
> > > > > Hello i40e and testpmd maintainers,
> > > > >
> > > > >
> > > > >
> > > > > We're hitting an issue with DPDK testpmd on Ampere Altra servers
> in FD.io
> > > > lab.
> > > > >
> > > > >
> > > > >
> > > > > A bit of background: along with VPP performance tests (which uses
> DPDK),
> > > > we're running a small number of basic DPDK testpmd and l3fwd tests
> in FD.io
> > > > as well. This is to catch any performance differences due to VPP
> updating its
> > > > DPDK version.
> > > > >
> > > > >
> > > > >
> > > > > We're running both l3fwd tests and testpmd tests. The Altra
> servers are two
> > > > socket and the topology is TG -> DUT1 -> DUT2 -> TG, traffic flows
> in both
> > > > directions, but nothing gets forwarded (with a slight caveat - put a
> pin in this).
> > > > There's nothing special in the tests, just forwarding traffic. The
> NIC we're
> > > > testing is xl710-QDA2.
> > > > >
> > > > >
> > > > >
> > > > > The same tests are passing on all other testbeds - we have various
> two node
> > > > (1 DUT, 1 TG) and three node (2 DUT, 1 TG) Intel and Arm testbeds
> and with
> > > > various NICs (Intel 700 and 800 series and the Intel testbeds use
> some
> > > > Mellanox NICs as well). We don't have quite the same combination of
> another
> > > > three node topology with the same NIC though, so it looks like
> something with
> > > > testpmd/l3fwd and xl710-QDA2 on Altra servers.
> > > > >
> > > > >
> > > > >
> > > > > VPP performance tests are passing, but l3fwd and testpmd fail.
> This leads us
> > > > to believe it's a software issue, but there could be something wrong
> with the
> > > > hardware. I'll talk about testpmd from now on, but as far as we can
> tell, the
> > > > behavior is the same for testpmd and l3fwd.
> > > > >
> > > > >
> > > > >
> > > > > Getting back to the caveat mentioned earlier, there seems to be
> something
> > > > wrong with port shutdown. When running testpmd on a testbed that
> hasn't
> > > > been used for a while it seems that all ports are up right away (we
> don't see
> > > > any "Port 0|1: link state change event") and the setup works fine
> (forwarding
> > > > works). After restarting testpmd (restarting on one server is
> sufficient), the
> > > > ports between DUT1 and DUT2 (but not between DUTs and TG) go down and
> > > > are not usable in DPDK, VPP or in Linux (with i40e kernel driver)
> for a while
> > > > (measured in minutes, sometimes dozens of minutes; the duration is
> seemingly
> > > > random). The ports eventually recover and can be used again, but
> there's
> > > > nothing in syslog suggesting what happened.
> > > > >
> > > > >
> > > > >
> > > > > What seems to be happening is that testpmd puts the ports into some
> faulty state.
> > > > This only happens on the DUT1 -> DUT2 link though (the ports between
> the
> > > > two testpmds), not on TG -> DUT1 link (the TG port is left alone).
> > > > >
> > > > >
> > > > >
> > > > > Some more info:
> > > > >
> > > > > We've come across the issue with this configuration:
> > > > >
> > > > > OS: Ubuntu20.04 with kernel 5.4.0-65-generic.
> > > > >
> > > > > Old NIC firmware, never upgraded: 6.01 0x800035da 1.1747.0.
> > > > >
> > > > > Drivers versions: i40e 2.17.15 and iavf 4.3.19.
> > > > >
> > > > >
> > > > >
> > > > > As well as with this configuration:
> > > > >
> > > > > OS: Ubuntu22.04 with kernel 5.15.0-46-generic.
> > > > >
> > > > > Updated firmware: 8.30 0x8000a4ae 1.2926.0.
> > > > >
> > > > > Drivers: i40e 2.19.3 and iavf 4.5.3.
> > > > >
> > > > >
> > > > >
> > > > > Unsafe noiommu mode is disabled:
> > > > >
> > > > > cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
> > > > >
> > > > > N
> > > > >
> > > > >
> > > > >
> > > > > We used DPDK 22.07 in manual testing and built it on DUTs, using
> generic
> > > > build:
> > > > >
> > > > > meson -Dexamples=l3fwd -Dc_args=-DRTE_LIBRTE_I40E_16BYTE_RX_DESC=y
> > > > > -Dplatform=generic build
> > > > >
> > > > >
> > > > >
> > > > > We're running testpmd with this command:
> > > > >
> > > > > sudo build/app/dpdk-testpmd -v -l 1,2 -a 0004:04:00.1 -a
> 0004:04:00.0
> > > > > --in-memory -- -i --forward-mode=io --burst=64 --txq=1 --rxq=1
> > > > > --tx-offloads=0x0 --numa --auto-start --total-num-mbufs=32768
> > > > > --nb-ports=2 --portmask=0x3 --max-pkt-len=1518 --mbuf-size=16384
> > > > > --nb-cores=1
> > > > >
> > > > >
> > > > >
> > > > > And l3fwd (with different macs on the other server):
> > > > >
> > > > > sudo /tmp/openvpp-testing/dpdk/build/examples/dpdk-l3fwd -v -l 1,2
> -a
> > > > > 0004:04:00.0 -a 0004:04:00.1 --in-memory -- --parse-ptype
> > > > > --eth-dest="0,40:a6:b7:85:e7:79" --eth-dest="1,3c:fd:fe:c3:e7:a1"
> > > > > --config="(0, 0, 2),(1, 0, 2)" -P -L -p 0x3
> > > > >
> > > > >
> > > > >
> > > > > We tried adding logs with --log-level=pmd,debug and
> --no-lsc-interrupt, but
> > > > that didn't reveal anything helpful, as far as we can tell - please
> have a look at
> > > > the attached log. The faulty port is port0 (starts out as down, then
> we waited
> > > > for around 25 minutes for it to go up and then we shut down testpmd).
> > > > >
> > > > >
> > > > >
> > > > > We'd like to ask for pointers on what could be the cause or how to
> debug
> > > > this issue further.
> > > > >
> > > > >
> > > > >
> > > > > Thanks,
> > > > > Juraj
>

[-- Attachment #2: Type: text/html, Size: 12557 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread

* RE: Testpmd/l3fwd port shutdown failure on Arm Altra systems
  2023-04-03  9:27           ` Juraj Linkeš
@ 2023-04-04  1:46             ` Yang, Qiming
  2023-04-04  1:52               ` Lijian Zhang
  0 siblings, 1 reply; 9+ messages in thread
From: Yang, Qiming @ 2023-04-04  1:46 UTC (permalink / raw)
  To: Juraj Linkeš, Xing, Beilei
  Cc: Singh, Aman Deep, Zhang, Yuying, dev, Ruifeng Wang, Zhang,
	Lijian, Honnappa Nagarahalli

[-- Attachment #1: Type: text/plain, Size: 9969 bytes --]

Hi, Juraj
I don’t know VPP. Can I narrow down your question? Do you mean that you run testpmd and l3fwd with these commands on an Arm system and they crash?
> sudo build/app/dpdk-testpmd -v -l 1,2 -a 0004:04:00.1 -a 0004:04:00.0
> > > > --in-memory -- -i --forward-mode=io --burst=64 --txq=1 --rxq=1
> > > > --tx-offloads=0x0 --numa --auto-start --total-num-mbufs=32768
> > > > --nb-ports=2 --portmask=0x3 --max-pkt-len=1518 --mbuf-size=16384
> > > > --nb-cores=1
> > > >
> > > >
> > > >
> > > > And l3fwd (with different macs on the other server):
> > > >
> > > > sudo /tmp/openvpp-testing/dpdk/build/examples/dpdk-l3fwd -v -l 1,2 -a
> > > > 0004:04:00.0 -a 0004:04:00.1 --in-memory -- --parse-ptype
> > > > --eth-dest="0,40:a6:b7:85:e7:79" --eth-dest="1,3c:fd:fe:c3:e7:a1"
> > > > --config="(0, 0, 2),(1, 0, 2)" -P -L -p 0x3

Qiming

From: Juraj Linkeš <juraj.linkes@pantheon.tech>
Sent: Monday, April 3, 2023 5:27 PM
To: Xing, Beilei <beilei.xing@intel.com>
Cc: Singh, Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>; Yang, Qiming <qiming.yang@intel.com>; dev@dpdk.org; Ruifeng Wang <Ruifeng.Wang@arm.com>; Zhang, Lijian <Lijian.Zhang@arm.com>; Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
Subject: Re: Testpmd/l3fwd port shutdown failure on Arm Altra systems

Hello Qiming, Beilei,

Could you please help us debug this issue? Anything that would help with getting to the bottom of anything that could go wrong during port init/cleanup would be appreciated - extra eal/testpmd options or even code changes (such as where we could add extra debug messages).

Thanks,
Juraj

On Wed, Mar 8, 2023 at 7:25 AM Juraj Linkeš <juraj.linkes@pantheon.tech<mailto:juraj.linkes@pantheon.tech>> wrote:
Hello Qiming, Beilei,

Another reminder - are you looking at this by any chance?

The high level short description is that testpmd/l3fwd breaks a link
between two servers while VPP (using DPDK) doesn't. This leads us to
believe there's a problem with testpmd/l3fwd/i40e driver in DPDK.

Thanks,
Juraj

On Tue, Feb 21, 2023 at 12:18 PM Juraj Linkeš
<juraj.linkes@pantheon.tech<mailto:juraj.linkes@pantheon.tech>> wrote:
>
> Hi Qiming,
>
> Just a friendly reminder, would you please take a look?
>
> Thanks,
> Juraj
>
>
> On Tue, Feb 7, 2023 at 3:10 AM Xing, Beilei <beilei.xing@intel.com<mailto:beilei.xing@intel.com>> wrote:
> >
> > Hi Qiming,
> >
> > Could you please help on this? Thanks.
> >
> > BR,
> > Beilei
> >
> > > -----Original Message-----
> > > From: Juraj Linkeš <juraj.linkes@pantheon.tech<mailto:juraj.linkes@pantheon.tech>>
> > > Sent: Monday, February 6, 2023 4:53 PM
> > > To: Singh, Aman Deep <aman.deep.singh@intel.com<mailto:aman.deep.singh@intel.com>>; Zhang, Yuying
> > > <yuying.zhang@intel.com<mailto:yuying.zhang@intel.com>>; Xing, Beilei <beilei.xing@intel.com<mailto:beilei.xing@intel.com>>
> > > Cc: dev@dpdk.org<mailto:dev@dpdk.org>; Ruifeng Wang <Ruifeng.Wang@arm.com<mailto:Ruifeng.Wang@arm.com>>; Zhang, Lijian
> > > <Lijian.Zhang@arm.com<mailto:Lijian.Zhang@arm.com>>; Honnappa Nagarahalli
> > > <Honnappa.Nagarahalli@arm.com<mailto:Honnappa.Nagarahalli@arm.com>>
> > > Subject: Re: Testpmd/l3fwd port shutdown failure on Arm Altra systems
> > >
> > > Hello i40e and testpmd maintainers,
> > >
> > > A gentle reminder - would you please advise how to debug the issue described
> > > below?
> > >
> > > Thanks,
> > > Juraj
> > >
> > > On Fri, Jan 20, 2023 at 1:07 PM Juraj Linkeš <juraj.linkes@pantheon.tech<mailto:juraj.linkes@pantheon.tech>>
> > > wrote:
> > > >
> > > > Adding the logfile.
> > > >
> > > >
> > > >
> > > > One thing that's in the logs but that I didn't explicitly mention is the DPDK version
> > > we've tried this with:
> > > >
> > > > EAL: RTE Version: 'DPDK 22.07.0'
> > > >
> > > >
> > > >
> > > > We also tried earlier versions going back to 21.08, with no luck. I also did a
> > > quick check on 22.11, also with no luck.
> > > >
> > > >
> > > >
> > > > Juraj
> > > >
> > > >
> > > >
> > > > From: Juraj Linkeš
> > > > Sent: Friday, January 20, 2023 12:56 PM
> > > > To: 'aman.deep.singh@intel.com<mailto:aman.deep.singh@intel.com>' <aman.deep.singh@intel.com<mailto:aman.deep.singh@intel.com>>;
> > > > 'yuying.zhang@intel.com<mailto:yuying.zhang@intel.com>' <yuying.zhang@intel.com<mailto:yuying.zhang@intel.com>>; Xing, Beilei
> > > > <beilei.xing@intel.com<mailto:beilei.xing@intel.com>>
> > > > Cc: dev@dpdk.org<mailto:dev@dpdk.org>; Ruifeng Wang <Ruifeng.Wang@arm.com<mailto:Ruifeng.Wang@arm.com>>; 'Lijian Zhang'
> > > > <Lijian.Zhang@arm.com<mailto:Lijian.Zhang@arm.com>>; 'Honnappa Nagarahalli'
> > > > <Honnappa.Nagarahalli@arm.com<mailto:Honnappa.Nagarahalli@arm.com>>
> > > > Subject: Testpmd/l3fwd port shutdown failure on Arm Altra systems
> > > >
> > > >
> > > >
> > > > Hello i40e and testpmd maintainers,
> > > >
> > > >
> > > >
> > > > We're hitting an issue with DPDK testpmd on Ampere Altra servers in FD.io
> > > lab.
> > > >
> > > >
> > > >
> > > > A bit of background: along with VPP performance tests (which uses DPDK),
> > > we're running a small number of basic DPDK testpmd and l3fwd tests in FD.io
> > > as well. This is to catch any performance differences due to VPP updating its
> > > DPDK version.
> > > >
> > > >
> > > >
> > > > We're running both l3fwd tests and testpmd tests. The Altra servers are two
> > > socket and the topology is TG -> DUT1 -> DUT2 -> TG, traffic flows in both
> > > directions, but nothing gets forwarded (with a slight caveat - put a pin in this).
> > > There's nothing special in the tests, just forwarding traffic. The NIC we're
> > > testing is xl710-QDA2.
> > > >
> > > >
> > > >
> > > > The same tests are passing on all other testbeds - we have various two node
> > > (1 DUT, 1 TG) and three node (2 DUT, 1 TG) Intel and Arm testbeds and with
> > > various NICs (Intel 700 and 800 series and the Intel testbeds use some
> > > Mellanox NICs as well). We don't have quite the same combination of another
> > > three node topology with the same NIC though, so it looks like something with
> > > testpmd/l3fwd and xl710-QDA2 on Altra servers.
> > > >
> > > >
> > > >
> > > > VPP performance tests are passing, but l3fwd and testpmd fail. This leads us
> > > to believe it's a software issue, but there could be something wrong with the
> > > hardware. I'll talk about testpmd from now on, but as far as we can tell, the
> > > behavior is the same for testpmd and l3fwd.
> > > >
> > > >
> > > >
> > > > Getting back to the caveat mentioned earlier, there seems to be something
> > > wrong with port shutdown. When running testpmd on a testbed that hasn't
> > > been used for a while it seems that all ports are up right away (we don't see
> > > any "Port 0|1: link state change event") and the setup works fine (forwarding
> > > works). After restarting testpmd (restarting on one server is sufficient), the
> > > ports between DUT1 and DUT2 (but not between DUTs and TG) go down and
> > > are not usable in DPDK, VPP or in Linux (with i40e kernel driver) for a while
> > > (measured in minutes, sometimes dozens of minutes; the duration is seemingly
> > > random). The ports eventually recover and can be used again, but there's
> > > nothing in syslog suggesting what happened.
> > > >
> > > >
> > > >
> > > > What seems to be happening is that testpmd puts the ports into some faulty state.
> > > This only happens on the DUT1 -> DUT2 link though (the ports between the
> > > two testpmds), not on TG -> DUT1 link (the TG port is left alone).
> > > >
> > > >
> > > >
> > > > Some more info:
> > > >
> > > > We've come across the issue with this configuration:
> > > >
> > > > OS: Ubuntu20.04 with kernel 5.4.0-65-generic.
> > > >
> > > > Old NIC firmware, never upgraded: 6.01 0x800035da 1.1747.0.
> > > >
> > > > Drivers versions: i40e 2.17.15 and iavf 4.3.19.
> > > >
> > > >
> > > >
> > > > As well as with this configuration:
> > > >
> > > > OS: Ubuntu22.04 with kernel 5.15.0-46-generic.
> > > >
> > > > Updated firmware: 8.30 0x8000a4ae 1.2926.0.
> > > >
> > > > Drivers: i40e 2.19.3 and iavf 4.5.3.
> > > >
> > > >
> > > >
> > > > Unsafe noiommu mode is disabled:
> > > >
> > > > cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
> > > >
> > > > N
> > > >
> > > >
> > > >
> > > > We used DPDK 22.07 in manual testing and built it on DUTs, using generic
> > > build:
> > > >
> > > > meson -Dexamples=l3fwd -Dc_args=-DRTE_LIBRTE_I40E_16BYTE_RX_DESC=y
> > > > -Dplatform=generic build
> > > >
> > > >
> > > >
> > > > We're running testpmd with this command:
> > > >
> > > > sudo build/app/dpdk-testpmd -v -l 1,2 -a 0004:04:00.1 -a 0004:04:00.0
> > > > --in-memory -- -i --forward-mode=io --burst=64 --txq=1 --rxq=1
> > > > --tx-offloads=0x0 --numa --auto-start --total-num-mbufs=32768
> > > > --nb-ports=2 --portmask=0x3 --max-pkt-len=1518 --mbuf-size=16384
> > > > --nb-cores=1
> > > >
> > > >
> > > >
> > > > And l3fwd (with different macs on the other server):
> > > >
> > > > sudo /tmp/openvpp-testing/dpdk/build/examples/dpdk-l3fwd -v -l 1,2 -a
> > > > 0004:04:00.0 -a 0004:04:00.1 --in-memory -- --parse-ptype
> > > > --eth-dest="0,40:a6:b7:85:e7:79" --eth-dest="1,3c:fd:fe:c3:e7:a1"
> > > > --config="(0, 0, 2),(1, 0, 2)" -P -L -p 0x3
> > > >
> > > >
> > > >
> > > > We tried adding logs with --log-level=pmd,debug and --no-lsc-interrupt, but
> > > that didn't reveal anything helpful, as far as we can tell - please have a look at
> > > the attached log. The faulty port is port0 (starts out as down, then we waited
> > > for around 25 minutes for it to go up and then we shut down testpmd).
> > > >
> > > >
> > > >
> > > > We'd like to ask for pointers on what could be the cause or how to debug
> > > this issue further.
> > > >
> > > >
> > > >
> > > > Thanks,
> > > > Juraj

[-- Attachment #2: Type: text/html, Size: 16772 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread

* RE: Testpmd/l3fwd port shutdown failure on Arm Altra systems
  2023-04-04  1:46             ` Yang, Qiming
@ 2023-04-04  1:52               ` Lijian Zhang
  0 siblings, 0 replies; 9+ messages in thread
From: Lijian Zhang @ 2023-04-04  1:52 UTC (permalink / raw)
  To: Yang, Qiming, Juraj Linkeš, Xing, Beilei
  Cc: Singh, Aman Deep, Zhang, Yuying, dev, Ruifeng Wang, Honnappa Nagarahalli

[-- Attachment #1: Type: text/plain, Size: 11290 bytes --]

Hi Qiming,
It’s not an issue in VPP. It’s an XL710 NIC link-down issue in DPDK testpmd.
The Ethernet links between two XL710 NICs occasionally go down in the FD.io VPP lab.

If possible, could we have a talk on this issue this afternoon?

Thanks.
From: Yang, Qiming <qiming.yang@intel.com>
Sent: Tuesday, April 4, 2023 9:47 AM
To: Juraj Linkeš <juraj.linkes@pantheon.tech>; Xing, Beilei <beilei.xing@intel.com>
Cc: Singh, Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>; dev@dpdk.org; Ruifeng Wang <Ruifeng.Wang@arm.com>; Lijian Zhang <Lijian.Zhang@arm.com>; Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
Subject: RE: Testpmd/l3fwd port shutdown failure on Arm Altra systems

Hi, Juraj
I don’t know VPP. Can I narrow down your question? Do you mean that you run testpmd and l3fwd with these commands on an Arm system and they crash?
> sudo build/app/dpdk-testpmd -v -l 1,2 -a 0004:04:00.1 -a 0004:04:00.0
> > > > --in-memory -- -i --forward-mode=io --burst=64 --txq=1 --rxq=1
> > > > --tx-offloads=0x0 --numa --auto-start --total-num-mbufs=32768
> > > > --nb-ports=2 --portmask=0x3 --max-pkt-len=1518 --mbuf-size=16384
> > > > --nb-cores=1
> > > >
> > > >
> > > >
> > > > And l3fwd (with different macs on the other server):
> > > >
> > > > sudo /tmp/openvpp-testing/dpdk/build/examples/dpdk-l3fwd -v -l 1,2 -a
> > > > 0004:04:00.0 -a 0004:04:00.1 --in-memory -- --parse-ptype
> > > > --eth-dest="0,40:a6:b7:85:e7:79" --eth-dest="1,3c:fd:fe:c3:e7:a1"
> > > > --config="(0, 0, 2),(1, 0, 2)" -P -L -p 0x3
Qiming

From: Juraj Linkeš <juraj.linkes@pantheon.tech>
Sent: Monday, April 3, 2023 5:27 PM
To: Xing, Beilei <beilei.xing@intel.com>
Cc: Singh, Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>; Yang, Qiming <qiming.yang@intel.com>; dev@dpdk.org; Ruifeng Wang <Ruifeng.Wang@arm.com>; Zhang, Lijian <Lijian.Zhang@arm.com>; Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
Subject: Re: Testpmd/l3fwd port shutdown failure on Arm Altra systems

Hello Qiming, Beilei,

Could you please help us debug this issue? Anything that would help us get to the bottom of whatever goes wrong during port init/cleanup would be appreciated - extra eal/testpmd options or even code changes (such as where we could add extra debug messages).

Thanks,
Juraj

On Wed, Mar 8, 2023 at 7:25 AM Juraj Linkeš <juraj.linkes@pantheon.tech> wrote:
Hello Qiming, Beilei,

Another reminder - are you looking at this by any chance?

The high level short description is that testpmd/l3fwd breaks a link
between two servers while VPP (using DPDK) doesn't. This leads us to
believe there's a problem with testpmd/l3fwd or the i40e driver in DPDK.

Thanks,
Juraj

On Tue, Feb 21, 2023 at 12:18 PM Juraj Linkeš
<juraj.linkes@pantheon.tech> wrote:
>
> Hi Qiming,
>
> Just a friendly reminder, would you please take a look?
>
> Thanks,
> Juraj
>
>
> On Tue, Feb 7, 2023 at 3:10 AM Xing, Beilei <beilei.xing@intel.com> wrote:
> >
> > Hi Qiming,
> >
> > Could you please help on this? Thanks.
> >
> > BR,
> > Beilei
> >
> > > -----Original Message-----
> > > From: Juraj Linkeš <juraj.linkes@pantheon.tech>
> > > Sent: Monday, February 6, 2023 4:53 PM
> > > To: Singh, Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
> > > <yuying.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> > > Cc: dev@dpdk.org; Ruifeng Wang <Ruifeng.Wang@arm.com>; Zhang, Lijian
> > > <Lijian.Zhang@arm.com>; Honnappa Nagarahalli
> > > <Honnappa.Nagarahalli@arm.com>
> > > Subject: Re: Testpmd/l3fwd port shutdown failure on Arm Altra systems
> > >
> > > Hello i40e and testpmd maintainers,
> > >
> > > A gentle reminder - would you please advise how to debug the issue described
> > > below?
> > >
> > > Thanks,
> > > Juraj
> > >
> > > On Fri, Jan 20, 2023 at 1:07 PM Juraj Linkeš <juraj.linkes@pantheon.tech>
> > > wrote:
> > > >
> > > > Adding the logfile.
> > > >
> > > >
> > > >
> > > > One thing that's in the logs but didn't explicitly mention is the DPDK version
> > > we've tried this with:
> > > >
> > > > EAL: RTE Version: 'DPDK 22.07.0'
> > > >
> > > >
> > > >
> > > > We also tried earlier versions going back to 21.08, with no luck. I also did a
> > > quick check on 22.11, also with no luck.
> > > >
> > > >
> > > >
> > > > Juraj
> > > >
> > > >
> > > >
> > > > From: Juraj Linkeš
> > > > Sent: Friday, January 20, 2023 12:56 PM
> > > > To: 'aman.deep.singh@intel.com' <aman.deep.singh@intel.com>;
> > > > 'yuying.zhang@intel.com' <yuying.zhang@intel.com>; Xing, Beilei
> > > > <beilei.xing@intel.com>
> > > > Cc: dev@dpdk.org; Ruifeng Wang <Ruifeng.Wang@arm.com>; 'Lijian Zhang'
> > > > <Lijian.Zhang@arm.com>; 'Honnappa Nagarahalli'
> > > > <Honnappa.Nagarahalli@arm.com>
> > > > Subject: Testpmd/l3fwd port shutdown failure on Arm Altra systems
> > > >
> > > >
> > > >
> > > > Hello i40e and testpmd maintainers,
> > > >
> > > >
> > > >
> > > > We're hitting an issue with DPDK testpmd on Ampere Altra servers in FD.io
> > > lab.
> > > >
> > > >
> > > >
> > > > A bit of background: along with VPP performance tests (which uses DPDK),
> > > we're running a small number of basic DPDK testpmd and l3fwd tests in FD.io
> > > as well. This is to catch any performance differences due to VPP updating its
> > > DPDK version.
> > > >
> > > >
> > > >
> > > > We're running both l3fwd tests and testpmd tests. The Altra servers are two
> > > socket and the topology is TG -> DUT1 -> DUT2 -> TG, traffic flows in both
> > > directions, but nothing gets forwarded (with a slight caveat - put a pin in this).
> > > There's nothing special in the tests, just forwarding traffic. The NIC we're
> > > testing is xl710-QDA2.
> > > >
> > > >
> > > >
> > > > The same tests are passing on all other testbeds - we have various two node
> > > (1 DUT, 1 TG) and three node (2 DUT, 1 TG) Intel and Arm testbeds and with
> > > various NICs (Intel 700 and 800 series and the Intel testbeds use some
> > > Mellanox NICs as well). We don't have quite the same combination of another
> > > three node topology with the same NIC though, so it looks like something with
> > > testpmd/l3fwd and xl710-QDA2 on Altra servers.
> > > >
> > > >
> > > >
> > > > VPP performance tests are passing, but l3fwd and testpmd fail. This leads us
> > > to believe it's a software issue, but there could be something wrong with the
> > > hardware. I'll talk about testpmd from now on, but as far as we can tell, the
> > > behavior is the same for testpmd and l3fwd.
> > > >
> > > >
> > > >
> > > > Getting back to the caveat mentioned earlier, there seems to be something
> > > wrong with port shutdown. When running testpmd on a testbed that hasn't
> > > been used for a while it seems that all ports are up right away (we don't see
> > > any "Port 0|1: link state change event") and the setup works fine (forwarding
> > > works). After restarting testpmd (restarting on one server is sufficient), the
> > > ports between DUT1 and DUT2 (but not between DUTs and TG) go down and
> > > are not usable in DPDK, VPP or in Linux (with i40e kernel driver) for a while
> > > (measured in minutes, sometimes dozens of minutes; the duration is seemingly
> > > random). The ports eventually recover and can be used again, but there's
> > > nothing in syslog suggesting what happened.
> > > >
> > > >
> > > >
> > > > What seems to be happening is that testpmd puts the ports into some faulty state.
> > > This only happens on the DUT1 -> DUT2 link though (the ports between the
> > > two testpmds), not on TG -> DUT1 link (the TG port is left alone).
> > > >
> > > >
> > > >
> > > > Some more info:
> > > >
> > > > We've come across the issue with this configuration:
> > > >
> > > > OS: Ubuntu20.04 with kernel 5.4.0-65-generic.
> > > >
> > > > Old NIC firmware, never upgraded: 6.01 0x800035da 1.1747.0.
> > > >
> > > > Drivers versions: i40e 2.17.15 and iavf 4.3.19.
> > > >
> > > >
> > > >
> > > > As well as with this configuration:
> > > >
> > > > OS: Ubuntu22.04 with kernel 5.15.0-46-generic.
> > > >
> > > > Updated firmware: 8.30 0x8000a4ae 1.2926.0.
> > > >
> > > > Drivers: i40e 2.19.3 and iavf 4.5.3.
> > > >
> > > >
> > > >
> > > > Unsafe noiommu mode is disabled:
> > > >
> > > > cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
> > > >
> > > > N
> > > >
> > > >
> > > >
> > > > We used DPDK 22.07 in manual testing and built it on DUTs, using generic
> > > build:
> > > >
> > > > meson -Dexamples=l3fwd -Dc_args=-DRTE_LIBRTE_I40E_16BYTE_RX_DESC=y
> > > > -Dplatform=generic build
> > > >
> > > >
> > > >
> > > > We're running testpmd with this command:
> > > >
> > > > sudo build/app/dpdk-testpmd -v -l 1,2 -a 0004:04:00.1 -a 0004:04:00.0
> > > > --in-memory -- -i --forward-mode=io --burst=64 --txq=1 --rxq=1
> > > > --tx-offloads=0x0 --numa --auto-start --total-num-mbufs=32768
> > > > --nb-ports=2 --portmask=0x3 --max-pkt-len=1518 --mbuf-size=16384
> > > > --nb-cores=1
> > > >
> > > >
> > > >
> > > > And l3fwd (with different macs on the other server):
> > > >
> > > > sudo /tmp/openvpp-testing/dpdk/build/examples/dpdk-l3fwd -v -l 1,2 -a
> > > > 0004:04:00.0 -a 0004:04:00.1 --in-memory -- --parse-ptype
> > > > --eth-dest="0,40:a6:b7:85:e7:79" --eth-dest="1,3c:fd:fe:c3:e7:a1"
> > > > --config="(0, 0, 2),(1, 0, 2)" -P -L -p 0x3
> > > >
> > > >
> > > >
> > > > We tried adding logs with  --log-level=pmd,debug and --no-lsc-interrupt, but
> > > that didn't reveal anything helpful, as far as we can tell - please have a look at
> > > the attached log. The faulty port is port0 (starts out as down, then we waited
> > > for around 25 minutes for it to go up and then we shut down testpmd).
> > > >
> > > >
> > > >
> > > > We'd like to ask for pointers on what could be the cause or how to debug
> > > this issue further.
> > > >
> > > >
> > > >
> > > > Thanks,
> > > > Juraj

[-- Attachment #2: Type: text/html, Size: 18973 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Testpmd/l3fwd port shutdown failure on Arm Altra systems
@ 2023-01-20 11:56 Juraj Linkeš
  0 siblings, 0 replies; 9+ messages in thread
From: Juraj Linkeš @ 2023-01-20 11:56 UTC (permalink / raw)
  To: aman.deep.singh, yuying.zhang, Xing, Beilei
  Cc: dev, Ruifeng Wang, Lijian Zhang, Honnappa Nagarahalli

[-- Attachment #1: Type: text/plain, Size: 4062 bytes --]

Hello i40e and testpmd maintainers,

We're hitting an issue with DPDK testpmd on Ampere Altra servers in FD.io lab.

A bit of background: along with VPP performance tests (which uses DPDK), we're running a small number of basic DPDK testpmd and l3fwd tests in FD.io as well. This is to catch any performance differences due to VPP updating its DPDK version.

We're running both l3fwd tests and testpmd tests. The Altra servers are two-socket and the topology is TG -> DUT1 -> DUT2 -> TG; traffic flows in both directions, but nothing gets forwarded (with a slight caveat - put a pin in this). There's nothing special in the tests, just forwarding traffic. The NIC we're testing is the XL710-QDA2.

The same tests are passing on all other testbeds - we have various two node (1 DUT, 1 TG) and three node (2 DUT, 1 TG) Intel and Arm testbeds and with various NICs (Intel 700 and 800 series and the Intel testbeds use some Mellanox NICs as well). We don't have quite the same combination of another three node topology with the same NIC though, so it looks like something with testpmd/l3fwd and xl710-QDA2 on Altra servers.

VPP performance tests are passing, but l3fwd and testpmd fail. This leads us to believe it's a software issue, but there could be something wrong with the hardware. I'll talk about testpmd from now on, but as far as we can tell, the behavior is the same for testpmd and l3fwd.

Getting back to the caveat mentioned earlier, there seems to be something wrong with port shutdown. When running testpmd on a testbed that hasn't been used for a while, all ports seem to be up right away (we don't see any "Port 0|1: link state change event") and the setup works fine (forwarding works). After restarting testpmd (restarting on one server is sufficient), the ports between DUT1 and DUT2 (but not between the DUTs and the TG) go down and are not usable in DPDK, VPP, or Linux (with the i40e kernel driver) for a while (measured in minutes, sometimes dozens of minutes; the duration is seemingly random). The ports eventually recover and can be used again, but there's nothing in syslog suggesting what happened.
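
To put a number on how long the ports stay unusable, a simple watch loop on the kernel side can timestamp the recovery. This is just a sketch; the interface name is a placeholder for whichever port testpmd left in the faulty state (check `ip link` after rebinding to the i40e kernel driver):

```shell
# Poll the link state once a second and log the moment it comes back up.
# IFACE is an assumption - substitute the actual interface name.
IFACE=enP4p4s0f0
echo "$(date -Is) waiting for ${IFACE} to recover"
while ! ip link show "${IFACE}" | grep -q 'state UP'; do
    sleep 1
done
echo "$(date -Is) ${IFACE} is up again"
ethtool "${IFACE}" | grep -E 'Speed|Link detected'
```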

What seems to be happening is that testpmd puts the ports into some faulty state. This only happens on the DUT1 -> DUT2 link though (the ports between the two testpmds), not on the TG -> DUT1 link (the TG port is left alone).

Some more info:
We've come across the issue with this configuration:
OS: Ubuntu20.04 with kernel 5.4.0-65-generic.
Old NIC firmware, never upgraded: 6.01 0x800035da 1.1747.0.
Drivers versions: i40e 2.17.15 and iavf 4.3.19.

As well as with this configuration:
OS: Ubuntu22.04 with kernel 5.15.0-46-generic.
Updated firmware: 8.30 0x8000a4ae 1.2926.0.
Drivers: i40e 2.19.3 and iavf 4.5.3.

Unsafe noiommu mode is disabled:
cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
N
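
For completeness, the rest of the VFIO/IOMMU setup can be cross-checked roughly like this (a sketch; the PCI addresses are the ones from our testpmd command line below):

```shell
# Confirm the ports are bound to vfio-pci and sit in real IOMMU groups.
dpdk-devbind.py --status-dev net
for dev in 0004:04:00.0 0004:04:00.1; do
    group=$(basename "$(readlink "/sys/bus/pci/devices/${dev}/iommu_group")")
    echo "${dev} -> IOMMU group ${group}"
done
```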

We used DPDK 22.07 in manual testing and built it on DUTs, using generic build:
meson -Dexamples=l3fwd -Dc_args=-DRTE_LIBRTE_I40E_16BYTE_RX_DESC=y -Dplatform=generic build
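
(On newer DPDK releases the bare `meson` invocation is deprecated in favor of `meson setup`; the equivalent build, including the compile step, would be:)

```shell
# Same options as above, spelled with the non-deprecated "meson setup",
# followed by the ninja compile step.
meson setup -Dexamples=l3fwd -Dc_args=-DRTE_LIBRTE_I40E_16BYTE_RX_DESC=y \
    -Dplatform=generic build
ninja -C build
```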

We're running testpmd with this command:
sudo build/app/dpdk-testpmd -v -l 1,2 -a 0004:04:00.1 -a 0004:04:00.0 --in-memory -- -i --forward-mode=io --burst=64 --txq=1 --rxq=1 --tx-offloads=0x0 --numa --auto-start --total-num-mbufs=32768 --nb-ports=2 --portmask=0x3 --max-pkt-len=1518 --mbuf-size=16384 --nb-cores=1
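
Since testpmd is started with -i, the faulty port can also be poked at from the interactive prompt before quitting. These are standard testpmd commands (port 0 is the one that stays down for us):

```shell
testpmd> show port info 0        # link status, speed, driver in use
testpmd> show port stats 0       # RX/TX counters - confirms nothing is forwarded
testpmd> port stop 0
testpmd> port start 0            # does a stop/start cycle bring the link back?
testpmd> set link-up port 0      # explicit link-up request to the PMD
```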

And l3fwd (with different macs on the other server):
sudo /tmp/openvpp-testing/dpdk/build/examples/dpdk-l3fwd -v -l 1,2 -a 0004:04:00.0 -a 0004:04:00.1 --in-memory -- --parse-ptype --eth-dest="0,40:a6:b7:85:e7:79" --eth-dest="1,3c:fd:fe:c3:e7:a1" --config="(0, 0, 2),(1, 0, 2)" -P -L -p 0x3

We tried adding logs with --log-level=pmd,debug and --no-lsc-interrupt, but that didn't reveal anything helpful, as far as we can tell - please have a look at the attached log. The faulty port is port 0 (it starts out as down; we then waited around 25 minutes for it to come up and then shut down testpmd).
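
For anyone reproducing the log collection, this is roughly what we'd run to get finer-grained i40e driver logs plus the kernel-side view. The log-level glob is an assumption based on DPDK's dynamic log type naming (`pmd.net.i40e.*`); adjust it if your build names the types differently:

```shell
# Per-driver debug logs instead of the blanket pmd,debug
# ("..." stands for the rest of the testpmd command line shown above):
sudo build/app/dpdk-testpmd -v -l 1,2 -a 0004:04:00.1 -a 0004:04:00.0 \
    --in-memory --log-level='pmd.net.i40e*,debug' -- -i --no-lsc-interrupt ...

# Kernel-side view of the same ports while they are stuck:
dmesg --ctime | grep -i -E 'i40e|0004:04:00'
sudo lspci -vvv -s 0004:04:00.0 | grep -E 'LnkSta|Status'
```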

We'd like to ask for pointers on what could be the cause or how to debug this issue further.

Thanks,
Juraj

[-- Attachment #2: Type: text/html, Size: 9552 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2023-04-12  6:57 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <AQHlNO1vQvNhBNQAf+dkWoX3B17UKg==>
2023-01-20 12:07 ` Testpmd/l3fwd port shutdown failure on Arm Altra systems Juraj Linkeš
2023-02-06  8:52   ` Juraj Linkeš
2023-02-07  2:09     ` Xing, Beilei
2023-02-21 11:18       ` Juraj Linkeš
2023-03-08  6:25         ` Juraj Linkeš
2023-04-03  9:27           ` Juraj Linkeš
2023-04-04  1:46             ` Yang, Qiming
2023-04-04  1:52               ` Lijian Zhang
2023-01-20 11:56 Juraj Linkeš

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).