Hi:
  
 I have a question about DPDK bonding. My OpenStack VM uses 4 VFs provided via SR-IOV, and I cannot start dpdk-testpmd with DPDK bonding mode 1. The error is "i40evf_dev_tx_queue_start(): Failed to switch TX queue 0 on".

 My host environment is:
 Two Intel X710 NICs, each split into 16 VFs. My VM uses 4 of these VFs, as shown below:
 [root@overcloud-computelowconfig-0 ~]# ip link show ens2f0
 8: ens2f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 6c:fe:54:01:12:30 brd ff:ff:ff:ff:ff:ff
    vf 0     link/ether fa:16:3e:04:1d:fd brd ff:ff:ff:ff:ff:ff, vlan 362, spoof checking off, link-state enable, trust off
    ......
    vf 14     link/ether fa:16:3e:96:9f:4a brd ff:ff:ff:ff:ff:ff, vlan 361, spoof checking off, link-state enable, trust off
[root@overcloud-computelowconfig-0 ~]# ip link show ens2f1
9: ens2f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    vf 12     link/ether fa:16:3e:bc:9a:2f brd ff:ff:ff:ff:ff:ff, vlan 362, spoof checking off, link-state enable, trust off
    ......
    vf 14     link/ether fa:16:3e:63:4b:e9 brd ff:ff:ff:ff:ff:ff, vlan 361, spoof checking off, link-state enable, trust off
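For reference, a rough sketch of how the 16 VFs per PF are created on the host (the per-VF VLAN/spoof-check settings end up as in the listing above; in my deployment OpenStack applies them):

echo 16 > /sys/class/net/ens2f0/device/sriov_numvfs
echo 16 > /sys/class/net/ens2f1/device/sriov_numvfs
ip link set ens2f0 vf 0 vlan 362 spoofchk off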

My VM environment is:
DPDK version: dpdk-21.02.
[root@sriovtest-upf-42-gsu-0 app]# dpdk_nic_bind.py --status

Network devices using DPDK-compatible driver
============================================
0000:00:09.0 'XL710/X710 Virtual Function' drv=igb_uio unused=i40evf
0000:00:0a.0 'XL710/X710 Virtual Function' drv=igb_uio unused=i40evf
0000:00:0b.0 'XL710/X710 Virtual Function' drv=igb_uio unused=i40evf
0000:00:0c.0 'XL710/X710 Virtual Function' drv=igb_uio unused=i40evf
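(The four VFs were bound to igb_uio with the same script, roughly:

./dpdk_nic_bind.py --bind=igb_uio 0000:00:09.0 0000:00:0a.0 0000:00:0b.0 0000:00:0c.0
)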

My error is:
[root@sriovtest-upf-42-gsu-0 app]# ./dpdk-testpmd -l 1-4 -n 4 --vdev 'net_bonding0,mode=1,slave=0000:00:0a.0,slave=0000:00:0c.0,primary=0000:00:0a.0' -- --port-topology=chained
EAL: Detected 16 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available 1048576 kB hugepages reported
EAL: Probing VFIO support...
EAL:   Invalid NUMA socket, default to 0
EAL: Probe PCI driver: net_virtio (1af4:1000) device: 0000:00:03.0 (socket 0)
eth_virtio_pci_init(): Failed to init PCI device

EAL: Requested device 0000:00:03.0 cannot be used
EAL:   Invalid NUMA socket, default to 0
EAL: Probe PCI driver: net_virtio (1af4:1000) device: 0000:00:04.0 (socket 0)
eth_virtio_pci_init(): Failed to init PCI device

EAL: Requested device 0000:00:04.0 cannot be used
EAL:   Invalid NUMA socket, default to 0
EAL: Probe PCI driver: net_virtio (1af4:1000) device: 0000:00:05.0 (socket 0)
eth_virtio_pci_init(): Failed to init PCI device

EAL: Requested device 0000:00:05.0 cannot be used
EAL:   Invalid NUMA socket, default to 0
EAL: Probe PCI driver: net_virtio (1af4:1000) device: 0000:00:06.0 (socket 0)
eth_virtio_pci_init(): Failed to init PCI device

EAL: Requested device 0000:00:06.0 cannot be used
EAL:   Invalid NUMA socket, default to 0
EAL: Probe PCI driver: net_i40e_vf (8086:154c) device: 0000:00:09.0 (socket 0)
EAL:   Invalid NUMA socket, default to 0
EAL: Probe PCI driver: net_i40e_vf (8086:154c) device: 0000:00:0a.0 (socket 0)
EAL:   Invalid NUMA socket, default to 0
EAL: Probe PCI driver: net_i40e_vf (8086:154c) device: 0000:00:0b.0 (socket 0)
EAL:   Invalid NUMA socket, default to 0
EAL: Probe PCI driver: net_i40e_vf (8086:154c) device: 0000:00:0c.0 (socket 0)
EAL: No legacy callbacks, legacy socket not created
testpmd: create a new mbuf pool <mb_pool_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Port 3: link state change event
Configuring Port 0 (socket 0)

Port 0: link state change event

Port 0: link state change event
Port 0: FA:16:3E:96:9F:4A
Configuring Port 1 (socket 0)

Port 1: link state change event

Port 1: link state change event
Port 1: FA:16:3E:04:1D:FD
Configuring Port 2 (socket 0)

Port 2: link state change event

Port 2: link state change event
Port 2: FA:16:3E:63:4B:E9
Configuring Port 3 (socket 0)

Port 3: link state change event

Port 3: link state change event
Port 3: FA:16:3E:BC:9A:2F
Configuring Port 4 (socket 0)

Port 1: link state change event
_i40evf_execute_vf_cmd(): No response for 8
i40evf_switch_queue(): Fail to switch TX 0 on
i40evf_dev_tx_queue_start(): Failed to switch TX queue 0 on
i40evf_start_queues(): Fail to start queue 0
i40evf_dev_start(): enable queues failed
_i40evf_execute_vf_cmd(): No response for 11
i40evf_add_del_all_mac_addr(): fail to execute command OP_DEL_ETHER_ADDRESS
slave_configure(1829) - rte_eth_dev_start: port=1, err (-1)
bond_ethdev_start(2002) - bonded port (4) failed to reconfigure slave device (1)
Fail to start port 4
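
In case it is relevant, the equivalent interactive setup I would expect to try is roughly the following (a sketch; mode 1 = active-backup on socket 0, and the port IDs 0/1/2 assume only the two slave VFs are allowed on the EAL command line with -a):

./dpdk-testpmd -l 1-4 -n 4 -a 0000:00:0a.0 -a 0000:00:0c.0 -- -i
testpmd> create bonded device 1 0
testpmd> add bonding slave 0 2
testpmd> add bonding slave 1 2
testpmd> set bonding primary 0 2
testpmd> port start 2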

So, how can I fix this error?

I look forward to hearing from you soon.


zhanggongqin@bjcktech.com