DPDK patches and discussions
* [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
@ 2019-09-06 23:11 Mike DeVico
  2019-09-09 20:39 ` Thomas Monjalon
  0 siblings, 1 reply; 7+ messages in thread
From: Mike DeVico @ 2019-09-06 23:11 UTC (permalink / raw)
  To: dev

Hello,

I am having an issue getting the DCB feature to work with an Intel
X710 Quad SFP+ NIC.

Here’s my setup:

1.      DPDK 18.08 built with the following I40E configs:

CONFIG_RTE_LIBRTE_I40E_PMD=y
CONFIG_RTE_LIBRTE_I40E_DEBUG_RX=n
CONFIG_RTE_LIBRTE_I40E_DEBUG_TX=n
CONFIG_RTE_LIBRTE_I40E_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
CONFIG_RTE_LIBRTE_I40E_INC_VECTOR=y
CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF=64
CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=8

2.      /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net

Network devices using DPDK-compatible driver
============================================
0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e

       Network devices using kernel driver
       ===================================
       0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0 drv=igb unused=igb_uio *Active*
       0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1 drv=igb unused=igb_uio *Active*

       Other Network devices
       =====================
       <none>

3.      We have a custom FPGA board connected to port 1 of the X710 NIC that broadcasts
packets tagged with VLAN ID 1 and PCP 2.

4.      I use the vmdq_dcb example app and configure the card with 16 pools of 8 queues each,
as follows (a sketch of the resulting port configuration is shown just below):
       sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
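
For reference, the example builds its port configuration roughly as follows. This is
a minimal sketch assuming the DPDK 18.08 ethdev API, not verbatim source from
examples/vmdq_dcb/main.c:

#include <string.h>
#include <rte_ethdev.h>

/* Minimal sketch of the vmdq_dcb example's port configuration for
 * 16 pools x 8 TCs (DPDK 18.08-era API; not verbatim example source).
 * Each VLAN ID is mapped to the pool with the same index, and dcb_tc[]
 * maps each user priority (PCP) to a traffic class. */
static void
build_vmdq_dcb_conf(struct rte_eth_conf *eth_conf, uint16_t num_pools)
{
    struct rte_eth_vmdq_dcb_conf *conf;
    uint16_t i;

    memset(eth_conf, 0, sizeof(*eth_conf));
    eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB;

    conf = &eth_conf->rx_adv_conf.vmdq_dcb_conf;
    conf->nb_queue_pools = ETH_16_POOLS;
    conf->enable_default_pool = 0;
    conf->nb_pool_maps = num_pools;
    for (i = 0; i < num_pools; i++) {
        conf->pool_map[i].vlan_id = i;        /* VLAN i -> pool i */
        conf->pool_map[i].pools = 1ULL << i;
    }
    for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
        conf->dcb_tc[i] = i;                  /* PCP i -> TC i */
}

With 16 pools of 8 TCs each, the app owns 128 VMDq queues, which matches the
"Core 0(lcore 1) reading queues 64-191" line in the output below.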


The app starts up fine and successfully probes the card, as shown below:

sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
EAL: Detected 80 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Probing VFIO support...
EAL: PCI device 0000:02:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:02:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:3b:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:3b:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:3b:00.2 on NUMA socket 0
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:3b:00.3 on NUMA socket 0
EAL:   probe driver: 8086:1572 net_i40e
vmdq queue base: 64 pool base 1
Configured vmdq pool num: 16, each vmdq pool has 8 queues
Port 0 MAC: e8 ea 6a 27 b5 4d
Port 0 vmdq pool 0 set mac 52:54:00:12:00:00
Port 0 vmdq pool 1 set mac 52:54:00:12:00:01
Port 0 vmdq pool 2 set mac 52:54:00:12:00:02
Port 0 vmdq pool 3 set mac 52:54:00:12:00:03
Port 0 vmdq pool 4 set mac 52:54:00:12:00:04
Port 0 vmdq pool 5 set mac 52:54:00:12:00:05
Port 0 vmdq pool 6 set mac 52:54:00:12:00:06
Port 0 vmdq pool 7 set mac 52:54:00:12:00:07
Port 0 vmdq pool 8 set mac 52:54:00:12:00:08
Port 0 vmdq pool 9 set mac 52:54:00:12:00:09
Port 0 vmdq pool 10 set mac 52:54:00:12:00:0a
Port 0 vmdq pool 11 set mac 52:54:00:12:00:0b
Port 0 vmdq pool 12 set mac 52:54:00:12:00:0c
Port 0 vmdq pool 13 set mac 52:54:00:12:00:0d
Port 0 vmdq pool 14 set mac 52:54:00:12:00:0e
Port 0 vmdq pool 15 set mac 52:54:00:12:00:0f
vmdq queue base: 64 pool base 1
Configured vmdq pool num: 16, each vmdq pool has 8 queues
Port 1 MAC: e8 ea 6a 27 b5 4e
Port 1 vmdq pool 0 set mac 52:54:00:12:01:00
Port 1 vmdq pool 1 set mac 52:54:00:12:01:01
Port 1 vmdq pool 2 set mac 52:54:00:12:01:02
Port 1 vmdq pool 3 set mac 52:54:00:12:01:03
Port 1 vmdq pool 4 set mac 52:54:00:12:01:04
Port 1 vmdq pool 5 set mac 52:54:00:12:01:05
Port 1 vmdq pool 6 set mac 52:54:00:12:01:06
Port 1 vmdq pool 7 set mac 52:54:00:12:01:07
Port 1 vmdq pool 8 set mac 52:54:00:12:01:08
Port 1 vmdq pool 9 set mac 52:54:00:12:01:09
Port 1 vmdq pool 10 set mac 52:54:00:12:01:0a
Port 1 vmdq pool 11 set mac 52:54:00:12:01:0b
Port 1 vmdq pool 12 set mac 52:54:00:12:01:0c
Port 1 vmdq pool 13 set mac 52:54:00:12:01:0d
Port 1 vmdq pool 14 set mac 52:54:00:12:01:0e
Port 1 vmdq pool 15 set mac 52:54:00:12:01:0f

Skipping disabled port 2

Skipping disabled port 3
Core 0(lcore 1) reading queues 64-191

However, when I send SIGHUP to dump the per-queue counters, I see that the
packets are being put into the first queue of pool 1 (see the handler sketch
after the dump):

Pool 0: 0 0 0 0 0 0 0 0
Pool 1: 10 0 0 0 0 0 0 0
Pool 2: 0 0 0 0 0 0 0 0
Pool 3: 0 0 0 0 0 0 0 0
Pool 4: 0 0 0 0 0 0 0 0
Pool 5: 0 0 0 0 0 0 0 0
Pool 6: 0 0 0 0 0 0 0 0
Pool 7: 0 0 0 0 0 0 0 0
Pool 8: 0 0 0 0 0 0 0 0
Pool 9: 0 0 0 0 0 0 0 0
Pool 10: 0 0 0 0 0 0 0 0
Pool 11: 0 0 0 0 0 0 0 0
Pool 12: 0 0 0 0 0 0 0 0
Pool 13: 0 0 0 0 0 0 0 0
Pool 14: 0 0 0 0 0 0 0 0
Pool 15: 0 0 0 0 0 0 0 0
Finished handling signal 1
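
For context, the dump above comes from the app's SIGHUP handler. Below is a
simplified sketch of such a handler; the counter array name is hypothetical,
and the real example updates its counters on the RX path:

#include <inttypes.h>
#include <signal.h>
#include <stdio.h>
#include <stdint.h>

#define NUM_POOLS  16
#define NUM_QUEUES 8

/* Hypothetical per-queue software counters, updated on the RX path. */
static uint64_t rx_packets[NUM_POOLS * NUM_QUEUES];

/* Registered at startup with signal(SIGHUP, sighup_handler); prints one
 * line of counters per pool, like the dump above. */
static void
sighup_handler(int signum)
{
    unsigned int pool, q;

    for (pool = 0; pool < NUM_POOLS; pool++) {
        printf("Pool %u:", pool);
        for (q = 0; q < NUM_QUEUES; q++)
            printf(" %" PRIu64, rx_packets[pool * NUM_QUEUES + q]);
        printf("\n");
    }
    printf("Finished handling signal %d\n", signum);
}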

Since the packets are tagged with PCP 2, they should be mapped to the
3rd queue of pool 1, right?
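
To spell out the arithmetic behind that expectation (a sketch of the assumed
demux layout, not driver code):

#include <stdint.h>

/* Sketch of the expected demux arithmetic (an assumption based on the
 * example's layout, not driver code): each pool owns nb_tcs consecutive
 * queues starting at the VMDq queue base, and the PCP (top 3 bits of the
 * VLAN TCI) selects the TC with an identity mapping. */
static inline uint16_t
expected_rx_queue(uint16_t queue_base, uint16_t pool,
                  uint16_t nb_tcs, uint16_t pcp)
{
    return queue_base + pool * nb_tcs + pcp;
}

/* VLAN 1 -> pool 1, PCP 2 -> TC 2, nb_tcs = 8, queue base 64:
 * expected_rx_queue(64, 1, 8, 2) == 74, i.e. the 3rd queue of pool 1. */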

As a sanity check, I tried the same test using an 82599ES two-port 10Gb NIC, and
the packets show up in the expected queue. (Note: to get it to work, I had to
modify the vmdq_dcb app to set the VMDq pool MACs to all FFs, as sketched below.)
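
That modification was along these lines; a sketch assuming the 18.08 API,
where the address type is struct ether_addr:

#include <string.h>
#include <rte_ethdev.h>

/* Sketch of the workaround (assuming the 18.08 API): register the
 * broadcast address ff:ff:ff:ff:ff:ff in every VMDq pool so the 82599
 * accepts the FPGA's broadcast frames into the pools. */
static int
set_pool_macs_broadcast(uint16_t port, uint32_t num_pools)
{
    struct ether_addr bcast;
    uint32_t pool;
    int ret;

    memset(&bcast, 0xff, sizeof(bcast));
    for (pool = 0; pool < num_pools; pool++) {
        ret = rte_eth_dev_mac_addr_add(port, &bcast, pool);
        if (ret != 0)
            return ret;
    }
    return 0;
}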

Here’s that setup:

/opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net

Network devices using DPDK-compatible driver
============================================
0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=igb_uio unused=ixgbe
0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=igb_uio unused=ixgbe

Network devices using kernel driver
===================================
0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0 drv=igb unused=igb_uio *Active*
0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1 drv=igb unused=igb_uio *Active*
0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f0 drv=i40e unused=igb_uio
0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f1 drv=i40e unused=igb_uio
0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f2 drv=i40e unused=igb_uio
0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f3 drv=i40e unused=igb_uio

Other Network devices
=====================
<none>

sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
EAL: Detected 80 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Probing VFIO support...
EAL: PCI device 0000:02:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:02:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:3b:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:3b:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:3b:00.2 on NUMA socket 0
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:3b:00.3 on NUMA socket 0
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:af:00.0 on NUMA socket 1
EAL:   probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:af:00.1 on NUMA socket 1
EAL:   probe driver: 8086:10fb net_ixgbe
vmdq queue base: 0 pool base 0
Port 0 MAC: 00 1b 21 bf 71 24
Port 0 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff
vmdq queue base: 0 pool base 0
Port 1 MAC: 00 1b 21 bf 71 26
Port 1 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff

Now when I send SIGHUP, I see the packets being routed to
the expected queue:

Pool 0: 0 0 0 0 0 0 0 0
Pool 1: 0 0 58 0 0 0 0 0
Pool 2: 0 0 0 0 0 0 0 0
Pool 3: 0 0 0 0 0 0 0 0
Pool 4: 0 0 0 0 0 0 0 0
Pool 5: 0 0 0 0 0 0 0 0
Pool 6: 0 0 0 0 0 0 0 0
Pool 7: 0 0 0 0 0 0 0 0
Pool 8: 0 0 0 0 0 0 0 0
Pool 9: 0 0 0 0 0 0 0 0
Pool 10: 0 0 0 0 0 0 0 0
Pool 11: 0 0 0 0 0 0 0 0
Pool 12: 0 0 0 0 0 0 0 0
Pool 13: 0 0 0 0 0 0 0 0
Pool 14: 0 0 0 0 0 0 0 0
Pool 15: 0 0 0 0 0 0 0 0
Finished handling signal 1

What am I missing?

Thank you in advance,
--Mike



* Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
  2019-09-06 23:11 [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC Mike DeVico
@ 2019-09-09 20:39 ` Thomas Monjalon
  2019-09-12 17:06   ` Mike DeVico
  0 siblings, 1 reply; 7+ messages in thread
From: Thomas Monjalon @ 2019-09-09 20:39 UTC (permalink / raw)
  To: Mike DeVico
  Cc: dev, Beilei Xing, Qi Zhang, Bruce Richardson, Konstantin Ananyev,
	ferruh.yigit

Adding i40e maintainers and a few more.

07/09/2019 01:11, Mike DeVico:
> [... original message quoted in full; snipped ...]

* Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
  2019-09-09 20:39 ` Thomas Monjalon
@ 2019-09-12 17:06   ` Mike DeVico
  2019-09-12 17:46     ` Jeff Weeks
  0 siblings, 1 reply; 7+ messages in thread
From: Mike DeVico @ 2019-09-12 17:06 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, Beilei Xing, Qi Zhang, Bruce Richardson, Konstantin Ananyev,
	ferruh.yigit

Still no hits...

--Mike

On 9/9/19, 1:39 PM, "Thomas Monjalon" <thomas@monjalon.net> wrote:

    Adding i40e maintainers and a few more.

    > [... original message quoted in full; snipped ...]



* Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
  2019-09-12 17:06   ` Mike DeVico
@ 2019-09-12 17:46     ` Jeff Weeks
  2019-09-12 18:10       ` Mike DeVico
  0 siblings, 1 reply; 7+ messages in thread
From: Jeff Weeks @ 2019-09-12 17:46 UTC (permalink / raw)
  To: Mike DeVico, Thomas Monjalon
  Cc: dev, Beilei Xing, Qi Zhang, Bruce Richardson, Konstantin Ananyev,
	ferruh.yigit

I don't have much else to add, except that I also see DCB fail on the same NIC:


  i40e_dcb_init_configure(): default dcb config fails. err = -53, aq_err = 3.


My card doesn't receive any packets, though; I'm not sure whether that's related to this or not.


--Jeff

________________________________
/dev/jeff_weeks.x2936
Sandvine Incorporated


________________________________
From: dev <dev-bounces@dpdk.org> on behalf of Mike DeVico <mdevico@xcom-labs.com>
Sent: Thursday, September 12, 2019 1:06 PM
To: Thomas Monjalon
Cc: dev@dpdk.org; Beilei Xing; Qi Zhang; Bruce Richardson; Konstantin Ananyev; ferruh.yigit@intel.com
Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC

Still no hits...

--Mike

[... earlier messages quoted in full; snipped ...]


* Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
  2019-09-12 17:46     ` Jeff Weeks
@ 2019-09-12 18:10       ` Mike DeVico
  2019-09-26 16:02         ` Zhang, Helin
  0 siblings, 1 reply; 7+ messages in thread
From: Mike DeVico @ 2019-09-12 18:10 UTC (permalink / raw)
  To: Jeff Weeks, Thomas Monjalon
  Cc: dev, Beilei Xing, Qi Zhang, Bruce Richardson, Konstantin Ananyev,
	ferruh.yigit

Hi Jeff,

Thanks for chiming in...

Yeah, in my case I get the packets, but they end up in queue 0 instead of queue 2.

--Mike

From: Jeff Weeks <jweeks@sandvine.com>
Date: Thursday, September 12, 2019 at 10:47 AM
To: Mike DeVico <mdevico@xcom-labs.com>, Thomas Monjalon <thomas@monjalon.net>
Cc: "dev@dpdk.org" <dev@dpdk.org>, Beilei Xing <beilei.xing@intel.com>, Qi Zhang <qi.z.zhang@intel.com>, Bruce Richardson <bruce.richardson@intel.com>, Konstantin Ananyev <konstantin.ananyev@intel.com>, "ferruh.yigit@intel.com" <ferruh.yigit@intel.com>
Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC

[... Jeff's message and the earlier thread quoted in full; snipped ...]


* Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
  2019-09-12 18:10       ` Mike DeVico
@ 2019-09-26 16:02         ` Zhang, Helin
  0 siblings, 0 replies; 7+ messages in thread
From: Zhang, Helin @ 2019-09-26 16:02 UTC (permalink / raw)
  To: Mike DeVico, Jeff Weeks, Thomas Monjalon
  Cc: dev, Xing, Beilei, Zhang, Qi Z, Richardson, Bruce, Ananyev,
	Konstantin, Yigit, Ferruh, Zhang, Xiao

Hi Mike

Can you check whether you are using the right combination of DPDK version, NIC firmware, and kernel driver (if you are using one)?
You can find the recommended combinations at http://doc.dpdk.org/guides/nics/i40e.html#recommended-matching-list. Hopefully that helps!
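
For reference, the running firmware version can also be read from within a DPDK
application; a sketch using rte_eth_dev_fw_version_get(), available since DPDK 17.02:

#include <stdio.h>
#include <rte_ethdev.h>

/* Sketch: read the running firmware version from a DPDK application
 * using rte_eth_dev_fw_version_get() (available since DPDK 17.02). */
static void
print_fw_version(uint16_t port_id)
{
    char fw_version[64];

    if (rte_eth_dev_fw_version_get(port_id, fw_version,
                                   sizeof(fw_version)) == 0)
        printf("port %u firmware: %s\n", port_id, fw_version);
    else
        printf("port %u: firmware version not available\n", port_id);
}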

Regards,
Helin

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Mike DeVico
> Sent: Friday, September 13, 2019 2:10 AM
> To: Jeff Weeks; Thomas Monjalon
> Cc: dev@dpdk.org; Xing, Beilei; Zhang, Qi Z; Richardson, Bruce; Ananyev,
> Konstantin; Yigit, Ferruh
> Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
> 
> Hi Jeff,
> 
> Thanks for chiming in...
> 
> Yeah, In my case I get the packets, but they end up being put in queue 0
> instead of 2.
> 
> --Mike
> 
> From: Jeff Weeks <jweeks@sandvine.com>
> Date: Thursday, September 12, 2019 at 10:47 AM
> To: Mike DeVico <mdevico@xcom-labs.com>, Thomas Monjalon
> <thomas@monjalon.net>
> Cc: "dev@dpdk.org" <dev@dpdk.org>, Beilei Xing <beilei.xing@intel.com>, Qi
> Zhang <qi.z.zhang@intel.com>, Bruce Richardson
> <bruce.richardson@intel.com>, Konstantin Ananyev
> <konstantin.ananyev@intel.com>, "ferruh.yigit@intel.com"
> <ferruh.yigit@intel.com>
> Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
> 
> [EXTERNAL SENDER]
> 
> I don't have much else to add, except that I also see DCB fail on the same NIC:
>
>   i40e_dcb_init_configure(): default dcb config fails. err = -53, aq_err = 3.
>
> My card doesn't receive any packets, though; not sure if that's related
> to this or not.
>
> --Jeff
> 
> ________________________________
> /dev/jeff_weeks.x2936
> Sandvine Incorporated
> 
> ________________________________
> From: dev <dev-bounces@dpdk.org> on behalf of Mike DeVico
> <mdevico@xcom-labs.com>
> Sent: Thursday, September 12, 2019 1:06 PM
> To: Thomas Monjalon
> Cc: dev@dpdk.org; Beilei Xing; Qi Zhang; Bruce Richardson; Konstantin
> Ananyev; ferruh.yigit@intel.com
> Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
> 
> [EXTERNAL]
> 
> Still no hits...
> 
> --Mike
> 
> On 9/9/19, 1:39 PM, "Thomas Monjalon" <thomas@monjalon.net> wrote:
> 
>     [EXTERNAL SENDER]
> 
>     Adding i40e maintainers and few more.
> 
>     07/09/2019 01:11, Mike DeVico:
>     > Hello,
>     >
>     > I am having an issue getting the DCB feature to work with an Intel
>     > X710 Quad SFP+ NIC.
>     >
>     > Here’s my setup:
>     >
>     > 1.      DPDK 18.08 built with the following I40E configs:
>     >
>     > CONFIG_RTE_LIBRTE_I40E_PMD=y
>     > CONFIG_RTE_LIBRTE_I40E_DEBUG_RX=n
>     > CONFIG_RTE_LIBRTE_I40E_DEBUG_TX=n
>     > CONFIG_RTE_LIBRTE_I40E_DEBUG_TX_FREE=n
>     > CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
>     > CONFIG_RTE_LIBRTE_I40E_INC_VECTOR=y
>     > CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
>     > CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF=64
>     > CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=8
>     >
>     > 2.      /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net
>     >
>     > Network devices using DPDK-compatible driver
>     > ============================================
>     > 0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio
> unused=i40e
>     > 0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio
> unused=i40e
>     > 0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio
> unused=i40e
>     > 0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio
> unused=i40e
>     >
>     >        Network devices using kernel driver
>     >        ===================================
>     >        0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0
> drv=igb unused=igb_uio *Active*
>     >        0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1
> drv=igb unused=igb_uio *Active*
>     >
>     >        Other Network devices
>     >        =====================
>     >        <none>
>     >
>     > 3.      We have a custom FPGA board connected to port 1 of the X710 NIC
> that’s broadcasting
>     > a packet tagged with VLAN 1 and PCP 2.
>     >
>     > 4.      I use the vmdq_dcb example app and configure the card with 16
> pools/8 queues each
>     > as follows:
>     >        sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
>     >
>     >
>     > The app starts up fine and successfully probes the card as shown below:
>     >
>     > sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
>     > EAL: Detected 80 lcore(s)
>     > EAL: Detected 2 NUMA nodes
>     > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>     > EAL: Probing VFIO support...
>     > EAL: PCI device 0000:02:00.0 on NUMA socket 0
>     > EAL:   probe driver: 8086:1521 net_e1000_igb
>     > EAL: PCI device 0000:02:00.1 on NUMA socket 0
>     > EAL:   probe driver: 8086:1521 net_e1000_igb
>     > EAL: PCI device 0000:3b:00.0 on NUMA socket 0
>     > EAL:   probe driver: 8086:1572 net_i40e
>     > EAL: PCI device 0000:3b:00.1 on NUMA socket 0
>     > EAL:   probe driver: 8086:1572 net_i40e
>     > EAL: PCI device 0000:3b:00.2 on NUMA socket 0
>     > EAL:   probe driver: 8086:1572 net_i40e
>     > EAL: PCI device 0000:3b:00.3 on NUMA socket 0
>     > EAL:   probe driver: 8086:1572 net_i40e
>     > vmdq queue base: 64 pool base 1
>     > Configured vmdq pool num: 16, each vmdq pool has 8 queues
>     > Port 0 MAC: e8 ea 6a 27 b5 4d
>     > Port 0 vmdq pool 0 set mac 52:54:00:12:00:00
>     > Port 0 vmdq pool 1 set mac 52:54:00:12:00:01
>     > Port 0 vmdq pool 2 set mac 52:54:00:12:00:02
>     > Port 0 vmdq pool 3 set mac 52:54:00:12:00:03
>     > Port 0 vmdq pool 4 set mac 52:54:00:12:00:04
>     > Port 0 vmdq pool 5 set mac 52:54:00:12:00:05
>     > Port 0 vmdq pool 6 set mac 52:54:00:12:00:06
>     > Port 0 vmdq pool 7 set mac 52:54:00:12:00:07
>     > Port 0 vmdq pool 8 set mac 52:54:00:12:00:08
>     > Port 0 vmdq pool 9 set mac 52:54:00:12:00:09
>     > Port 0 vmdq pool 10 set mac 52:54:00:12:00:0a
>     > Port 0 vmdq pool 11 set mac 52:54:00:12:00:0b
>     > Port 0 vmdq pool 12 set mac 52:54:00:12:00:0c
>     > Port 0 vmdq pool 13 set mac 52:54:00:12:00:0d
>     > Port 0 vmdq pool 14 set mac 52:54:00:12:00:0e
>     > Port 0 vmdq pool 15 set mac 52:54:00:12:00:0f
>     > vmdq queue base: 64 pool base 1
>     > Configured vmdq pool num: 16, each vmdq pool has 8 queues
>     > Port 1 MAC: e8 ea 6a 27 b5 4e
>     > Port 1 vmdq pool 0 set mac 52:54:00:12:01:00
>     > Port 1 vmdq pool 1 set mac 52:54:00:12:01:01
>     > Port 1 vmdq pool 2 set mac 52:54:00:12:01:02
>     > Port 1 vmdq pool 3 set mac 52:54:00:12:01:03
>     > Port 1 vmdq pool 4 set mac 52:54:00:12:01:04
>     > Port 1 vmdq pool 5 set mac 52:54:00:12:01:05
>     > Port 1 vmdq pool 6 set mac 52:54:00:12:01:06
>     > Port 1 vmdq pool 7 set mac 52:54:00:12:01:07
>     > Port 1 vmdq pool 8 set mac 52:54:00:12:01:08
>     > Port 1 vmdq pool 9 set mac 52:54:00:12:01:09
>     > Port 1 vmdq pool 10 set mac 52:54:00:12:01:0a
>     > Port 1 vmdq pool 11 set mac 52:54:00:12:01:0b
>     > Port 1 vmdq pool 12 set mac 52:54:00:12:01:0c
>     > Port 1 vmdq pool 13 set mac 52:54:00:12:01:0d
>     > Port 1 vmdq pool 14 set mac 52:54:00:12:01:0e
>     > Port 1 vmdq pool 15 set mac 52:54:00:12:01:0f
>     >
>     > Skipping disabled port 2
>     >
>     > Skipping disabled port 3
>     > Core 0(lcore 1) reading queues 64-191
>     >
>     > However, when I issue the SIGHUP I see that the packets
>     > are being put into the first queue of Pool 1 as follows:
>     >
>     > Pool 0: 0 0 0 0 0 0 0 0
>     > Pool 1: 10 0 0 0 0 0 0 0
>     > Pool 2: 0 0 0 0 0 0 0 0
>     > Pool 3: 0 0 0 0 0 0 0 0
>     > Pool 4: 0 0 0 0 0 0 0 0
>     > Pool 5: 0 0 0 0 0 0 0 0
>     > Pool 6: 0 0 0 0 0 0 0 0
>     > Pool 7: 0 0 0 0 0 0 0 0
>     > Pool 8: 0 0 0 0 0 0 0 0
>     > Pool 9: 0 0 0 0 0 0 0 0
>     > Pool 10: 0 0 0 0 0 0 0 0
>     > Pool 11: 0 0 0 0 0 0 0 0
>     > Pool 12: 0 0 0 0 0 0 0 0
>     > Pool 13: 0 0 0 0 0 0 0 0
>     > Pool 14: 0 0 0 0 0 0 0 0
>     > Pool 15: 0 0 0 0 0 0 0 0
>     > Finished handling signal 1
>     >
>     > Since the packets are tagged with PCP 2, they should be getting
>     > mapped to the 3rd queue of Pool 1, right?
>     >
>     > As a sanity check, I tried the same test using an 82599ES 2-port 10Gb NIC
> and
>     > the packets show up in the expected queue. (Note: to get it to work I had
>     > to modify the vmdq_dcb app to set the vmdq pool MACs to all FF’s.)
>     >
>     > Here’s that setup:
>     >
>     > /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net
>     >
>     > Network devices using DPDK-compatible driver
>     > ============================================
>     > 0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
> drv=igb_uio unused=ixgbe
>     > 0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
> drv=igb_uio unused=ixgbe
>     >
>     > Network devices using kernel driver
>     > ===================================
>     > 0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0 drv=igb
> unused=igb_uio *Active*
>     > 0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1 drv=igb
> unused=igb_uio *Active*
>     > 0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572'
> if=enp59s0f0 drv=i40e unused=igb_uio
>     > 0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572'
> if=enp59s0f1 drv=i40e unused=igb_uio
>     > 0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572'
> if=enp59s0f2 drv=i40e unused=igb_uio
>     > 0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572'
> if=enp59s0f3 drv=i40e unused=igb_uio
>     >
>     > Other Network devices
>     > =====================
>     > <none>
>     >
>     > sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
>     > EAL: Detected 80 lcore(s)
>     > EAL: Detected 2 NUMA nodes
>     > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>     > EAL: Probing VFIO support...
>     > EAL: PCI device 0000:02:00.0 on NUMA socket 0
>     > EAL:   probe driver: 8086:1521 net_e1000_igb
>     > EAL: PCI device 0000:02:00.1 on NUMA socket 0
>     > EAL:   probe driver: 8086:1521 net_e1000_igb
>     > EAL: PCI device 0000:3b:00.0 on NUMA socket 0
>     > EAL:   probe driver: 8086:1572 net_i40e
>     > EAL: PCI device 0000:3b:00.1 on NUMA socket 0
>     > EAL:   probe driver: 8086:1572 net_i40e
>     > EAL: PCI device 0000:3b:00.2 on NUMA socket 0
>     > EAL:   probe driver: 8086:1572 net_i40e
>     > EAL: PCI device 0000:3b:00.3 on NUMA socket 0
>     > EAL:   probe driver: 8086:1572 net_i40e
>     > EAL: PCI device 0000:af:00.0 on NUMA socket 1
>     > EAL:   probe driver: 8086:10fb net_ixgbe
>     > EAL: PCI device 0000:af:00.1 on NUMA socket 1
>     > EAL:   probe driver: 8086:10fb net_ixgbe
>     > vmdq queue base: 0 pool base 0
>     > Port 0 MAC: 00 1b 21 bf 71 24
>     > Port 0 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff
>     > vmdq queue base: 0 pool base 0
>     > Port 1 MAC: 00 1b 21 bf 71 26
>     > Port 1 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff
>     >
>     > Now when I send the SIGHUP, I see the packets being routed to
>     > the expected queue:
>     >
>     > Pool 0: 0 0 0 0 0 0 0 0
>     > Pool 1: 0 0 58 0 0 0 0 0
>     > Pool 2: 0 0 0 0 0 0 0 0
>     > Pool 3: 0 0 0 0 0 0 0 0
>     > Pool 4: 0 0 0 0 0 0 0 0
>     > Pool 5: 0 0 0 0 0 0 0 0
>     > Pool 6: 0 0 0 0 0 0 0 0
>     > Pool 7: 0 0 0 0 0 0 0 0
>     > Pool 8: 0 0 0 0 0 0 0 0
>     > Pool 9: 0 0 0 0 0 0 0 0
>     > Pool 10: 0 0 0 0 0 0 0 0
>     > Pool 11: 0 0 0 0 0 0 0 0
>     > Pool 12: 0 0 0 0 0 0 0 0
>     > Pool 13: 0 0 0 0 0 0 0 0
>     > Pool 14: 0 0 0 0 0 0 0 0
>     > Pool 15: 0 0 0 0 0 0 0 0
>     > Finished handling signal 1
>     >
>     > What am I missing?
>     >
>     > Thank you in advance,
>     > --Mike
>     >
>     >

^ permalink raw reply	[flat|nested] 7+ messages in thread

* [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
@ 2019-08-14 17:53 Mike DeVico
  0 siblings, 0 replies; 7+ messages in thread
From: Mike DeVico @ 2019-08-14 17:53 UTC (permalink / raw)
  To: dev

Hello,

I am having an issue getting the DCB feature to work with an Intel
X710 Quad SFP+ NIC.

Here’s my setup:


  1.  DPDK 18.08 built with the following I40E configs:


CONFIG_RTE_LIBRTE_I40E_PMD=y
CONFIG_RTE_LIBRTE_I40E_DEBUG_RX=n
CONFIG_RTE_LIBRTE_I40E_DEBUG_TX=n
CONFIG_RTE_LIBRTE_I40E_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
CONFIG_RTE_LIBRTE_I40E_INC_VECTOR=y
CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF=64
CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=8


  2.  /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net


Network devices using DPDK-compatible driver
============================================
0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e

       Network devices using kernel driver
       ===================================
       0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0 drv=igb unused=igb_uio *Active*
       0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1 drv=igb unused=igb_uio *Active*

       Other Network devices
       =====================
       <none>


  3.  We have a custom FPGA board connected to port 1 of the X710 NIC that’s broadcasting
a packet tagged with VLAN 1 and PCP 2.


  4.  I use the vmdq_dcb example app and configure the card with 16 pools/8 queues each
as follows (see the configuration sketch after the command):
       sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
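
For reference, here is a condensed sketch of how those options become the
port configuration, based on get_eth_conf() in examples/vmdq_dcb/main.c
from 18.08 (illustrative, not the verbatim source; vlan_tags mirrors the
example's static VLAN table, which maps VLAN 1 to pool 1):

#include <rte_ethdev.h>

/* VLAN table as in the example: VLAN i selects pool i */
static const uint16_t vlan_tags[16] = {
	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15
};

static struct rte_eth_conf
build_vmdq_dcb_conf(void)
{
	struct rte_eth_conf port_conf = { 0 };
	struct rte_eth_vmdq_dcb_conf *conf =
		&port_conf.rx_adv_conf.vmdq_dcb_conf;
	uint8_t i;

	port_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB;
	port_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
	port_conf.rx_adv_conf.dcb_rx_conf.nb_tcs = ETH_8_TCS; /* --nb-tcs 8 */

	conf->nb_queue_pools = ETH_16_POOLS;          /* --nb-pools 16 */
	conf->nb_pool_maps = 16;
	for (i = 0; i < conf->nb_pool_maps; i++) {
		conf->pool_map[i].vlan_id = vlan_tags[i]; /* VLAN 1 -> pool 1 */
		conf->pool_map[i].pools = 1UL << i;
	}
	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
		conf->dcb_tc[i] = i % 8;              /* PCP n -> TC n */

	return port_conf;
}

With an identity dcb_tc table like this, a frame tagged VLAN 1/PCP 2
should select pool 1, TC 2.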

The app starts up fine and successfully probes the card as shown below:

sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
EAL: Detected 80 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Probing VFIO support...
EAL: PCI device 0000:02:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:02:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:3b:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:3b:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:3b:00.2 on NUMA socket 0
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:3b:00.3 on NUMA socket 0
EAL:   probe driver: 8086:1572 net_i40e
vmdq queue base: 64 pool base 1
Configured vmdq pool num: 16, each vmdq pool has 8 queues
Port 0 MAC: e8 ea 6a 27 b5 4d
Port 0 vmdq pool 0 set mac 52:54:00:12:00:00
Port 0 vmdq pool 1 set mac 52:54:00:12:00:01
Port 0 vmdq pool 2 set mac 52:54:00:12:00:02
Port 0 vmdq pool 3 set mac 52:54:00:12:00:03
Port 0 vmdq pool 4 set mac 52:54:00:12:00:04
Port 0 vmdq pool 5 set mac 52:54:00:12:00:05
Port 0 vmdq pool 6 set mac 52:54:00:12:00:06
Port 0 vmdq pool 7 set mac 52:54:00:12:00:07
Port 0 vmdq pool 8 set mac 52:54:00:12:00:08
Port 0 vmdq pool 9 set mac 52:54:00:12:00:09
Port 0 vmdq pool 10 set mac 52:54:00:12:00:0a
Port 0 vmdq pool 11 set mac 52:54:00:12:00:0b
Port 0 vmdq pool 12 set mac 52:54:00:12:00:0c
Port 0 vmdq pool 13 set mac 52:54:00:12:00:0d
Port 0 vmdq pool 14 set mac 52:54:00:12:00:0e
Port 0 vmdq pool 15 set mac 52:54:00:12:00:0f
vmdq queue base: 64 pool base 1
Configured vmdq pool num: 16, each vmdq pool has 8 queues
Port 1 MAC: e8 ea 6a 27 b5 4e
Port 1 vmdq pool 0 set mac 52:54:00:12:01:00
Port 1 vmdq pool 1 set mac 52:54:00:12:01:01
Port 1 vmdq pool 2 set mac 52:54:00:12:01:02
Port 1 vmdq pool 3 set mac 52:54:00:12:01:03
Port 1 vmdq pool 4 set mac 52:54:00:12:01:04
Port 1 vmdq pool 5 set mac 52:54:00:12:01:05
Port 1 vmdq pool 6 set mac 52:54:00:12:01:06
Port 1 vmdq pool 7 set mac 52:54:00:12:01:07
Port 1 vmdq pool 8 set mac 52:54:00:12:01:08
Port 1 vmdq pool 9 set mac 52:54:00:12:01:09
Port 1 vmdq pool 10 set mac 52:54:00:12:01:0a
Port 1 vmdq pool 11 set mac 52:54:00:12:01:0b
Port 1 vmdq pool 12 set mac 52:54:00:12:01:0c
Port 1 vmdq pool 13 set mac 52:54:00:12:01:0d
Port 1 vmdq pool 14 set mac 52:54:00:12:01:0e
Port 1 vmdq pool 15 set mac 52:54:00:12:01:0f

Skipping disabled port 2

Skipping disabled port 3
Core 0(lcore 1) reading queues 64-191

However, when I issue the SIGHUP, I see that the packets
are being put into the first queue of Pool 1 as follows:

Pool 0: 0 0 0 0 0 0 0 0
Pool 1: 10 0 0 0 0 0 0 0
Pool 2: 0 0 0 0 0 0 0 0
Pool 3: 0 0 0 0 0 0 0 0
Pool 4: 0 0 0 0 0 0 0 0
Pool 5: 0 0 0 0 0 0 0 0
Pool 6: 0 0 0 0 0 0 0 0
Pool 7: 0 0 0 0 0 0 0 0
Pool 8: 0 0 0 0 0 0 0 0
Pool 9: 0 0 0 0 0 0 0 0
Pool 10: 0 0 0 0 0 0 0 0
Pool 11: 0 0 0 0 0 0 0 0
Pool 12: 0 0 0 0 0 0 0 0
Pool 13: 0 0 0 0 0 0 0 0
Pool 14: 0 0 0 0 0 0 0 0
Pool 15: 0 0 0 0 0 0 0 0
Finished handling signal 1

Since the packets are tagged with PCP 2, they should be getting
mapped to the 3rd queue of Pool 1, right?
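
To make the arithmetic explicit, here is a hypothetical helper (the names
are mine, not from the example):

#include <stdint.h>

/* Pool is selected by VLAN ID via pool_map; the queue within the pool
 * is selected by the traffic class the PCP maps to. */
static inline uint16_t
expected_rx_queue(uint16_t vmdq_queue_base, uint8_t pool, uint8_t pcp,
		  const uint8_t dcb_tc[8], uint8_t queues_per_pool)
{
	return vmdq_queue_base + pool * queues_per_pool + dcb_tc[pcp];
}

With vmdq queue base 64, pool 1, 8 queues per pool, and an identity
dcb_tc table, PCP 2 gives 64 + 1*8 + 2 = 74, i.e. the 3rd counter of
Pool 1.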

As a sanity check, I tried the same test using an 82599ES 2-port 10Gb NIC and
the packets show up in the expected queue. (Note: to get it to work I had
to modify the vmdq_dcb app to set the vmdq pool MACs to all FF’s; see the
sketch below.)
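
A sketch of that modification, mirroring the per-pool loop in
examples/vmdq_dcb/main.c against the 18.08 API (the function name and
parameters are mine; num_pools and vmdq_pool_base come from the example's
context):

#include <stdio.h>
#include <rte_ethdev.h>

/* Bind ff:ff:ff:ff:ff:ff to every pool instead of the 52:54:00:...
 * template, so pool selection is driven by the VLAN filter alone. */
static int
set_pool_macs_to_bcast(uint16_t port, uint32_t num_pools,
		       uint32_t vmdq_pool_base)
{
	struct ether_addr bcast = {
		.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
	};
	uint32_t q;
	int retval;

	for (q = 0; q < num_pools; q++) {
		/* the last argument binds the address to a VMDq pool */
		retval = rte_eth_dev_mac_addr_add(port, &bcast,
						  q + vmdq_pool_base);
		if (retval != 0) {
			printf("mac addr add failed at pool %u\n", q);
			return retval;
		}
	}
	return 0;
}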

Here’s that setup:

/opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net

Network devices using DPDK-compatible driver
============================================
0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=igb_uio unused=ixgbe
0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=igb_uio unused=ixgbe

Network devices using kernel driver
===================================
0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0 drv=igb unused=igb_uio *Active*
0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1 drv=igb unused=igb_uio *Active*
0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f0 drv=i40e unused=igb_uio
0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f1 drv=i40e unused=igb_uio
0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f2 drv=i40e unused=igb_uio
0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f3 drv=i40e unused=igb_uio

Other Network devices
=====================
<none>

sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
EAL: Detected 80 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Probing VFIO support...
EAL: PCI device 0000:02:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:02:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:3b:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:3b:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:3b:00.2 on NUMA socket 0
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:3b:00.3 on NUMA socket 0
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:af:00.0 on NUMA socket 1
EAL:   probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:af:00.1 on NUMA socket 1
EAL:   probe driver: 8086:10fb net_ixgbe
vmdq queue base: 0 pool base 0
Port 0 MAC: 00 1b 21 bf 71 24
Port 0 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff
Port 0 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff
vmdq queue base: 0 pool base 0
Port 1 MAC: 00 1b 21 bf 71 26
Port 1 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff
Port 1 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff

Now when I send the SIGHUP, I see the packets being routed to
the expected queue:

Pool 0: 0 0 0 0 0 0 0 0
Pool 1: 0 0 58 0 0 0 0 0
Pool 2: 0 0 0 0 0 0 0 0
Pool 3: 0 0 0 0 0 0 0 0
Pool 4: 0 0 0 0 0 0 0 0
Pool 5: 0 0 0 0 0 0 0 0
Pool 6: 0 0 0 0 0 0 0 0
Pool 7: 0 0 0 0 0 0 0 0
Pool 8: 0 0 0 0 0 0 0 0
Pool 9: 0 0 0 0 0 0 0 0
Pool 10: 0 0 0 0 0 0 0 0
Pool 11: 0 0 0 0 0 0 0 0
Pool 12: 0 0 0 0 0 0 0 0
Pool 13: 0 0 0 0 0 0 0 0
Pool 14: 0 0 0 0 0 0 0 0
Pool 15: 0 0 0 0 0 0 0 0
Finished handling signal 1

What am I missing?

Thank you in advance,
--Mike

^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2019-09-26 16:02 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-09-06 23:11 [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC Mike DeVico
2019-09-09 20:39 ` Thomas Monjalon
2019-09-12 17:06   ` Mike DeVico
2019-09-12 17:46     ` Jeff Weeks
2019-09-12 18:10       ` Mike DeVico
2019-09-26 16:02         ` Zhang, Helin
  -- strict thread matches above, loose matches on Subject: below --
2019-08-14 17:53 Mike DeVico

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).