DPDK usage discussions
* Re: [dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
@ 2019-10-10 21:12 Mike DeVico
  0 siblings, 0 replies; 20+ messages in thread
From: Mike DeVico @ 2019-10-10 21:12 UTC (permalink / raw)
  To: Zhang, Helin, Jeff Weeks, Thomas Monjalon
  Cc: users, Xing, Beilei, Zhang, Qi Z, Richardson, Bruce, Ananyev,
	Konstantin, Yigit, Ferruh, Zhang, Xiao, Wong1, Samuel,
	Kobi Cohen-Arazi

I was also able to reproduce the same issue by connecting an actual Ferrybridge to one port (0000:82:00.0),
then using the vmdq_dcb app to receive and forward the packet out another port (0000:82:00.1),
which is connected via a passive copper cable to a third port (0000:82:00.2) that I then read
using my rxtest app.

So it looks like this:

Ferrybridge --> Port 0000:82:00.0 --> vmdq_dcb --> 0000:82:00.1 --> passive copper cable --> 0000:82:00.2 --> rxtest app



Packet received by the vmdq_dcb app (on 0000:82:00.0):

RX Packet at [0x7f47778797c0], len=60
00000000: 02 00 00 00 00 01 E8 EA 6A 27 B8 FA 81 00 40 01 | ........j'....@.
00000010: 08 00 00 09 00 00 00 00 00 01 80 86 36 00 01 0F | ............6...
00000020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................
00000030: 00 00 00 00 00 00 00 00 00 00 00 00 |  |  |  |  | ............
Forwarding to queue: 74 (Port 0000:82:00.1)

Packet as seen from my simple rxtest app listening on Port 0000:82:00.2:

dump mbuf at 0x7f813eab4e00, iova=dfeab4e80, buf_len=2176
  pkt_len=60, ol_flags=180, nb_segs=1, in_port=0
  segment at 0x7f813eab4e00, data=0x7f813eab4f00, data_len=60
  Dump data at [0x7f813eab4f00], len=60
00000000: 02 00 00 00 00 01 E8 EA 6A 27 B8 FA 81 00 00 01 | ........j'......
00000010: 08 00 00 09 00 00 00 00 00 01 80 86 36 00 01 0F | ............6...
00000020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................
00000030: 00 00 00 00 00 00 00 00 00 00 00 00 |  |  |  |  | ............



Why does the transmitted packet go out with the PCP/DEI portion of the VLAN tag
zeroed out, and what do I need to configure to prevent this from happening?

Thanks!
--Mike





On 10/3/19, 4:56 PM, "users on behalf of Mike DeVico" <users-bounces@dpdk.org on behalf of mdevico@xcom-labs.com> wrote:

    [EXTERNAL SENDER]

    I set up a basic loopback test by connecting two ports of
    the same X710 NIC with a 10G passive copper cable.

    In my case, the two ports map to PCI addresses
    0000:82:00.0 and 0000:82:00.1. 0000:82:00.0 is bound
    to igb_uio and .1 is left bound to the kernel driver (i40e),
    which is mapped to the Linux eth iface p4p2.

    I then created a simple txtest app (see attached) based on the
    vmdq_dcb example app that sends out a fixed packet once per
    second over a specified port. It also prints out the packet each
    time it sends it.

    I run txtest as follows:

    sudo ./txtest -w 0000:82:00.0 -c 0x3

    TX Packet at [0x7fa407c2ee80], len=60



    00000000: 00 00 AE AE 00 00 E8 EA 6A 27 B8 F9 81 00 40 01 | ........j'....@.
    00000010: 08 00 00 08 00 00 00 00 AA BB CC DD 00 00 00 00 | ................
    00000020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................
    00000030: 00 00 00 00 00 00 00 00 00 00 00 00 |  |  |  |  | ............

    ...







    Note the TCI byte at offset 0x0E (0x40). Its upper three bits carry the PCP field of the VLAN header.







    I then run tcpdump on p4p2 (the receiving end of the loopback) as shown:

    sudo tcpdump -xxi p4p2

    19:02:08.124694 IP0
            0x0000:  0000 aeae 0000 e8ea 6a27 b8f9 8100 0001
            0x0010:  0800 0008 0000 0000 aabb ccdd 0000 0000
            0x0020:  0000 0000 0000 0000 0000 0000 0000 0000
            0x0030:  0000 0000 0000 0000 0000 0000
    ...







    Once again, note the PCP byte, which now reads 0. The source of my
    problem would appear to be that the PCP field is somehow getting
    set to 0 even though it's being passed to rte_eth_tx_burst as 0x40.

    Here's the relevant section of the code in my main.c:







    static int
    lcore_main(void *arg)
    {
        uint32_t lcore_id = rte_lcore_id();
        struct rte_mempool *mbuf_pool = (struct rte_mempool *)arg;

        RTE_LOG(INFO, TXTEST, "tx entering main loop on lcore %u\n", lcore_id);

        /* Fixed 60-byte frame: dst/src MACs, 802.1Q tag (TPID 0x8100,
         * TCI 0x4001 => PCP 2, DEI 0, VID 1), EtherType 0x0800, payload. */
        static const uint8_t tx_data[] = {
             0x00, 0x00, 0xAE, 0xAE, 0x00, 0x00, 0xE8, 0xEA, 0x6A, 0x27, 0xB8, 0xF9, 0x81, 0x00, 0x40, 0x01,
             0x08, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x00, 0xAA, 0xBB, 0xCC, 0xDD, 0x00, 0x00, 0x00, 0x00,
             0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
             0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
        };

        struct rte_mbuf *tx_pkt      = rte_pktmbuf_alloc(mbuf_pool);
        uint8_t         *tx_pkt_dptr = (uint8_t *)rte_pktmbuf_append(tx_pkt, sizeof(tx_data));
        memcpy(tx_pkt_dptr, tx_data, sizeof(tx_data));
        const uint8_t *dptr = rte_pktmbuf_mtod(tx_pkt, uint8_t *);

        while (!force_quit) {
            /* Note: a successful tx_burst hands the mbuf to the PMD; a
             * longer-lived app should bump the refcount or re-allocate
             * the mbuf before each send. */
            uint32_t nb_tx = rte_eth_tx_burst(0, 0, &tx_pkt, 1);
            if (nb_tx == 1) {
                rte_hexdump(stdout, "TX Packet", dptr,
                            rte_pktmbuf_data_len(tx_pkt));
            }
            sleep(1);
        }
        return 0;
    }







    As can be seen, the packet is fixed and it is dumped immediately
    after being sent, so why is the PCP field going out over the wire
    as 0?

    For reference, I'm using DPDK version 18.08, and the only config change
    is to set CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=8 (from the default 4).
    I did this because our actual application requires the RX side to be
    configured with 16 pools and 8 TC queues per pool, and I wanted to
    resemble the target config as closely as possible.

    Any insight/assistance would be greatly appreciated.

    Thank you in advance.
    --Mike







    On 9/29/19, 7:22 PM, "Zhang, Helin" <helin.zhang@intel.com> wrote:

        [EXTERNAL SENDER]

        Hi Mike,

        You need to try to reproduce your issue with the example applications (e.g. testpmd), then send the detailed information to the maintainers. Thanks!

        Regards,
        Helin

        -----Original Message-----
        From: Mike DeVico [mailto:mdevico@xcom-labs.com]
        Sent: Thursday, September 26, 2019 9:32 PM
        To: Zhang, Helin <helin.zhang@intel.com>; Jeff Weeks <jweeks@sandvine.com>; Thomas Monjalon <thomas@monjalon.net>
        Cc: users@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>; Zhang, Xiao <xiao.zhang@intel.com>; Wong1, Samuel <samuel.wong1@intel.com>
        Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC

        Hi Helin,

        Yes, and the reason the RX packets were not being queued to the proper queue was that RSS was not enabled/configured. Once I did this, the RX packets were placed in the proper queue.

        That being said, the TX side now seems to have an issue. The way it works is that the Ferrybridge broadcasts what's called a Present packet at 1 s intervals. Once the host application detects the packet (which it now does), it verifies that the packet is correctly formatted and then sends a packet back to the Ferrybridge to tell it to stop sending. However, that TX packet apparently is not going out, because I continue to receive Present packets from the Ferrybridge at the 1 s interval. What's not clear to me is which queue I should be sending this packet to. I actually tried sending it out all 128 queues, but I still keep receiving the Present packet. What I lack is the ability to actually sniff what's going out over the wire.

        Any ideas how to approach this issue?

        Thanks in advance,
        --Mike







        On 9/26/19, 9:02 AM, "Zhang, Helin" <helin.zhang@intel.com> wrote:

            [EXTERNAL SENDER]

            Hi Mike,

            Can you check that you are using the right combination of DPDK version, NIC firmware, and kernel driver (if you are using one)?
            You can find the recommended combinations at http://doc.dpdk.org/guides/nics/i40e.html#recommended-matching-list. Hopefully that helps!

            Regards,
            Helin







            > -----Original Message-----
            > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Mike DeVico
            > Sent: Friday, September 13, 2019 2:10 AM
            > To: Jeff Weeks; Thomas Monjalon
            > Cc: dev@dpdk.org; Xing, Beilei; Zhang, Qi Z; Richardson, Bruce; Ananyev, Konstantin; Yigit, Ferruh
            > Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
            >
            > Hi Jeff,
            >
            > Thanks for chiming in...
            >
            > Yeah, in my case I get the packets, but they end up being put in queue 0
            > instead of 2.
            >
            > --Mike
            >
            > From: Jeff Weeks <jweeks@sandvine.com>
            > Date: Thursday, September 12, 2019 at 10:47 AM
            > To: Mike DeVico <mdevico@xcom-labs.com>, Thomas Monjalon <thomas@monjalon.net>
            > Cc: "dev@dpdk.org" <dev@dpdk.org>, Beilei Xing <beilei.xing@intel.com>, Qi Zhang <qi.z.zhang@intel.com>, Bruce Richardson <bruce.richardson@intel.com>, Konstantin Ananyev <konstantin.ananyev@intel.com>, "ferruh.yigit@intel.com" <ferruh.yigit@intel.com>
            > Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
            >
            > [EXTERNAL SENDER]
            >
            > I don't have much else to add, except that I also see dcb fail on the same NIC:
            >
            >   i40e_dcb_init_configure(): default dcb config fails. err = -53, aq_err = 3.
            >
            > My card doesn't receive any packets, though; not sure if it's related to this, or
            > not.
            >
            > --Jeff
            >
            > ________________________________
            > /dev/jeff_weeks.x2936
            > Sandvine Incorporated
            >
            > ________________________________
            > From: dev <dev-bounces@dpdk.org> on behalf of Mike DeVico <mdevico@xcom-labs.com>
            > Sent: Thursday, September 12, 2019 1:06 PM
            > To: Thomas Monjalon
            > Cc: dev@dpdk.org; Beilei Xing; Qi Zhang; Bruce Richardson; Konstantin Ananyev; ferruh.yigit@intel.com
            > Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
            >
            > [EXTERNAL]
            >
            > Still no hits...
            >
            > --Mike
            >
            > On 9/9/19, 1:39 PM, "Thomas Monjalon" <thomas@monjalon.net> wrote:
            >
            >     [EXTERNAL SENDER]
            >
            >     Adding i40e maintainers and a few more.
            >
            >     07/09/2019 01:11, Mike DeVico:
            >     > Hello,
            >     >
            >     > I am having an issue getting the DCB feature to work with an Intel
            >     > X710 Quad SFP+ NIC.
            >     >
            >     > Here's my setup:
            >     >
            >     > 1.      DPDK 18.08 built with the following I40E configs:
            >     >
            >     > CONFIG_RTE_LIBRTE_I40E_PMD=y
            >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_RX=n
            >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_TX=n
            >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_TX_FREE=n
            >     > CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
            >     > CONFIG_RTE_LIBRTE_I40E_INC_VECTOR=y
            >     > CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
            >     > CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF=64
            >     > CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=8
            >     >
            >     > 2.      /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net
            >     >
            >     > Network devices using DPDK-compatible driver
            >     > ============================================
            >     > 0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
            >     > 0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
            >     > 0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
            >     > 0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
            >     >
            >     >        Network devices using kernel driver
            >     >        ===================================
            >     >        0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0 drv=igb unused=igb_uio *Active*
            >     >        0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1 drv=igb unused=igb_uio *Active*
            >     >
            >     >        Other Network devices
            >     >        =====================
            >     >        <none>
            >     >
            >     > 3.      We have a custom FPGA board connected to port 1 of the X710 NIC that's broadcasting
            >     > a packet tagged with VLAN 1 and PCP 2.
            >     >
            >     > 4.      I use the vmdq_dcb example app and configure the card with 16 pools/8 queues each
            >     > as follows:
            >     >        sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
            >     >
            >     > The app starts up fine and successfully probes the card as shown below:
            >     >
            >     > sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
            >     > EAL: Detected 80 lcore(s)
            >     > EAL: Detected 2 NUMA nodes
            >     > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
            >     > EAL: Probing VFIO support...
            >     > EAL: PCI device 0000:02:00.0 on NUMA socket 0
            >     > EAL:   probe driver: 8086:1521 net_e1000_igb
            >     > EAL: PCI device 0000:02:00.1 on NUMA socket 0
            >     > EAL:   probe driver: 8086:1521 net_e1000_igb
            >     > EAL: PCI device 0000:3b:00.0 on NUMA socket 0
            >     > EAL:   probe driver: 8086:1572 net_i40e
            >     > EAL: PCI device 0000:3b:00.1 on NUMA socket 0
            >     > EAL:   probe driver: 8086:1572 net_i40e
            >     > EAL: PCI device 0000:3b:00.2 on NUMA socket 0
            >     > EAL:   probe driver: 8086:1572 net_i40e
            >     > EAL: PCI device 0000:3b:00.3 on NUMA socket 0
            >     > EAL:   probe driver: 8086:1572 net_i40e
            >     > vmdq queue base: 64 pool base 1
            >     > Configured vmdq pool num: 16, each vmdq pool has 8 queues
            >     > Port 0 MAC: e8 ea 6a 27 b5 4d
            >     > Port 0 vmdq pool 0 set mac 52:54:00:12:00:00
            >     > Port 0 vmdq pool 1 set mac 52:54:00:12:00:01
            >     > Port 0 vmdq pool 2 set mac 52:54:00:12:00:02
            >     > Port 0 vmdq pool 3 set mac 52:54:00:12:00:03
            >     > Port 0 vmdq pool 4 set mac 52:54:00:12:00:04
            >     > Port 0 vmdq pool 5 set mac 52:54:00:12:00:05
            >     > Port 0 vmdq pool 6 set mac 52:54:00:12:00:06
            >     > Port 0 vmdq pool 7 set mac 52:54:00:12:00:07
            >     > Port 0 vmdq pool 8 set mac 52:54:00:12:00:08
            >     > Port 0 vmdq pool 9 set mac 52:54:00:12:00:09
            >     > Port 0 vmdq pool 10 set mac 52:54:00:12:00:0a
            >     > Port 0 vmdq pool 11 set mac 52:54:00:12:00:0b
            >     > Port 0 vmdq pool 12 set mac 52:54:00:12:00:0c
            >     > Port 0 vmdq pool 13 set mac 52:54:00:12:00:0d
            >     > Port 0 vmdq pool 14 set mac 52:54:00:12:00:0e
            >     > Port 0 vmdq pool 15 set mac 52:54:00:12:00:0f
            >     > vmdq queue base: 64 pool base 1
            >     > Configured vmdq pool num: 16, each vmdq pool has 8 queues
            >     > Port 1 MAC: e8 ea 6a 27 b5 4e
            >     > Port 1 vmdq pool 0 set mac 52:54:00:12:01:00
            >     > Port 1 vmdq pool 1 set mac 52:54:00:12:01:01
            >     > Port 1 vmdq pool 2 set mac 52:54:00:12:01:02
            >     > Port 1 vmdq pool 3 set mac 52:54:00:12:01:03
            >     > Port 1 vmdq pool 4 set mac 52:54:00:12:01:04
            >     > Port 1 vmdq pool 5 set mac 52:54:00:12:01:05
            >     > Port 1 vmdq pool 6 set mac 52:54:00:12:01:06
            >     > Port 1 vmdq pool 7 set mac 52:54:00:12:01:07
            >     > Port 1 vmdq pool 8 set mac 52:54:00:12:01:08
            >     > Port 1 vmdq pool 9 set mac 52:54:00:12:01:09
            >     > Port 1 vmdq pool 10 set mac 52:54:00:12:01:0a
            >     > Port 1 vmdq pool 11 set mac 52:54:00:12:01:0b
            >     > Port 1 vmdq pool 12 set mac 52:54:00:12:01:0c
            >     > Port 1 vmdq pool 13 set mac 52:54:00:12:01:0d
            >     > Port 1 vmdq pool 14 set mac 52:54:00:12:01:0e
            >     > Port 1 vmdq pool 15 set mac 52:54:00:12:01:0f
            >     >
            >     > Skipping disabled port 2
            >     >
            >     > Skipping disabled port 3
            >     > Core 0 (lcore 1) reading queues 64-191
            >     >
            >     > However, when I issue the SIGHUP I see that the packets
            >     > are being put into the first queue of Pool 1 as follows:
            >     >
            >     > Pool 0: 0 0 0 0 0 0 0 0
            >     > Pool 1: 10 0 0 0 0 0 0 0
            >     > Pool 2: 0 0 0 0 0 0 0 0
            >     > Pool 3: 0 0 0 0 0 0 0 0
            >     > Pool 4: 0 0 0 0 0 0 0 0
            >     > Pool 5: 0 0 0 0 0 0 0 0
            >     > Pool 6: 0 0 0 0 0 0 0 0
            >     > Pool 7: 0 0 0 0 0 0 0 0
            >     > Pool 8: 0 0 0 0 0 0 0 0
            >     > Pool 9: 0 0 0 0 0 0 0 0
            >     > Pool 10: 0 0 0 0 0 0 0 0
            >     > Pool 11: 0 0 0 0 0 0 0 0
            >     > Pool 12: 0 0 0 0 0 0 0 0
            >     > Pool 13: 0 0 0 0 0 0 0 0
            >     > Pool 14: 0 0 0 0 0 0 0 0
            >     > Pool 15: 0 0 0 0 0 0 0 0
            >     > Finished handling signal 1
            >     >
            >     > Since the packets are being tagged with PCP 2 they should be getting
            >     > mapped to the 3rd queue of Pool 1, right?
            >     >
            >     > As a sanity check, I tried the same test using an 82599ES 2-port 10Gb NIC and
            >     > the packets show up in the expected queue. (Note, to get it to work I had
            >     > to modify the vmdq_dcb app to set the vmdq pool MACs to all FF's.)
            >     >
            >     > Here's that setup:
            >     >
            >     > /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net
            >     >
            >     > Network devices using DPDK-compatible driver
            >     > ============================================
            >     > 0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=igb_uio unused=ixgbe
            >     > 0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=igb_uio unused=ixgbe
            >     >
            >     > Network devices using kernel driver
            >     > ===================================
            >     > 0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0 drv=igb unused=igb_uio *Active*
            >     > 0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1 drv=igb unused=igb_uio *Active*
            >     > 0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f0 drv=i40e unused=igb_uio
            >     > 0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f1 drv=i40e unused=igb_uio
            >     > 0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f2 drv=i40e unused=igb_uio
            >     > 0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f3 drv=i40e unused=igb_uio
            >     >
            >     > Other Network devices
            >     > =====================
            >     > <none>
            >     >
            >     > sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
            >     > EAL: Detected 80 lcore(s)
            >     > EAL: Detected 2 NUMA nodes
            >     > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
            >     > EAL: Probing VFIO support...
            >     > EAL: PCI device 0000:02:00.0 on NUMA socket 0
            >     > EAL:   probe driver: 8086:1521 net_e1000_igb
            >     > EAL: PCI device 0000:02:00.1 on NUMA socket 0
            >     > EAL:   probe driver: 8086:1521 net_e1000_igb
            >     > EAL: PCI device 0000:3b:00.0 on NUMA socket 0
            >     > EAL:   probe driver: 8086:1572 net_i40e
            >     > EAL: PCI device 0000:3b:00.1 on NUMA socket 0
            >     > EAL:   probe driver: 8086:1572 net_i40e
            >     > EAL: PCI device 0000:3b:00.2 on NUMA socket 0
            >     > EAL:   probe driver: 8086:1572 net_i40e
            >     > EAL: PCI device 0000:3b:00.3 on NUMA socket 0
            >     > EAL:   probe driver: 8086:1572 net_i40e
            >     > EAL: PCI device 0000:af:00.0 on NUMA socket 1
            >     > EAL:   probe driver: 8086:10fb net_ixgbe
            >     > EAL: PCI device 0000:af:00.1 on NUMA socket 1
            >     > EAL:   probe driver: 8086:10fb net_ixgbe
            >     > vmdq queue base: 0 pool base 0
            >     > Port 0 MAC: 00 1b 21 bf 71 24
            >     > Port 0 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff
            >     > Port 0 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff
            >     > Port 0 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff
            >     > Port 0 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff
            >     > Port 0 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff
            >     > Port 0 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff
            >     > Port 0 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff
            >     > Port 0 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff
            >     > Port 0 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff
            >     > Port 0 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff
            >     > Port 0 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff
            >     > Port 0 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff
            >     > Port 0 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff
            >     > Port 0 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff
            >     > Port 0 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff
            >     > Port 0 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff
            >     > vmdq queue base: 0 pool base 0
            >     > Port 1 MAC: 00 1b 21 bf 71 26
            >     > Port 1 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff
            >     > Port 1 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff
            >     > Port 1 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff
            >     > Port 1 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff
            >     > Port 1 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff
            >     > Port 1 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff
            >     > Port 1 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff
            >     > Port 1 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff
            >     > Port 1 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff
            >     > Port 1 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff
            >     > Port 1 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff
            >     > Port 1 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff
            >     > Port 1 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff
            >     > Port 1 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff
            >     > Port 1 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff
            >     > Port 1 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff
            >     >
            >     > Now when I send the SIGHUP, I see the packets being routed to
            >     > the expected queue:
            >     >
            >     > Pool 0: 0 0 0 0 0 0 0 0
            >     > Pool 1: 0 0 58 0 0 0 0 0
            >     > Pool 2: 0 0 0 0 0 0 0 0
            >     > Pool 3: 0 0 0 0 0 0 0 0
            >     > Pool 4: 0 0 0 0 0 0 0 0
            >     > Pool 5: 0 0 0 0 0 0 0 0
            >     > Pool 6: 0 0 0 0 0 0 0 0
            >     > Pool 7: 0 0 0 0 0 0 0 0
            >     > Pool 8: 0 0 0 0 0 0 0 0
            >     > Pool 9: 0 0 0 0 0 0 0 0
            >     > Pool 10: 0 0 0 0 0 0 0 0
            >     > Pool 11: 0 0 0 0 0 0 0 0
            >     > Pool 12: 0 0 0 0 0 0 0 0
            >     > Pool 13: 0 0 0 0 0 0 0 0
            >     > Pool 14: 0 0 0 0 0 0 0 0
            >     > Pool 15: 0 0 0 0 0 0 0 0
            >     > Finished handling signal 1
            >     >
            >     > What am I missing?
            >     >
            >     > Thank you in advance,
            >     > --Mike

















    -------------- next part --------------
    A non-text attachment was scrubbed...
    Name: txtest.tar
    Type: application/x-tar
    Size: 20480 bytes
    Desc: txtest.tar
    URL: <http://mails.dpdk.org/archives/users/attachments/20191003/5ed5e742/attachment.tar>




* Re: [dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
  2019-10-10 21:23 ` Christensen, ChadX M
@ 2019-10-10 21:25   ` Mike DeVico
  0 siblings, 0 replies; 20+ messages in thread
From: Mike DeVico @ 2019-10-10 21:25 UTC (permalink / raw)
  To: Christensen, ChadX M, Zhang, Xiao
  Cc: Thomas Monjalon, users, Xing, Beilei, Zhang, Qi Z, Richardson,
	Bruce, Ananyev, Konstantin, Yigit, Ferruh, Tia Cassett, Wu,
	Jingjing, Wong1,  Samuel

Sure, see the attached email I just sent to the users@dpdk.org list.

--Mike


On 10/10/19, 2:23 PM, "Christensen, ChadX M" <chadx.m.christensen@intel.com> wrote:

    [EXTERNAL SENDER]
    
    Hi Xiao,
    The Rx side is working fine for Mike at this time, but now the Tx side has an issue.
    
    Hi Mike,
    Please describe the issue on the Tx side and let's see if Xiao can help with that as well.
    
    Thanks,
    
    Chad Christensen | Ecosystem Enablement Manager
    chadx.m.christensen@intel.com | (801) 786-5703
    
    -----Original Message-----
    From: Mike DeVico <mdevico@xcom-labs.com>
    Sent: Friday, September 20, 2019 3:57 PM
    To: Zhang, Xiao <xiao.zhang@intel.com>
    Cc: Christensen, ChadX M <chadx.m.christensen@intel.com>; Thomas Monjalon <thomas@monjalon.net>; users@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>; Tia Cassett <tiac@xcom-labs.com>; Wu, Jingjing <jingjing.wu@intel.com>; Wong1, Samuel <samuel.wong1@intel.com>
    Subject: Re: [dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
    
    I figured it out!!
    
    All I needed to do was change the rss_hf from:
    
    eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
                                            ETH_RSS_UDP |
                                            ETH_RSS_TCP |
                                            ETH_RSS_SCTP;
    
    to simply:
    
    eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_L2_PAYLOAD;
    
    And now I see:
    
    Pool 0: 0 0 0 0 0 0 0 0
    Pool 1: 0 0 16 0 0 0 0 0
    Pool 2: 0 0 0 0 0 0 0 0
    Pool 3: 0 0 0 0 0 0 0 0
    Pool 4: 0 0 0 0 0 0 0 0
    Pool 5: 0 0 0 0 0 0 0 0
    Pool 6: 0 0 0 0 0 0 0 0
    Pool 7: 0 0 0 0 0 0 0 0
    Pool 8: 0 0 0 0 0 0 0 0
    Pool 9: 0 0 0 0 0 0 0 0
    Pool 10: 0 0 0 0 0 0 0 0
    Pool 11: 0 0 0 0 0 0 0 0
    Pool 12: 0 0 0 0 0 0 0 0
    Pool 13: 0 0 0 0 0 0 0 0
    Pool 14: 0 0 0 0 0 0 0 0
    Pool 15: 0 0 0 0 0 0 0 0
    Finished handling signal 1
    
    Which is exactly how it should be!!!
    
    So in summary, we definitely need to enable RSS, but we also need to set rss_hf to just ETH_RSS_L2_PAYLOAD so that the hash completely ignores any L3 fields.
    
    --Mike
    
    
    On 9/19/19, 6:34 AM, "users on behalf of Mike DeVico" <users-bounces@dpdk.org on behalf of mdevico@xcom-labs.com> wrote:
    
        [EXTERNAL SENDER]
    
        Hi Xiao,
    
        Thanks for looking into this!
    
        So here’s the situation...
    
        This is a raw Ethernet packet with no IP header. This
        exact setup works fine with an 82599ES.
        It looks like the hardware limitation with
        the X710 is the real problem. If we have to
        enable RSS to make it work, and RSS requires a valid IP addr/port, then it's a catch-22 for us unless there is something we can change in the driver to account for this.
    
        Thanks!
        —Mike
    
        > On Sep 18, 2019, at 7:52 PM, Zhang, Xiao <xiao.zhang@intel.com> wrote:
        >
        > [EXTERNAL SENDER]
        >
        >> -----Original Message-----
        >> From: Mike DeVico [mailto:mdevico@xcom-labs.com]
        >> Sent: Thursday, September 19, 2019 9:23 AM
        >> To: Christensen, ChadX M <chadx.m.christensen@intel.com>; Zhang, Xiao
        >> <xiao.zhang@intel.com>; Thomas Monjalon <thomas@monjalon.net>
        >> Cc: users@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Zhang, Qi Z
        >> <qi.z.zhang@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>;
        >> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yigit, Ferruh
        >> <ferruh.yigit@intel.com>; Tia Cassett <tiac@xcom-labs.com>; Wu, Jingjing
        >> <jingjing.wu@intel.com>; Wong1, Samuel <samuel.wong1@intel.com>
        >> Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
        >>
        >> As suggested I tried the following:
        >>
        >> I have an Intel FlexRAN FerryBridge broadcasting a packet 1/s which looks like
        >> the following (sudo tcpdump -i p7p1 -xx):
        >>
        >>        0x0000:  ffff ffff ffff 0000 aeae 0000 8100 4001
        >>        0x0010:  0800 0009 0000 0000 0001 8086 3600 010f
        >>        0x0020:  0000 0000 0000 0000 0000 0000 0000 0000
        >>        0x0030:  0000 0000 0000 0000 0000 0000
        >
        > There is an error in the packets, as I checked with Wireshark; could you try with normal packets?
        >
        > No issue with following packet as I tried:
        > 0000   ff ff ff ff ff ff 00 40 05 40 ef 24 81 00 40 01
        > 0010   08 00 45 00 00 34 3b 64 40 00 40 06 b7 9b 83 97
        > 0020   20 81 83 97 20 15 04 95 17 70 51 d4 ee 9c 51 a5
        > 0030   5b 36 80 10 7c 70 12 c7 00 00 01 01 08 0a 00 04
        > 0040   f0 d4 01 99 a3 fd
        >
        >>
        >> The first 12 bytes are the dest/src MAC address followed by the 802.1Q Header
        >> (8100 4001) If you crack this, the MS 16 bits are the TPID which is set to 8100 by
        >> the Ferrybridge.
        >> The next 16 bits (0x4001) make up the PCP bits [15:13], the DEI [12] and the VID
        >> [11:0]. So if you crack the 0x4001 this makes the PCP 2 (010b), the DEI 0 and VID
        >> 1 (000000000001b).
        >>
        >> Given this I expect the packets to be placed in Pool 1/Queue 2 (based on VID 1
        >> and PCP 2).
        >> However, when I run:
        >>
        >> ./vmdq_dcb_app -w 0000:05:00.0 -w 0000:05:00.1 -l 1 -- -p 3 --nb-pools 16 --nb-
        >> tcs 8 --enable-rss
        >> EAL: Detected 24 lcore(s)
        >> EAL: Detected 2 NUMA nodes
        >> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
        >> EAL: Probing VFIO support...
        >> EAL: PCI device 0000:05:00.0 on NUMA socket 0
        >> EAL:   probe driver: 8086:1572 net_i40e
        >> EAL: PCI device 0000:05:00.1 on NUMA socket 0
        >> EAL:   probe driver: 8086:1572 net_i40e
        >> vmdq queue base: 64 pool base 1
        >> Configured vmdq pool num: 16, each vmdq pool has 8 queues
        >> Port 0 modified RSS hash function based on hardware support,requested:0x3bffc configured:0x3ef8
        >> Port 0 MAC: e8 ea 6a 27 b5 4d
        >> Port 0 vmdq pool 0 set mac 52:54:00:12:00:00
        >> Port 0 vmdq pool 1 set mac 52:54:00:12:00:01
        >> Port 0 vmdq pool 2 set mac 52:54:00:12:00:02
        >> Port 0 vmdq pool 3 set mac 52:54:00:12:00:03
        >> Port 0 vmdq pool 4 set mac 52:54:00:12:00:04
        >> Port 0 vmdq pool 5 set mac 52:54:00:12:00:05
        >> Port 0 vmdq pool 6 set mac 52:54:00:12:00:06
        >> Port 0 vmdq pool 7 set mac 52:54:00:12:00:07
        >> Port 0 vmdq pool 8 set mac 52:54:00:12:00:08
        >> Port 0 vmdq pool 9 set mac 52:54:00:12:00:09
        >> Port 0 vmdq pool 10 set mac 52:54:00:12:00:0a
        >> Port 0 vmdq pool 11 set mac 52:54:00:12:00:0b
        >> Port 0 vmdq pool 12 set mac 52:54:00:12:00:0c
        >> Port 0 vmdq pool 13 set mac 52:54:00:12:00:0d
        >> Port 0 vmdq pool 14 set mac 52:54:00:12:00:0e
        >> Port 0 vmdq pool 15 set mac 52:54:00:12:00:0f
        >> vmdq queue base: 64 pool base 1
        >> Configured vmdq pool num: 16, each vmdq pool has 8 queues
        >> Port 1 modified RSS hash function based on hardware support,requested:0x3bffc configured:0x3ef8
        >> Port 1 MAC: e8 ea 6a 27 b5 4e
        >> Port 1 vmdq pool 0 set mac 52:54:00:12:01:00
        >> Port 1 vmdq pool 1 set mac 52:54:00:12:01:01
        >> Port 1 vmdq pool 2 set mac 52:54:00:12:01:02
        >> Port 1 vmdq pool 3 set mac 52:54:00:12:01:03
        >> Port 1 vmdq pool 4 set mac 52:54:00:12:01:04
        >> Port 1 vmdq pool 5 set mac 52:54:00:12:01:05
        >> Port 1 vmdq pool 6 set mac 52:54:00:12:01:06
        >> Port 1 vmdq pool 7 set mac 52:54:00:12:01:07
        >> Port 1 vmdq pool 8 set mac 52:54:00:12:01:08
        >> Port 1 vmdq pool 9 set mac 52:54:00:12:01:09
        >> Port 1 vmdq pool 10 set mac 52:54:00:12:01:0a
        >> Port 1 vmdq pool 11 set mac 52:54:00:12:01:0b
        >> Port 1 vmdq pool 12 set mac 52:54:00:12:01:0c
        >> Port 1 vmdq pool 13 set mac 52:54:00:12:01:0d
        >> Port 1 vmdq pool 14 set mac 52:54:00:12:01:0e
        >> Port 1 vmdq pool 15 set mac 52:54:00:12:01:0f
        >> Core 0(lcore 1) reading queues 64-191
        >>
        >> <SIGHUP>
        >>
        >> Pool 0: 0 0 0 0 0 0 0 0
        >> Pool 1: 119 0 0 0 0 0 0 0
        >> Pool 2: 0 0 0 0 0 0 0 0
        >> Pool 3: 0 0 0 0 0 0 0 0
        >> Pool 4: 0 0 0 0 0 0 0 0
        >> Pool 5: 0 0 0 0 0 0 0 0
        >> Pool 6: 0 0 0 0 0 0 0 0
        >> Pool 7: 0 0 0 0 0 0 0 0
        >> Pool 8: 0 0 0 0 0 0 0 0
        >> Pool 9: 0 0 0 0 0 0 0 0
        >> Pool 10: 0 0 0 0 0 0 0 0
        >> Pool 11: 0 0 0 0 0 0 0 0
        >> Pool 12: 0 0 0 0 0 0 0 0
        >> Pool 13: 0 0 0 0 0 0 0 0
        >> Pool 14: 0 0 0 0 0 0 0 0
        >> Pool 15: 0 0 0 0 0 0 0 0
        >>
        >> Even with --enable-rss, the packets are still being placed in VLAN Pool 1/Queue 0
        >> instead of VLAN Pool 1/Queue 2.
        >>
        >> As I mentioned in my original email, if I use an 82599ES (dual 10G NIC), it all
        >> works as expected.
        >>
        >> What am I missing?
        >> --Mike
        >>
        >> On 9/18/19, 7:54 AM, "Christensen, ChadX M" <chadx.m.christensen@intel.com>
        >> wrote:
        >>
        >>    [EXTERNAL SENDER]
        >>
        >>    Hi Mike,
        >>
        >>    Did that resolve it?
        >>
        >>    Thanks,
        >>
        >>    Chad Christensen | Ecosystem Enablement Manager
        >>    chadx.m.christensen@intel.com | (801) 786-5703
        >>
        >>    -----Original Message-----
        >>    From: Mike DeVico <mdevico@xcom-labs.com>
        >>    Sent: Wednesday, September 18, 2019 8:17 AM
        >>    To: Zhang, Xiao <xiao.zhang@intel.com>; Thomas Monjalon
        >> <thomas@monjalon.net>
        >>    Cc: users@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Zhang, Qi Z
        >> <qi.z.zhang@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>;
        >> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yigit, Ferruh
        >> <ferruh.yigit@intel.com>; Christensen, ChadX M
        >> <chadx.m.christensen@intel.com>; Tia Cassett <tiac@xcom-labs.com>; Wu,
        >> Jingjing <jingjing.wu@intel.com>; Wong1, Samuel <samuel.wong1@intel.com>
        >>    Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
        >>
        >>    Sure enough, I see it now. I'll give it a try.
        >>
        >>    Thanks!!!
        >>    --Mike
        >>
        >>    On 9/18/19, 12:11 AM, "Zhang, Xiao" <xiao.zhang@intel.com> wrote:
        >>
        >>        [EXTERNAL SENDER]
        >>
        >>> -----Original Message-----
        >>> From: Thomas Monjalon [mailto:thomas@monjalon.net]
        >>> Sent: Wednesday, September 18, 2019 3:03 PM
        >>> To: Zhang, Xiao <xiao.zhang@intel.com>
        >>> Cc: Mike DeVico <mdevico@xcom-labs.com>; users@dpdk.org; Xing,
        >> Beilei
        >>> <beilei.xing@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Richardson,
        >> Bruce
        >>> <bruce.richardson@intel.com>; Ananyev, Konstantin
        >>> <konstantin.ananyev@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>;
        >>> Christensen, ChadX M <chadx.m.christensen@intel.com>; Tia Cassett
        >>> <tiac@xcom-labs.com>; Wu, Jingjing <jingjing.wu@intel.com>
        >>> Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
        >>>
        >>> 18/09/2019 09:02, Zhang, Xiao:
        >>>>
        >>>> There is some hardware limitation and need to enable RSS to distribute
        >>> packets for X710.
        >>>
        >>> Is this limitation documented?
        >>
        >>        Yes, it's documented in doc/guides/nics/i40e.rst
        >>
        >>        "DCB works only when RSS is enabled."
        >>
        >>>
        >>
        >>
        >>
        >>
        >
    
    
    


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
  2019-09-20 21:57 Mike DeVico
@ 2019-10-10 21:23 ` Christensen, ChadX M
  2019-10-10 21:25   ` Mike DeVico
  0 siblings, 1 reply; 20+ messages in thread
From: Christensen, ChadX M @ 2019-10-10 21:23 UTC (permalink / raw)
  To: Mike DeVico, Zhang, Xiao
  Cc: Thomas Monjalon, users, Xing, Beilei, Zhang, Qi Z, Richardson,
	Bruce, Ananyev, Konstantin, Yigit, Ferruh, Tia Cassett, Wu,
	Jingjing, Wong1, Samuel

Hi Xiao,
The Rx side is now working for Mike, but the Tx side has an issue.

Hi Mike,
Please describe the issue on the Tx side and let's see if Xiao can help with that as well.

Thanks,

Chad Christensen | Ecosystem Enablement Manager
chadx.m.christensen@intel.com | (801) 786-5703

-----Original Message-----
From: Mike DeVico <mdevico@xcom-labs.com> 
Sent: Friday, September 20, 2019 3:57 PM
To: Zhang, Xiao <xiao.zhang@intel.com>
Cc: Christensen, ChadX M <chadx.m.christensen@intel.com>; Thomas Monjalon <thomas@monjalon.net>; users@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>; Tia Cassett <tiac@xcom-labs.com>; Wu, Jingjing <jingjing.wu@intel.com>; Wong1, Samuel <samuel.wong1@intel.com>
Subject: Re: [dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC

I figured it out!!

All I needed to do was change the rss_hf from:

eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
                                        ETH_RSS_UDP |
                                        ETH_RSS_TCP |
                                        ETH_RSS_SCTP;

to simply:

eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_L2_PAYLOAD;

And now I see:

Pool 0: 0 0 0 0 0 0 0 0
Pool 1: 0 0 16 0 0 0 0 0
Pool 2: 0 0 0 0 0 0 0 0
Pool 3: 0 0 0 0 0 0 0 0
Pool 4: 0 0 0 0 0 0 0 0
Pool 5: 0 0 0 0 0 0 0 0
Pool 6: 0 0 0 0 0 0 0 0
Pool 7: 0 0 0 0 0 0 0 0
Pool 8: 0 0 0 0 0 0 0 0
Pool 9: 0 0 0 0 0 0 0 0
Pool 10: 0 0 0 0 0 0 0 0
Pool 11: 0 0 0 0 0 0 0 0
Pool 12: 0 0 0 0 0 0 0 0
Pool 13: 0 0 0 0 0 0 0 0
Pool 14: 0 0 0 0 0 0 0 0
Pool 15: 0 0 0 0 0 0 0 0
Finished handling signal 1

Which is exactly how it should be!!!

So in summary, we definitely need to enable RSS, but we also need to set rss_hf to simply ETH_RSS_L2_PAYLOAD so that it completely ignores any L3 fields.
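For reference, a minimal sketch of what that configuration might look like. This is only illustrative: it mirrors the vmdq_dcb example shipped with DPDK 18.08, and the helper function name here is my own, not part of the example.

```c
/* Sketch only (DPDK 18.08 API names; later releases rename these
 * enums/flags). Combines VMDq + DCB with RSS, hashing only over the
 * L2 payload so that raw, non-IP Ethernet frames are still steered
 * by VLAN ID (pool) and PCP (traffic class), with L3/L4 fields
 * ignored entirely. */
static void
configure_vmdq_dcb_l2_rss(struct rte_eth_conf *eth_conf)
{
    /* VMDq + DCB on Rx, with RSS enabled (required on X710). */
    eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;
    eth_conf->txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;

    /* Hash only the L2 payload: no dependence on IP/TCP/UDP fields. */
    eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_L2_PAYLOAD;
}
```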

--Mike


On 9/19/19, 6:34 AM, "users on behalf of Mike DeVico" <users-bounces@dpdk.org on behalf of mdevico@xcom-labs.com> wrote:

    [EXTERNAL SENDER]
    
    Hi Xiao,
    
    Thanks for looking into this!
    
    So here’s the situation...
    
    This is a raw Ethernet packet. No IP. This
    exact setup works fine with an 82599ES.
    It looks like the hardware limitation with
    the x710 is the real problem. If we have to
    enable RSS to make it work and RSS requires a valid IP addr/port, then it’s a catch-22 for us unless there is something we can change in the driver to account for this.
    
    Thanks!
    —Mike
    
    > On Sep 18, 2019, at 7:52 PM, Zhang, Xiao <xiao.zhang@intel.com> wrote:
    >
    > [EXTERNAL SENDER]
    >
    >> -----Original Message-----
    >> From: Mike DeVico [mailto:mdevico@xcom-labs.com]
    >> Sent: Thursday, September 19, 2019 9:23 AM
    >> To: Christensen, ChadX M <chadx.m.christensen@intel.com>; Zhang, Xiao
    >> <xiao.zhang@intel.com>; Thomas Monjalon <thomas@monjalon.net>
    >> Cc: users@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Zhang, Qi Z
    >> <qi.z.zhang@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>;
    >> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yigit, Ferruh
    >> <ferruh.yigit@intel.com>; Tia Cassett <tiac@xcom-labs.com>; Wu, Jingjing
    >> <jingjing.wu@intel.com>; Wong1, Samuel <samuel.wong1@intel.com>
    >> Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
    >>
    >> As suggested I tried the following:
    >>
    >> I have an Intel FlexRAN FerryBridge broadcasting a packet 1/s which looks like
    >> the following (sudo tcpdump -i p7p1 -xx):
    >>
    >>        0x0000:  ffff ffff ffff 0000 aeae 0000 8100 4001
    >>        0x0010:  0800 0009 0000 0000 0001 8086 3600 010f
    >>        0x0020:  0000 0000 0000 0000 0000 0000 0000 0000
    >>        0x0030:  0000 0000 0000 0000 0000 0000
    >
    > There is an error in the packets, as I checked with Wireshark; could you try with normal packets?
    >
    > No issue with following packet as I tried:
    > 0000   ff ff ff ff ff ff 00 40 05 40 ef 24 81 00 40 01
    > 0010   08 00 45 00 00 34 3b 64 40 00 40 06 b7 9b 83 97
    > 0020   20 81 83 97 20 15 04 95 17 70 51 d4 ee 9c 51 a5
    > 0030   5b 36 80 10 7c 70 12 c7 00 00 01 01 08 0a 00 04
    > 0040   f0 d4 01 99 a3 fd
    >
    >>
    >> The first 12 bytes are the dest/src MAC address followed by the 802.1Q Header
    >> (8100 4001) If you crack this, the MS 16 bits are the TPID which is set to 8100 by
    >> the Ferrybridge.
    >> The next 16 bits (0x4001) make up the PCP bits [15:13], the DEI [12] and the VID
    >> [11:0]. So if you crack the 0x4001 this makes the PCP 2 (010b), the DEI 0 and VID
    >> 1 (000000000001b).
    >>
    >> Given this I expect the packets to be placed in Pool 1/Queue 2 (based on VID 1
    >> and PCP 2).
    >> However, when I run:
    >>
    >> ./vmdq_dcb_app -w 0000:05:00.0 -w 0000:05:00.1 -l 1 -- -p 3 --nb-pools 16 --nb-
    >> tcs 8 --enable-rss
    >> EAL: Detected 24 lcore(s)
    >> EAL: Detected 2 NUMA nodes
    >> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
    >> EAL: Probing VFIO support...
    >> EAL: PCI device 0000:05:00.0 on NUMA socket 0
    >> EAL:   probe driver: 8086:1572 net_i40e
    >> EAL: PCI device 0000:05:00.1 on NUMA socket 0
    >> EAL:   probe driver: 8086:1572 net_i40e
    >> vmdq queue base: 64 pool base 1
    >> Configured vmdq pool num: 16, each vmdq pool has 8 queues
    >> Port 0 modified RSS hash function based on hardware support,requested:0x3bffc configured:0x3ef8
    >> Port 0 MAC: e8 ea 6a 27 b5 4d
    >> Port 0 vmdq pool 0 set mac 52:54:00:12:00:00
    >> Port 0 vmdq pool 1 set mac 52:54:00:12:00:01
    >> Port 0 vmdq pool 2 set mac 52:54:00:12:00:02
    >> Port 0 vmdq pool 3 set mac 52:54:00:12:00:03
    >> Port 0 vmdq pool 4 set mac 52:54:00:12:00:04
    >> Port 0 vmdq pool 5 set mac 52:54:00:12:00:05
    >> Port 0 vmdq pool 6 set mac 52:54:00:12:00:06
    >> Port 0 vmdq pool 7 set mac 52:54:00:12:00:07
    >> Port 0 vmdq pool 8 set mac 52:54:00:12:00:08
    >> Port 0 vmdq pool 9 set mac 52:54:00:12:00:09
    >> Port 0 vmdq pool 10 set mac 52:54:00:12:00:0a
    >> Port 0 vmdq pool 11 set mac 52:54:00:12:00:0b
    >> Port 0 vmdq pool 12 set mac 52:54:00:12:00:0c
    >> Port 0 vmdq pool 13 set mac 52:54:00:12:00:0d
    >> Port 0 vmdq pool 14 set mac 52:54:00:12:00:0e
    >> Port 0 vmdq pool 15 set mac 52:54:00:12:00:0f
    >> vmdq queue base: 64 pool base 1
    >> Configured vmdq pool num: 16, each vmdq pool has 8 queues
    >> Port 1 modified RSS hash function based on hardware support,requested:0x3bffc configured:0x3ef8
    >> Port 1 MAC: e8 ea 6a 27 b5 4e
    >> Port 1 vmdq pool 0 set mac 52:54:00:12:01:00
    >> Port 1 vmdq pool 1 set mac 52:54:00:12:01:01
    >> Port 1 vmdq pool 2 set mac 52:54:00:12:01:02
    >> Port 1 vmdq pool 3 set mac 52:54:00:12:01:03
    >> Port 1 vmdq pool 4 set mac 52:54:00:12:01:04
    >> Port 1 vmdq pool 5 set mac 52:54:00:12:01:05
    >> Port 1 vmdq pool 6 set mac 52:54:00:12:01:06
    >> Port 1 vmdq pool 7 set mac 52:54:00:12:01:07
    >> Port 1 vmdq pool 8 set mac 52:54:00:12:01:08
    >> Port 1 vmdq pool 9 set mac 52:54:00:12:01:09
    >> Port 1 vmdq pool 10 set mac 52:54:00:12:01:0a
    >> Port 1 vmdq pool 11 set mac 52:54:00:12:01:0b
    >> Port 1 vmdq pool 12 set mac 52:54:00:12:01:0c
    >> Port 1 vmdq pool 13 set mac 52:54:00:12:01:0d
    >> Port 1 vmdq pool 14 set mac 52:54:00:12:01:0e
    >> Port 1 vmdq pool 15 set mac 52:54:00:12:01:0f
    >> Core 0(lcore 1) reading queues 64-191
    >>
    >> <SIGHUP>
    >>
    >> Pool 0: 0 0 0 0 0 0 0 0
    >> Pool 1: 119 0 0 0 0 0 0 0
    >> Pool 2: 0 0 0 0 0 0 0 0
    >> Pool 3: 0 0 0 0 0 0 0 0
    >> Pool 4: 0 0 0 0 0 0 0 0
    >> Pool 5: 0 0 0 0 0 0 0 0
    >> Pool 6: 0 0 0 0 0 0 0 0
    >> Pool 7: 0 0 0 0 0 0 0 0
    >> Pool 8: 0 0 0 0 0 0 0 0
    >> Pool 9: 0 0 0 0 0 0 0 0
    >> Pool 10: 0 0 0 0 0 0 0 0
    >> Pool 11: 0 0 0 0 0 0 0 0
    >> Pool 12: 0 0 0 0 0 0 0 0
    >> Pool 13: 0 0 0 0 0 0 0 0
    >> Pool 14: 0 0 0 0 0 0 0 0
    >> Pool 15: 0 0 0 0 0 0 0 0
    >>
    >> Even with --enable-rss, the packets are still being placed in VLAN Pool 1/Queue 0
    >> instead of VLAN Pool 1/Queue 2.
    >>
    >> As I mentioned in my original email, if I use an 82599ES (dual 10G NIC), it all
    >> works as expected.
    >>
    >> What am I missing?
    >> --Mike
    >>
    >> On 9/18/19, 7:54 AM, "Christensen, ChadX M" <chadx.m.christensen@intel.com>
    >> wrote:
    >>
    >>    [EXTERNAL SENDER]
    >>
    >>    Hi Mike,
    >>
    >>    Did that resolve it?
    >>
    >>    Thanks,
    >>
    >>    Chad Christensen | Ecosystem Enablement Manager
    >>    chadx.m.christensen@intel.com | (801) 786-5703
    >>
    >>    -----Original Message-----
    >>    From: Mike DeVico <mdevico@xcom-labs.com>
    >>    Sent: Wednesday, September 18, 2019 8:17 AM
    >>    To: Zhang, Xiao <xiao.zhang@intel.com>; Thomas Monjalon
    >> <thomas@monjalon.net>
    >>    Cc: users@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Zhang, Qi Z
    >> <qi.z.zhang@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>;
    >> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yigit, Ferruh
    >> <ferruh.yigit@intel.com>; Christensen, ChadX M
    >> <chadx.m.christensen@intel.com>; Tia Cassett <tiac@xcom-labs.com>; Wu,
    >> Jingjing <jingjing.wu@intel.com>; Wong1, Samuel <samuel.wong1@intel.com>
    >>    Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
    >>
    >>    Sure enough, I see it now. I'll give it a try.
    >>
    >>    Thanks!!!
    >>    --Mike
    >>
    >>    On 9/18/19, 12:11 AM, "Zhang, Xiao" <xiao.zhang@intel.com> wrote:
    >>
    >>        [EXTERNAL SENDER]
    >>
    >>> -----Original Message-----
    >>> From: Thomas Monjalon [mailto:thomas@monjalon.net]
    >>> Sent: Wednesday, September 18, 2019 3:03 PM
    >>> To: Zhang, Xiao <xiao.zhang@intel.com>
    >>> Cc: Mike DeVico <mdevico@xcom-labs.com>; users@dpdk.org; Xing,
    >> Beilei
    >>> <beilei.xing@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Richardson,
    >> Bruce
    >>> <bruce.richardson@intel.com>; Ananyev, Konstantin
    >>> <konstantin.ananyev@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>;
    >>> Christensen, ChadX M <chadx.m.christensen@intel.com>; Tia Cassett
    >>> <tiac@xcom-labs.com>; Wu, Jingjing <jingjing.wu@intel.com>
    >>> Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
    >>>
    >>> 18/09/2019 09:02, Zhang, Xiao:
    >>>>
    >>>> There is some hardware limitation and need to enable RSS to distribute
    >>> packets for X710.
    >>>
    >>> Is this limitation documented?
    >>
    >>        Yes, it's documented in doc/guides/nics/i40e.rst
    >>
    >>        "DCB works only when RSS is enabled."
    >>
    >>>
    >>
    >>
    >>
    >>
    >
    


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
  2019-09-30  2:21             ` Zhang, Helin
@ 2019-10-03 23:56               ` Mike DeVico
  0 siblings, 0 replies; 20+ messages in thread
From: Mike DeVico @ 2019-10-03 23:56 UTC (permalink / raw)
  To: Zhang, Helin, Jeff Weeks, Thomas Monjalon
  Cc: users, Xing, Beilei, Zhang, Qi Z, Richardson, Bruce, Ananyev,
	Konstantin, Yigit, Ferruh, Zhang, Xiao, Wong1, Samuel,
	Kobi Cohen-Arazi

I set up a basic loopback test by connecting two ports of the same X710 NIC with a 10G passive copper cable.

In my case, the two ports map to PCI addresses 0000:82:00.0 and 0000:82:00.1. 0000:82:00.0 is bound to igb_uio, and .1 is left bound to the kernel driver (i40e), which maps to the Linux network interface p4p2.

I then created a simple txtest app (see attached), based on the vmdq_dcb example app, that sends out a fixed packet once per second over a specified port. It also prints the packet each time it sends it.

I run txtest as follows:

sudo ./txtest -w 0000:82:00.0 -c 0x3



TX Packet at [0x7fa407c2ee80], len=60
00000000: 00 00 AE AE 00 00 E8 EA 6A 27 B8 F9 81 00 40 01 | ........j'....@.
00000010: 08 00 00 08 00 00 00 00 AA BB CC DD 00 00 00 00 | ................
00000020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................
00000030: 00 00 00 00 00 00 00 00 00 00 00 00 |  |  |  |  | ............

...



Note the 0x40 byte at offset 0x0e, immediately after the 0x8100 TPID: it is the high byte of the TCI and carries the PCP field of the VLAN header.

I then run tcpdump on p4p2 (the receiving end of the loopback) as shown:

sudo tcpdump -xxi p4p2



19:02:08.124694 IP0
        0x0000:  0000 aeae 0000 e8ea 6a27 b8f9 8100 0001
        0x0010:  0800 0008 0000 0000 aabb ccdd 0000 0000
        0x0020:  0000 0000 0000 0000 0000 0000 0000 0000
        0x0030:  0000 0000 0000 0000 0000 0000

...



Once again, note the PCP byte: the TCI on the wire is 0x0001 rather than 0x4001. The source of my problem would appear to be that the PCP field is somehow getting set to 0 even though it’s being passed to rte_eth_tx_burst as 0x40.

Here’s the section of the code in my main.c.



static int
lcore_main(void *arg)
{
    uint32_t lcore_id = rte_lcore_id();
    struct rte_mempool *mbuf_pool = (struct rte_mempool *)arg;

    RTE_LOG(INFO, TXTEST, "tx entering main loop on lcore %u\n", lcore_id);

    static const uint8_t tx_data[] = {
        0x00, 0x00, 0xAE, 0xAE, 0x00, 0x00, 0xE8, 0xEA, 0x6A, 0x27, 0xB8, 0xF9, 0x81, 0x00, 0x40, 0x01,
        0x08, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x00, 0xAA, 0xBB, 0xCC, 0xDD, 0x00, 0x00, 0x00, 0x00,
        0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
        0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
    };

    struct rte_mbuf *tx_pkt      = rte_pktmbuf_alloc(mbuf_pool);
    uint8_t         *tx_pkt_dptr = (uint8_t *)rte_pktmbuf_append(tx_pkt, sizeof(tx_data));
    memcpy(tx_pkt_dptr, tx_data, sizeof(tx_data));
    const uint8_t *dptr = rte_pktmbuf_mtod(tx_pkt, uint8_t *);

    while (!force_quit) {
        /* rte_eth_tx_burst() returns the number of packets queued for
         * transmit (and takes ownership of the mbuf on success). */
        uint16_t nb_tx = rte_eth_tx_burst(0, 0, &tx_pkt, 1);
        if (nb_tx == 1) {
            rte_hexdump(stdout, "TX Packet", dptr,
                        rte_pktmbuf_data_len(tx_pkt));
        }
        sleep(1);
    }
    return 0;
}



As can be seen, the packet is fixed and it is dumped immediately after being sent, so why is the PCP field going out over the wire as a 0?



For reference, I’m using DPDK version 18.08, and the only config change is to set CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=8 (from the default 4). I did this because our actual application requires the RX side to be configured with 16 pools of 8 TC queues each, and I wanted to resemble the target config as closely as possible.
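One avenue that may be worth ruling out (my speculation, not something established in this thread): if the port is configured for hardware VLAN insertion, the NIC builds the tag from the mbuf metadata rather than from the tag bytes embedded in the payload. A sketch of tagging via mbuf fields, assuming DEV_TX_OFFLOAD_VLAN_INSERT is enabled in the port's txmode.offloads (DPDK 18.08 flag names; later releases rename them):

```c
/* Speculative sketch: ask the NIC to insert the 802.1Q tag itself,
 * rather than embedding 81 00 40 01 in the payload bytes. */
tx_pkt->ol_flags |= PKT_TX_VLAN_PKT;
tx_pkt->vlan_tci  = (2 << 13) | 1;   /* PCP = 2, DEI = 0, VID = 1 -> 0x4001 */
```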



Any insight/assistance would be greatly appreciated.

Thank you in advance.
--Mike



On 9/29/19, 7:22 PM, "Zhang, Helin" <helin.zhang@intel.com> wrote:



    [EXTERNAL SENDER]

    Hi Mike,

    You need to try to reproduce your issue with the example applications (e.g. testpmd), then send the detailed information to the maintainers. Thanks!

    Regards,
    Helin



    -----Original Message-----
    From: Mike DeVico [mailto:mdevico@xcom-labs.com]
    Sent: Thursday, September 26, 2019 9:32 PM
    To: Zhang, Helin <helin.zhang@intel.com>; Jeff Weeks <jweeks@sandvine.com>; Thomas Monjalon <thomas@monjalon.net>
    Cc: users@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>; Zhang, Xiao <xiao.zhang@intel.com>; Wong1, Samuel <samuel.wong1@intel.com>
    Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC



    Hi Helin,

    Yes, and the reason the RX packets were not being queued to the proper queue was that RSS was not enabled/configured. Once I did this, the RX packets were placed in the proper queue.

    That being said, the TX side now seems to have an issue. The way it works is that the Ferrybridge broadcasts a "Present" packet at 1s intervals. Once the host application detects the packet (which it now does), it verifies that the packet is correctly formatted and then sends a packet back to the Ferrybridge to tell it to stop sending this packet. However, that TX packet apparently is not going out, because I continue to receive Present packets from the Ferrybridge at the 1s interval.

    What's not clear to me is which queue I should be sending this packet to. I actually tried sending it out all 128 queues, but I still keep receiving the Present packet. What I lack is the ability to actually sniff what's going out over the wire.

    Any ideas how to approach this issue?

    Thanks in advance,
    --Mike



    On 9/26/19, 9:02 AM, "Zhang, Helin" <helin.zhang@intel.com> wrote:

        [EXTERNAL SENDER]

        Hi Mike,

        Can you check whether you are using the right combination of DPDK version, NIC firmware, and kernel driver (if applicable)?
        You can find the recommended combinations at http://doc.dpdk.org/guides/nics/i40e.html#recommended-matching-list. Hopefully that helps!

        Regards,
        Helin



        > -----Original Message-----
        > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Mike DeVico
        > Sent: Friday, September 13, 2019 2:10 AM
        > To: Jeff Weeks; Thomas Monjalon
        > Cc: dev@dpdk.org; Xing, Beilei; Zhang, Qi Z; Richardson, Bruce; Ananyev,
        > Konstantin; Yigit, Ferruh
        > Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
        >
        > Hi Jeff,
        >
        > Thanks for chiming in...
        >
        > Yeah, in my case I get the packets, but they end up being put in queue 0
        > instead of 2.
        >
        > --Mike
        >
        > From: Jeff Weeks <jweeks@sandvine.com>
        > Date: Thursday, September 12, 2019 at 10:47 AM
        > To: Mike DeVico <mdevico@xcom-labs.com>, Thomas Monjalon
        > <thomas@monjalon.net>
        > Cc: "dev@dpdk.org" <dev@dpdk.org>, Beilei Xing <beilei.xing@intel.com>, Qi
        > Zhang <qi.z.zhang@intel.com>, Bruce Richardson
        > <bruce.richardson@intel.com>, Konstantin Ananyev
        > <konstantin.ananyev@intel.com>, "ferruh.yigit@intel.com"
        > <ferruh.yigit@intel.com>
        > Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
        >
        > [EXTERNAL SENDER]
        >
        > I don't have much else to add, except that I also see DCB fail on the same NIC:
        >
        >   i40e_dcb_init_configure(): default dcb config fails. err = -53, aq_err = 3.
        >
        > My card doesn't receive any packets, though; not sure if it's related to this, or
        > not.
        >
        > --Jeff
        >
        > ________________________________
        > /dev/jeff_weeks.x2936
        > Sandvine Incorporated
        >
        > ________________________________
        > From: dev <dev-bounces@dpdk.org> on behalf of Mike DeVico
        > <mdevico@xcom-labs.com>
        > Sent: Thursday, September 12, 2019 1:06 PM
        > To: Thomas Monjalon
        > Cc: dev@dpdk.org; Beilei Xing; Qi Zhang; Bruce Richardson; Konstantin
        > Ananyev; ferruh.yigit@intel.com
        > Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
        >
        > [EXTERNAL]
        >
        > Still no hits...
        >
        > --Mike
        >
        > On 9/9/19, 1:39 PM, "Thomas Monjalon" <thomas@monjalon.net> wrote:
        >
        >     [EXTERNAL SENDER]
        >
        >     Adding i40e maintainers and a few more.
        >
        >     07/09/2019 01:11, Mike DeVico:
        >     > Hello,
        >     >
        >     > I am having an issue getting the DCB feature to work with an Intel
        >     > X710 Quad SFP+ NIC.
        >     >
        >     > Here’s my setup:
        >     >
        >     > 1.      DPDK 18.08 built with the following I40E configs:
        >     >
        >     > CONFIG_RTE_LIBRTE_I40E_PMD=y
        >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_RX=n
        >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_TX=n
        >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_TX_FREE=n
        >     > CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
        >     > CONFIG_RTE_LIBRTE_I40E_INC_VECTOR=y
        >     > CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
        >     > CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF=64
        >     > CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=8
        >     >
        >     > 2.      /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net
        >     >
        >     > Network devices using DPDK-compatible driver
        >     > ============================================
        >     > 0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio
        > unused=i40e
        >     > 0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio
        > unused=i40e
        >     > 0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio
        > unused=i40e
        >     > 0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio
        > unused=i40e
        >     >
        >     >        Network devices using kernel driver
        >     >        ===================================
        >     >        0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0
        > drv=igb unused=igb_uio *Active*
        >     >        0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1
        > drv=igb unused=igb_uio *Active*
        >     >
        >     >        Other Network devices
        >     >        =====================
        >     >        <none>
        >     >
        >     > 3.      We have a custom FPGA board connected to port 1 of the X710 NIC
        > that’s broadcasting
        >     > a packet tagged with VLAN 1 and PCP 2.
        >     >
        >     > 4.      I use the vmdq_dcb example app and configure the card with 16
        > pools/8 queues each
        >     > as follows:
        >     >        sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
        >     >
        >     >
        >     > The app starts up fine and successfully probes the card as shown below:
        >     >
        >     > sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
        >     > EAL: Detected 80 lcore(s)
        >     > EAL: Detected 2 NUMA nodes
        >     > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
        >     > EAL: Probing VFIO support...
        >     > EAL: PCI device 0000:02:00.0 on NUMA socket 0
        >     > EAL:   probe driver: 8086:1521 net_e1000_igb
        >     > EAL: PCI device 0000:02:00.1 on NUMA socket 0
        >     > EAL:   probe driver: 8086:1521 net_e1000_igb
        >     > EAL: PCI device 0000:3b:00.0 on NUMA socket 0
        >     > EAL:   probe driver: 8086:1572 net_i40e
        >     > EAL: PCI device 0000:3b:00.1 on NUMA socket 0
        >     > EAL:   probe driver: 8086:1572 net_i40e
        >     > EAL: PCI device 0000:3b:00.2 on NUMA socket 0
        >     > EAL:   probe driver: 8086:1572 net_i40e
        >     > EAL: PCI device 0000:3b:00.3 on NUMA socket 0
        >     > EAL:   probe driver: 8086:1572 net_i40e
        >     > vmdq queue base: 64 pool base 1
        >     > Configured vmdq pool num: 16, each vmdq pool has 8 queues
        >     > Port 0 MAC: e8 ea 6a 27 b5 4d
        >     > Port 0 vmdq pool 0 set mac 52:54:00:12:00:00
        >     > Port 0 vmdq pool 1 set mac 52:54:00:12:00:01
        >     > Port 0 vmdq pool 2 set mac 52:54:00:12:00:02
        >     > Port 0 vmdq pool 3 set mac 52:54:00:12:00:03
        >     > Port 0 vmdq pool 4 set mac 52:54:00:12:00:04
        >     > Port 0 vmdq pool 5 set mac 52:54:00:12:00:05
        >     > Port 0 vmdq pool 6 set mac 52:54:00:12:00:06
        >     > Port 0 vmdq pool 7 set mac 52:54:00:12:00:07
        >     > Port 0 vmdq pool 8 set mac 52:54:00:12:00:08
        >     > Port 0 vmdq pool 9 set mac 52:54:00:12:00:09
        >     > Port 0 vmdq pool 10 set mac 52:54:00:12:00:0a
        >     > Port 0 vmdq pool 11 set mac 52:54:00:12:00:0b
        >     > Port 0 vmdq pool 12 set mac 52:54:00:12:00:0c
        >     > Port 0 vmdq pool 13 set mac 52:54:00:12:00:0d
        >     > Port 0 vmdq pool 14 set mac 52:54:00:12:00:0e
        >     > Port 0 vmdq pool 15 set mac 52:54:00:12:00:0f
        >     > vmdq queue base: 64 pool base 1
        >     > Configured vmdq pool num: 16, each vmdq pool has 8 queues
        >     > Port 1 MAC: e8 ea 6a 27 b5 4e
        >     > Port 1 vmdq pool 0 set mac 52:54:00:12:01:00
        >     > Port 1 vmdq pool 1 set mac 52:54:00:12:01:01
        >     > Port 1 vmdq pool 2 set mac 52:54:00:12:01:02
        >     > Port 1 vmdq pool 3 set mac 52:54:00:12:01:03

        >     > Port 1 vmdq pool 4 set mac 52:54:00:12:01:04

        >     > Port 1 vmdq pool 5 set mac 52:54:00:12:01:05

        >     > Port 1 vmdq pool 6 set mac 52:54:00:12:01:06

        >     > Port 1 vmdq pool 7 set mac 52:54:00:12:01:07

        >     > Port 1 vmdq pool 8 set mac 52:54:00:12:01:08

        >     > Port 1 vmdq pool 9 set mac 52:54:00:12:01:09

        >     > Port 1 vmdq pool 10 set mac 52:54:00:12:01:0a

        >     > Port 1 vmdq pool 11 set mac 52:54:00:12:01:0b

        >     > Port 1 vmdq pool 12 set mac 52:54:00:12:01:0c

        >     > Port 1 vmdq pool 13 set mac 52:54:00:12:01:0d

        >     > Port 1 vmdq pool 14 set mac 52:54:00:12:01:0e

        >     > Port 1 vmdq pool 15 set mac 52:54:00:12:01:0f

        >     >

        >     > Skipping disabled port 2

        >     >

        >     > Skipping disabled port 3

        >     > Core 0(lcore 1) reading queues 64-191

        >     >

        >     > However, when I issue the SIGHUP I see that the packets

        >     > are being put into the first queue of Pool 1 as follows:

        >     >

        >     > Pool 0: 0 0 0 0 0 0 0 0

        >     > Pool 1: 10 0 0 0 0 0 0 0

        >     > Pool 2: 0 0 0 0 0 0 0 0

        >     > Pool 3: 0 0 0 0 0 0 0 0

        >     > Pool 4: 0 0 0 0 0 0 0 0

        >     > Pool 5: 0 0 0 0 0 0 0 0

        >     > Pool 6: 0 0 0 0 0 0 0 0

        >     > Pool 7: 0 0 0 0 0 0 0 0

        >     > Pool 8: 0 0 0 0 0 0 0 0

        >     > Pool 9: 0 0 0 0 0 0 0 0

        >     > Pool 10: 0 0 0 0 0 0 0 0

        >     > Pool 11: 0 0 0 0 0 0 0 0

        >     > Pool 12: 0 0 0 0 0 0 0 0

        >     > Pool 13: 0 0 0 0 0 0 0 0

        >     > Pool 14: 0 0 0 0 0 0 0 0

        >     > Pool 15: 0 0 0 0 0 0 0 0

        >     > Finished handling signal 1

        >     >

        >     > Since the packets are being tagged with PCP 2 they should be getting

        >     > mapped to the 3rd queue of Pool 1, right?

        >     >

        >     > As a sanity check, I tried the same test using an 82599ES 2 port 10GB NIC

        > and

        >     > the packets show up in the expected queue. (Note, to get it to work I had

        >     > to modify the vmdq_dcb app to set the vmdq pool MACs to all FF’s)

        >     >

        >     > Here’s that setup:

        >     >

        >     > /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net

        >     >

        >     > Network devices using DPDK-compatible driver

        >     > ============================================

        >     > 0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'

        > drv=igb_uio unused=ixgbe

        >     > 0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'

        > drv=igb_uio unused=ixgbe

        >     >

        >     > Network devices using kernel driver

        >     > ===================================

        >     > 0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0 drv=igb

        > unused=igb_uio *Active*

        >     > 0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1 drv=igb

        > unused=igb_uio *Active*

        >     > 0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572'

        > if=enp59s0f0 drv=i40e unused=igb_uio

        >     > 0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572'

        > if=enp59s0f1 drv=i40e unused=igb_uio

        >     > 0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572'

        > if=enp59s0f2 drv=i40e unused=igb_uio

        >     > 0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572'

        > if=enp59s0f3 drv=i40e unused=igb_uio

        >     >

        >     > Other Network devices

        >     > =====================

        >     > <none>

        >     >

        >     > sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3

        >     > EAL: Detected 80 lcore(s)

        >     > EAL: Detected 2 NUMA nodes

        >     > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket

        >     > EAL: Probing VFIO support...

        >     > EAL: PCI device 0000:02:00.0 on NUMA socket 0

        >     > EAL:   probe driver: 8086:1521 net_e1000_igb

        >     > EAL: PCI device 0000:02:00.1 on NUMA socket 0

        >     > EAL:   probe driver: 8086:1521 net_e1000_igb

        >     > EAL: PCI device 0000:3b:00.0 on NUMA socket 0

        >     > EAL:   probe driver: 8086:1572 net_i40e

        >     > EAL: PCI device 0000:3b:00.1 on NUMA socket 0

        >     > EAL:   probe driver: 8086:1572 net_i40e

        >     > EAL: PCI device 0000:3b:00.2 on NUMA socket 0

        >     > EAL:   probe driver: 8086:1572 net_i40e

        >     > EAL: PCI device 0000:3b:00.3 on NUMA socket 0

        >     > EAL:   probe driver: 8086:1572 net_i40e

        >     > EAL: PCI device 0000:af:00.0 on NUMA socket 1

        >     > EAL:   probe driver: 8086:10fb net_ixgbe

        >     > EAL: PCI device 0000:af:00.1 on NUMA socket 1

        >     > EAL:   probe driver: 8086:10fb net_ixgbe

        >     > vmdq queue base: 0 pool base 0

        >     > Port 0 MAC: 00 1b 21 bf 71 24

        >     > Port 0 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff

        >     > vmdq queue base: 0 pool base 0

        >     > Port 1 MAC: 00 1b 21 bf 71 26

        >     > Port 1 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff

        >     >

        >     > Now when I send the SIGHUP, I see the packets being routed to

        >     > the expected queue:

        >     >

        >     > Pool 0: 0 0 0 0 0 0 0 0

        >     > Pool 1: 0 0 58 0 0 0 0 0

        >     > Pool 2: 0 0 0 0 0 0 0 0

        >     > Pool 3: 0 0 0 0 0 0 0 0

        >     > Pool 4: 0 0 0 0 0 0 0 0

        >     > Pool 5: 0 0 0 0 0 0 0 0

        >     > Pool 6: 0 0 0 0 0 0 0 0

        >     > Pool 7: 0 0 0 0 0 0 0 0

        >     > Pool 8: 0 0 0 0 0 0 0 0

        >     > Pool 9: 0 0 0 0 0 0 0 0

        >     > Pool 10: 0 0 0 0 0 0 0 0

        >     > Pool 11: 0 0 0 0 0 0 0 0

        >     > Pool 12: 0 0 0 0 0 0 0 0

        >     > Pool 13: 0 0 0 0 0 0 0 0

        >     > Pool 14: 0 0 0 0 0 0 0 0

        >     > Pool 15: 0 0 0 0 0 0 0 0

        >     > Finished handling signal 1

        >     >

        >     > What am I missing?

        >     >

        >     > Thank you in advance,

        >     > --Mike

        >     >

        >     >

        >

        >

        >

        >

        >








-------------- next part --------------
A non-text attachment was scrubbed...
Name: txtest.tar
Type: application/x-tar
Size: 20480 bytes
Desc: txtest.tar
URL: <http://mails.dpdk.org/archives/users/attachments/20191003/5ed5e742/attachment.tar>

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
  2019-09-26 20:31           ` Mike DeVico
@ 2019-09-30  2:21             ` Zhang, Helin
  2019-10-03 23:56               ` Mike DeVico
  0 siblings, 1 reply; 20+ messages in thread
From: Zhang, Helin @ 2019-09-30  2:21 UTC (permalink / raw)
  To: Mike DeVico, Jeff Weeks, Thomas Monjalon
  Cc: users, Xing, Beilei, Zhang, Qi Z, Richardson, Bruce, Ananyev,
	Konstantin, Yigit, Ferruh, Zhang, Xiao, Wong1, Samuel

Hi Mike

You need to try to use one of the example applications (e.g. testpmd) to reproduce your issue, then send the detailed information to the maintainers. Thanks!

Regards,
Helin

-----Original Message-----
From: Mike DeVico [mailto:mdevico@xcom-labs.com] 
Sent: Thursday, September 26, 2019 9:32 PM
To: Zhang, Helin <helin.zhang@intel.com>; Jeff Weeks <jweeks@sandvine.com>; Thomas Monjalon <thomas@monjalon.net>
Cc: users@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>; Zhang, Xiao <xiao.zhang@intel.com>; Wong1, Samuel <samuel.wong1@intel.com>
Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC

Hi Helin,

Yes, and the reason the RX packets were not being queued to the proper queue was due to RSS not being enabled/configured. Once I did this, the RX packets were placed in the proper queue.

That being said, what I see now is that the TX side seems to have an issue. So the way it works is that the Ferrybridge broadcasts out what's called a Present packet at 1s intervals. Once the host application detects the packet (which it now does) it verifies that the packet is correctly formatted and such and then sends a packet back to the Ferrybridge to tell it to stop sending this packet. However, that TX packet apparently is not going out because I continue to receive Present packets from the Ferrybridge at the 1s interval. What's not clear to me is what queue I should be sending this packet to. I actually tried sending it out all 128 queues, but I still keep receiving the Present packet. What I lack is the ability to actually sniff what's going out over the wire.

Any ideas how to approach this issue?

Thanks in advance,
--Mike
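
[Editor's note] The RSS fix Mike describes above might look roughly like the following sketch under DPDK 18.08 (illustrative only, not his actual patch; `configure_port_with_rss` and its parameters are made up for this example, and the exact mq_mode/rss_hf combination a given NIC accepts can vary):

```c
#include <rte_ethdev.h>

/* Sketch: enabling RSS alongside VMDq+DCB in the vmdq_dcb sample.
 * Without an RSS-capable mq_mode and a non-zero rss_hf, i40e may leave
 * all packets in queue 0 of the matched pool. */
static int configure_port_with_rss(uint16_t port_id,
                                   uint16_t nb_rxq, uint16_t nb_txq)
{
    struct rte_eth_conf port_conf = {0};

    /* Combine VMDq + DCB with RSS so traffic spreads across TC queues. */
    port_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;
    port_conf.rx_adv_conf.rss_conf.rss_key = NULL;       /* default key */
    port_conf.rx_adv_conf.rss_conf.rss_hf  = ETH_RSS_IP; /* hash on IP */

    /* Pool/TC mapping (vmdq_dcb_conf, dcb_rx_conf) omitted for brevity;
     * the sample fills those in its get_eth_conf() helper. */
    return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
}
```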

On 9/26/19, 9:02 AM, "Zhang, Helin" <helin.zhang@intel.com> wrote:

    [EXTERNAL SENDER]
    
    Hi Mike
    
    Can you check whether you are using the right combination of DPDK version, NIC firmware, and kernel driver (if you are using one)?
    You can find the recommended combinations at http://doc.dpdk.org/guides/nics/i40e.html#recommended-matching-list. Hopefully that helps!
    
    Regards,
    Helin
    
    > -----Original Message-----
    > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Mike DeVico
    > Sent: Friday, September 13, 2019 2:10 AM
    > To: Jeff Weeks; Thomas Monjalon
    > Cc: dev@dpdk.org; Xing, Beilei; Zhang, Qi Z; Richardson, Bruce; Ananyev,
    > Konstantin; Yigit, Ferruh
    > Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
    >
    > Hi Jeff,
    >
    > Thanks for chiming in...
    >
        >     > Yeah, in my case I get the packets, but they end up being put in queue 0
    > instead of 2.
    >
    > --Mike
    >
    > From: Jeff Weeks <jweeks@sandvine.com>
    > Date: Thursday, September 12, 2019 at 10:47 AM
    > To: Mike DeVico <mdevico@xcom-labs.com>, Thomas Monjalon
    > <thomas@monjalon.net>
    > Cc: "dev@dpdk.org" <dev@dpdk.org>, Beilei Xing <beilei.xing@intel.com>, Qi
    > Zhang <qi.z.zhang@intel.com>, Bruce Richardson
    > <bruce.richardson@intel.com>, Konstantin Ananyev
    > <konstantin.ananyev@intel.com>, "ferruh.yigit@intel.com"
    > <ferruh.yigit@intel.com>
    > Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
    >
    > [EXTERNAL SENDER]
    >
    > I don't have much else to add, except that I also see dcb fail on the same NIC:
    >
    >
    >
    >   i40e_dcb_init_configure(): default dcb config fails. err = -53, aq_err = 3.
    >
    >
    >
    > My card doesn't receive any packets, though; not sure if it's related to this, or
    > not.
    >
    >
    >
    > --Jeff
    >
    > ________________________________
    > /dev/jeff_weeks.x2936
    > Sandvine Incorporated
    >
    > ________________________________
    > From: dev <dev-bounces@dpdk.org> on behalf of Mike DeVico
    > <mdevico@xcom-labs.com>
    > Sent: Thursday, September 12, 2019 1:06 PM
    > To: Thomas Monjalon
    > Cc: dev@dpdk.org; Beilei Xing; Qi Zhang; Bruce Richardson; Konstantin
    > Ananyev; ferruh.yigit@intel.com
    > Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
    >
    > [EXTERNAL]
    >
    > Still no hits...
    >
    > --Mike
    >
    > On 9/9/19, 1:39 PM, "Thomas Monjalon" <thomas@monjalon.net> wrote:
    >
    >     [EXTERNAL SENDER]
    >
    >     Adding i40e maintainers and a few more.
    >
    >     07/09/2019 01:11, Mike DeVico:
    >     > Hello,
    >     >
    >     > I am having an issue getting the DCB feature to work with an Intel
    >     > X710 Quad SFP+ NIC.
    >     >
    >     > Here’s my setup:
    >     >
    >     > 1.      DPDK 18.08 built with the following I40E configs:
    >     >
    >     > CONFIG_RTE_LIBRTE_I40E_PMD=y
    >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_RX=n
    >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_TX=n
    >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_TX_FREE=n
    >     > CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
    >     > CONFIG_RTE_LIBRTE_I40E_INC_VECTOR=y
    >     > CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
    >     > CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF=64
    >     > CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=8
    >     >
    >     > 2.      /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net
    >     >
    >     > Network devices using DPDK-compatible driver
    >     > ============================================
    >     > 0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio
    > unused=i40e
    >     > 0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio
    > unused=i40e
    >     > 0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio
    > unused=i40e
    >     > 0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio
    > unused=i40e
    >     >
    >     >        Network devices using kernel driver
    >     >        ===================================
    >     >        0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0
    > drv=igb unused=igb_uio *Active*
    >     >        0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1
    > drv=igb unused=igb_uio *Active*
    >     >
    >     >        Other Network devices
    >     >        =====================
    >     >        <none>
    >     >
    >     > 3.      We have a custom FPGA board connected to port 1 of the X710 NIC
    > that’s broadcasting
    >     > a packet tagged with VLAN 1 and PCP 2.
    >     >
    >     > 4.      I use the vmdq_dcb example app and configure the card with 16
    > pools/8 queues each
    >     > as follows:
    >     >        sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
    >     >
    >     >
    >     > The app starts up fine and successfully probes the card as shown below:
    >     >
    >     > sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
    >     > EAL: Detected 80 lcore(s)
    >     > EAL: Detected 2 NUMA nodes
    >     > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
    >     > EAL: Probing VFIO support...
    >     > EAL: PCI device 0000:02:00.0 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1521 net_e1000_igb
    >     > EAL: PCI device 0000:02:00.1 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1521 net_e1000_igb
    >     > EAL: PCI device 0000:3b:00.0 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.1 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.2 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.3 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > vmdq queue base: 64 pool base 1
    >     > Configured vmdq pool num: 16, each vmdq pool has 8 queues
    >     > Port 0 MAC: e8 ea 6a 27 b5 4d
    >     > Port 0 vmdq pool 0 set mac 52:54:00:12:00:00
    >     > Port 0 vmdq pool 1 set mac 52:54:00:12:00:01
    >     > Port 0 vmdq pool 2 set mac 52:54:00:12:00:02
    >     > Port 0 vmdq pool 3 set mac 52:54:00:12:00:03
    >     > Port 0 vmdq pool 4 set mac 52:54:00:12:00:04
    >     > Port 0 vmdq pool 5 set mac 52:54:00:12:00:05
    >     > Port 0 vmdq pool 6 set mac 52:54:00:12:00:06
    >     > Port 0 vmdq pool 7 set mac 52:54:00:12:00:07
    >     > Port 0 vmdq pool 8 set mac 52:54:00:12:00:08
    >     > Port 0 vmdq pool 9 set mac 52:54:00:12:00:09
    >     > Port 0 vmdq pool 10 set mac 52:54:00:12:00:0a
    >     > Port 0 vmdq pool 11 set mac 52:54:00:12:00:0b
    >     > Port 0 vmdq pool 12 set mac 52:54:00:12:00:0c
    >     > Port 0 vmdq pool 13 set mac 52:54:00:12:00:0d
    >     > Port 0 vmdq pool 14 set mac 52:54:00:12:00:0e
    >     > Port 0 vmdq pool 15 set mac 52:54:00:12:00:0f
    >     > vmdq queue base: 64 pool base 1
    >     > Configured vmdq pool num: 16, each vmdq pool has 8 queues
    >     > Port 1 MAC: e8 ea 6a 27 b5 4e
    >     > Port 1 vmdq pool 0 set mac 52:54:00:12:01:00
    >     > Port 1 vmdq pool 1 set mac 52:54:00:12:01:01
    >     > Port 1 vmdq pool 2 set mac 52:54:00:12:01:02
    >     > Port 1 vmdq pool 3 set mac 52:54:00:12:01:03
    >     > Port 1 vmdq pool 4 set mac 52:54:00:12:01:04
    >     > Port 1 vmdq pool 5 set mac 52:54:00:12:01:05
    >     > Port 1 vmdq pool 6 set mac 52:54:00:12:01:06
    >     > Port 1 vmdq pool 7 set mac 52:54:00:12:01:07
    >     > Port 1 vmdq pool 8 set mac 52:54:00:12:01:08
    >     > Port 1 vmdq pool 9 set mac 52:54:00:12:01:09
    >     > Port 1 vmdq pool 10 set mac 52:54:00:12:01:0a
    >     > Port 1 vmdq pool 11 set mac 52:54:00:12:01:0b
    >     > Port 1 vmdq pool 12 set mac 52:54:00:12:01:0c
    >     > Port 1 vmdq pool 13 set mac 52:54:00:12:01:0d
    >     > Port 1 vmdq pool 14 set mac 52:54:00:12:01:0e
    >     > Port 1 vmdq pool 15 set mac 52:54:00:12:01:0f
    >     >
    >     > Skipping disabled port 2
    >     >
    >     > Skipping disabled port 3
    >     > Core 0(lcore 1) reading queues 64-191
    >     >
    >     > However, when I issue the SIGHUP I see that the packets
    >     > are being put into the first queue of Pool 1 as follows:
    >     >
    >     > Pool 0: 0 0 0 0 0 0 0 0
    >     > Pool 1: 10 0 0 0 0 0 0 0
    >     > Pool 2: 0 0 0 0 0 0 0 0
    >     > Pool 3: 0 0 0 0 0 0 0 0
    >     > Pool 4: 0 0 0 0 0 0 0 0
    >     > Pool 5: 0 0 0 0 0 0 0 0
    >     > Pool 6: 0 0 0 0 0 0 0 0
    >     > Pool 7: 0 0 0 0 0 0 0 0
    >     > Pool 8: 0 0 0 0 0 0 0 0
    >     > Pool 9: 0 0 0 0 0 0 0 0
    >     > Pool 10: 0 0 0 0 0 0 0 0
    >     > Pool 11: 0 0 0 0 0 0 0 0
    >     > Pool 12: 0 0 0 0 0 0 0 0
    >     > Pool 13: 0 0 0 0 0 0 0 0
    >     > Pool 14: 0 0 0 0 0 0 0 0
    >     > Pool 15: 0 0 0 0 0 0 0 0
    >     > Finished handling signal 1
    >     >
    >     > Since the packets are being tagged with PCP 2 they should be getting
    >     > mapped to the 3rd queue of Pool 1, right?
    >     >
    >     > As a sanity check, I tried the same test using an 82599ES 2 port 10GB NIC
    > and
    >     > the packets show up in the expected queue. (Note, to get it to work I had
    >     > to modify the vmdq_dcb app to set the vmdq pool MACs to all FF’s)
    >     >
    >     > Here’s that setup:
    >     >
    >     > /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net
    >     >
    >     > Network devices using DPDK-compatible driver
    >     > ============================================
    >     > 0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
    > drv=igb_uio unused=ixgbe
    >     > 0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
    > drv=igb_uio unused=ixgbe
    >     >
    >     > Network devices using kernel driver
    >     > ===================================
    >     > 0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0 drv=igb
    > unused=igb_uio *Active*
    >     > 0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1 drv=igb
    > unused=igb_uio *Active*
    >     > 0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572'
    > if=enp59s0f0 drv=i40e unused=igb_uio
    >     > 0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572'
    > if=enp59s0f1 drv=i40e unused=igb_uio
    >     > 0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572'
    > if=enp59s0f2 drv=i40e unused=igb_uio
    >     > 0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572'
    > if=enp59s0f3 drv=i40e unused=igb_uio
    >     >
    >     > Other Network devices
    >     > =====================
    >     > <none>
    >     >
    >     > sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
    >     > EAL: Detected 80 lcore(s)
    >     > EAL: Detected 2 NUMA nodes
    >     > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
    >     > EAL: Probing VFIO support...
    >     > EAL: PCI device 0000:02:00.0 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1521 net_e1000_igb
    >     > EAL: PCI device 0000:02:00.1 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1521 net_e1000_igb
    >     > EAL: PCI device 0000:3b:00.0 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.1 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.2 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.3 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:af:00.0 on NUMA socket 1
    >     > EAL:   probe driver: 8086:10fb net_ixgbe
    >     > EAL: PCI device 0000:af:00.1 on NUMA socket 1
    >     > EAL:   probe driver: 8086:10fb net_ixgbe
    >     > vmdq queue base: 0 pool base 0
    >     > Port 0 MAC: 00 1b 21 bf 71 24
    >     > Port 0 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff
    >     > vmdq queue base: 0 pool base 0
    >     > Port 1 MAC: 00 1b 21 bf 71 26
    >     > Port 1 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff
    >     >
    >     > Now when I send the SIGHUP, I see the packets being routed to
    >     > the expected queue:
    >     >
    >     > Pool 0: 0 0 0 0 0 0 0 0
    >     > Pool 1: 0 0 58 0 0 0 0 0
    >     > Pool 2: 0 0 0 0 0 0 0 0
    >     > Pool 3: 0 0 0 0 0 0 0 0
    >     > Pool 4: 0 0 0 0 0 0 0 0
    >     > Pool 5: 0 0 0 0 0 0 0 0
    >     > Pool 6: 0 0 0 0 0 0 0 0
    >     > Pool 7: 0 0 0 0 0 0 0 0
    >     > Pool 8: 0 0 0 0 0 0 0 0
    >     > Pool 9: 0 0 0 0 0 0 0 0
    >     > Pool 10: 0 0 0 0 0 0 0 0
    >     > Pool 11: 0 0 0 0 0 0 0 0
    >     > Pool 12: 0 0 0 0 0 0 0 0
    >     > Pool 13: 0 0 0 0 0 0 0 0
    >     > Pool 14: 0 0 0 0 0 0 0 0
    >     > Pool 15: 0 0 0 0 0 0 0 0
    >     > Finished handling signal 1
    >     >
    >     > What am I missing?
    >     >
    >     > Thank you in advance,
    >     > --Mike
    >     >
    >     >
    >
    >
    >
    >
    >
    
    


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
       [not found]         ` <F35DEAC7BCE34641BA9FAC6BCA4A12E71B4ADE0E@SHSMSX103.ccr.corp.intel.com>
@ 2019-09-26 20:31           ` Mike DeVico
  2019-09-30  2:21             ` Zhang, Helin
  0 siblings, 1 reply; 20+ messages in thread
From: Mike DeVico @ 2019-09-26 20:31 UTC (permalink / raw)
  To: Zhang, Helin, Jeff Weeks, Thomas Monjalon
  Cc: users, Xing, Beilei, Zhang, Qi Z, Richardson, Bruce, Ananyev,
	Konstantin, Yigit, Ferruh, Zhang, Xiao, Wong1, Samuel

Hi Helin,

Yes, and the reason the RX packets were not being queued to the proper 
queue was due to RSS not being enabled/configured. Once I did this, 
the RX packets were placed in the proper queue.

That being said, what I see now is that the TX side seems to have an issue. So the
way it works is that the Ferrybridge broadcasts out what's called a Present packet
at 1s intervals. Once the host application detects the packet (which it now does) it
verifies that the packet is correctly formatted and such and then sends a packet back 
to the Ferrybridge to tell it to stop sending this packet. However, that TX packet 
apparently is not going out because I continue to receive Present packets from the 
Ferrybridge at the 1s interval. What's not clear to me is what queue I should be sending
this packet to. I actually tried sending it out all 128 queues, but I still keep receiving
the Present packet. What I lack is the ability to actually sniff what's going out over the
wire.

Any ideas how to approach this issue?

Thanks in advance,
--Mike
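
[Editor's note] For the TX question above, one hedged sketch (assuming DPDK 18.08; `send_reply` and its parameters are illustrative, not from the sample app): with VMDq+DCB, TX queues are typically laid out pool-by-pool like RX, so a natural choice is the queue matching the pool/TC the Present packet arrived on, and hardware VLAN insertion keeps the PCP bits intact on the wire:

```c
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Sketch: transmit the reply on an explicit TX queue, asking the NIC to
 * insert the full VLAN tag (PCP included) on transmit. */
static uint16_t send_reply(uint16_t port_id, uint16_t queue_id,
                           struct rte_mbuf *m, uint16_t vlan_tci)
{
    m->ol_flags |= PKT_TX_VLAN_PKT;  /* request HW VLAN insertion */
    m->vlan_tci  = vlan_tci;         /* e.g. (2 << 13) | 1: PCP 2, VID 1 */
    return rte_eth_tx_burst(port_id, queue_id, &m, 1);
}
```

For sniffing what actually goes out, the librte_pdump framework (and the dpdk-pdump secondary-process tool shipped with DPDK) can capture port traffic to a pcap file without external hardware.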

On 9/26/19, 9:02 AM, "Zhang, Helin" <helin.zhang@intel.com> wrote:

    [EXTERNAL SENDER]
    
    Hi Mike
    
    Can you check whether you are using the right combination of DPDK version, NIC firmware, and kernel driver (if you are using one)?
    You can find the recommended combinations at http://doc.dpdk.org/guides/nics/i40e.html#recommended-matching-list. Hopefully that helps!
    
    Regards,
    Helin
    
    > -----Original Message-----
    > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Mike DeVico
    > Sent: Friday, September 13, 2019 2:10 AM
    > To: Jeff Weeks; Thomas Monjalon
    > Cc: dev@dpdk.org; Xing, Beilei; Zhang, Qi Z; Richardson, Bruce; Ananyev,
    > Konstantin; Yigit, Ferruh
    > Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
    >
    > Hi Jeff,
    >
    > Thanks for chiming in...
    >
    > Yeah, in my case I get the packets, but they end up being put in queue 0
    > instead of 2.
    >
    > --Mike
    >
    > From: Jeff Weeks <jweeks@sandvine.com>
    > Date: Thursday, September 12, 2019 at 10:47 AM
    > To: Mike DeVico <mdevico@xcom-labs.com>, Thomas Monjalon
    > <thomas@monjalon.net>
    > Cc: "dev@dpdk.org" <dev@dpdk.org>, Beilei Xing <beilei.xing@intel.com>, Qi
    > Zhang <qi.z.zhang@intel.com>, Bruce Richardson
    > <bruce.richardson@intel.com>, Konstantin Ananyev
    > <konstantin.ananyev@intel.com>, "ferruh.yigit@intel.com"
    > <ferruh.yigit@intel.com>
    > Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
    >
    > [EXTERNAL SENDER]
    >
    > I don't have much else to add, except that I also see dcb fail on the same NIC:
    >
    >
    >
    >   i40e_dcb_init_configure(): default dcb config fails. err = -53, aq_err = 3.
    >
    >
    >
    > My card doesn't receive any packets, though; not sure if it's related to this, or
    > not.
    >
    >
    >
    > --Jeff
    >
    > ________________________________
    > /dev/jeff_weeks.x2936
    > Sandvine Incorporated
    >
    > ________________________________
    > From: dev <dev-bounces@dpdk.org> on behalf of Mike DeVico
    > <mdevico@xcom-labs.com>
    > Sent: Thursday, September 12, 2019 1:06 PM
    > To: Thomas Monjalon
    > Cc: dev@dpdk.org; Beilei Xing; Qi Zhang; Bruce Richardson; Konstantin
    > Ananyev; ferruh.yigit@intel.com
    > Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
    >
    > [EXTERNAL]
    >
    > Still no hits...
    >
    > --Mike
    >
    > On 9/9/19, 1:39 PM, "Thomas Monjalon" <thomas@monjalon.net> wrote:
    >
    >     [EXTERNAL SENDER]
    >
    >     Adding i40e maintainers and few more.
    >
    >     07/09/2019 01:11, Mike DeVico:
    >     > Hello,
    >     >
    >     > I am having an issue getting the DCB feature to work with an Intel
    >     > X710 Quad SFP+ NIC.
    >     >
    >     > Here’s my setup:
    >     >
    >     > 1.      DPDK 18.08 built with the following I40E configs:
    >     >
    >     > CONFIG_RTE_LIBRTE_I40E_PMD=y
    >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_RX=n
    >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_TX=n
    >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_TX_FREE=n
    >     > CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
    >     > CONFIG_RTE_LIBRTE_I40E_INC_VECTOR=y
    >     > CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
    >     > CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF=64
    >     > CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=8
    >     >
    >     > 2.      /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net
    >     >
    >     > Network devices using DPDK-compatible driver
    >     > ============================================
    >     > 0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio
    > unused=i40e
    >     > 0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio
    > unused=i40e
    >     > 0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio
    > unused=i40e
    >     > 0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio
    > unused=i40e
    >     >
    >     >        Network devices using kernel driver
    >     >        ===================================
    >     >        0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0
    > drv=igb unused=igb_uio *Active*
    >     >        0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1
    > drv=igb unused=igb_uio *Active*
    >     >
    >     >        Other Network devices
    >     >        =====================
    >     >        <none>
    >     >
    >     > 3.      We have a custom FPGA board connected to port 1 of the X710 NIC
    > that’s broadcasting
    >     > a packet tagged with VLAN 1 and PCP 2.
    >     >
    >     > 4.      I use the vmdq_dcb example app and configure the card with 16
    > pools/8 queue each
    >     > as follows:
    >     >        sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
    >     >
    >     >
    >     > The apps starts up fine and successfully probes the card as shown below:
    >     >
    >     > sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
    >     > EAL: Detected 80 lcore(s)
    >     > EAL: Detected 2 NUMA nodes
    >     > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
    >     > EAL: Probing VFIO support...
    >     > EAL: PCI device 0000:02:00.0 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1521 net_e1000_igb
    >     > EAL: PCI device 0000:02:00.1 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1521 net_e1000_igb
    >     > EAL: PCI device 0000:3b:00.0 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.1 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.2 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.3 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > vmdq queue base: 64 pool base 1
    >     > Configured vmdq pool num: 16, each vmdq pool has 8 queues
    >     > Port 0 MAC: e8 ea 6a 27 b5 4d
    >     > Port 0 vmdq pool 0 set mac 52:54:00:12:00:00
    >     > Port 0 vmdq pool 1 set mac 52:54:00:12:00:01
    >     > Port 0 vmdq pool 2 set mac 52:54:00:12:00:02
    >     > Port 0 vmdq pool 3 set mac 52:54:00:12:00:03
    >     > Port 0 vmdq pool 4 set mac 52:54:00:12:00:04
    >     > Port 0 vmdq pool 5 set mac 52:54:00:12:00:05
    >     > Port 0 vmdq pool 6 set mac 52:54:00:12:00:06
    >     > Port 0 vmdq pool 7 set mac 52:54:00:12:00:07
    >     > Port 0 vmdq pool 8 set mac 52:54:00:12:00:08
    >     > Port 0 vmdq pool 9 set mac 52:54:00:12:00:09
    >     > Port 0 vmdq pool 10 set mac 52:54:00:12:00:0a
    >     > Port 0 vmdq pool 11 set mac 52:54:00:12:00:0b
    >     > Port 0 vmdq pool 12 set mac 52:54:00:12:00:0c
    >     > Port 0 vmdq pool 13 set mac 52:54:00:12:00:0d
    >     > Port 0 vmdq pool 14 set mac 52:54:00:12:00:0e
    >     > Port 0 vmdq pool 15 set mac 52:54:00:12:00:0f
    >     > vmdq queue base: 64 pool base 1
    >     > Configured vmdq pool num: 16, each vmdq pool has 8 queues
    >     > Port 1 MAC: e8 ea 6a 27 b5 4e
    >     > Port 1 vmdq pool 0 set mac 52:54:00:12:01:00
    >     > Port 1 vmdq pool 1 set mac 52:54:00:12:01:01
    >     > Port 1 vmdq pool 2 set mac 52:54:00:12:01:02
    >     > Port 1 vmdq pool 3 set mac 52:54:00:12:01:03
    >     > Port 1 vmdq pool 4 set mac 52:54:00:12:01:04
    >     > Port 1 vmdq pool 5 set mac 52:54:00:12:01:05
    >     > Port 1 vmdq pool 6 set mac 52:54:00:12:01:06
    >     > Port 1 vmdq pool 7 set mac 52:54:00:12:01:07
    >     > Port 1 vmdq pool 8 set mac 52:54:00:12:01:08
    >     > Port 1 vmdq pool 9 set mac 52:54:00:12:01:09
    >     > Port 1 vmdq pool 10 set mac 52:54:00:12:01:0a
    >     > Port 1 vmdq pool 11 set mac 52:54:00:12:01:0b
    >     > Port 1 vmdq pool 12 set mac 52:54:00:12:01:0c
    >     > Port 1 vmdq pool 13 set mac 52:54:00:12:01:0d
    >     > Port 1 vmdq pool 14 set mac 52:54:00:12:01:0e
    >     > Port 1 vmdq pool 15 set mac 52:54:00:12:01:0f
    >     >
    >     > Skipping disabled port 2
    >     >
    >     > Skipping disabled port 3
    >     > Core 0(lcore 1) reading queues 64-191
    >     >
    >     > However, when I issue the SIGHUP I see that the packets
    >     > are being put into the first queue of Pool 1 as follows:
    >     >
    >     > Pool 0: 0 0 0 0 0 0 0 0
    >     > Pool 1: 10 0 0 0 0 0 0 0
    >     > Pool 2: 0 0 0 0 0 0 0 0
    >     > Pool 3: 0 0 0 0 0 0 0 0
    >     > Pool 4: 0 0 0 0 0 0 0 0
    >     > Pool 5: 0 0 0 0 0 0 0 0
    >     > Pool 6: 0 0 0 0 0 0 0 0
    >     > Pool 7: 0 0 0 0 0 0 0 0
    >     > Pool 8: 0 0 0 0 0 0 0 0
    >     > Pool 9: 0 0 0 0 0 0 0 0
    >     > Pool 10: 0 0 0 0 0 0 0 0
    >     > Pool 11: 0 0 0 0 0 0 0 0
    >     > Pool 12: 0 0 0 0 0 0 0 0
    >     > Pool 13: 0 0 0 0 0 0 0 0
    >     > Pool 14: 0 0 0 0 0 0 0 0
    >     > Pool 15: 0 0 0 0 0 0 0 0
    >     > Finished handling signal 1
    >     >
    >     > Since the packets are being tagged with PCP 2 they should be getting
    >     > mapped to the 3rd queue of Pool 1, right?
    >     >
    >     > As a sanity check, I tried the same test using an 82599ES 2 port 10GB NIC
    > and
    >     > the packets show up in the expected queue. (Note, to get it to work I had
    >     > to modify the vmdq_dcb app to set the vmdq pool MACs to all FF’s)
    >     >
    >     > Here’s that setup:
    >     >
    >     > /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net
    >     >
    >     > Network devices using DPDK-compatible driver
    >     > ============================================
    >     > 0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
    > drv=igb_uio unused=ixgbe
    >     > 0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
    > drv=igb_uio unused=ixgbe
    >     >
    >     > Network devices using kernel driver
    >     > ===================================
    >     > 0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0 drv=igb
    > unused=igb_uio *Active*
    >     > 0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1 drv=igb
    > unused=igb_uio *Active*
    >     > 0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572'
    > if=enp59s0f0 drv=i40e unused=igb_uio
    >     > 0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572'
    > if=enp59s0f1 drv=i40e unused=igb_uio
    >     > 0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572'
    > if=enp59s0f2 drv=i40e unused=igb_uio
    >     > 0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572'
    > if=enp59s0f3 drv=i40e unused=igb_uio
    >     >
    >     > Other Network devices
    >     > =====================
    >     > <none>
    >     >
    >     > sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
    >     > EAL: Detected 80 lcore(s)
    >     > EAL: Detected 2 NUMA nodes
    >     > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
    >     > EAL: Probing VFIO support...
    >     > EAL: PCI device 0000:02:00.0 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1521 net_e1000_igb
    >     > EAL: PCI device 0000:02:00.1 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1521 net_e1000_igb
    >     > EAL: PCI device 0000:3b:00.0 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.1 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.2 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.3 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:af:00.0 on NUMA socket 1
    >     > EAL:   probe driver: 8086:10fb net_ixgbe
    >     > EAL: PCI device 0000:af:00.1 on NUMA socket 1
    >     > EAL:   probe driver: 8086:10fb net_ixgbe
    >     > vmdq queue base: 0 pool base 0
    >     > Port 0 MAC: 00 1b 21 bf 71 24
    >     > Port 0 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff
    >     > vmdq queue base: 0 pool base 0
    >     > Port 1 MAC: 00 1b 21 bf 71 26
    >     > Port 1 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff
    >     >
    >     > Now when I send the SIGHUP, I see the packets being routed to
    >     > the expected queue:
    >     >
    >     > Pool 0: 0 0 0 0 0 0 0 0
    >     > Pool 1: 0 0 58 0 0 0 0 0
    >     > Pool 2: 0 0 0 0 0 0 0 0
    >     > Pool 3: 0 0 0 0 0 0 0 0
    >     > Pool 4: 0 0 0 0 0 0 0 0
    >     > Pool 5: 0 0 0 0 0 0 0 0
    >     > Pool 6: 0 0 0 0 0 0 0 0
    >     > Pool 7: 0 0 0 0 0 0 0 0
    >     > Pool 8: 0 0 0 0 0 0 0 0
    >     > Pool 9: 0 0 0 0 0 0 0 0
    >     > Pool 10: 0 0 0 0 0 0 0 0
    >     > Pool 11: 0 0 0 0 0 0 0 0
    >     > Pool 12: 0 0 0 0 0 0 0 0
    >     > Pool 13: 0 0 0 0 0 0 0 0
    >     > Pool 14: 0 0 0 0 0 0 0 0
    >     > Pool 15: 0 0 0 0 0 0 0 0
    >     > Finished handling signal 1
    >     >
    >     > What am I missing?
    >     >
    >     > Thank you in advance,
    >     > --Mike
    >     >
    >     >
    >
    >
    >
    >
    >
    
    


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
@ 2019-09-20 21:57 Mike DeVico
  2019-10-10 21:23 ` Christensen, ChadX M
  0 siblings, 1 reply; 20+ messages in thread
From: Mike DeVico @ 2019-09-20 21:57 UTC (permalink / raw)
  To: Zhang, Xiao
  Cc: Christensen, ChadX M, Thomas Monjalon, users, Xing, Beilei,
	Zhang, Qi Z, Richardson, Bruce, Ananyev, Konstantin, Yigit,
	Ferruh, Tia Cassett, Wu, Jingjing, Wong1,  Samuel

I figured it out!!

All I needed to do was change the rss_hf from:

eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
                                        ETH_RSS_UDP |
                                        ETH_RSS_TCP |
                                        ETH_RSS_SCTP;
to simply:

eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_L2_PAYLOAD;

And now I see:

Pool 0: 0 0 0 0 0 0 0 0 
Pool 1: 0 0 16 0 0 0 0 0 
Pool 2: 0 0 0 0 0 0 0 0 
Pool 3: 0 0 0 0 0 0 0 0 
Pool 4: 0 0 0 0 0 0 0 0 
Pool 5: 0 0 0 0 0 0 0 0 
Pool 6: 0 0 0 0 0 0 0 0 
Pool 7: 0 0 0 0 0 0 0 0 
Pool 8: 0 0 0 0 0 0 0 0 
Pool 9: 0 0 0 0 0 0 0 0 
Pool 10: 0 0 0 0 0 0 0 0 
Pool 11: 0 0 0 0 0 0 0 0 
Pool 12: 0 0 0 0 0 0 0 0 
Pool 13: 0 0 0 0 0 0 0 0 
Pool 14: 0 0 0 0 0 0 0 0 
Pool 15: 0 0 0 0 0 0 0 0 
Finished handling signal 1

Which is exactly how it should be!!!

So in summary, we definitely need to enable rss, but we also need to 
set the rss_hf to simply ETH_RSS_L2_PAYLOAD so that it completely 
ignores any L3 fields.
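
In context, the relevant fragment of the port configuration then looks roughly like this (a sketch based on the vmdq_dcb example app; field and macro names are as in DPDK 18.08, and the rest of the struct is omitted):

```c
struct rte_eth_conf eth_conf = {0};

/* VMDq + DCB with RSS enabled, as the i40e hardware requires. */
eth_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;

/* Hash on the L2 payload only, so raw (non-IP) Ethernet frames are not
 * misclassified by the IP/UDP/TCP/SCTP hash functions. */
eth_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_L2_PAYLOAD;
```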

--Mike


On 9/19/19, 6:34 AM, "users on behalf of Mike DeVico" <users-bounces@dpdk.org on behalf of mdevico@xcom-labs.com> wrote:

    [EXTERNAL SENDER]
    
    Hi Xiao,
    
    Thanks for looking into this!
    
    So here’s the situation...
    
    This is a raw Ethernet packet. No IP. This exact setup works fine with an
    82599ES. It looks like the hardware limitation with the X710 is the real
    problem. If we have to enable RSS to make it work, and RSS requires a valid
    IP addr/port, then it's a catch-22 for us unless there is something we can
    change in the driver to account for this.
    
    Thanks!
    —Mike
    
    > On Sep 18, 2019, at 7:52 PM, Zhang, Xiao <xiao.zhang@intel.com> wrote:
    >
    > [EXTERNAL SENDER]
    >
    >> -----Original Message-----
    >> From: Mike DeVico [mailto:mdevico@xcom-labs.com]
    >> Sent: Thursday, September 19, 2019 9:23 AM
    >> To: Christensen, ChadX M <chadx.m.christensen@intel.com>; Zhang, Xiao
    >> <xiao.zhang@intel.com>; Thomas Monjalon <thomas@monjalon.net>
    >> Cc: users@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Zhang, Qi Z
    >> <qi.z.zhang@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>;
    >> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yigit, Ferruh
    >> <ferruh.yigit@intel.com>; Tia Cassett <tiac@xcom-labs.com>; Wu, Jingjing
    >> <jingjing.wu@intel.com>; Wong1, Samuel <samuel.wong1@intel.com>
    >> Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
    >>
    >> As suggested I tried the following:
    >>
    >> I have an Intel FlexRAN FerryBridge broadcasting a packet 1/s which looks like
    >> the following (sudo tcpdump -i p7p1 -xx):
    >>
    >>        0x0000:  ffff ffff ffff 0000 aeae 0000 8100 4001
    >>        0x0010:  0800 0009 0000 0000 0001 8086 3600 010f
    >>        0x0020:  0000 0000 0000 0000 0000 0000 0000 0000
    >>        0x0030:  0000 0000 0000 0000 0000 0000
    >
    > There is an error in the packets, as checked with Wireshark; could you try with normal packets?
    >
    > No issue with following packet as I tried:
    > 0000   ff ff ff ff ff ff 00 40 05 40 ef 24 81 00 40 01
    > 0010   08 00 45 00 00 34 3b 64 40 00 40 06 b7 9b 83 97
    > 0020   20 81 83 97 20 15 04 95 17 70 51 d4 ee 9c 51 a5
    > 0030   5b 36 80 10 7c 70 12 c7 00 00 01 01 08 0a 00 04
    > 0040   f0 d4 01 99 a3 fd
    >
    >>
    >> The first 12 bytes are the dest/src MAC address followed by the 802.1Q Header
    >> (8100 4001) If you crack this, the MS 16 bits are the TPID which is set to 8100 by
    >> the Ferrybridge.
    >> The next 16 bits (0x4001) make up the PCP bits [15:13], the DEI [12] and the VID
    >> [11:0]. So if you crack the 0x4001 this makes the PCP 2 (010b), the DEI 0 and VID
    >> 1 (000000000001b).
    >>
    >> Given this I expect the packets to be placed in Pool 1/Queue 2 (based on VID 1
    >> and PCP 2).
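
    The tag cracking described above can be written directly in C (a standalone sketch, not part of the DPDK app):

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Crack an 802.1Q Tag Control Information (TCI) field into its parts. */
    static unsigned pcp(uint16_t tci) { return (tci >> 13) & 0x7; }   /* priority  */
    static unsigned dei(uint16_t tci) { return (tci >> 12) & 0x1; }   /* drop elig. */
    static unsigned vid(uint16_t tci) { return tci & 0x0FFF; }        /* VLAN ID   */

    int main(void)
    {
        uint16_t tci = 0x4001; /* the value after the 0x8100 TPID in the capture */
        printf("PCP=%u DEI=%u VID=%u\n", pcp(tci), dei(tci), vid(tci));
        /* prints: PCP=2 DEI=0 VID=1 */
        return 0;
    }
    ```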
    >> However, when I run:
    >>
    >> ./vmdq_dcb_app -w 0000:05:00.0 -w 0000:05:00.1 -l 1 -- -p 3 --nb-pools 16 --nb-
    >> tcs 8 --enable-rss
    >> EAL: Detected 24 lcore(s)
    >> EAL: Detected 2 NUMA nodes
    >> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
    >> EAL: Probing VFIO support...
    >> EAL: PCI device 0000:05:00.0 on NUMA socket 0
    >> EAL:   probe driver: 8086:1572 net_i40e
    >> EAL: PCI device 0000:05:00.1 on NUMA socket 0
    >> EAL:   probe driver: 8086:1572 net_i40e
    >> vmdq queue base: 64 pool base 1
    >> Configured vmdq pool num: 16, each vmdq pool has 8 queues
    >> Port 0 modified RSS hash function based on hardware support,requested:0x3bffc configured:0x3ef8
    >> Port 0 MAC: e8 ea 6a 27 b5 4d
    >> Port 0 vmdq pool 0 set mac 52:54:00:12:00:00
    >> Port 0 vmdq pool 1 set mac 52:54:00:12:00:01
    >> Port 0 vmdq pool 2 set mac 52:54:00:12:00:02
    >> Port 0 vmdq pool 3 set mac 52:54:00:12:00:03
    >> Port 0 vmdq pool 4 set mac 52:54:00:12:00:04
    >> Port 0 vmdq pool 5 set mac 52:54:00:12:00:05
    >> Port 0 vmdq pool 6 set mac 52:54:00:12:00:06
    >> Port 0 vmdq pool 7 set mac 52:54:00:12:00:07
    >> Port 0 vmdq pool 8 set mac 52:54:00:12:00:08
    >> Port 0 vmdq pool 9 set mac 52:54:00:12:00:09
    >> Port 0 vmdq pool 10 set mac 52:54:00:12:00:0a
    >> Port 0 vmdq pool 11 set mac 52:54:00:12:00:0b
    >> Port 0 vmdq pool 12 set mac 52:54:00:12:00:0c
    >> Port 0 vmdq pool 13 set mac 52:54:00:12:00:0d
    >> Port 0 vmdq pool 14 set mac 52:54:00:12:00:0e
    >> Port 0 vmdq pool 15 set mac 52:54:00:12:00:0f
    >> vmdq queue base: 64 pool base 1
    >> Configured vmdq pool num: 16, each vmdq pool has 8 queues
    >> Port 1 modified RSS hash function based on hardware support,requested:0x3bffc configured:0x3ef8
    >> Port 1 MAC: e8 ea 6a 27 b5 4e
    >> Port 1 vmdq pool 0 set mac 52:54:00:12:01:00
    >> Port 1 vmdq pool 1 set mac 52:54:00:12:01:01
    >> Port 1 vmdq pool 2 set mac 52:54:00:12:01:02
    >> Port 1 vmdq pool 3 set mac 52:54:00:12:01:03
    >> Port 1 vmdq pool 4 set mac 52:54:00:12:01:04
    >> Port 1 vmdq pool 5 set mac 52:54:00:12:01:05
    >> Port 1 vmdq pool 6 set mac 52:54:00:12:01:06
    >> Port 1 vmdq pool 7 set mac 52:54:00:12:01:07
    >> Port 1 vmdq pool 8 set mac 52:54:00:12:01:08
    >> Port 1 vmdq pool 9 set mac 52:54:00:12:01:09
    >> Port 1 vmdq pool 10 set mac 52:54:00:12:01:0a
    >> Port 1 vmdq pool 11 set mac 52:54:00:12:01:0b
    >> Port 1 vmdq pool 12 set mac 52:54:00:12:01:0c
    >> Port 1 vmdq pool 13 set mac 52:54:00:12:01:0d
    >> Port 1 vmdq pool 14 set mac 52:54:00:12:01:0e
    >> Port 1 vmdq pool 15 set mac 52:54:00:12:01:0f
    >> Core 0(lcore 1) reading queues 64-191
    >>
    >> <SIGHUP>
    >>
    >> Pool 0: 0 0 0 0 0 0 0 0
    >> Pool 1: 119 0 0 0 0 0 0 0
    >> Pool 2: 0 0 0 0 0 0 0 0
    >> Pool 3: 0 0 0 0 0 0 0 0
    >> Pool 4: 0 0 0 0 0 0 0 0
    >> Pool 5: 0 0 0 0 0 0 0 0
    >> Pool 6: 0 0 0 0 0 0 0 0
    >> Pool 7: 0 0 0 0 0 0 0 0
    >> Pool 8: 0 0 0 0 0 0 0 0
    >> Pool 9: 0 0 0 0 0 0 0 0
    >> Pool 10: 0 0 0 0 0 0 0 0
    >> Pool 11: 0 0 0 0 0 0 0 0
    >> Pool 12: 0 0 0 0 0 0 0 0
    >> Pool 13: 0 0 0 0 0 0 0 0
    >> Pool 14: 0 0 0 0 0 0 0 0
    >> Pool 15: 0 0 0 0 0 0 0 0
    >>
    >> Even with --enable-rss, the packets are still being placed in VLAN Pool 1/Queue 0
    >> instead of VLAN Pool 1/Queue 2.
    >>
    >> As I mentioned in my original email, if I use an 82599ES (dual 10G NIC), it all
    >> works as expected.
    >>
    >> What am I missing?
    >> --Mike
    >>
    >> On 9/18/19, 7:54 AM, "Christensen, ChadX M" <chadx.m.christensen@intel.com>
    >> wrote:
    >>
    >>    [EXTERNAL SENDER]
    >>
    >>    Hi Mike,
    >>
    >>    Did that resolve it?
    >>
    >>    Thanks,
    >>
    >>    Chad Christensen | Ecosystem Enablement Manager
    >>    chadx.m.christensen@intel.com | (801) 786-5703
    >>
    >>    -----Original Message-----
    >>    From: Mike DeVico <mdevico@xcom-labs.com>
    >>    Sent: Wednesday, September 18, 2019 8:17 AM
    >>    To: Zhang, Xiao <xiao.zhang@intel.com>; Thomas Monjalon
    >> <thomas@monjalon.net>
    >>    Cc: users@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Zhang, Qi Z
    >> <qi.z.zhang@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>;
    >> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yigit, Ferruh
    >> <ferruh.yigit@intel.com>; Christensen, ChadX M
    >> <chadx.m.christensen@intel.com>; Tia Cassett <tiac@xcom-labs.com>; Wu,
    >> Jingjing <jingjing.wu@intel.com>; Wong1, Samuel <samuel.wong1@intel.com>
    >>    Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
    >>
    >>    Sure enough, I see it now. I'll give it a try.
    >>
    >>    Thanks!!!
    >>    --Mike
    >>
    >>    On 9/18/19, 12:11 AM, "Zhang, Xiao" <xiao.zhang@intel.com> wrote:
    >>
    >>        [EXTERNAL SENDER]
    >>
    >>> -----Original Message-----
    >>> From: Thomas Monjalon [mailto:thomas@monjalon.net]
    >>> Sent: Wednesday, September 18, 2019 3:03 PM
    >>> To: Zhang, Xiao <xiao.zhang@intel.com>
    >>> Cc: Mike DeVico <mdevico@xcom-labs.com>; users@dpdk.org; Xing,
    >> Beilei
    >>> <beilei.xing@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Richardson,
    >> Bruce
    >>> <bruce.richardson@intel.com>; Ananyev, Konstantin
    >>> <konstantin.ananyev@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>;
    >>> Christensen, ChadX M <chadx.m.christensen@intel.com>; Tia Cassett
    >>> <tiac@xcom-labs.com>; Wu, Jingjing <jingjing.wu@intel.com>
    >>> Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
    >>>
    >>> 18/09/2019 09:02, Zhang, Xiao:
    >>>>
    >>>> There is some hardware limitation and need to enable RSS to distribute
    >>> packets for X710.
    >>>
    >>> Is this limitation documented?
    >>
    >>        Yes, it's documented in doc/guides/nics/i40e.rst
    >>
    >>        "DCB works only when RSS is enabled."
    >>
    >>>
    >>
    >>
    >>
    >>
    >
    


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
  2019-09-19 13:34                       ` Mike DeVico
@ 2019-09-19 14:34                         ` Johnson, Brian
  0 siblings, 0 replies; 20+ messages in thread
From: Johnson, Brian @ 2019-09-19 14:34 UTC (permalink / raw)
  To: Mike DeVico, Zhang, Xiao
  Cc: Christensen, ChadX M, Thomas Monjalon, users, Xing, Beilei,
	Zhang, Qi Z, Richardson, Bruce, Ananyev, Konstantin, Yigit,
	Ferruh, Tia Cassett, Wu, Jingjing, Wong1, Samuel, Chilikin,
	Andrey

Feedback from our architecture team that might address this use case.

The deployment is using special fronthaul raw packets over Ethernet. The problem with such packets is that they use the standard 0x0800 Ethertype, so the default parser fails to parse and validate them as IPv4. To fix this, they have to use our FlexRAN DDP package for the Intel(R) Ethernet 700 Series.
Here is the link to the FlexRAN package
 https://downloadcenter.intel.com/download/28938/Intel-Ethernet-Controller-X710-XXV710-XL710-Adapters-Dynamic-Device-Personalization-Radio-Fronthaul-4G 
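
Applying such a DDP profile from a DPDK application can be sketched as follows. This uses the i40e-specific `rte_pmd_i40e_process_ddp_package()` API; the profile path and port id are placeholders, and error handling is abbreviated:

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <rte_pmd_i40e.h>

/* Sketch: load a DDP profile (e.g. the FlexRAN package) onto an i40e port. */
static int load_ddp_profile(uint16_t port_id, const char *path)
{
    FILE *f = fopen(path, "rb");
    if (f == NULL)
        return -1;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    rewind(f);

    uint8_t *buff = malloc(size);
    if (buff == NULL || fread(buff, 1, size, f) != (size_t)size) {
        fclose(f);
        free(buff);
        return -1;
    }
    fclose(f);

    /* RTE_PMD_I40E_PKG_OP_WR_ADD downloads the profile to the device. */
    int ret = rte_pmd_i40e_process_ddp_package(port_id, buff,
                                               (uint32_t)size,
                                               RTE_PMD_I40E_PKG_OP_WR_ADD);
    free(buff);
    return ret;
}
```

Alternatively, the profile can be loaded without writing any code via testpmd's `ddp add <port> <profile_path>` command.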

Brian Johnson
Solutions Architect / Ethernet Networking Division

-----Original Message-----
From: users <users-bounces@dpdk.org> On Behalf Of Mike DeVico
Sent: Thursday, September 19, 2019 6:34 AM
To: Zhang, Xiao <xiao.zhang@intel.com>
Cc: Christensen, ChadX M <chadx.m.christensen@intel.com>; Thomas Monjalon <thomas@monjalon.net>; users@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>; Tia Cassett <tiac@xcom-labs.com>; Wu, Jingjing <jingjing.wu@intel.com>; Wong1, Samuel <samuel.wong1@intel.com>
Subject: Re: [dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC

Hi Xiao,

Thanks for looking into this!

So here’s the situation...

This is a raw Ethernet packet. No IP. This exact setup works fine with an 82599ES.
It looks like the hardware limitation with the X710 is the real problem. If we have to enable RSS to make it work, and RSS requires a valid IP addr/port, then it's a catch-22 for us unless there is something we can change in the driver to account for this.

Thanks!
—Mike

> On Sep 18, 2019, at 7:52 PM, Zhang, Xiao <xiao.zhang@intel.com> wrote:
> 
> [EXTERNAL SENDER]
> 
>> -----Original Message-----
>> From: Mike DeVico [mailto:mdevico@xcom-labs.com]
>> Sent: Thursday, September 19, 2019 9:23 AM
>> To: Christensen, ChadX M <chadx.m.christensen@intel.com>; Zhang, Xiao 
>> <xiao.zhang@intel.com>; Thomas Monjalon <thomas@monjalon.net>
>> Cc: users@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Zhang, Qi Z 
>> <qi.z.zhang@intel.com>; Richardson, Bruce 
>> <bruce.richardson@intel.com>; Ananyev, Konstantin 
>> <konstantin.ananyev@intel.com>; Yigit, Ferruh 
>> <ferruh.yigit@intel.com>; Tia Cassett <tiac@xcom-labs.com>; Wu, 
>> Jingjing <jingjing.wu@intel.com>; Wong1, Samuel 
>> <samuel.wong1@intel.com>
>> Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
>> 
>> As suggested I tried the following:
>> 
>> I have an Intel FlexRAN FerryBridge broadcasting a packet 1/s which 
>> looks like the following (sudo tcpdump -i p7p1 -xx):
>> 
>>        0x0000:  ffff ffff ffff 0000 aeae 0000 8100 4001
>>        0x0010:  0800 0009 0000 0000 0001 8086 3600 010f
>>        0x0020:  0000 0000 0000 0000 0000 0000 0000 0000
>>        0x0030:  0000 0000 0000 0000 0000 0000
> 
> There is an error in the packets, as checked with Wireshark; could you try with normal packets?
> 
> No issue with following packet as I tried:
> 0000   ff ff ff ff ff ff 00 40 05 40 ef 24 81 00 40 01
> 0010   08 00 45 00 00 34 3b 64 40 00 40 06 b7 9b 83 97
> 0020   20 81 83 97 20 15 04 95 17 70 51 d4 ee 9c 51 a5
> 0030   5b 36 80 10 7c 70 12 c7 00 00 01 01 08 0a 00 04
> 0040   f0 d4 01 99 a3 fd
> 
>> 
>> The first 12 bytes are the dest/src MAC address followed by the 
>> 802.1Q Header
>> (8100 4001) If you crack this, the MS 16 bits are the TPID which is 
>> set to 8100 by the Ferrybridge.
>> The next 16 bits (0x4001) make up the PCP bits [15:13], the DEI [12] 
>> and the VID [11:0]. So if you crack the 0x4001 this makes the PCP 2 
>> (010b), the DEI 0 and VID
>> 1 (000000000001b).
>> 
>> Given this I expect the packets to be placed in Pool 1/Queue 2
>> (based on VID 1 and PCP 2).
>> However, when I run:
>> 
>> ./vmdq_dcb_app -w 0000:05:00.0 -w 0000:05:00.1 -l 1 -- -p 3 
>> --nb-pools 16 --nb- tcs 8 --enable-rss
>> EAL: Detected 24 lcore(s)
>> EAL: Detected 2 NUMA nodes
>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>> EAL: Probing VFIO support...
>> EAL: PCI device 0000:05:00.0 on NUMA socket 0
>> EAL:   probe driver: 8086:1572 net_i40e
>> EAL: PCI device 0000:05:00.1 on NUMA socket 0
>> EAL:   probe driver: 8086:1572 net_i40e
>> vmdq queue base: 64 pool base 1
>> Configured vmdq pool num: 16, each vmdq pool has 8 queues
>> Port 0 modified RSS hash function based on hardware support,requested:0x3bffc configured:0x3ef8
>> Port 0 MAC: e8 ea 6a 27 b5 4d
>> Port 0 vmdq pool 0 set mac 52:54:00:12:00:00
>> Port 0 vmdq pool 1 set mac 52:54:00:12:00:01
>> Port 0 vmdq pool 2 set mac 52:54:00:12:00:02
>> Port 0 vmdq pool 3 set mac 52:54:00:12:00:03
>> Port 0 vmdq pool 4 set mac 52:54:00:12:00:04
>> Port 0 vmdq pool 5 set mac 52:54:00:12:00:05
>> Port 0 vmdq pool 6 set mac 52:54:00:12:00:06
>> Port 0 vmdq pool 7 set mac 52:54:00:12:00:07
>> Port 0 vmdq pool 8 set mac 52:54:00:12:00:08
>> Port 0 vmdq pool 9 set mac 52:54:00:12:00:09
>> Port 0 vmdq pool 10 set mac 52:54:00:12:00:0a
>> Port 0 vmdq pool 11 set mac 52:54:00:12:00:0b
>> Port 0 vmdq pool 12 set mac 52:54:00:12:00:0c
>> Port 0 vmdq pool 13 set mac 52:54:00:12:00:0d
>> Port 0 vmdq pool 14 set mac 52:54:00:12:00:0e
>> Port 0 vmdq pool 15 set mac 52:54:00:12:00:0f
>> vmdq queue base: 64 pool base 1
>> Configured vmdq pool num: 16, each vmdq pool has 8 queues
>> Port 1 modified RSS hash function based on hardware support,requested:0x3bffc configured:0x3ef8
>> Port 1 MAC: e8 ea 6a 27 b5 4e
>> Port 1 vmdq pool 0 set mac 52:54:00:12:01:00
>> Port 1 vmdq pool 1 set mac 52:54:00:12:01:01
>> Port 1 vmdq pool 2 set mac 52:54:00:12:01:02
>> Port 1 vmdq pool 3 set mac 52:54:00:12:01:03
>> Port 1 vmdq pool 4 set mac 52:54:00:12:01:04
>> Port 1 vmdq pool 5 set mac 52:54:00:12:01:05
>> Port 1 vmdq pool 6 set mac 52:54:00:12:01:06
>> Port 1 vmdq pool 7 set mac 52:54:00:12:01:07
>> Port 1 vmdq pool 8 set mac 52:54:00:12:01:08
>> Port 1 vmdq pool 9 set mac 52:54:00:12:01:09
>> Port 1 vmdq pool 10 set mac 52:54:00:12:01:0a
>> Port 1 vmdq pool 11 set mac 52:54:00:12:01:0b
>> Port 1 vmdq pool 12 set mac 52:54:00:12:01:0c
>> Port 1 vmdq pool 13 set mac 52:54:00:12:01:0d
>> Port 1 vmdq pool 14 set mac 52:54:00:12:01:0e
>> Port 1 vmdq pool 15 set mac 52:54:00:12:01:0f
>> Core 0(lcore 1) reading queues 64-191
>> 
>> <SIGHUP>
>> 
>> Pool 0: 0 0 0 0 0 0 0 0
>> Pool 1: 119 0 0 0 0 0 0 0
>> Pool 2: 0 0 0 0 0 0 0 0
>> Pool 3: 0 0 0 0 0 0 0 0
>> Pool 4: 0 0 0 0 0 0 0 0
>> Pool 5: 0 0 0 0 0 0 0 0
>> Pool 6: 0 0 0 0 0 0 0 0
>> Pool 7: 0 0 0 0 0 0 0 0
>> Pool 8: 0 0 0 0 0 0 0 0
>> Pool 9: 0 0 0 0 0 0 0 0
>> Pool 10: 0 0 0 0 0 0 0 0
>> Pool 11: 0 0 0 0 0 0 0 0
>> Pool 12: 0 0 0 0 0 0 0 0
>> Pool 13: 0 0 0 0 0 0 0 0
>> Pool 14: 0 0 0 0 0 0 0 0
>> Pool 15: 0 0 0 0 0 0 0 0
>> 
>> Even with --enable-rss, the packets are still being placed in VLAN 
>> Pool 1/Queue 0 instead of VLAN Pool 1/Queue 2.
>> 
>> As I mentioned in my original email, if I use an 82599ES (dual 10G 
>> NIC), it all works as expected.
>> 
>> What am I missing?
>> --Mike
>> 
>> On 9/18/19, 7:54 AM, "Christensen, ChadX M" 
>> <chadx.m.christensen@intel.com>
>> wrote:
>> 
>>    [EXTERNAL SENDER]
>> 
>>    Hi Mike,
>> 
>>    Did that resolve it?
>> 
>>    Thanks,
>> 
>>    Chad Christensen | Ecosystem Enablement Manager
>>    chadx.m.christensen@intel.com | (801) 786-5703
>> 
>>    -----Original Message-----
>>    From: Mike DeVico <mdevico@xcom-labs.com>
>>    Sent: Wednesday, September 18, 2019 8:17 AM
>>    To: Zhang, Xiao <xiao.zhang@intel.com>; Thomas Monjalon 
>> <thomas@monjalon.net>
>>    Cc: users@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Zhang, 
>> Qi Z <qi.z.zhang@intel.com>; Richardson, Bruce 
>> <bruce.richardson@intel.com>; Ananyev, Konstantin 
>> <konstantin.ananyev@intel.com>; Yigit, Ferruh 
>> <ferruh.yigit@intel.com>; Christensen, ChadX M 
>> <chadx.m.christensen@intel.com>; Tia Cassett <tiac@xcom-labs.com>; Wu, Jingjing <jingjing.wu@intel.com>; Wong1, Samuel <samuel.wong1@intel.com>
>>    Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
>> 
>>    Sure enough, I see it now. I'll give it a try.
>> 
>>    Thanks!!!
>>    --Mike
>> 
>>    On 9/18/19, 12:11 AM, "Zhang, Xiao" <xiao.zhang@intel.com> wrote:
>> 
>>        [EXTERNAL SENDER]
>> 
>>> -----Original Message-----
>>> From: Thomas Monjalon [mailto:thomas@monjalon.net]
>>> Sent: Wednesday, September 18, 2019 3:03 PM
>>> To: Zhang, Xiao <xiao.zhang@intel.com>
>>> Cc: Mike DeVico <mdevico@xcom-labs.com>; users@dpdk.org; Xing,
>> Beilei
>>> <beilei.xing@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; 
>>> Richardson,
>> Bruce
>>> <bruce.richardson@intel.com>; Ananyev, Konstantin 
>>> <konstantin.ananyev@intel.com>; Yigit, Ferruh 
>>> <ferruh.yigit@intel.com>; Christensen, ChadX M 
>>> <chadx.m.christensen@intel.com>; Tia Cassett <tiac@xcom-labs.com>; 
>>> Wu, Jingjing <jingjing.wu@intel.com>
>>> Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
>>> 
>>> 18/09/2019 09:02, Zhang, Xiao:
>>>> 
>>>> There is some hardware limitation and need to enable RSS to 
>>>> distribute
>>> packets for X710.
>>> 
>>> Is this limitation documented?
>> 
>>        Yes, it's documented in doc/guides/nics/i40e.rst
>> 
>>        "DCB works only when RSS is enabled."
>> 
>>> 
>> 
>> 
>> 
>> 
> 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
  2019-09-19  2:52                     ` Zhang, Xiao
@ 2019-09-19 13:34                       ` Mike DeVico
  2019-09-19 14:34                         ` Johnson, Brian
  0 siblings, 1 reply; 20+ messages in thread
From: Mike DeVico @ 2019-09-19 13:34 UTC (permalink / raw)
  To: Zhang, Xiao
  Cc: Christensen, ChadX M, Thomas Monjalon, users, Xing, Beilei,
	Zhang, Qi Z, Richardson, Bruce, Ananyev, Konstantin, Yigit,
	Ferruh, Tia Cassett, Wu, Jingjing, Wong1,  Samuel

Hi Xiao,

Thanks for looking into this!

So here’s the situation...

This is a raw Ethernet packet with no IP header. This
exact setup works fine with an 82599ES, so the
hardware limitation with the X710 looks like
the real problem. If we have to
enable RSS to make it work, and RSS requires a valid IP address/port, then it's a catch-22 for us unless there is something we can change in the driver to account for this.

Thanks!
—Mike

> On Sep 18, 2019, at 7:52 PM, Zhang, Xiao <xiao.zhang@intel.com> wrote:
> 
> [EXTERNAL SENDER]
> 
>> -----Original Message-----
>> From: Mike DeVico [mailto:mdevico@xcom-labs.com]
>> Sent: Thursday, September 19, 2019 9:23 AM
>> To: Christensen, ChadX M <chadx.m.christensen@intel.com>; Zhang, Xiao
>> <xiao.zhang@intel.com>; Thomas Monjalon <thomas@monjalon.net>
>> Cc: users@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Zhang, Qi Z
>> <qi.z.zhang@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>;
>> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yigit, Ferruh
>> <ferruh.yigit@intel.com>; Tia Cassett <tiac@xcom-labs.com>; Wu, Jingjing
>> <jingjing.wu@intel.com>; Wong1, Samuel <samuel.wong1@intel.com>
>> Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
>> 
>> As suggested I tried the following:
>> 
>> I have an Intel FlexRAN FerryBridge broadcasting a packet 1/s which looks like
>> the following (sudo tcpdump -i p7p1 -xx):
>> 
>>        0x0000:  ffff ffff ffff 0000 aeae 0000 8100 4001
>>        0x0010:  0800 0009 0000 0000 0001 8086 3600 010f
>>        0x0020:  0000 0000 0000 0000 0000 0000 0000 0000
>>        0x0030:  0000 0000 0000 0000 0000 0000
> 
> There is an error in the packets as I checked with Wireshark; could you try with normal packets?
> 
> No issue with following packet as I tried:
> 0000   ff ff ff ff ff ff 00 40 05 40 ef 24 81 00 40 01
> 0010   08 00 45 00 00 34 3b 64 40 00 40 06 b7 9b 83 97
> 0020   20 81 83 97 20 15 04 95 17 70 51 d4 ee 9c 51 a5
> 0030   5b 36 80 10 7c 70 12 c7 00 00 01 01 08 0a 00 04
> 0040   f0 d4 01 99 a3 fd
> 
>> 
>> The first 12 bytes are the dest/src MAC address followed by the 802.1Q Header
>> (8100 4001) If you crack this, the MS 16 bits are the TPID which is set to 8100 by
>> the Ferrybridge.
>> The next 16 bits (0x4001) make up the PCP bits [15:13], the DEI [12] and the VID
>> [11:0]. So if you crack the 0x4001 this makes the PCP 2 (010b), the DEI 0 and VID
>> 1 (000000000001b).
>> 
>> Given this I expect the packets to be placed in Pool 1/Queue 2 (based on VID 1
>> and PCP 2).
>> However, when I run:
>> 
>> ./vmdq_dcb_app -w 0000:05:00.0 -w 0000:05:00.1 -l 1 -- -p 3 --nb-pools 16 --nb-
>> tcs 8 --enable-rss
>> EAL: Detected 24 lcore(s)
>> EAL: Detected 2 NUMA nodes
>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>> EAL: Probing VFIO support...
>> EAL: PCI device 0000:05:00.0 on NUMA socket 0
>> EAL:   probe driver: 8086:1572 net_i40e
>> EAL: PCI device 0000:05:00.1 on NUMA socket 0
>> EAL:   probe driver: 8086:1572 net_i40e
>> vmdq queue base: 64 pool base 1
>> Configured vmdq pool num: 16, each vmdq pool has 8 queues Port 0 modified
>> RSS hash function based on hardware support,requested:0x3bffc
>> configured:0x3ef8 Port 0 MAC: e8 ea 6a 27 b5 4d Port 0 vmdq pool 0 set mac
>> 52:54:00:12:00:00 Port 0 vmdq pool 1 set mac 52:54:00:12:00:01 Port 0 vmdq
>> pool 2 set mac 52:54:00:12:00:02 Port 0 vmdq pool 3 set mac 52:54:00:12:00:03
>> Port 0 vmdq pool 4 set mac 52:54:00:12:00:04 Port 0 vmdq pool 5 set mac
>> 52:54:00:12:00:05 Port 0 vmdq pool 6 set mac 52:54:00:12:00:06 Port 0 vmdq
>> pool 7 set mac 52:54:00:12:00:07 Port 0 vmdq pool 8 set mac 52:54:00:12:00:08
>> Port 0 vmdq pool 9 set mac 52:54:00:12:00:09 Port 0 vmdq pool 10 set mac
>> 52:54:00:12:00:0a Port 0 vmdq pool 11 set mac 52:54:00:12:00:0b Port 0 vmdq
>> pool 12 set mac 52:54:00:12:00:0c Port 0 vmdq pool 13 set mac
>> 52:54:00:12:00:0d Port 0 vmdq pool 14 set mac 52:54:00:12:00:0e Port 0 vmdq
>> pool 15 set mac 52:54:00:12:00:0f vmdq queue base: 64 pool base 1 Configured
>> vmdq pool num: 16, each vmdq pool has 8 queues Port 1 modified RSS hash
>> function based on hardware support,requested:0x3bffc configured:0x3ef8 Port
>> 1 MAC: e8 ea 6a 27 b5 4e Port 1 vmdq pool 0 set mac 52:54:00:12:01:00 Port 1
>> vmdq pool 1 set mac 52:54:00:12:01:01 Port 1 vmdq pool 2 set mac
>> 52:54:00:12:01:02 Port 1 vmdq pool 3 set mac 52:54:00:12:01:03 Port 1 vmdq
>> pool 4 set mac 52:54:00:12:01:04 Port 1 vmdq pool 5 set mac 52:54:00:12:01:05
>> Port 1 vmdq pool 6 set mac 52:54:00:12:01:06 Port 1 vmdq pool 7 set mac
>> 52:54:00:12:01:07 Port 1 vmdq pool 8 set mac 52:54:00:12:01:08 Port 1 vmdq
>> pool 9 set mac 52:54:00:12:01:09 Port 1 vmdq pool 10 set mac
>> 52:54:00:12:01:0a Port 1 vmdq pool 11 set mac 52:54:00:12:01:0b Port 1 vmdq
>> pool 12 set mac 52:54:00:12:01:0c Port 1 vmdq pool 13 set mac
>> 52:54:00:12:01:0d Port 1 vmdq pool 14 set mac 52:54:00:12:01:0e Port 1 vmdq
>> pool 15 set mac 52:54:00:12:01:0f Core 0(lcore 1) reading queues 64-191
>> 
>> <SIGHUP>
>> 
>> Pool 0: 0 0 0 0 0 0 0 0
>> Pool 1: 119 0 0 0 0 0 0 0
>> Pool 2: 0 0 0 0 0 0 0 0
>> Pool 3: 0 0 0 0 0 0 0 0
>> Pool 4: 0 0 0 0 0 0 0 0
>> Pool 5: 0 0 0 0 0 0 0 0
>> Pool 6: 0 0 0 0 0 0 0 0
>> Pool 7: 0 0 0 0 0 0 0 0
>> Pool 8: 0 0 0 0 0 0 0 0
>> Pool 9: 0 0 0 0 0 0 0 0
>> Pool 10: 0 0 0 0 0 0 0 0
>> Pool 11: 0 0 0 0 0 0 0 0
>> Pool 12: 0 0 0 0 0 0 0 0
>> Pool 13: 0 0 0 0 0 0 0 0
>> Pool 14: 0 0 0 0 0 0 0 0
>> Pool 15: 0 0 0 0 0 0 0 0
>> 
>> Even with --enable-rss, the packets are still being placed in VLAN Pool 1/Queue 0
>> instead of VLAN Pool 1/Queue 2.
>> 
>> As I mentioned in my original email, if I use an 82599ES (dual 10G NIC), it all
>> works as expected.
>> 
>> What am I missing?
>> --Mike
>> 
>> On 9/18/19, 7:54 AM, "Christensen, ChadX M" <chadx.m.christensen@intel.com>
>> wrote:
>> 
>>    [EXTERNAL SENDER]
>> 
>>    Hi Mike,
>> 
>>    Did that resolve it?
>> 
>>    Thanks,
>> 
>>    Chad Christensen | Ecosystem Enablement Manager
>>    chadx.m.christensen@intel.com | (801) 786-5703
>> 
>>    -----Original Message-----
>>    From: Mike DeVico <mdevico@xcom-labs.com>
>>    Sent: Wednesday, September 18, 2019 8:17 AM
>>    To: Zhang, Xiao <xiao.zhang@intel.com>; Thomas Monjalon
>> <thomas@monjalon.net>
>>    Cc: users@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Zhang, Qi Z
>> <qi.z.zhang@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>;
>> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yigit, Ferruh
>> <ferruh.yigit@intel.com>; Christensen, ChadX M
>> <chadx.m.christensen@intel.com>; Tia Cassett <tiac@xcom-labs.com>; Wu,
>> Jingjing <jingjing.wu@intel.com>; Wong1, Samuel <samuel.wong1@intel.com>
>>    Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
>> 
>>    Sure enough, I see it now. I'll give it a try.
>> 
>>    Thanks!!!
>>    --Mike
>> 
>>    On 9/18/19, 12:11 AM, "Zhang, Xiao" <xiao.zhang@intel.com> wrote:
>> 
>>        [EXTERNAL SENDER]
>> 
>>> -----Original Message-----
>>> From: Thomas Monjalon [mailto:thomas@monjalon.net]
>>> Sent: Wednesday, September 18, 2019 3:03 PM
>>> To: Zhang, Xiao <xiao.zhang@intel.com>
>>> Cc: Mike DeVico <mdevico@xcom-labs.com>; users@dpdk.org; Xing,
>> Beilei
>>> <beilei.xing@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Richardson,
>> Bruce
>>> <bruce.richardson@intel.com>; Ananyev, Konstantin
>>> <konstantin.ananyev@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>;
>>> Christensen, ChadX M <chadx.m.christensen@intel.com>; Tia Cassett
>>> <tiac@xcom-labs.com>; Wu, Jingjing <jingjing.wu@intel.com>
>>> Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
>>> 
>>> 18/09/2019 09:02, Zhang, Xiao:
>>>> 
>>>> There is some hardware limitation and need to enable RSS to distribute
>>> packets for X710.
>>> 
>>> Is this limitation documented?
>> 
>>        Yes, it's documented in doc/guides/nics/i40e.rst
>> 
>>        "DCB works only when RSS is enabled."
>> 
>>> 
>> 
>> 
>> 
>> 
> 


* Re: [dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
  2019-09-19  1:23                   ` Mike DeVico
@ 2019-09-19  2:52                     ` Zhang, Xiao
  2019-09-19 13:34                       ` Mike DeVico
  0 siblings, 1 reply; 20+ messages in thread
From: Zhang, Xiao @ 2019-09-19  2:52 UTC (permalink / raw)
  To: Mike DeVico, Christensen, ChadX M, Thomas Monjalon
  Cc: users, Xing, Beilei, Zhang, Qi Z, Richardson, Bruce, Ananyev,
	Konstantin, Yigit, Ferruh, Tia Cassett, Wu, Jingjing, Wong1,
	Samuel


> -----Original Message-----
> From: Mike DeVico [mailto:mdevico@xcom-labs.com]
> Sent: Thursday, September 19, 2019 9:23 AM
> To: Christensen, ChadX M <chadx.m.christensen@intel.com>; Zhang, Xiao
> <xiao.zhang@intel.com>; Thomas Monjalon <thomas@monjalon.net>
> Cc: users@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>;
> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yigit, Ferruh
> <ferruh.yigit@intel.com>; Tia Cassett <tiac@xcom-labs.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Wong1, Samuel <samuel.wong1@intel.com>
> Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
> 
> As suggested I tried the following:
> 
> I have an Intel FlexRAN FerryBridge broadcasting a packet 1/s which looks like
> the following (sudo tcpdump -i p7p1 -xx):
> 
>         0x0000:  ffff ffff ffff 0000 aeae 0000 8100 4001
>         0x0010:  0800 0009 0000 0000 0001 8086 3600 010f
>         0x0020:  0000 0000 0000 0000 0000 0000 0000 0000
>         0x0030:  0000 0000 0000 0000 0000 0000

There is an error in the packets as I checked with Wireshark; could you try with normal packets?

No issue with following packet as I tried:
0000   ff ff ff ff ff ff 00 40 05 40 ef 24 81 00 40 01
0010   08 00 45 00 00 34 3b 64 40 00 40 06 b7 9b 83 97
0020   20 81 83 97 20 15 04 95 17 70 51 d4 ee 9c 51 a5
0030   5b 36 80 10 7c 70 12 c7 00 00 01 01 08 0a 00 04
0040   f0 d4 01 99 a3 fd

> 
> The first 12 bytes are the dest/src MAC address followed by the 802.1Q Header
> (8100 4001) If you crack this, the MS 16 bits are the TPID which is set to 8100 by
> the Ferrybridge.
> The next 16 bits (0x4001) make up the PCP bits [15:13], the DEI [12] and the VID
> [11:0]. So if you crack the 0x4001 this makes the PCP 2 (010b), the DEI 0 and VID
> 1 (000000000001b).
> 
> Given this I expect the packets to be placed in Pool 1/Queue 2 (based on VID 1
> and PCP 2).
> However, when I run:
> 
> ./vmdq_dcb_app -w 0000:05:00.0 -w 0000:05:00.1 -l 1 -- -p 3 --nb-pools 16 --nb-
> tcs 8 --enable-rss
> EAL: Detected 24 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Probing VFIO support...
> EAL: PCI device 0000:05:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:1572 net_i40e
> EAL: PCI device 0000:05:00.1 on NUMA socket 0
> EAL:   probe driver: 8086:1572 net_i40e
> vmdq queue base: 64 pool base 1
> Configured vmdq pool num: 16, each vmdq pool has 8 queues Port 0 modified
> RSS hash function based on hardware support,requested:0x3bffc
> configured:0x3ef8 Port 0 MAC: e8 ea 6a 27 b5 4d Port 0 vmdq pool 0 set mac
> 52:54:00:12:00:00 Port 0 vmdq pool 1 set mac 52:54:00:12:00:01 Port 0 vmdq
> pool 2 set mac 52:54:00:12:00:02 Port 0 vmdq pool 3 set mac 52:54:00:12:00:03
> Port 0 vmdq pool 4 set mac 52:54:00:12:00:04 Port 0 vmdq pool 5 set mac
> 52:54:00:12:00:05 Port 0 vmdq pool 6 set mac 52:54:00:12:00:06 Port 0 vmdq
> pool 7 set mac 52:54:00:12:00:07 Port 0 vmdq pool 8 set mac 52:54:00:12:00:08
> Port 0 vmdq pool 9 set mac 52:54:00:12:00:09 Port 0 vmdq pool 10 set mac
> 52:54:00:12:00:0a Port 0 vmdq pool 11 set mac 52:54:00:12:00:0b Port 0 vmdq
> pool 12 set mac 52:54:00:12:00:0c Port 0 vmdq pool 13 set mac
> 52:54:00:12:00:0d Port 0 vmdq pool 14 set mac 52:54:00:12:00:0e Port 0 vmdq
> pool 15 set mac 52:54:00:12:00:0f vmdq queue base: 64 pool base 1 Configured
> vmdq pool num: 16, each vmdq pool has 8 queues Port 1 modified RSS hash
> function based on hardware support,requested:0x3bffc configured:0x3ef8 Port
> 1 MAC: e8 ea 6a 27 b5 4e Port 1 vmdq pool 0 set mac 52:54:00:12:01:00 Port 1
> vmdq pool 1 set mac 52:54:00:12:01:01 Port 1 vmdq pool 2 set mac
> 52:54:00:12:01:02 Port 1 vmdq pool 3 set mac 52:54:00:12:01:03 Port 1 vmdq
> pool 4 set mac 52:54:00:12:01:04 Port 1 vmdq pool 5 set mac 52:54:00:12:01:05
> Port 1 vmdq pool 6 set mac 52:54:00:12:01:06 Port 1 vmdq pool 7 set mac
> 52:54:00:12:01:07 Port 1 vmdq pool 8 set mac 52:54:00:12:01:08 Port 1 vmdq
> pool 9 set mac 52:54:00:12:01:09 Port 1 vmdq pool 10 set mac
> 52:54:00:12:01:0a Port 1 vmdq pool 11 set mac 52:54:00:12:01:0b Port 1 vmdq
> pool 12 set mac 52:54:00:12:01:0c Port 1 vmdq pool 13 set mac
> 52:54:00:12:01:0d Port 1 vmdq pool 14 set mac 52:54:00:12:01:0e Port 1 vmdq
> pool 15 set mac 52:54:00:12:01:0f Core 0(lcore 1) reading queues 64-191
> 
> <SIGHUP>
> 
> Pool 0: 0 0 0 0 0 0 0 0
> Pool 1: 119 0 0 0 0 0 0 0
> Pool 2: 0 0 0 0 0 0 0 0
> Pool 3: 0 0 0 0 0 0 0 0
> Pool 4: 0 0 0 0 0 0 0 0
> Pool 5: 0 0 0 0 0 0 0 0
> Pool 6: 0 0 0 0 0 0 0 0
> Pool 7: 0 0 0 0 0 0 0 0
> Pool 8: 0 0 0 0 0 0 0 0
> Pool 9: 0 0 0 0 0 0 0 0
> Pool 10: 0 0 0 0 0 0 0 0
> Pool 11: 0 0 0 0 0 0 0 0
> Pool 12: 0 0 0 0 0 0 0 0
> Pool 13: 0 0 0 0 0 0 0 0
> Pool 14: 0 0 0 0 0 0 0 0
> Pool 15: 0 0 0 0 0 0 0 0
> 
> Even with --enable-rss, the packets are still being placed in VLAN Pool 1/Queue 0
> instead of VLAN Pool 1/Queue 2.
> 
> As I mentioned in my original email, if I use an 82599ES (dual 10G NIC), it all
> works as expected.
> 
> What am I missing?
> --Mike
> 
> On 9/18/19, 7:54 AM, "Christensen, ChadX M" <chadx.m.christensen@intel.com>
> wrote:
> 
>     [EXTERNAL SENDER]
> 
>     Hi Mike,
> 
>     Did that resolve it?
> 
>     Thanks,
> 
>     Chad Christensen | Ecosystem Enablement Manager
>     chadx.m.christensen@intel.com | (801) 786-5703
> 
>     -----Original Message-----
>     From: Mike DeVico <mdevico@xcom-labs.com>
>     Sent: Wednesday, September 18, 2019 8:17 AM
>     To: Zhang, Xiao <xiao.zhang@intel.com>; Thomas Monjalon
> <thomas@monjalon.net>
>     Cc: users@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>;
> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yigit, Ferruh
> <ferruh.yigit@intel.com>; Christensen, ChadX M
> <chadx.m.christensen@intel.com>; Tia Cassett <tiac@xcom-labs.com>; Wu,
> Jingjing <jingjing.wu@intel.com>; Wong1, Samuel <samuel.wong1@intel.com>
>     Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
> 
>     Sure enough, I see it now. I'll give it a try.
> 
>     Thanks!!!
>     --Mike
> 
>     On 9/18/19, 12:11 AM, "Zhang, Xiao" <xiao.zhang@intel.com> wrote:
> 
>         [EXTERNAL SENDER]
> 
>         > -----Original Message-----
>         > From: Thomas Monjalon [mailto:thomas@monjalon.net]
>         > Sent: Wednesday, September 18, 2019 3:03 PM
>         > To: Zhang, Xiao <xiao.zhang@intel.com>
>         > Cc: Mike DeVico <mdevico@xcom-labs.com>; users@dpdk.org; Xing,
> Beilei
>         > <beilei.xing@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Richardson,
> Bruce
>         > <bruce.richardson@intel.com>; Ananyev, Konstantin
>         > <konstantin.ananyev@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>;
>         > Christensen, ChadX M <chadx.m.christensen@intel.com>; Tia Cassett
>         > <tiac@xcom-labs.com>; Wu, Jingjing <jingjing.wu@intel.com>
>         > Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
>         >
>         > 18/09/2019 09:02, Zhang, Xiao:
>         > >
>         > > There is some hardware limitation and need to enable RSS to distribute
>         > packets for X710.
>         >
>         > Is this limitation documented?
> 
>         Yes, it's documented in doc/guides/nics/i40e.rst
> 
>         "DCB works only when RSS is enabled."
> 
>         >
> 
> 
> 
> 



* Re: [dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
  2019-09-18 14:53                 ` Christensen, ChadX M
  2019-09-18 20:22                   ` Mike DeVico
@ 2019-09-19  1:23                   ` Mike DeVico
  2019-09-19  2:52                     ` Zhang, Xiao
  1 sibling, 1 reply; 20+ messages in thread
From: Mike DeVico @ 2019-09-19  1:23 UTC (permalink / raw)
  To: Christensen, ChadX M, Zhang, Xiao, Thomas Monjalon
  Cc: users, Xing, Beilei, Zhang, Qi Z, Richardson, Bruce, Ananyev,
	Konstantin, Yigit, Ferruh, Tia Cassett, Wu, Jingjing, Wong1,
	 Samuel

As suggested I tried the following:

I have an Intel FlexRAN FerryBridge broadcasting one packet per second, which looks like the following
(sudo tcpdump -i p7p1 -xx):

        0x0000:  ffff ffff ffff 0000 aeae 0000 8100 4001
        0x0010:  0800 0009 0000 0000 0001 8086 3600 010f
        0x0020:  0000 0000 0000 0000 0000 0000 0000 0000
        0x0030:  0000 0000 0000 0000 0000 0000

The first 12 bytes are the dest/src MAC addresses, followed by the 802.1Q header (8100 4001).
If you crack this, the most significant 16 bits are the TPID, which the FerryBridge sets to 0x8100.
The next 16 bits (0x4001) make up the PCP bits [15:13], the DEI bit [12], and the VID [11:0]. So if you
crack the 0x4001, this makes the PCP 2 (010b), the DEI 0, and the VID 1 (000000000001b).

Given this I expect the packets to be placed in Pool 1/Queue 2 (based on VID 1 and PCP 2). 
However, when I run:

./vmdq_dcb_app -w 0000:05:00.0 -w 0000:05:00.1 -l 1 -- -p 3 --nb-pools 16 --nb-tcs 8 --enable-rss
EAL: Detected 24 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Probing VFIO support...
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:05:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1572 net_i40e
vmdq queue base: 64 pool base 1
Configured vmdq pool num: 16, each vmdq pool has 8 queues
Port 0 modified RSS hash function based on hardware support,requested:0x3bffc configured:0x3ef8
Port 0 MAC: e8 ea 6a 27 b5 4d
Port 0 vmdq pool 0 set mac 52:54:00:12:00:00
Port 0 vmdq pool 1 set mac 52:54:00:12:00:01
Port 0 vmdq pool 2 set mac 52:54:00:12:00:02
Port 0 vmdq pool 3 set mac 52:54:00:12:00:03
Port 0 vmdq pool 4 set mac 52:54:00:12:00:04
Port 0 vmdq pool 5 set mac 52:54:00:12:00:05
Port 0 vmdq pool 6 set mac 52:54:00:12:00:06
Port 0 vmdq pool 7 set mac 52:54:00:12:00:07
Port 0 vmdq pool 8 set mac 52:54:00:12:00:08
Port 0 vmdq pool 9 set mac 52:54:00:12:00:09
Port 0 vmdq pool 10 set mac 52:54:00:12:00:0a
Port 0 vmdq pool 11 set mac 52:54:00:12:00:0b
Port 0 vmdq pool 12 set mac 52:54:00:12:00:0c
Port 0 vmdq pool 13 set mac 52:54:00:12:00:0d
Port 0 vmdq pool 14 set mac 52:54:00:12:00:0e
Port 0 vmdq pool 15 set mac 52:54:00:12:00:0f
vmdq queue base: 64 pool base 1
Configured vmdq pool num: 16, each vmdq pool has 8 queues
Port 1 modified RSS hash function based on hardware support,requested:0x3bffc configured:0x3ef8
Port 1 MAC: e8 ea 6a 27 b5 4e
Port 1 vmdq pool 0 set mac 52:54:00:12:01:00
Port 1 vmdq pool 1 set mac 52:54:00:12:01:01
Port 1 vmdq pool 2 set mac 52:54:00:12:01:02
Port 1 vmdq pool 3 set mac 52:54:00:12:01:03
Port 1 vmdq pool 4 set mac 52:54:00:12:01:04
Port 1 vmdq pool 5 set mac 52:54:00:12:01:05
Port 1 vmdq pool 6 set mac 52:54:00:12:01:06
Port 1 vmdq pool 7 set mac 52:54:00:12:01:07
Port 1 vmdq pool 8 set mac 52:54:00:12:01:08
Port 1 vmdq pool 9 set mac 52:54:00:12:01:09
Port 1 vmdq pool 10 set mac 52:54:00:12:01:0a
Port 1 vmdq pool 11 set mac 52:54:00:12:01:0b
Port 1 vmdq pool 12 set mac 52:54:00:12:01:0c
Port 1 vmdq pool 13 set mac 52:54:00:12:01:0d
Port 1 vmdq pool 14 set mac 52:54:00:12:01:0e
Port 1 vmdq pool 15 set mac 52:54:00:12:01:0f
Core 0(lcore 1) reading queues 64-191

<SIGHUP>

Pool 0: 0 0 0 0 0 0 0 0 
Pool 1: 119 0 0 0 0 0 0 0 
Pool 2: 0 0 0 0 0 0 0 0 
Pool 3: 0 0 0 0 0 0 0 0 
Pool 4: 0 0 0 0 0 0 0 0 
Pool 5: 0 0 0 0 0 0 0 0 
Pool 6: 0 0 0 0 0 0 0 0 
Pool 7: 0 0 0 0 0 0 0 0 
Pool 8: 0 0 0 0 0 0 0 0 
Pool 9: 0 0 0 0 0 0 0 0 
Pool 10: 0 0 0 0 0 0 0 0 
Pool 11: 0 0 0 0 0 0 0 0 
Pool 12: 0 0 0 0 0 0 0 0 
Pool 13: 0 0 0 0 0 0 0 0 
Pool 14: 0 0 0 0 0 0 0 0 
Pool 15: 0 0 0 0 0 0 0 0

Even with --enable-rss, the packets are still being placed in VLAN Pool 1/Queue 0 
instead of VLAN Pool 1/Queue 2.

As I mentioned in my original email, if I use an 82599ES (dual 10G NIC), it all
works as expected.

What am I missing?
--Mike

On 9/18/19, 7:54 AM, "Christensen, ChadX M" <chadx.m.christensen@intel.com> wrote:

    [EXTERNAL SENDER]
    
    Hi Mike,
    
    Did that resolve it?
    
    Thanks,
    
    Chad Christensen | Ecosystem Enablement Manager
    chadx.m.christensen@intel.com | (801) 786-5703
    
    -----Original Message-----
    From: Mike DeVico <mdevico@xcom-labs.com>
    Sent: Wednesday, September 18, 2019 8:17 AM
    To: Zhang, Xiao <xiao.zhang@intel.com>; Thomas Monjalon <thomas@monjalon.net>
    Cc: users@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>; Christensen, ChadX M <chadx.m.christensen@intel.com>; Tia Cassett <tiac@xcom-labs.com>; Wu, Jingjing <jingjing.wu@intel.com>; Wong1, Samuel <samuel.wong1@intel.com>
    Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
    
    Sure enough, I see it now. I'll give it a try.
    
    Thanks!!!
    --Mike
    
    On 9/18/19, 12:11 AM, "Zhang, Xiao" <xiao.zhang@intel.com> wrote:
    
        [EXTERNAL SENDER]
    
        > -----Original Message-----
        > From: Thomas Monjalon [mailto:thomas@monjalon.net]
        > Sent: Wednesday, September 18, 2019 3:03 PM
        > To: Zhang, Xiao <xiao.zhang@intel.com>
        > Cc: Mike DeVico <mdevico@xcom-labs.com>; users@dpdk.org; Xing, Beilei
        > <beilei.xing@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Richardson, Bruce
        > <bruce.richardson@intel.com>; Ananyev, Konstantin
        > <konstantin.ananyev@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>;
        > Christensen, ChadX M <chadx.m.christensen@intel.com>; Tia Cassett
        > <tiac@xcom-labs.com>; Wu, Jingjing <jingjing.wu@intel.com>
        > Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
        >
        > 18/09/2019 09:02, Zhang, Xiao:
        > >
        > > There is some hardware limitation and need to enable RSS to distribute
        > packets for X710.
        >
        > Is this limitation documented?
    
        Yes, it's documented in doc/guides/nics/i40e.rst
    
        "DCB works only when RSS is enabled."
    
        >
    
    
    
    



* Re: [dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
  2019-09-18 14:53                 ` Christensen, ChadX M
@ 2019-09-18 20:22                   ` Mike DeVico
  2019-09-19  1:23                   ` Mike DeVico
  1 sibling, 0 replies; 20+ messages in thread
From: Mike DeVico @ 2019-09-18 20:22 UTC (permalink / raw)
  To: Christensen, ChadX M, Zhang, Xiao, Thomas Monjalon
  Cc: users, Xing, Beilei, Zhang, Qi Z, Richardson, Bruce, Ananyev,
	Konstantin, Yigit, Ferruh, Tia Cassett, Wu, Jingjing, Wong1,
	 Samuel

I have not had a chance to try it yet, but it definitely looks like this is 
the issue.

On 9/18/19, 7:54 AM, "Christensen, ChadX M" <chadx.m.christensen@intel.com> wrote:

    [EXTERNAL SENDER]
    
    Hi Mike,
    
    Did that resolve it?
    
    Thanks,
    
    Chad Christensen | Ecosystem Enablement Manager
    chadx.m.christensen@intel.com | (801) 786-5703
    
    -----Original Message-----
    From: Mike DeVico <mdevico@xcom-labs.com>
    Sent: Wednesday, September 18, 2019 8:17 AM
    To: Zhang, Xiao <xiao.zhang@intel.com>; Thomas Monjalon <thomas@monjalon.net>
    Cc: users@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>; Christensen, ChadX M <chadx.m.christensen@intel.com>; Tia Cassett <tiac@xcom-labs.com>; Wu, Jingjing <jingjing.wu@intel.com>; Wong1, Samuel <samuel.wong1@intel.com>
    Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
    
    Sure enough, I see it now. I'll give it a try.
    
    Thanks!!!
    --Mike
    
    On 9/18/19, 12:11 AM, "Zhang, Xiao" <xiao.zhang@intel.com> wrote:
    
        [EXTERNAL SENDER]
    
        > -----Original Message-----
        > From: Thomas Monjalon [mailto:thomas@monjalon.net]
        > Sent: Wednesday, September 18, 2019 3:03 PM
        > To: Zhang, Xiao <xiao.zhang@intel.com>
        > Cc: Mike DeVico <mdevico@xcom-labs.com>; users@dpdk.org; Xing, Beilei
        > <beilei.xing@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Richardson, Bruce
        > <bruce.richardson@intel.com>; Ananyev, Konstantin
        > <konstantin.ananyev@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>;
        > Christensen, ChadX M <chadx.m.christensen@intel.com>; Tia Cassett
        > <tiac@xcom-labs.com>; Wu, Jingjing <jingjing.wu@intel.com>
        > Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
        >
        > 18/09/2019 09:02, Zhang, Xiao:
        > >
        > > There is some hardware limitation and need to enable RSS to distribute
        > packets for X710.
        >
        > Is this limitation documented?
    
        Yes, it's documented in doc/guides/nics/i40e.rst
    
        "DCB works only when RSS is enabled."
    
        >
    
    
    
    



* Re: [dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
  2019-09-18 14:17               ` Mike DeVico
@ 2019-09-18 14:53                 ` Christensen, ChadX M
  2019-09-18 20:22                   ` Mike DeVico
  2019-09-19  1:23                   ` Mike DeVico
  0 siblings, 2 replies; 20+ messages in thread
From: Christensen, ChadX M @ 2019-09-18 14:53 UTC (permalink / raw)
  To: Mike DeVico, Zhang, Xiao, Thomas Monjalon
  Cc: users, Xing, Beilei, Zhang, Qi Z, Richardson, Bruce, Ananyev,
	Konstantin, Yigit, Ferruh, Tia Cassett, Wu, Jingjing, Wong1,
	Samuel

Hi Mike,

Did that resolve it?

Thanks,

Chad Christensen | Ecosystem Enablement Manager
chadx.m.christensen@intel.com | (801) 786-5703

-----Original Message-----
From: Mike DeVico <mdevico@xcom-labs.com> 
Sent: Wednesday, September 18, 2019 8:17 AM
To: Zhang, Xiao <xiao.zhang@intel.com>; Thomas Monjalon <thomas@monjalon.net>
Cc: users@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>; Christensen, ChadX M <chadx.m.christensen@intel.com>; Tia Cassett <tiac@xcom-labs.com>; Wu, Jingjing <jingjing.wu@intel.com>; Wong1, Samuel <samuel.wong1@intel.com>
Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC

Sure enough, I see it now. I'll give it a try. 

Thanks!!!
--Mike

On 9/18/19, 12:11 AM, "Zhang, Xiao" <xiao.zhang@intel.com> wrote:

    [EXTERNAL SENDER]
    
    > -----Original Message-----
    > From: Thomas Monjalon [mailto:thomas@monjalon.net]
    > Sent: Wednesday, September 18, 2019 3:03 PM
    > To: Zhang, Xiao <xiao.zhang@intel.com>
    > Cc: Mike DeVico <mdevico@xcom-labs.com>; users@dpdk.org; Xing, Beilei
    > <beilei.xing@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Richardson, Bruce
    > <bruce.richardson@intel.com>; Ananyev, Konstantin
    > <konstantin.ananyev@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>;
    > Christensen, ChadX M <chadx.m.christensen@intel.com>; Tia Cassett
    > <tiac@xcom-labs.com>; Wu, Jingjing <jingjing.wu@intel.com>
    > Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
    >
    > 18/09/2019 09:02, Zhang, Xiao:
    > >
    > > There is some hardware limitation and need to enable RSS to distribute
    > packets for X710.
    >
    > Is this limitation documented?
    
    Yes, it's documented in doc/guides/nics/i40e.rst
    
    "DCB works only when RSS is enabled."
    
    >
    
    



* Re: [dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
  2019-09-18  7:10             ` Zhang, Xiao
@ 2019-09-18 14:17               ` Mike DeVico
  2019-09-18 14:53                 ` Christensen, ChadX M
  0 siblings, 1 reply; 20+ messages in thread
From: Mike DeVico @ 2019-09-18 14:17 UTC (permalink / raw)
  To: Zhang, Xiao, Thomas Monjalon
  Cc: users, Xing, Beilei, Zhang, Qi Z, Richardson, Bruce, Ananyev,
	Konstantin, Yigit, Ferruh, Christensen, ChadX M, Tia Cassett, Wu,
	Jingjing, Wong1, Samuel

Sure enough, I see it now. I'll give it a try. 

Thanks!!!
--Mike

On 9/18/19, 12:11 AM, "Zhang, Xiao" <xiao.zhang@intel.com> wrote:

    [EXTERNAL SENDER]
    
    > -----Original Message-----
    > From: Thomas Monjalon [mailto:thomas@monjalon.net]
    > Sent: Wednesday, September 18, 2019 3:03 PM
    > To: Zhang, Xiao <xiao.zhang@intel.com>
    > Cc: Mike DeVico <mdevico@xcom-labs.com>; users@dpdk.org; Xing, Beilei
    > <beilei.xing@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Richardson, Bruce
    > <bruce.richardson@intel.com>; Ananyev, Konstantin
    > <konstantin.ananyev@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>;
    > Christensen, ChadX M <chadx.m.christensen@intel.com>; Tia Cassett
    > <tiac@xcom-labs.com>; Wu, Jingjing <jingjing.wu@intel.com>
    > Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
    >
    > 18/09/2019 09:02, Zhang, Xiao:
    > >
    > > There is some hardware limitation and need to enable RSS to distribute
    > packets for X710.
    >
    > Is this limitation documented?
    
    Yes, it's documented in doc/guides/nics/i40e.rst
    
    "DCB works only when RSS is enabled."
    
    >
    
    



* Re: [dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
  2019-09-18  7:03           ` Thomas Monjalon
@ 2019-09-18  7:10             ` Zhang, Xiao
  2019-09-18 14:17               ` Mike DeVico
  0 siblings, 1 reply; 20+ messages in thread
From: Zhang, Xiao @ 2019-09-18  7:10 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Mike DeVico, users, Xing, Beilei, Zhang, Qi Z, Richardson, Bruce,
	Ananyev, Konstantin, Yigit, Ferruh, Christensen, ChadX M,
	Tia Cassett, Wu, Jingjing



> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Wednesday, September 18, 2019 3:03 PM
> To: Zhang, Xiao <xiao.zhang@intel.com>
> Cc: Mike DeVico <mdevico@xcom-labs.com>; users@dpdk.org; Xing, Beilei
> <beilei.xing@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>;
> Christensen, ChadX M <chadx.m.christensen@intel.com>; Tia Cassett
> <tiac@xcom-labs.com>; Wu, Jingjing <jingjing.wu@intel.com>
> Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
> 
> 18/09/2019 09:02, Zhang, Xiao:
> >
> > There is a hardware limitation on the X710: RSS must be enabled for DCB to
> distribute packets to the correct queues.
> 
> Is this limitation documented?

Yes, it's documented in doc/guides/nics/i40e.rst

"DCB works only when RSS is enabled."

> 



* Re: [dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
  2019-09-18  7:02         ` Zhang, Xiao
@ 2019-09-18  7:03           ` Thomas Monjalon
  2019-09-18  7:10             ` Zhang, Xiao
  0 siblings, 1 reply; 20+ messages in thread
From: Thomas Monjalon @ 2019-09-18  7:03 UTC (permalink / raw)
  To: Zhang, Xiao
  Cc: Mike DeVico, users, Xing, Beilei, Zhang, Qi Z, Richardson, Bruce,
	Ananyev, Konstantin, Yigit, Ferruh, Christensen, ChadX M,
	Tia Cassett, Wu, Jingjing

18/09/2019 09:02, Zhang, Xiao:
> 
> There is a hardware limitation on the X710: RSS must be enabled for DCB to distribute packets to the correct queues.

Is this limitation documented?




* Re: [dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
  2019-09-18  4:20       ` Mike DeVico
@ 2019-09-18  7:02         ` Zhang, Xiao
  2019-09-18  7:03           ` Thomas Monjalon
  0 siblings, 1 reply; 20+ messages in thread
From: Zhang, Xiao @ 2019-09-18  7:02 UTC (permalink / raw)
  To: Mike DeVico
  Cc: users, Xing, Beilei, Zhang, Qi Z, Richardson, Bruce, Ananyev,
	Konstantin, Yigit, Ferruh, Christensen, ChadX M, Tia Cassett,
	Thomas Monjalon, Wu, Jingjing


There is a hardware limitation on the X710: RSS must be enabled for DCB to distribute packets to the correct queues.
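
In DPDK terms this means the port must be configured with a multi-queue mode that combines VMDq/DCB with RSS. A minimal sketch, assuming the DPDK 18.08 ethdev API (this mirrors what the vmdq_dcb example does when --enable-rss is passed; the helper name is illustrative, not part of any DPDK API):

```c
#include <rte_ethdev.h>

/* Sketch only: on i40e/X710, DCB queue mapping takes effect only when RSS
 * is also enabled, so select VMDQ_DCB_RSS rather than plain VMDQ_DCB. */
static void conf_vmdq_dcb_with_rss(struct rte_eth_conf *port_conf)
{
	port_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;
	port_conf->rx_adv_conf.rss_conf.rss_key = NULL;  /* use the default key */
	port_conf->rx_adv_conf.rss_conf.rss_hf =
		ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP;
}
```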

Thanks,
Xiao

> -----Original Message-----
> From: Mike DeVico [mailto:mdevico@xcom-labs.com]
> Sent: Wednesday, September 18, 2019 12:21 PM
> To: Zhang, Xiao <xiao.zhang@intel.com>; Thomas Monjalon
> <thomas@monjalon.net>
> Cc: users@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>;
> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yigit, Ferruh
> <ferruh.yigit@intel.com>; Christensen, ChadX M
> <chadx.m.christensen@intel.com>; Tia Cassett <tiac@xcom-labs.com>
> Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
> 
> As I understand it, RSS and DCB are two completely different mechanisms. DCB
> uses the PCP field of the VLAN tag to map a packet to a given queue, whereas
> RSS computes a hash over the IP addresses and ports and then uses that hash to
> map the packet to a given queue.
> 
> --Mike
> 
> On 9/17/19, 8:33 PM, "Zhang, Xiao" <xiao.zhang@intel.com> wrote:
> 
>     [EXTERNAL SENDER]
> 
>     Hi Mike,
> 
>     You need to add the --enable-rss option when starting the process, like:
>     sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3 --enable-rss
> 
>     Thanks,
>     Xiao
> 
>     > -----Original Message-----
>     > From: Mike DeVico [mailto:mdevico@xcom-labs.com]
>     > Sent: Wednesday, September 18, 2019 2:55 AM
>     > To: Thomas Monjalon <thomas@monjalon.net>
>     > Cc: users@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Zhang, Qi Z
>     > <qi.z.zhang@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>;
>     > Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yigit, Ferruh
>     > <ferruh.yigit@intel.com>; Christensen, ChadX M
>     > <chadx.m.christensen@intel.com>; Tia Cassett <tiac@xcom-labs.com>
>     > Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
>     >
>     > Hello,
>     >
>     > So far I haven't heard back from anyone regarding this issue and I would like
> to
>     > know what the status is at this point. Also, if you have any recommendations
> or
>     > require additional information from me, please let me know.
>     >
>     > Thank you in advance,
>     > --Mike DeVico
>     >
>     > On 9/9/19, 1:39 PM, "Thomas Monjalon" <thomas@monjalon.net> wrote:
>     >
>     >     [EXTERNAL SENDER]
>     >
>     >     Adding i40e maintainers and a few more.
>     >
>     >     07/09/2019 01:11, Mike DeVico:
>     >     > Hello,
>     >     >
>     >     > I am having an issue getting the DCB feature to work with an Intel
>     >     > X710 Quad SFP+ NIC.
>     >     >
>     >     > Here’s my setup:
>     >     >
>     >     > 1.      DPDK 18.08 built with the following I40E configs:
>     >     >
>     >     > CONFIG_RTE_LIBRTE_I40E_PMD=y
>     >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_RX=n
>     >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_TX=n
>     >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_TX_FREE=n
>     >     > CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
>     >     > CONFIG_RTE_LIBRTE_I40E_INC_VECTOR=y
>     >     > CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
>     >     > CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF=64
>     >     > CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=8
>     >     >
>     >     > 2.      /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net
>     >     >
>     >     > Network devices using DPDK-compatible driver
>     >     > ============================================
>     >     > 0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572'
> drv=igb_uio
>     > unused=i40e
>     >     > 0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572'
> drv=igb_uio
>     > unused=i40e
>     >     > 0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572'
> drv=igb_uio
>     > unused=i40e
>     >     > 0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572'
> drv=igb_uio
>     > unused=i40e
>     >     >
>     >     >        Network devices using kernel driver
>     >     >        ===================================
>     >     >        0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0
>     > drv=igb unused=igb_uio *Active*
>     >     >        0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1
>     > drv=igb unused=igb_uio *Active*
>     >     >
>     >     >        Other Network devices
>     >     >        =====================
>     >     >        <none>
>     >     >
>     >     > 3.      We have a custom FPGA board connected to port 1 of the X710 NIC
>     > that’s broadcasting
>     >     > a packet tagged with VLAN 1 and PCP 2.
>     >     >
>     >     > 4.      I use the vmdq_dcb example app and configure the card with 16
>     > pools/8 queues each
>     >     > as follows:
>     >     >        sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
>     >     >
>     >     >
>     >     > The app starts up fine and successfully probes the card as shown below:
>     >     >
>     >     > sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
>     >     > EAL: Detected 80 lcore(s)
>     >     > EAL: Detected 2 NUMA nodes
>     >     > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>     >     > EAL: Probing VFIO support...
>     >     > EAL: PCI device 0000:02:00.0 on NUMA socket 0
>     >     > EAL:   probe driver: 8086:1521 net_e1000_igb
>     >     > EAL: PCI device 0000:02:00.1 on NUMA socket 0
>     >     > EAL:   probe driver: 8086:1521 net_e1000_igb
>     >     > EAL: PCI device 0000:3b:00.0 on NUMA socket 0
>     >     > EAL:   probe driver: 8086:1572 net_i40e
>     >     > EAL: PCI device 0000:3b:00.1 on NUMA socket 0
>     >     > EAL:   probe driver: 8086:1572 net_i40e
>     >     > EAL: PCI device 0000:3b:00.2 on NUMA socket 0
>     >     > EAL:   probe driver: 8086:1572 net_i40e
>     >     > EAL: PCI device 0000:3b:00.3 on NUMA socket 0
>     >     > EAL:   probe driver: 8086:1572 net_i40e
>     >     > vmdq queue base: 64 pool base 1
>     >     > Configured vmdq pool num: 16, each vmdq pool has 8 queues
>     >     > Port 0 MAC: e8 ea 6a 27 b5 4d
>     >     > Port 0 vmdq pool 0 set mac 52:54:00:12:00:00
>     >     > Port 0 vmdq pool 1 set mac 52:54:00:12:00:01
>     >     > Port 0 vmdq pool 2 set mac 52:54:00:12:00:02
>     >     > Port 0 vmdq pool 3 set mac 52:54:00:12:00:03
>     >     > Port 0 vmdq pool 4 set mac 52:54:00:12:00:04
>     >     > Port 0 vmdq pool 5 set mac 52:54:00:12:00:05
>     >     > Port 0 vmdq pool 6 set mac 52:54:00:12:00:06
>     >     > Port 0 vmdq pool 7 set mac 52:54:00:12:00:07
>     >     > Port 0 vmdq pool 8 set mac 52:54:00:12:00:08
>     >     > Port 0 vmdq pool 9 set mac 52:54:00:12:00:09
>     >     > Port 0 vmdq pool 10 set mac 52:54:00:12:00:0a
>     >     > Port 0 vmdq pool 11 set mac 52:54:00:12:00:0b
>     >     > Port 0 vmdq pool 12 set mac 52:54:00:12:00:0c
>     >     > Port 0 vmdq pool 13 set mac 52:54:00:12:00:0d
>     >     > Port 0 vmdq pool 14 set mac 52:54:00:12:00:0e
>     >     > Port 0 vmdq pool 15 set mac 52:54:00:12:00:0f
>     >     > vmdq queue base: 64 pool base 1
>     >     > Configured vmdq pool num: 16, each vmdq pool has 8 queues
>     >     > Port 1 MAC: e8 ea 6a 27 b5 4e
>     >     > Port 1 vmdq pool 0 set mac 52:54:00:12:01:00
>     >     > Port 1 vmdq pool 1 set mac 52:54:00:12:01:01
>     >     > Port 1 vmdq pool 2 set mac 52:54:00:12:01:02
>     >     > Port 1 vmdq pool 3 set mac 52:54:00:12:01:03
>     >     > Port 1 vmdq pool 4 set mac 52:54:00:12:01:04
>     >     > Port 1 vmdq pool 5 set mac 52:54:00:12:01:05
>     >     > Port 1 vmdq pool 6 set mac 52:54:00:12:01:06
>     >     > Port 1 vmdq pool 7 set mac 52:54:00:12:01:07
>     >     > Port 1 vmdq pool 8 set mac 52:54:00:12:01:08
>     >     > Port 1 vmdq pool 9 set mac 52:54:00:12:01:09
>     >     > Port 1 vmdq pool 10 set mac 52:54:00:12:01:0a
>     >     > Port 1 vmdq pool 11 set mac 52:54:00:12:01:0b
>     >     > Port 1 vmdq pool 12 set mac 52:54:00:12:01:0c
>     >     > Port 1 vmdq pool 13 set mac 52:54:00:12:01:0d
>     >     > Port 1 vmdq pool 14 set mac 52:54:00:12:01:0e
>     >     > Port 1 vmdq pool 15 set mac 52:54:00:12:01:0f
>     >     >
>     >     > Skipping disabled port 2
>     >     >
>     >     > Skipping disabled port 3
>     >     > Core 0(lcore 1) reading queues 64-191
>     >     >
>     >     > However, when I issue the SIGHUP I see that the packets
>     >     > are being put into the first queue of Pool 1 as follows:
>     >     >
>     >     > Pool 0: 0 0 0 0 0 0 0 0
>     >     > Pool 1: 10 0 0 0 0 0 0 0
>     >     > Pool 2: 0 0 0 0 0 0 0 0
>     >     > Pool 3: 0 0 0 0 0 0 0 0
>     >     > Pool 4: 0 0 0 0 0 0 0 0
>     >     > Pool 5: 0 0 0 0 0 0 0 0
>     >     > Pool 6: 0 0 0 0 0 0 0 0
>     >     > Pool 7: 0 0 0 0 0 0 0 0
>     >     > Pool 8: 0 0 0 0 0 0 0 0
>     >     > Pool 9: 0 0 0 0 0 0 0 0
>     >     > Pool 10: 0 0 0 0 0 0 0 0
>     >     > Pool 11: 0 0 0 0 0 0 0 0
>     >     > Pool 12: 0 0 0 0 0 0 0 0
>     >     > Pool 13: 0 0 0 0 0 0 0 0
>     >     > Pool 14: 0 0 0 0 0 0 0 0
>     >     > Pool 15: 0 0 0 0 0 0 0 0
>     >     > Finished handling signal 1
>     >     >
>     >     > Since the packets are being tagged with PCP 2, they should be getting
>     >     > mapped to the 3rd queue of Pool 1, right?
>     >     >
>     >     > As a sanity check, I tried the same test using an 82599ES 2 port 10GB NIC
>     > and
>     >     > the packets show up in the expected queue. (Note, to get it to work I
> had
>     >     > to modify the vmdq_dcb app to set the vmdq pool MACs to all FF’s)
>     >     >
>     >     > Here’s that setup:
>     >     >
>     >     > /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net
>     >     >
>     >     > Network devices using DPDK-compatible driver
>     >     > ============================================
>     >     > 0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
>     > drv=igb_uio unused=ixgbe
>     >     > 0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
>     > drv=igb_uio unused=ixgbe
>     >     >
>     >     > Network devices using kernel driver
>     >     > ===================================
>     >     > 0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0
> drv=igb
>     > unused=igb_uio *Active*
>     >     > 0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1
> drv=igb
>     > unused=igb_uio *Active*
>     >     > 0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572'
> if=enp59s0f0
>     > drv=i40e unused=igb_uio
>     >     > 0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572'
> if=enp59s0f1
>     > drv=i40e unused=igb_uio
>     >     > 0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572'
> if=enp59s0f2
>     > drv=i40e unused=igb_uio
>     >     > 0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572'
> if=enp59s0f3
>     > drv=i40e unused=igb_uio
>     >     >
>     >     > Other Network devices
>     >     > =====================
>     >     > <none>
>     >     >
>     >     > sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
>     >     > EAL: Detected 80 lcore(s)
>     >     > EAL: Detected 2 NUMA nodes
>     >     > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>     >     > EAL: Probing VFIO support...
>     >     > EAL: PCI device 0000:02:00.0 on NUMA socket 0
>     >     > EAL:   probe driver: 8086:1521 net_e1000_igb
>     >     > EAL: PCI device 0000:02:00.1 on NUMA socket 0
>     >     > EAL:   probe driver: 8086:1521 net_e1000_igb
>     >     > EAL: PCI device 0000:3b:00.0 on NUMA socket 0
>     >     > EAL:   probe driver: 8086:1572 net_i40e
>     >     > EAL: PCI device 0000:3b:00.1 on NUMA socket 0
>     >     > EAL:   probe driver: 8086:1572 net_i40e
>     >     > EAL: PCI device 0000:3b:00.2 on NUMA socket 0
>     >     > EAL:   probe driver: 8086:1572 net_i40e
>     >     > EAL: PCI device 0000:3b:00.3 on NUMA socket 0
>     >     > EAL:   probe driver: 8086:1572 net_i40e
>     >     > EAL: PCI device 0000:af:00.0 on NUMA socket 1
>     >     > EAL:   probe driver: 8086:10fb net_ixgbe
>     >     > EAL: PCI device 0000:af:00.1 on NUMA socket 1
>     >     > EAL:   probe driver: 8086:10fb net_ixgbe
>     >     > vmdq queue base: 0 pool base 0
>     >     > Port 0 MAC: 00 1b 21 bf 71 24
>     >     > Port 0 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff
>     >     > vmdq queue base: 0 pool base 0
>     >     > Port 1 MAC: 00 1b 21 bf 71 26
>     >     > Port 1 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff
>     >     >
>     >     > Now when I send the SIGHUP, I see the packets being routed to
>     >     > the expected queue:
>     >     >
>     >     > Pool 0: 0 0 0 0 0 0 0 0
>     >     > Pool 1: 0 0 58 0 0 0 0 0
>     >     > Pool 2: 0 0 0 0 0 0 0 0
>     >     > Pool 3: 0 0 0 0 0 0 0 0
>     >     > Pool 4: 0 0 0 0 0 0 0 0
>     >     > Pool 5: 0 0 0 0 0 0 0 0
>     >     > Pool 6: 0 0 0 0 0 0 0 0
>     >     > Pool 7: 0 0 0 0 0 0 0 0
>     >     > Pool 8: 0 0 0 0 0 0 0 0
>     >     > Pool 9: 0 0 0 0 0 0 0 0
>     >     > Pool 10: 0 0 0 0 0 0 0 0
>     >     > Pool 11: 0 0 0 0 0 0 0 0
>     >     > Pool 12: 0 0 0 0 0 0 0 0
>     >     > Pool 13: 0 0 0 0 0 0 0 0
>     >     > Pool 14: 0 0 0 0 0 0 0 0
>     >     > Pool 15: 0 0 0 0 0 0 0 0
>     >     > Finished handling signal 1
>     >     >
>     >     > What am I missing?
>     >     >
>     >     > Thank you in advance,
>     >     > --Mike
>     >     >
>     >     >
>     >
>     >
>     >
>     >
>     >
>     >
> 
> 



* Re: [dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
  2019-09-18  3:32     ` Zhang, Xiao
@ 2019-09-18  4:20       ` Mike DeVico
  2019-09-18  7:02         ` Zhang, Xiao
  0 siblings, 1 reply; 20+ messages in thread
From: Mike DeVico @ 2019-09-18  4:20 UTC (permalink / raw)
  To: Zhang, Xiao, Thomas Monjalon
  Cc: users, Xing, Beilei, Zhang, Qi Z, Richardson, Bruce, Ananyev,
	Konstantin, Yigit, Ferruh, Christensen, ChadX M, Tia Cassett

As I understand it, RSS and DCB are two completely different mechanisms. DCB uses the PCP field of the
VLAN tag to map a packet to a given queue, whereas RSS computes a hash over the IP addresses and ports
and then uses that hash to map the packet to a given queue.

--Mike

On 9/17/19, 8:33 PM, "Zhang, Xiao" <xiao.zhang@intel.com> wrote:

    [EXTERNAL SENDER]
    
    Hi Mike,
    
    You need to add the --enable-rss option when starting the process, like:
    sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3 --enable-rss
    
    Thanks,
    Xiao
    
    > -----Original Message-----
    > From: Mike DeVico [mailto:mdevico@xcom-labs.com]
    > Sent: Wednesday, September 18, 2019 2:55 AM
    > To: Thomas Monjalon <thomas@monjalon.net>
    > Cc: users@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Zhang, Qi Z
    > <qi.z.zhang@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>;
    > Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yigit, Ferruh
    > <ferruh.yigit@intel.com>; Christensen, ChadX M
    > <chadx.m.christensen@intel.com>; Tia Cassett <tiac@xcom-labs.com>
    > Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
    >
    > Hello,
    >
    > So far I haven't heard back from anyone regarding this issue and I would like to
    > know what the status is at this point. Also, if you have any recommendations or
    > require additional information from me, please let me know.
    >
    > Thank you in advance,
    > --Mike DeVico
    >
    > On 9/9/19, 1:39 PM, "Thomas Monjalon" <thomas@monjalon.net> wrote:
    >
    >     [EXTERNAL SENDER]
    >
    >     Adding i40e maintainers and a few more.
    >
    >     07/09/2019 01:11, Mike DeVico:
    >     > Hello,
    >     >
    >     > I am having an issue getting the DCB feature to work with an Intel
    >     > X710 Quad SFP+ NIC.
    >     >
    >     > Here’s my setup:
    >     >
    >     > 1.      DPDK 18.08 built with the following I40E configs:
    >     >
    >     > CONFIG_RTE_LIBRTE_I40E_PMD=y
    >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_RX=n
    >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_TX=n
    >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_TX_FREE=n
    >     > CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
    >     > CONFIG_RTE_LIBRTE_I40E_INC_VECTOR=y
    >     > CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
    >     > CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF=64
    >     > CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=8
    >     >
    >     > 2.      /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net
    >     >
    >     > Network devices using DPDK-compatible driver
    >     > ============================================
    >     > 0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio
    > unused=i40e
    >     > 0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio
    > unused=i40e
    >     > 0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio
    > unused=i40e
    >     > 0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio
    > unused=i40e
    >     >
    >     >        Network devices using kernel driver
    >     >        ===================================
    >     >        0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0
    > drv=igb unused=igb_uio *Active*
    >     >        0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1
    > drv=igb unused=igb_uio *Active*
    >     >
    >     >        Other Network devices
    >     >        =====================
    >     >        <none>
    >     >
    >     > 3.      We have a custom FPGA board connected to port 1 of the X710 NIC
    > that’s broadcasting
    >     > a packet tagged with VLAN 1 and PCP 2.
    >     >
    >     > 4.      I use the vmdq_dcb example app and configure the card with 16
    > pools/8 queues each
    >     > as follows:
    >     >        sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
    >     >
    >     >
    >     > The app starts up fine and successfully probes the card as shown below:
    >     >
    >     > sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
    >     > EAL: Detected 80 lcore(s)
    >     > EAL: Detected 2 NUMA nodes
    >     > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
    >     > EAL: Probing VFIO support...
    >     > EAL: PCI device 0000:02:00.0 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1521 net_e1000_igb
    >     > EAL: PCI device 0000:02:00.1 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1521 net_e1000_igb
    >     > EAL: PCI device 0000:3b:00.0 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.1 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.2 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.3 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > vmdq queue base: 64 pool base 1
    >     > Configured vmdq pool num: 16, each vmdq pool has 8 queues
    >     > Port 0 MAC: e8 ea 6a 27 b5 4d
    >     > Port 0 vmdq pool 0 set mac 52:54:00:12:00:00
    >     > Port 0 vmdq pool 1 set mac 52:54:00:12:00:01
    >     > Port 0 vmdq pool 2 set mac 52:54:00:12:00:02
    >     > Port 0 vmdq pool 3 set mac 52:54:00:12:00:03
    >     > Port 0 vmdq pool 4 set mac 52:54:00:12:00:04
    >     > Port 0 vmdq pool 5 set mac 52:54:00:12:00:05
    >     > Port 0 vmdq pool 6 set mac 52:54:00:12:00:06
    >     > Port 0 vmdq pool 7 set mac 52:54:00:12:00:07
    >     > Port 0 vmdq pool 8 set mac 52:54:00:12:00:08
    >     > Port 0 vmdq pool 9 set mac 52:54:00:12:00:09
    >     > Port 0 vmdq pool 10 set mac 52:54:00:12:00:0a
    >     > Port 0 vmdq pool 11 set mac 52:54:00:12:00:0b
    >     > Port 0 vmdq pool 12 set mac 52:54:00:12:00:0c
    >     > Port 0 vmdq pool 13 set mac 52:54:00:12:00:0d
    >     > Port 0 vmdq pool 14 set mac 52:54:00:12:00:0e
    >     > Port 0 vmdq pool 15 set mac 52:54:00:12:00:0f
    >     > vmdq queue base: 64 pool base 1
    >     > Configured vmdq pool num: 16, each vmdq pool has 8 queues
    >     > Port 1 MAC: e8 ea 6a 27 b5 4e
    >     > Port 1 vmdq pool 0 set mac 52:54:00:12:01:00
    >     > Port 1 vmdq pool 1 set mac 52:54:00:12:01:01
    >     > Port 1 vmdq pool 2 set mac 52:54:00:12:01:02
    >     > Port 1 vmdq pool 3 set mac 52:54:00:12:01:03
    >     > Port 1 vmdq pool 4 set mac 52:54:00:12:01:04
    >     > Port 1 vmdq pool 5 set mac 52:54:00:12:01:05
    >     > Port 1 vmdq pool 6 set mac 52:54:00:12:01:06
    >     > Port 1 vmdq pool 7 set mac 52:54:00:12:01:07
    >     > Port 1 vmdq pool 8 set mac 52:54:00:12:01:08
    >     > Port 1 vmdq pool 9 set mac 52:54:00:12:01:09
    >     > Port 1 vmdq pool 10 set mac 52:54:00:12:01:0a
    >     > Port 1 vmdq pool 11 set mac 52:54:00:12:01:0b
    >     > Port 1 vmdq pool 12 set mac 52:54:00:12:01:0c
    >     > Port 1 vmdq pool 13 set mac 52:54:00:12:01:0d
    >     > Port 1 vmdq pool 14 set mac 52:54:00:12:01:0e
    >     > Port 1 vmdq pool 15 set mac 52:54:00:12:01:0f
    >     >
    >     > Skipping disabled port 2
    >     >
    >     > Skipping disabled port 3
    >     > Core 0(lcore 1) reading queues 64-191
    >     >
    >     > However, when I issue the SIGHUP I see that the packets
    >     > are being put into the first queue of Pool 1 as follows:
    >     >
    >     > Pool 0: 0 0 0 0 0 0 0 0
    >     > Pool 1: 10 0 0 0 0 0 0 0
    >     > Pool 2: 0 0 0 0 0 0 0 0
    >     > Pool 3: 0 0 0 0 0 0 0 0
    >     > Pool 4: 0 0 0 0 0 0 0 0
    >     > Pool 5: 0 0 0 0 0 0 0 0
    >     > Pool 6: 0 0 0 0 0 0 0 0
    >     > Pool 7: 0 0 0 0 0 0 0 0
    >     > Pool 8: 0 0 0 0 0 0 0 0
    >     > Pool 9: 0 0 0 0 0 0 0 0
    >     > Pool 10: 0 0 0 0 0 0 0 0
    >     > Pool 11: 0 0 0 0 0 0 0 0
    >     > Pool 12: 0 0 0 0 0 0 0 0
    >     > Pool 13: 0 0 0 0 0 0 0 0
    >     > Pool 14: 0 0 0 0 0 0 0 0
    >     > Pool 15: 0 0 0 0 0 0 0 0
    >     > Finished handling signal 1
    >     >
    >     > Since the packets are being tagged with PCP 2, they should be getting
    >     > mapped to the 3rd queue of Pool 1, right?
    >     >
    >     > As a sanity check, I tried the same test using an 82599ES 2 port 10GB NIC
    > and
    >     > the packets show up in the expected queue. (Note, to get it to work I had
    >     > to modify the vmdq_dcb app to set the vmdq pool MACs to all FF’s)
    >     >
    >     > Here’s that setup:
    >     >
    >     > /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net
    >     >
    >     > Network devices using DPDK-compatible driver
    >     > ============================================
    >     > 0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
    > drv=igb_uio unused=ixgbe
    >     > 0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
    > drv=igb_uio unused=ixgbe
    >     >
    >     > Network devices using kernel driver
    >     > ===================================
    >     > 0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0 drv=igb
    > unused=igb_uio *Active*
    >     > 0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1 drv=igb
    > unused=igb_uio *Active*
    >     > 0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f0
    > drv=i40e unused=igb_uio
    >     > 0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f1
    > drv=i40e unused=igb_uio
    >     > 0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f2
    > drv=i40e unused=igb_uio
    >     > 0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f3
    > drv=i40e unused=igb_uio
    >     >
    >     > Other Network devices
    >     > =====================
    >     > <none>
    >     >
    >     > sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
    >     > EAL: Detected 80 lcore(s)
    >     > EAL: Detected 2 NUMA nodes
    >     > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
    >     > EAL: Probing VFIO support...
    >     > EAL: PCI device 0000:02:00.0 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1521 net_e1000_igb
    >     > EAL: PCI device 0000:02:00.1 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1521 net_e1000_igb
    >     > EAL: PCI device 0000:3b:00.0 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.1 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.2 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.3 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:af:00.0 on NUMA socket 1
    >     > EAL:   probe driver: 8086:10fb net_ixgbe
    >     > EAL: PCI device 0000:af:00.1 on NUMA socket 1
    >     > EAL:   probe driver: 8086:10fb net_ixgbe
    >     > vmdq queue base: 0 pool base 0
    >     > Port 0 MAC: 00 1b 21 bf 71 24
    >     > Port 0 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff
    >     > vmdq queue base: 0 pool base 0
    >     > Port 1 MAC: 00 1b 21 bf 71 26
    >     > Port 1 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff
    >     >
    >     > Now when I send the SIGHUP, I see the packets being routed to
    >     > the expected queue:
    >     >
    >     > Pool 0: 0 0 0 0 0 0 0 0
    >     > Pool 1: 0 0 58 0 0 0 0 0
    >     > Pool 2: 0 0 0 0 0 0 0 0
    >     > Pool 3: 0 0 0 0 0 0 0 0
    >     > Pool 4: 0 0 0 0 0 0 0 0
    >     > Pool 5: 0 0 0 0 0 0 0 0
    >     > Pool 6: 0 0 0 0 0 0 0 0
    >     > Pool 7: 0 0 0 0 0 0 0 0
    >     > Pool 8: 0 0 0 0 0 0 0 0
    >     > Pool 9: 0 0 0 0 0 0 0 0
    >     > Pool 10: 0 0 0 0 0 0 0 0
    >     > Pool 11: 0 0 0 0 0 0 0 0
    >     > Pool 12: 0 0 0 0 0 0 0 0
    >     > Pool 13: 0 0 0 0 0 0 0 0
    >     > Pool 14: 0 0 0 0 0 0 0 0
    >     > Pool 15: 0 0 0 0 0 0 0 0
    >     > Finished handling signal 1
    >     >
    >     > What am I missing?
    >     >
    >     > Thank you in advance,
    >     > --Mike
    >     >
    >     >
    
    


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
  2019-09-17 18:54   ` Mike DeVico
@ 2019-09-18  3:32     ` Zhang, Xiao
  2019-09-18  4:20       ` Mike DeVico
  0 siblings, 1 reply; 20+ messages in thread
From: Zhang, Xiao @ 2019-09-18  3:32 UTC (permalink / raw)
  To: Mike DeVico, Thomas Monjalon
  Cc: users, Xing, Beilei, Zhang, Qi Z, Richardson, Bruce, Ananyev,
	Konstantin, Yigit, Ferruh, Christensen, ChadX M, Tia Cassett

Hi Mike,

You need to add the --enable-rss option when starting the process, like this:
sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3 --enable-rss
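[Editorial note] The effect of the suggested option can be sketched as follows. This is an illustrative model, not DPDK API: the function name and dict keys are invented here. With 16 pools of 8 traffic classes the example app polls 16 * 8 = 128 queues (matching its "reading queues 64-191" output), and --enable-rss conceptually switches the RX multi-queue mode from plain VMDq+DCB to a combined VMDq+DCB+RSS mode.

```python
# Illustrative sketch (not DPDK API): how the vmdq_dcb example's queue
# layout follows from its CLI options, and what --enable-rss toggles.
def vmdq_dcb_layout(nb_pools, nb_tcs, enable_rss=False):
    """Model the example app's derived queue count and RX mq mode."""
    return {
        "nb_queues": nb_pools * nb_tcs,
        "mq_mode": "VMDQ+DCB+RSS" if enable_rss else "VMDQ+DCB",
    }

layout = vmdq_dcb_layout(nb_pools=16, nb_tcs=8, enable_rss=True)
assert layout["nb_queues"] == 128   # queues 64..191 in the app's log
print(layout["mq_mode"])            # VMDQ+DCB+RSS
```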

Thanks,
Xiao

> -----Original Message-----
> From: Mike DeVico [mailto:mdevico@xcom-labs.com]
> Sent: Wednesday, September 18, 2019 2:55 AM
> To: Thomas Monjalon <thomas@monjalon.net>
> Cc: users@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>;
> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yigit, Ferruh
> <ferruh.yigit@intel.com>; Christensen, ChadX M
> <chadx.m.christensen@intel.com>; Tia Cassett <tiac@xcom-labs.com>
> Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
> 
> Hello,
> 
> So far I haven't heard back from anyone regarding this issue and I would like to
> know what the status is at this point. Also, if you have any recommendations or
> require additional information from me, please let me know.
> 
> Thank you in advance,
> --Mike DeVico
> 
> On 9/9/19, 1:39 PM, "Thomas Monjalon" <thomas@monjalon.net> wrote:
> 
>     [EXTERNAL SENDER]
> 
>     Adding i40e maintainers and a few more.
> 
>     07/09/2019 01:11, Mike DeVico:
>     > Hello,
>     >
>     > I am having an issue getting the DCB feature to work with an Intel
>     > X710 Quad SFP+ NIC.
>     >
>     > Here’s my setup:
>     >
>     > 1.      DPDK 18.08 built with the following I40E configs:
>     >
>     > CONFIG_RTE_LIBRTE_I40E_PMD=y
>     > CONFIG_RTE_LIBRTE_I40E_DEBUG_RX=n
>     > CONFIG_RTE_LIBRTE_I40E_DEBUG_TX=n
>     > CONFIG_RTE_LIBRTE_I40E_DEBUG_TX_FREE=n
>     > CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
>     > CONFIG_RTE_LIBRTE_I40E_INC_VECTOR=y
>     > CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
>     > CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF=64
>     > CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=8
>     >
>     > 2.      /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net
>     >
>     > Network devices using DPDK-compatible driver
>     > ============================================
>     > 0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio
> unused=i40e
>     > 0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio
> unused=i40e
>     > 0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio
> unused=i40e
>     > 0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio
> unused=i40e
>     >
>     >        Network devices using kernel driver
>     >        ===================================
>     >        0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0
> drv=igb unused=igb_uio *Active*
>     >        0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1
> drv=igb unused=igb_uio *Active*
>     >
>     >        Other Network devices
>     >        =====================
>     >        <none>
>     >
>     > 3.      We have a custom FPGA board connected to port 1 of the X710 NIC
> that’s broadcasting
>     > a packet tagged with VLAN 1 and PCP 2.
>     >
>     > 4.      I use the vmdq_dcb example app and configure the card with 16
> pools/8 queue each
>     > as follows:
>     >        sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
>     >
>     >
>     > The app starts up fine and successfully probes the card, as shown below:
>     >
>     > sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
>     > EAL: Detected 80 lcore(s)
>     > EAL: Detected 2 NUMA nodes
>     > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>     > EAL: Probing VFIO support...
>     > EAL: PCI device 0000:02:00.0 on NUMA socket 0
>     > EAL:   probe driver: 8086:1521 net_e1000_igb
>     > EAL: PCI device 0000:02:00.1 on NUMA socket 0
>     > EAL:   probe driver: 8086:1521 net_e1000_igb
>     > EAL: PCI device 0000:3b:00.0 on NUMA socket 0
>     > EAL:   probe driver: 8086:1572 net_i40e
>     > EAL: PCI device 0000:3b:00.1 on NUMA socket 0
>     > EAL:   probe driver: 8086:1572 net_i40e
>     > EAL: PCI device 0000:3b:00.2 on NUMA socket 0
>     > EAL:   probe driver: 8086:1572 net_i40e
>     > EAL: PCI device 0000:3b:00.3 on NUMA socket 0
>     > EAL:   probe driver: 8086:1572 net_i40e
>     > vmdq queue base: 64 pool base 1
>     > Configured vmdq pool num: 16, each vmdq pool has 8 queues
>     > Port 0 MAC: e8 ea 6a 27 b5 4d
>     > Port 0 vmdq pool 0 set mac 52:54:00:12:00:00
>     > Port 0 vmdq pool 1 set mac 52:54:00:12:00:01
>     > Port 0 vmdq pool 2 set mac 52:54:00:12:00:02
>     > Port 0 vmdq pool 3 set mac 52:54:00:12:00:03
>     > Port 0 vmdq pool 4 set mac 52:54:00:12:00:04
>     > Port 0 vmdq pool 5 set mac 52:54:00:12:00:05
>     > Port 0 vmdq pool 6 set mac 52:54:00:12:00:06
>     > Port 0 vmdq pool 7 set mac 52:54:00:12:00:07
>     > Port 0 vmdq pool 8 set mac 52:54:00:12:00:08
>     > Port 0 vmdq pool 9 set mac 52:54:00:12:00:09
>     > Port 0 vmdq pool 10 set mac 52:54:00:12:00:0a
>     > Port 0 vmdq pool 11 set mac 52:54:00:12:00:0b
>     > Port 0 vmdq pool 12 set mac 52:54:00:12:00:0c
>     > Port 0 vmdq pool 13 set mac 52:54:00:12:00:0d
>     > Port 0 vmdq pool 14 set mac 52:54:00:12:00:0e
>     > Port 0 vmdq pool 15 set mac 52:54:00:12:00:0f
>     > vmdq queue base: 64 pool base 1
>     > Configured vmdq pool num: 16, each vmdq pool has 8 queues
>     > Port 1 MAC: e8 ea 6a 27 b5 4e
>     > Port 1 vmdq pool 0 set mac 52:54:00:12:01:00
>     > Port 1 vmdq pool 1 set mac 52:54:00:12:01:01
>     > Port 1 vmdq pool 2 set mac 52:54:00:12:01:02
>     > Port 1 vmdq pool 3 set mac 52:54:00:12:01:03
>     > Port 1 vmdq pool 4 set mac 52:54:00:12:01:04
>     > Port 1 vmdq pool 5 set mac 52:54:00:12:01:05
>     > Port 1 vmdq pool 6 set mac 52:54:00:12:01:06
>     > Port 1 vmdq pool 7 set mac 52:54:00:12:01:07
>     > Port 1 vmdq pool 8 set mac 52:54:00:12:01:08
>     > Port 1 vmdq pool 9 set mac 52:54:00:12:01:09
>     > Port 1 vmdq pool 10 set mac 52:54:00:12:01:0a
>     > Port 1 vmdq pool 11 set mac 52:54:00:12:01:0b
>     > Port 1 vmdq pool 12 set mac 52:54:00:12:01:0c
>     > Port 1 vmdq pool 13 set mac 52:54:00:12:01:0d
>     > Port 1 vmdq pool 14 set mac 52:54:00:12:01:0e
>     > Port 1 vmdq pool 15 set mac 52:54:00:12:01:0f
>     >
>     > Skipping disabled port 2
>     >
>     > Skipping disabled port 3
>     > Core 0(lcore 1) reading queues 64-191
>     >
>     > However, when I issue the SIGHUP I see that the packets
>     > are being put into the first queue of Pool 1 as follows:
>     >
>     > Pool 0: 0 0 0 0 0 0 0 0
>     > Pool 1: 10 0 0 0 0 0 0 0
>     > Pool 2: 0 0 0 0 0 0 0 0
>     > Pool 3: 0 0 0 0 0 0 0 0
>     > Pool 4: 0 0 0 0 0 0 0 0
>     > Pool 5: 0 0 0 0 0 0 0 0
>     > Pool 6: 0 0 0 0 0 0 0 0
>     > Pool 7: 0 0 0 0 0 0 0 0
>     > Pool 8: 0 0 0 0 0 0 0 0
>     > Pool 9: 0 0 0 0 0 0 0 0
>     > Pool 10: 0 0 0 0 0 0 0 0
>     > Pool 11: 0 0 0 0 0 0 0 0
>     > Pool 12: 0 0 0 0 0 0 0 0
>     > Pool 13: 0 0 0 0 0 0 0 0
>     > Pool 14: 0 0 0 0 0 0 0 0
>     > Pool 15: 0 0 0 0 0 0 0 0
>     > Finished handling signal 1
>     >
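[Editorial note] The per-pool counter dump above is produced by the app's SIGHUP handler. A minimal sketch of that mechanism, with illustrative names (not the example's actual code):

```python
# Illustrative sketch: dump per-pool RX queue counters on SIGHUP,
# as the vmdq_dcb example does. Names here are invented for clarity.
import signal

NB_POOLS, NB_TCS = 16, 8
rx_counts = [[0] * NB_TCS for _ in range(NB_POOLS)]  # updated on RX

def dump_stats(signum, frame):
    """Print one line per pool, then the 'Finished handling' trailer."""
    lines = ["Pool %d: %s" % (p, " ".join(map(str, q)))
             for p, q in enumerate(rx_counts)]
    lines.append("Finished handling signal %d" % signum)
    print("\n".join(lines))
    return lines

signal.signal(signal.SIGHUP, dump_stats)  # installed once at startup
```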
>     > Since the packets are being tagged with PCP 2 they should be getting
>     > mapped to 3rd queue of Pool 1, right?
>     >
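[Editorial note] The expected mapping can be checked arithmetically. Assuming the layout described above (contiguous queue blocks per pool, one queue per traffic class), a packet's queue index is pool * nb_tcs + pcp; the helper below is illustrative, not DPDK API. With VLAN 1 (pool 1), PCP 2 (TC 2), and the app's reported queue base of 64, this gives absolute queue 74, i.e. the third queue of pool 1:

```python
# Illustrative sketch: expected RX queue for a VMDq+DCB packet,
# given 8 traffic classes per pool. Not DPDK API.
def expected_queue(pool, pcp, nb_tcs=8, queue_base=0):
    """Queue index for a packet steered to `pool` with priority `pcp`."""
    return queue_base + pool * nb_tcs + pcp

assert expected_queue(pool=1, pcp=2) == 10           # pool 1, queue 2 (zero-based)
assert expected_queue(pool=1, pcp=2, queue_base=64) == 74  # absolute, base 64
```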
>     > As a sanity check, I tried the same test using an 82599ES 2-port 10Gb NIC
> and
>     > the packets show up in the expected queue. (Note, to get it to work I had
>     > to modify the vmdq_dcb app to set the vmdq pool MACs to all FF’s)
>     >
>     > Here’s that setup:
>     >
>     > /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net
>     >
>     > Network devices using DPDK-compatible driver
>     > ============================================
>     > 0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
> drv=igb_uio unused=ixgbe
>     > 0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
> drv=igb_uio unused=ixgbe
>     >
>     > Network devices using kernel driver
>     > ===================================
>     > 0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0 drv=igb
> unused=igb_uio *Active*
>     > 0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1 drv=igb
> unused=igb_uio *Active*
>     > 0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f0
> drv=i40e unused=igb_uio
>     > 0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f1
> drv=i40e unused=igb_uio
>     > 0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f2
> drv=i40e unused=igb_uio
>     > 0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f3
> drv=i40e unused=igb_uio
>     >
>     > Other Network devices
>     > =====================
>     > <none>
>     >
>     > sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
>     > EAL: Detected 80 lcore(s)
>     > EAL: Detected 2 NUMA nodes
>     > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>     > EAL: Probing VFIO support...
>     > EAL: PCI device 0000:02:00.0 on NUMA socket 0
>     > EAL:   probe driver: 8086:1521 net_e1000_igb
>     > EAL: PCI device 0000:02:00.1 on NUMA socket 0
>     > EAL:   probe driver: 8086:1521 net_e1000_igb
>     > EAL: PCI device 0000:3b:00.0 on NUMA socket 0
>     > EAL:   probe driver: 8086:1572 net_i40e
>     > EAL: PCI device 0000:3b:00.1 on NUMA socket 0
>     > EAL:   probe driver: 8086:1572 net_i40e
>     > EAL: PCI device 0000:3b:00.2 on NUMA socket 0
>     > EAL:   probe driver: 8086:1572 net_i40e
>     > EAL: PCI device 0000:3b:00.3 on NUMA socket 0
>     > EAL:   probe driver: 8086:1572 net_i40e
>     > EAL: PCI device 0000:af:00.0 on NUMA socket 1
>     > EAL:   probe driver: 8086:10fb net_ixgbe
>     > EAL: PCI device 0000:af:00.1 on NUMA socket 1
>     > EAL:   probe driver: 8086:10fb net_ixgbe
>     > vmdq queue base: 0 pool base 0
>     > Port 0 MAC: 00 1b 21 bf 71 24
>     > Port 0 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff
>     > Port 0 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff
>     > vmdq queue base: 0 pool base 0
>     > Port 1 MAC: 00 1b 21 bf 71 26
>     > Port 1 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff
>     > Port 1 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff
>     >
>     > Now when I send the SIGHUP, I see the packets being routed to
>     > the expected queue:
>     >
>     > Pool 0: 0 0 0 0 0 0 0 0
>     > Pool 1: 0 0 58 0 0 0 0 0
>     > Pool 2: 0 0 0 0 0 0 0 0
>     > Pool 3: 0 0 0 0 0 0 0 0
>     > Pool 4: 0 0 0 0 0 0 0 0
>     > Pool 5: 0 0 0 0 0 0 0 0
>     > Pool 6: 0 0 0 0 0 0 0 0
>     > Pool 7: 0 0 0 0 0 0 0 0
>     > Pool 8: 0 0 0 0 0 0 0 0
>     > Pool 9: 0 0 0 0 0 0 0 0
>     > Pool 10: 0 0 0 0 0 0 0 0
>     > Pool 11: 0 0 0 0 0 0 0 0
>     > Pool 12: 0 0 0 0 0 0 0 0
>     > Pool 13: 0 0 0 0 0 0 0 0
>     > Pool 14: 0 0 0 0 0 0 0 0
>     > Pool 15: 0 0 0 0 0 0 0 0
>     > Finished handling signal 1
>     >
>     > What am I missing?
>     >
>     > Thank you in advance,
>     > --Mike
>     >
>     >



* Re: [dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
       [not found] ` <2953945.eKoDkclGR7@xps>
@ 2019-09-17 18:54   ` Mike DeVico
  2019-09-18  3:32     ` Zhang, Xiao
       [not found]   ` <0BD0EAA3-BB16-4B09-BF25-4744C0A879A0@xcom-tech.com>
  1 sibling, 1 reply; 20+ messages in thread
From: Mike DeVico @ 2019-09-17 18:54 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: users, Beilei Xing, Qi Zhang, Bruce Richardson,
	Konstantin Ananyev, ferruh.yigit, Christensen, ChadX M,
	Tia Cassett

Hello,

So far I haven't heard back from anyone regarding this issue and I 
would like to know what the status is at this point. Also, if you
have any recommendations or require additional information from
me, please let me know. 

Thank you in advance,
--Mike DeVico

On 9/9/19, 1:39 PM, "Thomas Monjalon" <thomas@monjalon.net> wrote:

    [EXTERNAL SENDER]
    
    Adding i40e maintainers and a few more.
    
    07/09/2019 01:11, Mike DeVico:
    > Hello,
    >
    > I am having an issue getting the DCB feature to work with an Intel
    > X710 Quad SFP+ NIC.
    >
    > Here’s my setup:
    >
    > 1.      DPDK 18.08 built with the following I40E configs:
    >
    > CONFIG_RTE_LIBRTE_I40E_PMD=y
    > CONFIG_RTE_LIBRTE_I40E_DEBUG_RX=n
    > CONFIG_RTE_LIBRTE_I40E_DEBUG_TX=n
    > CONFIG_RTE_LIBRTE_I40E_DEBUG_TX_FREE=n
    > CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
    > CONFIG_RTE_LIBRTE_I40E_INC_VECTOR=y
    > CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
    > CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF=64
    > CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=8
    >
    > 2.      /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net
    >
    > Network devices using DPDK-compatible driver
    > ============================================
    > 0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
    > 0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
    > 0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
    > 0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
    >
    >        Network devices using kernel driver
    >        ===================================
    >        0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0 drv=igb unused=igb_uio *Active*
    >        0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1 drv=igb unused=igb_uio *Active*
    >
    >        Other Network devices
    >        =====================
    >        <none>
    >
    > 3.      We have a custom FPGA board connected to port 1 of the X710 NIC that’s broadcasting
    > a packet tagged with VLAN 1 and PCP 2.
    >
    > 4.      I use the vmdq_dcb example app and configure the card with 16 pools/8 queue each
    > as follows:
    >        sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
    >
    >
    > The app starts up fine and successfully probes the card, as shown below:
    >
    > sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
    > EAL: Detected 80 lcore(s)
    > EAL: Detected 2 NUMA nodes
    > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
    > EAL: Probing VFIO support...
    > EAL: PCI device 0000:02:00.0 on NUMA socket 0
    > EAL:   probe driver: 8086:1521 net_e1000_igb
    > EAL: PCI device 0000:02:00.1 on NUMA socket 0
    > EAL:   probe driver: 8086:1521 net_e1000_igb
    > EAL: PCI device 0000:3b:00.0 on NUMA socket 0
    > EAL:   probe driver: 8086:1572 net_i40e
    > EAL: PCI device 0000:3b:00.1 on NUMA socket 0
    > EAL:   probe driver: 8086:1572 net_i40e
    > EAL: PCI device 0000:3b:00.2 on NUMA socket 0
    > EAL:   probe driver: 8086:1572 net_i40e
    > EAL: PCI device 0000:3b:00.3 on NUMA socket 0
    > EAL:   probe driver: 8086:1572 net_i40e
    > vmdq queue base: 64 pool base 1
    > Configured vmdq pool num: 16, each vmdq pool has 8 queues
    > Port 0 MAC: e8 ea 6a 27 b5 4d
    > Port 0 vmdq pool 0 set mac 52:54:00:12:00:00
    > Port 0 vmdq pool 1 set mac 52:54:00:12:00:01
    > Port 0 vmdq pool 2 set mac 52:54:00:12:00:02
    > Port 0 vmdq pool 3 set mac 52:54:00:12:00:03
    > Port 0 vmdq pool 4 set mac 52:54:00:12:00:04
    > Port 0 vmdq pool 5 set mac 52:54:00:12:00:05
    > Port 0 vmdq pool 6 set mac 52:54:00:12:00:06
    > Port 0 vmdq pool 7 set mac 52:54:00:12:00:07
    > Port 0 vmdq pool 8 set mac 52:54:00:12:00:08
    > Port 0 vmdq pool 9 set mac 52:54:00:12:00:09
    > Port 0 vmdq pool 10 set mac 52:54:00:12:00:0a
    > Port 0 vmdq pool 11 set mac 52:54:00:12:00:0b
    > Port 0 vmdq pool 12 set mac 52:54:00:12:00:0c
    > Port 0 vmdq pool 13 set mac 52:54:00:12:00:0d
    > Port 0 vmdq pool 14 set mac 52:54:00:12:00:0e
    > Port 0 vmdq pool 15 set mac 52:54:00:12:00:0f
    > vmdq queue base: 64 pool base 1
    > Configured vmdq pool num: 16, each vmdq pool has 8 queues
    > Port 1 MAC: e8 ea 6a 27 b5 4e
    > Port 1 vmdq pool 0 set mac 52:54:00:12:01:00
    > Port 1 vmdq pool 1 set mac 52:54:00:12:01:01
    > Port 1 vmdq pool 2 set mac 52:54:00:12:01:02
    > Port 1 vmdq pool 3 set mac 52:54:00:12:01:03
    > Port 1 vmdq pool 4 set mac 52:54:00:12:01:04
    > Port 1 vmdq pool 5 set mac 52:54:00:12:01:05
    > Port 1 vmdq pool 6 set mac 52:54:00:12:01:06
    > Port 1 vmdq pool 7 set mac 52:54:00:12:01:07
    > Port 1 vmdq pool 8 set mac 52:54:00:12:01:08
    > Port 1 vmdq pool 9 set mac 52:54:00:12:01:09
    > Port 1 vmdq pool 10 set mac 52:54:00:12:01:0a
    > Port 1 vmdq pool 11 set mac 52:54:00:12:01:0b
    > Port 1 vmdq pool 12 set mac 52:54:00:12:01:0c
    > Port 1 vmdq pool 13 set mac 52:54:00:12:01:0d
    > Port 1 vmdq pool 14 set mac 52:54:00:12:01:0e
    > Port 1 vmdq pool 15 set mac 52:54:00:12:01:0f
    >
    > Skipping disabled port 2
    >
    > Skipping disabled port 3
    > Core 0(lcore 1) reading queues 64-191
    >
    > However, when I issue the SIGHUP I see that the packets
    > are being put into the first queue of Pool 1 as follows:
    >
    > Pool 0: 0 0 0 0 0 0 0 0
    > Pool 1: 10 0 0 0 0 0 0 0
    > Pool 2: 0 0 0 0 0 0 0 0
    > Pool 3: 0 0 0 0 0 0 0 0
    > Pool 4: 0 0 0 0 0 0 0 0
    > Pool 5: 0 0 0 0 0 0 0 0
    > Pool 6: 0 0 0 0 0 0 0 0
    > Pool 7: 0 0 0 0 0 0 0 0
    > Pool 8: 0 0 0 0 0 0 0 0
    > Pool 9: 0 0 0 0 0 0 0 0
    > Pool 10: 0 0 0 0 0 0 0 0
    > Pool 11: 0 0 0 0 0 0 0 0
    > Pool 12: 0 0 0 0 0 0 0 0
    > Pool 13: 0 0 0 0 0 0 0 0
    > Pool 14: 0 0 0 0 0 0 0 0
    > Pool 15: 0 0 0 0 0 0 0 0
    > Finished handling signal 1
    >
    > Since the packets are being tagged with PCP 2 they should be getting
    > mapped to 3rd queue of Pool 1, right?
    >
    > As a sanity check, I tried the same test using an 82599ES 2-port 10Gb NIC and
    > the packets show up in the expected queue. (Note, to get it to work I had
    > to modify the vmdq_dcb app to set the vmdq pool MACs to all FF’s)
    >
    > Here’s that setup:
    >
    > /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net
    >
    > Network devices using DPDK-compatible driver
    > ============================================
    > 0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=igb_uio unused=ixgbe
    > 0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=igb_uio unused=ixgbe
    >
    > Network devices using kernel driver
    > ===================================
    > 0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0 drv=igb unused=igb_uio *Active*
    > 0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1 drv=igb unused=igb_uio *Active*
    > 0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f0 drv=i40e unused=igb_uio
    > 0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f1 drv=i40e unused=igb_uio
    > 0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f2 drv=i40e unused=igb_uio
    > 0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f3 drv=i40e unused=igb_uio
    >
    > Other Network devices
    > =====================
    > <none>
    >
    > sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
    > EAL: Detected 80 lcore(s)
    > EAL: Detected 2 NUMA nodes
    > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
    > EAL: Probing VFIO support...
    > EAL: PCI device 0000:02:00.0 on NUMA socket 0
    > EAL:   probe driver: 8086:1521 net_e1000_igb
    > EAL: PCI device 0000:02:00.1 on NUMA socket 0
    > EAL:   probe driver: 8086:1521 net_e1000_igb
    > EAL: PCI device 0000:3b:00.0 on NUMA socket 0
    > EAL:   probe driver: 8086:1572 net_i40e
    > EAL: PCI device 0000:3b:00.1 on NUMA socket 0
    > EAL:   probe driver: 8086:1572 net_i40e
    > EAL: PCI device 0000:3b:00.2 on NUMA socket 0
    > EAL:   probe driver: 8086:1572 net_i40e
    > EAL: PCI device 0000:3b:00.3 on NUMA socket 0
    > EAL:   probe driver: 8086:1572 net_i40e
    > EAL: PCI device 0000:af:00.0 on NUMA socket 1
    > EAL:   probe driver: 8086:10fb net_ixgbe
    > EAL: PCI device 0000:af:00.1 on NUMA socket 1
    > EAL:   probe driver: 8086:10fb net_ixgbe
    > vmdq queue base: 0 pool base 0
    > Port 0 MAC: 00 1b 21 bf 71 24
    > Port 0 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff
    > Port 0 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff
    > Port 0 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff
    > Port 0 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff
    > Port 0 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff
    > Port 0 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff
    > Port 0 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff
    > Port 0 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff
    > Port 0 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff
    > Port 0 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff
    > Port 0 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff
    > Port 0 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff
    > Port 0 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff
    > Port 0 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff
    > Port 0 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff
    > Port 0 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff
    > vmdq queue base: 0 pool base 0
    > Port 1 MAC: 00 1b 21 bf 71 26
    > Port 1 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff
    > Port 1 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff
    > Port 1 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff
    > Port 1 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff
    > Port 1 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff
    > Port 1 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff
    > Port 1 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff
    > Port 1 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff
    > Port 1 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff
    > Port 1 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff
    > Port 1 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff
    > Port 1 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff
    > Port 1 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff
    > Port 1 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff
    > Port 1 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff
    > Port 1 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff
    >
    > Now when I send the SIGHUP, I see the packets being routed to
    > the expected queue:
    >
    > Pool 0: 0 0 0 0 0 0 0 0
    > Pool 1: 0 0 58 0 0 0 0 0
    > Pool 2: 0 0 0 0 0 0 0 0
    > Pool 3: 0 0 0 0 0 0 0 0
    > Pool 4: 0 0 0 0 0 0 0 0
    > Pool 5: 0 0 0 0 0 0 0 0
    > Pool 6: 0 0 0 0 0 0 0 0
    > Pool 7: 0 0 0 0 0 0 0 0
    > Pool 8: 0 0 0 0 0 0 0 0
    > Pool 9: 0 0 0 0 0 0 0 0
    > Pool 10: 0 0 0 0 0 0 0 0
    > Pool 11: 0 0 0 0 0 0 0 0
    > Pool 12: 0 0 0 0 0 0 0 0
    > Pool 13: 0 0 0 0 0 0 0 0
    > Pool 14: 0 0 0 0 0 0 0 0
    > Pool 15: 0 0 0 0 0 0 0 0
    > Finished handling signal 1
    >
    > What am I missing?
    >
    > Thank you in advance,
    > --Mike
    >
    >
    
    
    
    
    
    




Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-10-10 21:12 [dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC Mike DeVico
  -- strict thread matches above, loose matches on Subject: below --
2019-09-20 21:57 Mike DeVico
2019-10-10 21:23 ` Christensen, ChadX M
2019-10-10 21:25   ` Mike DeVico
     [not found] <834B2FF6-9FC7-43E4-8CA7-67D861FEE70E@xcom-tech.com>
     [not found] ` <2953945.eKoDkclGR7@xps>
2019-09-17 18:54   ` Mike DeVico
2019-09-18  3:32     ` Zhang, Xiao
2019-09-18  4:20       ` Mike DeVico
2019-09-18  7:02         ` Zhang, Xiao
2019-09-18  7:03           ` Thomas Monjalon
2019-09-18  7:10             ` Zhang, Xiao
2019-09-18 14:17               ` Mike DeVico
2019-09-18 14:53                 ` Christensen, ChadX M
2019-09-18 20:22                   ` Mike DeVico
2019-09-19  1:23                   ` Mike DeVico
2019-09-19  2:52                     ` Zhang, Xiao
2019-09-19 13:34                       ` Mike DeVico
2019-09-19 14:34                         ` Johnson, Brian
     [not found]   ` <0BD0EAA3-BB16-4B09-BF25-4744C0A879A0@xcom-tech.com>
     [not found]     ` <b9318aa4f0a943958171cc6fc53a010f@sandvine.com>
     [not found]       ` <61798E93-724B-4BE6-A03C-63B274E71AD2@xcom-tech.com>
     [not found]         ` <F35DEAC7BCE34641BA9FAC6BCA4A12E71B4ADE0E@SHSMSX103.ccr.corp.intel.com>
2019-09-26 20:31           ` Mike DeVico
2019-09-30  2:21             ` Zhang, Helin
2019-10-03 23:56               ` Mike DeVico
