DPDK usage discussions
From: Mike DeVico <mdevico@xcom-labs.com>
To: "Zhang, Helin" <helin.zhang@intel.com>,
	Jeff Weeks <jweeks@sandvine.com>,
	 Thomas Monjalon <thomas@monjalon.net>
Cc: "users@dpdk.org" <users@dpdk.org>,
	"Xing, Beilei" <beilei.xing@intel.com>,
	 "Zhang, Qi Z" <qi.z.zhang@intel.com>,
	"Richardson, Bruce" <bruce.richardson@intel.com>,
	"Ananyev, Konstantin" <konstantin.ananyev@intel.com>,
	"Yigit, Ferruh" <ferruh.yigit@intel.com>,
	"Zhang, Xiao" <xiao.zhang@intel.com>,
	"Wong1, Samuel" <samuel.wong1@intel.com>,
	Kobi Cohen-Arazi <Kobi.Cohen-Arazi@xcom-labs.com>
Subject: Re: [dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
Date: Thu, 3 Oct 2019 23:56:06 +0000	[thread overview]
Message-ID: <ED1084C8-7233-41CF-A135-4D9DFC0D9225@xcom-tech.com> (raw)
In-Reply-To: <F35DEAC7BCE34641BA9FAC6BCA4A12E71B4BC656@SHSMSX103.ccr.corp.intel.com>

I set up a basic loopback test by connecting two ports of the same X710 NIC with a 10G passive copper cable.

In my case, the two ports map to PCI addresses 0000:82:00.0 and 0000:82:00.1. 0000:82:00.0 is bound to igb_uio, and .1 is left bound to the kernel driver (i40e), which maps to the Linux interface p4p2.

I then created a simple txtest app (see attached), based on the vmdq_dcb example app, which sends a fixed packet once per second over a specified port and prints the packet each time it sends it.

I run txtest as follows:

sudo ./txtest -w 0000:82:00.0 -c 0x3

TX Packet at [0x7fa407c2ee80], len=60
00000000: 00 00 AE AE 00 00 E8 EA 6A 27 B8 F9 81 00 40 01 | ........j'....@.
00000010: 08 00 00 08 00 00 00 00 AA BB CC DD 00 00 00 00 | ................
00000020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................
00000030: 00 00 00 00 00 00 00 00 00 00 00 00 |  |  |  |  | ............

...

Note the byte at offset 0x0e (0x40). Together with the following byte it forms the VLAN TCI (0x4001), whose top three bits carry the PCP field.
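
For clarity, here is a minimal standalone sketch (not part of txtest) of how the PCP and VLAN ID fall out of those two TCI bytes:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* The two bytes that follow the 0x8100 TPID in the TX dump above. */
    uint16_t tci = (uint16_t)((0x40 << 8) | 0x01);   /* 0x4001 */

    unsigned pcp = tci >> 13;        /* top 3 bits   -> 2 */
    unsigned dei = (tci >> 12) & 1;  /* next bit     -> 0 */
    unsigned vid = tci & 0x0fff;     /* low 12 bits  -> 1 */

    printf("PCP=%u DEI=%u VID=%u\n", pcp, dei, vid);  /* PCP=2 DEI=0 VID=1 */
    return 0;
}

So the frame leaving txtest is tagged VLAN 1, PCP 2.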



I then run tcpdump on p4p2 (the receiving end of the loopback) as shown:

sudo tcpdump -xxi p4p2

19:02:08.124694 IP0
        0x0000:  0000 aeae 0000 e8ea 6a27 b8f9 8100 0001
        0x0010:  0800 0008 0000 0000 aabb ccdd 0000 0000
        0x0020:  0000 0000 0000 0000 0000 0000 0000 0000
        0x0030:  0000 0000 0000 0000 0000 0000

...

Once again, note the PCP byte: on the wire the TCI arrives as 0x0001, i.e. PCP 0. The source of my problem would appear to be that the PCP field is somehow getting set to 0 even though it’s being passed to rte_eth_tx_burst as 0x40. Here’s the relevant section of the code in my main.c:

static int
lcore_main(void *arg)
{
    uint32_t lcore_id = rte_lcore_id();

    struct rte_mempool *mbuf_pool = (struct rte_mempool *)arg;

    RTE_LOG(INFO, TXTEST, "tx entering main loop on lcore %u\n", lcore_id);

    /* Fixed 60-byte test frame: dst MAC, src MAC, 802.1Q tag
     * (TPID 0x8100, TCI 0x4001 -> PCP 2, VID 1), then payload. */
    static const uint8_t tx_data[] = {
         0x00, 0x00, 0xAE, 0xAE, 0x00, 0x00, 0xE8, 0xEA, 0x6A, 0x27, 0xB8, 0xF9, 0x81, 0x00, 0x40, 0x01,
         0x08, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x00, 0xAA, 0xBB, 0xCC, 0xDD, 0x00, 0x00, 0x00, 0x00,
         0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
         0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
    };

    struct rte_mbuf *tx_pkt      = rte_pktmbuf_alloc(mbuf_pool);
    uint8_t         *tx_pkt_dptr = (uint8_t *)rte_pktmbuf_append(tx_pkt, sizeof(tx_data));
    memcpy(tx_pkt_dptr, tx_data, sizeof(tx_data));
    const uint8_t *dptr = rte_pktmbuf_mtod(tx_pkt, uint8_t *);

    while (!force_quit) {
        /* Send the single packet on port 0, queue 0, then dump exactly
         * what was handed to the PMD. */
        uint16_t nb_tx = rte_eth_tx_burst(0, 0, &tx_pkt, 1);
        if (nb_tx == 1) {
            rte_hexdump(stdout, "TX Packet", dptr,
                        rte_pktmbuf_data_len(tx_pkt));
        }
        sleep(1);
    }
    return 0;
}



As can be seen, the packet is fixed and it is dumped immediately after being sent, so why is the PCP field going out over the wire as 0?
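
As a side note on the loop above (possibly unrelated to the PCP question): rte_eth_tx_burst() hands ownership of a successfully transmitted mbuf to the PMD, which frees it once the TX descriptor is reclaimed, so resubmitting the same mbuf every second is not strictly safe. A minimal sketch of one way to keep a caller-side reference, assuming the intent is to reuse the single mbuf, would be:

    while (!force_quit) {
        /* Hold an extra reference so the PMD's free after transmit does
         * not release the mbuf we plan to resend. */
        rte_mbuf_refcnt_update(tx_pkt, 1);
        uint16_t nb_tx = rte_eth_tx_burst(0, 0, &tx_pkt, 1);
        if (nb_tx == 0)
            rte_mbuf_refcnt_update(tx_pkt, -1);   /* not queued, undo */
        else
            rte_hexdump(stdout, "TX Packet", dptr,
                        rte_pktmbuf_data_len(tx_pkt));
        sleep(1);
    }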



For reference, I’m using DPDK version 18.08, and the only config change is setting CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=8 (from the default 4). I did this because our actual application requires the RX side to be configured with 16 pools of 8 TC queues each, and I wanted to resemble the target config as closely as possible.
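
For context, here is a minimal sketch of the kind of RX multi-queue configuration that target resembles (field names from rte_ethdev.h in DPDK 18.08; the values are illustrative only, not our actual settings):

static const struct rte_eth_conf port_conf = {
    .rxmode = {
        .mq_mode = ETH_MQ_RX_VMDQ_DCB,          /* VMDq pools + DCB TCs */
    },
    .rx_adv_conf = {
        .vmdq_dcb_conf = {
            .nb_queue_pools = ETH_16_POOLS,     /* 16 pools x 8 TC queues */
            .enable_default_pool = 0,
            .default_pool = 0,
            .nb_pool_maps = 1,
            .pool_map = {
                /* Illustrative: steer VLAN 1 into pool 1. */
                { .vlan_id = 1, .pools = 1ULL << 1 },
            },
            /* Map PCP n to traffic class n. */
            .dcb_tc = { 0, 1, 2, 3, 4, 5, 6, 7 },
        },
    },
};

This roughly mirrors what the vmdq_dcb example app configures when run with --nb-pools 16 --nb-tcs 8.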



Any insight/assistance would be greatly appreciated.



Thank you in advance.

--Mike



On 9/29/19, 7:22 PM, "Zhang, Helin" <helin.zhang@intel.com> wrote:



    [EXTERNAL SENDER]



    Hi Mike



    You need to try to reproduce your issue with the example applications (e.g. testpmd), then send the detailed information to the maintainers. Thanks!



    Regards,

    Helin



    -----Original Message-----

    From: Mike DeVico [mailto:mdevico@xcom-labs.com]

    Sent: Thursday, September 26, 2019 9:32 PM

    To: Zhang, Helin <helin.zhang@intel.com>; Jeff Weeks <jweeks@sandvine.com>; Thomas Monjalon <thomas@monjalon.net>

    Cc: users@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>; Zhang, Xiao <xiao.zhang@intel.com>; Wong1, Samuel <samuel.wong1@intel.com>

    Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC



    Hi Helin,



    Yes, and the reason the RX packets were not being queued to the proper queue was due to RSS not being enabled/configured. Once I did this, the RX packets were placed in the proper queue.



    That being said, what I see now is that the TX side seems to have an issue. So the way it works is that the Ferrybridge broadcasts out what's called a Present packet at 1s intervals. Once the host application detects the packet (which it now does) it verifies that the packet is correctly formatted and such and then sends a packet back to the Ferrybridge to tell it to stop sending this packet. However, that TX packet apparently is not going out because I continue to receive Present packets from the Ferrybridge at the 1s interval. What's not clear to me is what queue I should be sending this packet to. I actually tried sending it out all 128 queues, but I still keep receiving the Present packet. What I lack is the ability to actually sniff what's going out over the wire.



    Any ideas how to approach this issue?



    Thanks in advance,

    --Mike



    On 9/26/19, 9:02 AM, "Zhang, Helin" <helin.zhang@intel.com> wrote:



        [EXTERNAL SENDER]



        Hi Mike



        Can you check if you are using the right combination of DPDK version and NIC firmware, and kernel driver if you are using?

        You can find the recommended combination at http://doc.dpdk.org/guides/nics/i40e.html#recommended-matching-list. Hopefully that helps!



        Regards,

        Helin



        > -----Original Message-----

        > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Mike DeVico

        > Sent: Friday, September 13, 2019 2:10 AM

        > To: Jeff Weeks; Thomas Monjalon

        > Cc: dev@dpdk.org; Xing, Beilei; Zhang, Qi Z; Richardson, Bruce; Ananyev,

        > Konstantin; Yigit, Ferruh

        > Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC

        >

        > Hi Jeff,

        >

        > Thanks for chiming in...

        >

        > Yeah, In my case I get the packets, but they end up being put in queue 0

        > instead of 2.

        >

        > --Mike

        >

        > From: Jeff Weeks <jweeks@sandvine.com>

        > Date: Thursday, September 12, 2019 at 10:47 AM

        > To: Mike DeVico <mdevico@xcom-labs.com>, Thomas Monjalon

        > <thomas@monjalon.net>

        > Cc: "dev@dpdk.org" <dev@dpdk.org>, Beilei Xing <beilei.xing@intel.com>, Qi

        > Zhang <qi.z.zhang@intel.com>, Bruce Richardson

        > <bruce.richardson@intel.com>, Konstantin Ananyev

        > <konstantin.ananyev@intel.com>, "ferruh.yigit@intel.com"

        > <ferruh.yigit@intel.com>

        > Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC

        >

        > [EXTERNAL SENDER]

        >

        > I don't have much else to add, except that I also see dcb fail on the same NIC:

        >

        >

        >

        >   i40e_dcb_init_configure(): default dcb config fails. err = -53, aq_err = 3.

        >

        >

        >

        > My card doesn't receive any packets, though; not sure if it's related to this, or

        > not.

        >

        >

        >

        > --Jeff

        >

        > ________________________________

        > /dev/jeff_weeks.x2936

        > Sandvine Incorporated

        >

        > ________________________________

        > From: dev <dev-bounces@dpdk.org> on behalf of Mike DeVico

        > <mdevico@xcom-labs.com>

        > Sent: Thursday, September 12, 2019 1:06 PM

        > To: Thomas Monjalon

        > Cc: dev@dpdk.org; Beilei Xing; Qi Zhang; Bruce Richardson; Konstantin

        > Ananyev; ferruh.yigit@intel.com

        > Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC

        >

        > [EXTERNAL]

        >

        > Still no hits...

        >

        > --Mike

        >

        > On 9/9/19, 1:39 PM, "Thomas Monjalon" <thomas@monjalon.net> wrote:

        >

        >     [EXTERNAL SENDER]

        >

        >     Adding i40e maintainers and few more.

        >

        >     07/09/2019 01:11, Mike DeVico:

        >     > Hello,

        >     >

        >     > I am having an issue getting the DCB feature to work with an Intel

        >     > X710 Quad SFP+ NIC.

        >     >

        >     > Here’s my setup:

        >     >

        >     > 1.      DPDK 18.08 built with the following I40E configs:

        >     >

        >     > CONFIG_RTE_LIBRTE_I40E_PMD=y

        >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_RX=n

        >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_TX=n

        >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_TX_FREE=n

        >     > CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y

        >     > CONFIG_RTE_LIBRTE_I40E_INC_VECTOR=y

        >     > CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n

        >     > CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF=64

        >     > CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=8

        >     >

        >     > 2.      /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net

        >     >

        >     > Network devices using DPDK-compatible driver

        >     > ============================================

        >     > 0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio

        > unused=i40e

        >     > 0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio

        > unused=i40e

        >     > 0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio

        > unused=i40e

        >     > 0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio

        > unused=i40e

        >     >

        >     >        Network devices using kernel driver

        >     >        ===================================

        >     >        0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0

        > drv=igb unused=igb_uio *Active*

        >     >        0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1

        > drv=igb unused=igb_uio *Active*

        >     >

        >     >        Other Network devices

        >     >        =====================

        >     >        <none>

        >     >

        >     > 3.      We have a custom FPGA board connected to port 1 of the X710 NIC

        > that’s broadcasting

        >     > a packet tagged with VLAN 1 and PCP 2.

        >     >

        >     > 4.      I use the vmdq_dcb example app and configure the card with 16

        > pools/8 queue each

        >     > as follows:

        >     >        sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3

        >     >

        >     >

        >     > The apps starts up fine and successfully probes the card as shown below:

        >     >

        >     > sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3

        >     > EAL: Detected 80 lcore(s)

        >     > EAL: Detected 2 NUMA nodes

        >     > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket

        >     > EAL: Probing VFIO support...

        >     > EAL: PCI device 0000:02:00.0 on NUMA socket 0

        >     > EAL:   probe driver: 8086:1521 net_e1000_igb

        >     > EAL: PCI device 0000:02:00.1 on NUMA socket 0

        >     > EAL:   probe driver: 8086:1521 net_e1000_igb

        >     > EAL: PCI device 0000:3b:00.0 on NUMA socket 0

        >     > EAL:   probe driver: 8086:1572 net_i40e

        >     > EAL: PCI device 0000:3b:00.1 on NUMA socket 0

        >     > EAL:   probe driver: 8086:1572 net_i40e

        >     > EAL: PCI device 0000:3b:00.2 on NUMA socket 0

        >     > EAL:   probe driver: 8086:1572 net_i40e

        >     > EAL: PCI device 0000:3b:00.3 on NUMA socket 0

        >     > EAL:   probe driver: 8086:1572 net_i40e

        >     > vmdq queue base: 64 pool base 1

        >     > Configured vmdq pool num: 16, each vmdq pool has 8 queues

        >     > Port 0 MAC: e8 ea 6a 27 b5 4d

        >     > Port 0 vmdq pool 0 set mac 52:54:00:12:00:00

        >     > Port 0 vmdq pool 1 set mac 52:54:00:12:00:01

        >     > Port 0 vmdq pool 2 set mac 52:54:00:12:00:02

        >     > Port 0 vmdq pool 3 set mac 52:54:00:12:00:03

        >     > Port 0 vmdq pool 4 set mac 52:54:00:12:00:04

        >     > Port 0 vmdq pool 5 set mac 52:54:00:12:00:05

        >     > Port 0 vmdq pool 6 set mac 52:54:00:12:00:06

        >     > Port 0 vmdq pool 7 set mac 52:54:00:12:00:07

        >     > Port 0 vmdq pool 8 set mac 52:54:00:12:00:08

        >     > Port 0 vmdq pool 9 set mac 52:54:00:12:00:09

        >     > Port 0 vmdq pool 10 set mac 52:54:00:12:00:0a

        >     > Port 0 vmdq pool 11 set mac 52:54:00:12:00:0b

        >     > Port 0 vmdq pool 12 set mac 52:54:00:12:00:0c

        >     > Port 0 vmdq pool 13 set mac 52:54:00:12:00:0d

        >     > Port 0 vmdq pool 14 set mac 52:54:00:12:00:0e

        >     > Port 0 vmdq pool 15 set mac 52:54:00:12:00:0f

        >     > vmdq queue base: 64 pool base 1

        >     > Configured vmdq pool num: 16, each vmdq pool has 8 queues

        >     > Port 1 MAC: e8 ea 6a 27 b5 4e

        >     > Port 1 vmdq pool 0 set mac 52:54:00:12:01:00

        >     > Port 1 vmdq pool 1 set mac 52:54:00:12:01:01

        >     > Port 1 vmdq pool 2 set mac 52:54:00:12:01:02

        >     > Port 1 vmdq pool 3 set mac 52:54:00:12:01:03

        >     > Port 1 vmdq pool 4 set mac 52:54:00:12:01:04

        >     > Port 1 vmdq pool 5 set mac 52:54:00:12:01:05

        >     > Port 1 vmdq pool 6 set mac 52:54:00:12:01:06

        >     > Port 1 vmdq pool 7 set mac 52:54:00:12:01:07

        >     > Port 1 vmdq pool 8 set mac 52:54:00:12:01:08

        >     > Port 1 vmdq pool 9 set mac 52:54:00:12:01:09

        >     > Port 1 vmdq pool 10 set mac 52:54:00:12:01:0a

        >     > Port 1 vmdq pool 11 set mac 52:54:00:12:01:0b

        >     > Port 1 vmdq pool 12 set mac 52:54:00:12:01:0c

        >     > Port 1 vmdq pool 13 set mac 52:54:00:12:01:0d

        >     > Port 1 vmdq pool 14 set mac 52:54:00:12:01:0e

        >     > Port 1 vmdq pool 15 set mac 52:54:00:12:01:0f

        >     >

        >     > Skipping disabled port 2

        >     >

        >     > Skipping disabled port 3

        >     > Core 0(lcore 1) reading queues 64-191

        >     >

        >     > However, when I issue the SIGHUP I see that the packets

        >     > are being put into the first queue of Pool 1 as follows:

        >     >

        >     > Pool 0: 0 0 0 0 0 0 0 0

        >     > Pool 1: 10 0 0 0 0 0 0 0

        >     > Pool 2: 0 0 0 0 0 0 0 0

        >     > Pool 3: 0 0 0 0 0 0 0 0

        >     > Pool 4: 0 0 0 0 0 0 0 0

        >     > Pool 5: 0 0 0 0 0 0 0 0

        >     > Pool 6: 0 0 0 0 0 0 0 0

        >     > Pool 7: 0 0 0 0 0 0 0 0

        >     > Pool 8: 0 0 0 0 0 0 0 0

        >     > Pool 9: 0 0 0 0 0 0 0 0

        >     > Pool 10: 0 0 0 0 0 0 0 0

        >     > Pool 11: 0 0 0 0 0 0 0 0

        >     > Pool 12: 0 0 0 0 0 0 0 0

        >     > Pool 13: 0 0 0 0 0 0 0 0

        >     > Pool 14: 0 0 0 0 0 0 0 0

        >     > Pool 15: 0 0 0 0 0 0 0 0

        >     > Finished handling signal 1

        >     >

        >     > Since the packets are being tagged with PCP 2 they should be getting

        >     > mapped to 3rd queue of Pool 1, right?

        >     >

        >     > As a sanity check, I tried the same test using an 82599ES 2 port 10GB NIC

        > and

        >     > the packets show up in the expected queue. (Note, to get it to work I had

        >     > to modify the vmdq_dcb app to set the vmdq pool MACs to all FF’s)

        >     >

        >     > Here’s that setup:

        >     >

        >     > /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net

        >     >

        >     > Network devices using DPDK-compatible driver

        >     > ============================================

        >     > 0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'

        > drv=igb_uio unused=ixgbe

        >     > 0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'

        > drv=igb_uio unused=ixgbe

        >     >

        >     > Network devices using kernel driver

        >     > ===================================

        >     > 0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0 drv=igb

        > unused=igb_uio *Active*

        >     > 0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1 drv=igb

        > unused=igb_uio *Active*

        >     > 0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572'

        > if=enp59s0f0 drv=i40e unused=igb_uio

        >     > 0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572'

        > if=enp59s0f1 drv=i40e unused=igb_uio

        >     > 0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572'

        > if=enp59s0f2 drv=i40e unused=igb_uio

        >     > 0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572'

        > if=enp59s0f3 drv=i40e unused=igb_uio

        >     >

        >     > Other Network devices

        >     > =====================

        >     > <none>

        >     >

        >     > sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3

        >     > EAL: Detected 80 lcore(s)

        >     > EAL: Detected 2 NUMA nodes

        >     > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket

        >     > EAL: Probing VFIO support...

        >     > EAL: PCI device 0000:02:00.0 on NUMA socket 0

        >     > EAL:   probe driver: 8086:1521 net_e1000_igb

        >     > EAL: PCI device 0000:02:00.1 on NUMA socket 0

        >     > EAL:   probe driver: 8086:1521 net_e1000_igb

        >     > EAL: PCI device 0000:3b:00.0 on NUMA socket 0

        >     > EAL:   probe driver: 8086:1572 net_i40e

        >     > EAL: PCI device 0000:3b:00.1 on NUMA socket 0

        >     > EAL:   probe driver: 8086:1572 net_i40e

        >     > EAL: PCI device 0000:3b:00.2 on NUMA socket 0

        >     > EAL:   probe driver: 8086:1572 net_i40e

        >     > EAL: PCI device 0000:3b:00.3 on NUMA socket 0

        >     > EAL:   probe driver: 8086:1572 net_i40e

        >     > EAL: PCI device 0000:af:00.0 on NUMA socket 1

        >     > EAL:   probe driver: 8086:10fb net_ixgbe

        >     > EAL: PCI device 0000:af:00.1 on NUMA socket 1

        >     > EAL:   probe driver: 8086:10fb net_ixgbe

        >     > vmdq queue base: 0 pool base 0

        >     > Port 0 MAC: 00 1b 21 bf 71 24

        >     > Port 0 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff

        >     > Port 0 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff

        >     > vmdq queue base: 0 pool base 0

        >     > Port 1 MAC: 00 1b 21 bf 71 26

        >     > Port 1 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff

        >     > Port 1 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff

        >     >

        >     > Now when I send the SIGHUP, I see the packets being routed to

        >     > the expected queue:

        >     >

        >     > Pool 0: 0 0 0 0 0 0 0 0

        >     > Pool 1: 0 0 58 0 0 0 0 0

        >     > Pool 2: 0 0 0 0 0 0 0 0

        >     > Pool 3: 0 0 0 0 0 0 0 0

        >     > Pool 4: 0 0 0 0 0 0 0 0

        >     > Pool 5: 0 0 0 0 0 0 0 0

        >     > Pool 6: 0 0 0 0 0 0 0 0

        >     > Pool 7: 0 0 0 0 0 0 0 0

        >     > Pool 8: 0 0 0 0 0 0 0 0

        >     > Pool 9: 0 0 0 0 0 0 0 0

        >     > Pool 10: 0 0 0 0 0 0 0 0

        >     > Pool 11: 0 0 0 0 0 0 0 0

        >     > Pool 12: 0 0 0 0 0 0 0 0

        >     > Pool 13: 0 0 0 0 0 0 0 0

        >     > Pool 14: 0 0 0 0 0 0 0 0

        >     > Pool 15: 0 0 0 0 0 0 0 0

        >     > Finished handling signal 1

        >     >

        >     > What am I missing?

        >     >

        >     > Thankyou in advance,

        >     > --Mike

        >     >

        >     >

        >

        >

        >

        >

        >








-------------- next part --------------
A non-text attachment was scrubbed...
Name: txtest.tar
Type: application/x-tar
Size: 20480 bytes
Desc: txtest.tar
URL: <http://mails.dpdk.org/archives/users/attachments/20191003/5ed5e742/attachment.tar>

Thread overview: 20+ messages
     [not found] <834B2FF6-9FC7-43E4-8CA7-67D861FEE70E@xcom-tech.com>
     [not found] ` <2953945.eKoDkclGR7@xps>
2019-09-17 18:54   ` Mike DeVico
2019-09-18  3:32     ` Zhang, Xiao
2019-09-18  4:20       ` Mike DeVico
2019-09-18  7:02         ` Zhang, Xiao
2019-09-18  7:03           ` Thomas Monjalon
2019-09-18  7:10             ` Zhang, Xiao
2019-09-18 14:17               ` Mike DeVico
2019-09-18 14:53                 ` Christensen, ChadX M
2019-09-18 20:22                   ` Mike DeVico
2019-09-19  1:23                   ` Mike DeVico
2019-09-19  2:52                     ` Zhang, Xiao
2019-09-19 13:34                       ` Mike DeVico
2019-09-19 14:34                         ` Johnson, Brian
     [not found]   ` <0BD0EAA3-BB16-4B09-BF25-4744C0A879A0@xcom-tech.com>
     [not found]     ` <b9318aa4f0a943958171cc6fc53a010f@sandvine.com>
     [not found]       ` <61798E93-724B-4BE6-A03C-63B274E71AD2@xcom-tech.com>
     [not found]         ` <F35DEAC7BCE34641BA9FAC6BCA4A12E71B4ADE0E@SHSMSX103.ccr.corp.intel.com>
2019-09-26 20:31           ` Mike DeVico
2019-09-30  2:21             ` Zhang, Helin
2019-10-03 23:56               ` Mike DeVico [this message]
2019-09-20 21:57 Mike DeVico
2019-10-10 21:23 ` Christensen, ChadX M
2019-10-10 21:25   ` Mike DeVico
2019-10-10 21:12 Mike DeVico
