DPDK usage discussions
From: Mike DeVico <mdevico@xcom-labs.com>
To: "Zhang, Helin" <helin.zhang@intel.com>,
	Jeff Weeks <jweeks@sandvine.com>,
	 Thomas Monjalon <thomas@monjalon.net>
Cc: "users@dpdk.org" <users@dpdk.org>,
	"Xing, Beilei" <beilei.xing@intel.com>,
	 "Zhang, Qi Z" <qi.z.zhang@intel.com>,
	"Richardson, Bruce" <bruce.richardson@intel.com>,
	"Ananyev, Konstantin" <konstantin.ananyev@intel.com>,
	"Yigit, Ferruh" <ferruh.yigit@intel.com>,
	"Zhang, Xiao" <xiao.zhang@intel.com>,
	"Wong1, Samuel" <samuel.wong1@intel.com>
Subject: Re: [dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
Date: Thu, 26 Sep 2019 20:31:59 +0000	[thread overview]
Message-ID: <FB86FF24-EAF2-453E-8CC0-1F529F34C781@xcom-tech.com> (raw)
In-Reply-To: <F35DEAC7BCE34641BA9FAC6BCA4A12E71B4ADE0E@SHSMSX103.ccr.corp.intel.com>

Hi Helin,

Yes, and the reason the RX packets were not being queued to the proper 
queue was that RSS was not enabled/configured. Once I enabled and 
configured RSS, the RX packets were placed in the proper queue.

That being said, what I see now is that the TX side seems to have an issue. The
way it works is that the Ferrybridge broadcasts what's called a Present packet
at 1s intervals. Once the host application detects the packet (which it now does), it
verifies that the packet is correctly formatted and then sends a packet back 
to the Ferrybridge to tell it to stop sending. However, that TX packet 
apparently is not going out, because I continue to receive Present packets from the 
Ferrybridge at the 1s interval. What's not clear to me is which queue I should be sending
this packet to. I actually tried sending it out all 128 queues, but I still keep receiving
the Present packet. What I lack is the ability to actually sniff what's going out over the
wire.

Any ideas how to approach this issue?

Thanks in advance,
--Mike

On 9/26/19, 9:02 AM, "Zhang, Helin" <helin.zhang@intel.com> wrote:

    [EXTERNAL SENDER]
    
    Hi Mike
    
    Can you check that you are using the right combination of DPDK version and NIC firmware, and of kernel driver if you are using one?
    You can find the recommended combination at http://doc.dpdk.org/guides/nics/i40e.html#recommended-matching-list. Hopefully that helps!
    
    Regards,
    Helin
    
    > -----Original Message-----
    > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Mike DeVico
    > Sent: Friday, September 13, 2019 2:10 AM
    > To: Jeff Weeks; Thomas Monjalon
    > Cc: dev@dpdk.org; Xing, Beilei; Zhang, Qi Z; Richardson, Bruce; Ananyev,
    > Konstantin; Yigit, Ferruh
    > Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
    >
    > Hi Jeff,
    >
    > Thanks for chiming in...
    >
    > Yeah, In my case I get the packets, but they end up being put in queue 0
    > instead of 2.
    >
    > --Mike
    >
    > From: Jeff Weeks <jweeks@sandvine.com>
    > Date: Thursday, September 12, 2019 at 10:47 AM
    > To: Mike DeVico <mdevico@xcom-labs.com>, Thomas Monjalon
    > <thomas@monjalon.net>
    > Cc: "dev@dpdk.org" <dev@dpdk.org>, Beilei Xing <beilei.xing@intel.com>, Qi
    > Zhang <qi.z.zhang@intel.com>, Bruce Richardson
    > <bruce.richardson@intel.com>, Konstantin Ananyev
    > <konstantin.ananyev@intel.com>, "ferruh.yigit@intel.com"
    > <ferruh.yigit@intel.com>
    > Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
    >
    > [EXTERNAL SENDER]
    >
    > I don't have much else to add, except that I also see dcb fail on the same NIC:
    >
    >
    >
    >   i40e_dcb_init_configure(): default dcb config fails. err = -53, aq_err = 3.
    >
    >
    >
    > My card doesn't receive any packets, though; not sure if it's related to this, or
    > not.
    >
    >
    >
    > --Jeff
    >
    > ________________________________
    > /dev/jeff_weeks.x2936
    > Sandvine Incorporated
    >
    > ________________________________
    > From: dev <dev-bounces@dpdk.org> on behalf of Mike DeVico
    > <mdevico@xcom-labs.com>
    > Sent: Thursday, September 12, 2019 1:06 PM
    > To: Thomas Monjalon
    > Cc: dev@dpdk.org; Beilei Xing; Qi Zhang; Bruce Richardson; Konstantin
    > Ananyev; ferruh.yigit@intel.com
    > Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
    >
    > [EXTERNAL]
    >
    > Still no hits...
    >
    > --Mike
    >
    > On 9/9/19, 1:39 PM, "Thomas Monjalon" <thomas@monjalon.net> wrote:
    >
    >     [EXTERNAL SENDER]
    >
    >     Adding i40e maintainers and few more.
    >
    >     07/09/2019 01:11, Mike DeVico:
    >     > Hello,
    >     >
    >     > I am having an issue getting the DCB feature to work with an Intel
    >     > X710 Quad SFP+ NIC.
    >     >
    >     > Here’s my setup:
    >     >
    >     > 1.      DPDK 18.08 built with the following I40E configs:
    >     >
    >     > CONFIG_RTE_LIBRTE_I40E_PMD=y
    >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_RX=n
    >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_TX=n
    >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_TX_FREE=n
    >     > CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
    >     > CONFIG_RTE_LIBRTE_I40E_INC_VECTOR=y
    >     > CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
    >     > CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF=64
    >     > CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=8
    >     >
    >     > 2.      /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net
    >     >
    >     > Network devices using DPDK-compatible driver
    >     > ============================================
    >     > 0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio
    > unused=i40e
    >     > 0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio
    > unused=i40e
    >     > 0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio
    > unused=i40e
    >     > 0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio
    > unused=i40e
    >     >
    >     >        Network devices using kernel driver
    >     >        ===================================
    >     >        0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0
    > drv=igb unused=igb_uio *Active*
    >     >        0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1
    > drv=igb unused=igb_uio *Active*
    >     >
    >     >        Other Network devices
    >     >        =====================
    >     >        <none>
    >     >
    >     > 3.      We have a custom FPGA board connected to port 1 of the X710 NIC
    > that’s broadcasting
    >     > a packet tagged with VLAN 1 and PCP 2.
    >     >
    >     > 4.      I use the vmdq_dcb example app and configure the card with 16
    > pools/8 queue each
    >     > as follows:
    >     >        sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
    >     >
    >     >
    >     > The apps starts up fine and successfully probes the card as shown below:
    >     >
    >     > sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
    >     > EAL: Detected 80 lcore(s)
    >     > EAL: Detected 2 NUMA nodes
    >     > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
    >     > EAL: Probing VFIO support...
    >     > EAL: PCI device 0000:02:00.0 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1521 net_e1000_igb
    >     > EAL: PCI device 0000:02:00.1 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1521 net_e1000_igb
    >     > EAL: PCI device 0000:3b:00.0 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.1 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.2 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.3 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > vmdq queue base: 64 pool base 1
    >     > Configured vmdq pool num: 16, each vmdq pool has 8 queues
    >     > Port 0 MAC: e8 ea 6a 27 b5 4d
    >     > Port 0 vmdq pool 0 set mac 52:54:00:12:00:00
    >     > Port 0 vmdq pool 1 set mac 52:54:00:12:00:01
    >     > Port 0 vmdq pool 2 set mac 52:54:00:12:00:02
    >     > Port 0 vmdq pool 3 set mac 52:54:00:12:00:03
    >     > Port 0 vmdq pool 4 set mac 52:54:00:12:00:04
    >     > Port 0 vmdq pool 5 set mac 52:54:00:12:00:05
    >     > Port 0 vmdq pool 6 set mac 52:54:00:12:00:06
    >     > Port 0 vmdq pool 7 set mac 52:54:00:12:00:07
    >     > Port 0 vmdq pool 8 set mac 52:54:00:12:00:08
    >     > Port 0 vmdq pool 9 set mac 52:54:00:12:00:09
    >     > Port 0 vmdq pool 10 set mac 52:54:00:12:00:0a
    >     > Port 0 vmdq pool 11 set mac 52:54:00:12:00:0b
    >     > Port 0 vmdq pool 12 set mac 52:54:00:12:00:0c
    >     > Port 0 vmdq pool 13 set mac 52:54:00:12:00:0d
    >     > Port 0 vmdq pool 14 set mac 52:54:00:12:00:0e
    >     > Port 0 vmdq pool 15 set mac 52:54:00:12:00:0f
    >     > vmdq queue base: 64 pool base 1
    >     > Configured vmdq pool num: 16, each vmdq pool has 8 queues
    >     > Port 1 MAC: e8 ea 6a 27 b5 4e
    >     > Port 1 vmdq pool 0 set mac 52:54:00:12:01:00
    >     > Port 1 vmdq pool 1 set mac 52:54:00:12:01:01
    >     > Port 1 vmdq pool 2 set mac 52:54:00:12:01:02
    >     > Port 1 vmdq pool 3 set mac 52:54:00:12:01:03
    >     > Port 1 vmdq pool 4 set mac 52:54:00:12:01:04
    >     > Port 1 vmdq pool 5 set mac 52:54:00:12:01:05
    >     > Port 1 vmdq pool 6 set mac 52:54:00:12:01:06
    >     > Port 1 vmdq pool 7 set mac 52:54:00:12:01:07
    >     > Port 1 vmdq pool 8 set mac 52:54:00:12:01:08
    >     > Port 1 vmdq pool 9 set mac 52:54:00:12:01:09
    >     > Port 1 vmdq pool 10 set mac 52:54:00:12:01:0a
    >     > Port 1 vmdq pool 11 set mac 52:54:00:12:01:0b
    >     > Port 1 vmdq pool 12 set mac 52:54:00:12:01:0c
    >     > Port 1 vmdq pool 13 set mac 52:54:00:12:01:0d
    >     > Port 1 vmdq pool 14 set mac 52:54:00:12:01:0e
    >     > Port 1 vmdq pool 15 set mac 52:54:00:12:01:0f
    >     >
    >     > Skipping disabled port 2
    >     >
    >     > Skipping disabled port 3
    >     > Core 0(lcore 1) reading queues 64-191
    >     >
    >     > However, when I issue the SIGHUP I see that the packets
    >     > are being put into the first queue of Pool 1 as follows:
    >     >
    >     > Pool 0: 0 0 0 0 0 0 0 0
    >     > Pool 1: 10 0 0 0 0 0 0 0
    >     > Pool 2: 0 0 0 0 0 0 0 0
    >     > Pool 3: 0 0 0 0 0 0 0 0
    >     > Pool 4: 0 0 0 0 0 0 0 0
    >     > Pool 5: 0 0 0 0 0 0 0 0
    >     > Pool 6: 0 0 0 0 0 0 0 0
    >     > Pool 7: 0 0 0 0 0 0 0 0
    >     > Pool 8: 0 0 0 0 0 0 0 0
    >     > Pool 9: 0 0 0 0 0 0 0 0
    >     > Pool 10: 0 0 0 0 0 0 0 0
    >     > Pool 11: 0 0 0 0 0 0 0 0
    >     > Pool 12: 0 0 0 0 0 0 0 0
    >     > Pool 13: 0 0 0 0 0 0 0 0
    >     > Pool 14: 0 0 0 0 0 0 0 0
    >     > Pool 15: 0 0 0 0 0 0 0 0
    >     > Finished handling signal 1
    >     >
    >     > Since the packets are being tagged with PCP 2 they should be getting
    >     > mapped to the 3rd queue of Pool 1, right?
    >     >
    >     > As a sanity check, I tried the same test using an 82599ES 2 port 10GB NIC
    > and
    >     > the packets show up in the expected queue. (Note, to get it to work I had
    >     > to modify the vmdq_dcb app to set the vmdq pool MACs to all FF’s)
    >     >
    >     > Here’s that setup:
    >     >
    >     > /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net
    >     >
    >     > Network devices using DPDK-compatible driver
    >     > ============================================
    >     > 0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
    > drv=igb_uio unused=ixgbe
    >     > 0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
    > drv=igb_uio unused=ixgbe
    >     >
    >     > Network devices using kernel driver
    >     > ===================================
    >     > 0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0 drv=igb
    > unused=igb_uio *Active*
    >     > 0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1 drv=igb
    > unused=igb_uio *Active*
    >     > 0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572'
    > if=enp59s0f0 drv=i40e unused=igb_uio
    >     > 0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572'
    > if=enp59s0f1 drv=i40e unused=igb_uio
    >     > 0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572'
    > if=enp59s0f2 drv=i40e unused=igb_uio
    >     > 0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572'
    > if=enp59s0f3 drv=i40e unused=igb_uio
    >     >
    >     > Other Network devices
    >     > =====================
    >     > <none>
    >     >
    >     > sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
    >     > EAL: Detected 80 lcore(s)
    >     > EAL: Detected 2 NUMA nodes
    >     > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
    >     > EAL: Probing VFIO support...
    >     > EAL: PCI device 0000:02:00.0 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1521 net_e1000_igb
    >     > EAL: PCI device 0000:02:00.1 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1521 net_e1000_igb
    >     > EAL: PCI device 0000:3b:00.0 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.1 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.2 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:3b:00.3 on NUMA socket 0
    >     > EAL:   probe driver: 8086:1572 net_i40e
    >     > EAL: PCI device 0000:af:00.0 on NUMA socket 1
    >     > EAL:   probe driver: 8086:10fb net_ixgbe
    >     > EAL: PCI device 0000:af:00.1 on NUMA socket 1
    >     > EAL:   probe driver: 8086:10fb net_ixgbe
    >     > vmdq queue base: 0 pool base 0
    >     > Port 0 MAC: 00 1b 21 bf 71 24
    >     > Port 0 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff
    >     > Port 0 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff
    >     > vmdq queue base: 0 pool base 0
    >     > Port 1 MAC: 00 1b 21 bf 71 26
    >     > Port 1 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff
    >     > Port 1 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff
    >     >
    >     > Now when I send the SIGHUP, I see the packets being routed to
    >     > the expected queue:
    >     >
    >     > Pool 0: 0 0 0 0 0 0 0 0
    >     > Pool 1: 0 0 58 0 0 0 0 0
    >     > Pool 2: 0 0 0 0 0 0 0 0
    >     > Pool 3: 0 0 0 0 0 0 0 0
    >     > Pool 4: 0 0 0 0 0 0 0 0
    >     > Pool 5: 0 0 0 0 0 0 0 0
    >     > Pool 6: 0 0 0 0 0 0 0 0
    >     > Pool 7: 0 0 0 0 0 0 0 0
    >     > Pool 8: 0 0 0 0 0 0 0 0
    >     > Pool 9: 0 0 0 0 0 0 0 0
    >     > Pool 10: 0 0 0 0 0 0 0 0
    >     > Pool 11: 0 0 0 0 0 0 0 0
    >     > Pool 12: 0 0 0 0 0 0 0 0
    >     > Pool 13: 0 0 0 0 0 0 0 0
    >     > Pool 14: 0 0 0 0 0 0 0 0
    >     > Pool 15: 0 0 0 0 0 0 0 0
    >     > Finished handling signal 1
    >     >
    >     > What am I missing?
    >     >
    >     > Thank you in advance,
    >     > --Mike
    >     >
    >     >
    >
    >
    >
    >
    >
    
    


Thread overview: 20+ messages
     [not found] <834B2FF6-9FC7-43E4-8CA7-67D861FEE70E@xcom-tech.com>
     [not found] ` <2953945.eKoDkclGR7@xps>
2019-09-17 18:54   ` Mike DeVico
2019-09-18  3:32     ` Zhang, Xiao
2019-09-18  4:20       ` Mike DeVico
2019-09-18  7:02         ` Zhang, Xiao
2019-09-18  7:03           ` Thomas Monjalon
2019-09-18  7:10             ` Zhang, Xiao
2019-09-18 14:17               ` Mike DeVico
2019-09-18 14:53                 ` Christensen, ChadX M
2019-09-18 20:22                   ` Mike DeVico
2019-09-19  1:23                   ` Mike DeVico
2019-09-19  2:52                     ` Zhang, Xiao
2019-09-19 13:34                       ` Mike DeVico
2019-09-19 14:34                         ` Johnson, Brian
     [not found]   ` <0BD0EAA3-BB16-4B09-BF25-4744C0A879A0@xcom-tech.com>
     [not found]     ` <b9318aa4f0a943958171cc6fc53a010f@sandvine.com>
     [not found]       ` <61798E93-724B-4BE6-A03C-63B274E71AD2@xcom-tech.com>
     [not found]         ` <F35DEAC7BCE34641BA9FAC6BCA4A12E71B4ADE0E@SHSMSX103.ccr.corp.intel.com>
2019-09-26 20:31           ` Mike DeVico [this message]
2019-09-30  2:21             ` Zhang, Helin
2019-10-03 23:56               ` Mike DeVico
2019-09-20 21:57 Mike DeVico
2019-10-10 21:23 ` Christensen, ChadX M
2019-10-10 21:25   ` Mike DeVico
2019-10-10 21:12 Mike DeVico
