DPDK usage discussions
From: "Zhang, Xiao" <xiao.zhang@intel.com>
To: Mike DeVico <mdevico@xcom-labs.com>
Cc: "users@dpdk.org" <users@dpdk.org>,
	"Xing, Beilei" <beilei.xing@intel.com>,
	 "Zhang, Qi Z" <qi.z.zhang@intel.com>,
	"Richardson, Bruce" <bruce.richardson@intel.com>,
	"Ananyev, Konstantin" <konstantin.ananyev@intel.com>,
	"Yigit, Ferruh" <ferruh.yigit@intel.com>,
	"Christensen, ChadX M" <chadx.m.christensen@intel.com>,
	Tia Cassett <tiac@xcom-labs.com>,
	Thomas Monjalon <thomas@monjalon.net>,
	"Wu, Jingjing" <jingjing.wu@intel.com>
Subject: Re: [dpdk-users] [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
Date: Wed, 18 Sep 2019 07:02:08 +0000
Message-ID: <AF0377F445CB2540BB46FF359C1C1BBE011B9CD0@SHSMSX105.ccr.corp.intel.com>
In-Reply-To: <B4A0EF5B-0676-4439-B1FA-05299369F6C2@xcom-tech.com>


There is a hardware limitation on the X710, so RSS needs to be enabled to distribute the packets across the queues.

Thanks,
Xiao
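
For reference, the --enable-rss switch in the vmdq_dcb example roughly amounts to
the following change in the port configuration (a minimal sketch against the DPDK
18.08 ethdev API; the exact RSS hash fields are an assumption):

    #include <rte_ethdev.h>

    /* Combine VMDq + DCB with RSS so that packets are spread across the
     * queues of each traffic class instead of all landing in the first one. */
    static void enable_vmdq_dcb_rss(struct rte_eth_conf *conf)
    {
        conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;
        conf->rx_adv_conf.rss_conf.rss_hf =
            ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP | ETH_RSS_SCTP;
    }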

> -----Original Message-----
> From: Mike DeVico [mailto:mdevico@xcom-labs.com]
> Sent: Wednesday, September 18, 2019 12:21 PM
> To: Zhang, Xiao <xiao.zhang@intel.com>; Thomas Monjalon
> <thomas@monjalon.net>
> Cc: users@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>;
> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yigit, Ferruh
> <ferruh.yigit@intel.com>; Christensen, ChadX M
> <chadx.m.christensen@intel.com>; Tia Cassett <tiac@xcom-labs.com>
> Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
> 
> As I understand it, RSS and DCB are two completely different things. DCB uses
> the PCP field to map the packet to a given queue, whereas RSS computes a key
> by hashing the IP addresses and ports and then uses that key to map the packet
> to a given queue.
> 
> --Mike
> 
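As a rough illustration of that distinction (the helpers and the PCP-to-TC table
below are hypothetical, not i40e driver code):

    #include <stdint.h>

    /* DCB: the VLAN PCP bits select a traffic class, i.e. a fixed queue
     * offset within the pool (an identity mapping is assumed here). */
    static const uint8_t pcp_to_tc[8] = { 0, 1, 2, 3, 4, 5, 6, 7 };

    static uint16_t dcb_queue(uint8_t pcp)
    {
        return pcp_to_tc[pcp & 0x7];
    }

    /* RSS: a hash over the IP addresses and ports picks one of the available
     * queues, spreading flows rather than classifying them. */
    static uint16_t rss_queue(uint32_t flow_hash, uint16_t nb_queues)
    {
        return flow_hash % nb_queues;
    }
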
> On 9/17/19, 8:33 PM, "Zhang, Xiao" <xiao.zhang@intel.com> wrote:
> 
>     Hi Mike,
> 
>     You need to add the --enable-rss option when starting the process, like:
>     sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3 --enable-rss
> 
>     Thanks,
>     Xiao
> 
>     > -----Original Message-----
>     > From: Mike DeVico [mailto:mdevico@xcom-labs.com]
>     > Sent: Wednesday, September 18, 2019 2:55 AM
>     > To: Thomas Monjalon <thomas@monjalon.net>
>     > Cc: users@dpdk.org; Xing, Beilei <beilei.xing@intel.com>; Zhang, Qi Z
>     > <qi.z.zhang@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>;
>     > Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yigit, Ferruh
>     > <ferruh.yigit@intel.com>; Christensen, ChadX M
>     > <chadx.m.christensen@intel.com>; Tia Cassett <tiac@xcom-labs.com>
>     > Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC
>     >
>     > Hello,
>     >
>     > So far I haven't heard back from anyone regarding this issue and I would
>     > like to know what the status is at this point. Also, if you have any
>     > recommendations or require additional information from me, please let me
>     > know.
>     >
>     > Thank you in advance,
>     > --Mike DeVico
>     >
>     > On 9/9/19, 1:39 PM, "Thomas Monjalon" <thomas@monjalon.net> wrote:
>     >
>     >     Adding i40e maintainers and few more.
>     >
>     >     07/09/2019 01:11, Mike DeVico:
>     >     > Hello,
>     >     >
>     >     > I am having an issue getting the DCB feature to work with an Intel
>     >     > X710 Quad SFP+ NIC.
>     >     >
>     >     > Here’s my setup:
>     >     >
>     >     > 1.      DPDK 18.08 built with the following I40E configs:
>     >     >
>     >     > CONFIG_RTE_LIBRTE_I40E_PMD=y
>     >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_RX=n
>     >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_TX=n
>     >     > CONFIG_RTE_LIBRTE_I40E_DEBUG_TX_FREE=n
>     >     > CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
>     >     > CONFIG_RTE_LIBRTE_I40E_INC_VECTOR=y
>     >     > CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
>     >     > CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF=64
>     >     > CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=8
>     >     >
>     >     > 2.      /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net
>     >     >
>     >     > Network devices using DPDK-compatible driver
>     >     > ============================================
>     >     > 0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
>     >     > 0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
>     >     > 0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
>     >     > 0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
>     >     >
>     >     >        Network devices using kernel driver
>     >     >        ===================================
>     >     >        0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0 drv=igb unused=igb_uio *Active*
>     >     >        0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1 drv=igb unused=igb_uio *Active*
>     >     >
>     >     >        Other Network devices
>     >     >        =====================
>     >     >        <none>
>     >     >
>     >     > 3.      We have a custom FPGA board connected to port 1 of the X710 NIC
>     >     >         that’s broadcasting a packet tagged with VLAN 1 and PCP 2.
>     >     >
>     >     > 4.      I use the vmdq_dcb example app and configure the card with 16
>     >     >         pools/8 queues each, as follows (see the configuration sketch
>     >     >         after the command):
>     >     >        sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
>     >     >
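(A 16-pool/8-TC layout like this corresponds roughly to the RX configuration
below; the sketch uses the DPDK 18.08 struct rte_eth_vmdq_dcb_conf field names,
and the VLAN-to-pool mapping shown is illustrative.)

    struct rte_eth_conf conf = { .rxmode = { .mq_mode = ETH_MQ_RX_VMDQ_DCB } };
    struct rte_eth_vmdq_dcb_conf *vmdq = &conf.rx_adv_conf.vmdq_dcb_conf;
    int i;

    vmdq->nb_queue_pools = ETH_16_POOLS;     /* 16 pools x 8 queues = 128 queues */
    vmdq->nb_pool_maps = 16;
    for (i = 0; i < 16; i++) {
        vmdq->pool_map[i].vlan_id = i;       /* VLAN i -> pool i */
        vmdq->pool_map[i].pools = 1ULL << i;
    }
    for (i = 0; i < 8; i++)
        vmdq->dcb_tc[i] = i;                 /* user priority (PCP) i -> TC i */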
>     >     >
>     >     > The app starts up fine and successfully probes the card, as shown below:
>     >     >
>     >     > sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
>     >     > EAL: Detected 80 lcore(s)
>     >     > EAL: Detected 2 NUMA nodes
>     >     > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>     >     > EAL: Probing VFIO support...
>     >     > EAL: PCI device 0000:02:00.0 on NUMA socket 0
>     >     > EAL:   probe driver: 8086:1521 net_e1000_igb
>     >     > EAL: PCI device 0000:02:00.1 on NUMA socket 0
>     >     > EAL:   probe driver: 8086:1521 net_e1000_igb
>     >     > EAL: PCI device 0000:3b:00.0 on NUMA socket 0
>     >     > EAL:   probe driver: 8086:1572 net_i40e
>     >     > EAL: PCI device 0000:3b:00.1 on NUMA socket 0
>     >     > EAL:   probe driver: 8086:1572 net_i40e
>     >     > EAL: PCI device 0000:3b:00.2 on NUMA socket 0
>     >     > EAL:   probe driver: 8086:1572 net_i40e
>     >     > EAL: PCI device 0000:3b:00.3 on NUMA socket 0
>     >     > EAL:   probe driver: 8086:1572 net_i40e
>     >     > vmdq queue base: 64 pool base 1
>     >     > Configured vmdq pool num: 16, each vmdq pool has 8 queues
>     >     > Port 0 MAC: e8 ea 6a 27 b5 4d
>     >     > Port 0 vmdq pool 0 set mac 52:54:00:12:00:00
>     >     > Port 0 vmdq pool 1 set mac 52:54:00:12:00:01
>     >     > Port 0 vmdq pool 2 set mac 52:54:00:12:00:02
>     >     > Port 0 vmdq pool 3 set mac 52:54:00:12:00:03
>     >     > Port 0 vmdq pool 4 set mac 52:54:00:12:00:04
>     >     > Port 0 vmdq pool 5 set mac 52:54:00:12:00:05
>     >     > Port 0 vmdq pool 6 set mac 52:54:00:12:00:06
>     >     > Port 0 vmdq pool 7 set mac 52:54:00:12:00:07
>     >     > Port 0 vmdq pool 8 set mac 52:54:00:12:00:08
>     >     > Port 0 vmdq pool 9 set mac 52:54:00:12:00:09
>     >     > Port 0 vmdq pool 10 set mac 52:54:00:12:00:0a
>     >     > Port 0 vmdq pool 11 set mac 52:54:00:12:00:0b
>     >     > Port 0 vmdq pool 12 set mac 52:54:00:12:00:0c
>     >     > Port 0 vmdq pool 13 set mac 52:54:00:12:00:0d
>     >     > Port 0 vmdq pool 14 set mac 52:54:00:12:00:0e
>     >     > Port 0 vmdq pool 15 set mac 52:54:00:12:00:0f
>     >     > vmdq queue base: 64 pool base 1
>     >     > Configured vmdq pool num: 16, each vmdq pool has 8 queues
>     >     > Port 1 MAC: e8 ea 6a 27 b5 4e
>     >     > Port 1 vmdq pool 0 set mac 52:54:00:12:01:00
>     >     > Port 1 vmdq pool 1 set mac 52:54:00:12:01:01
>     >     > Port 1 vmdq pool 2 set mac 52:54:00:12:01:02
>     >     > Port 1 vmdq pool 3 set mac 52:54:00:12:01:03
>     >     > Port 1 vmdq pool 4 set mac 52:54:00:12:01:04
>     >     > Port 1 vmdq pool 5 set mac 52:54:00:12:01:05
>     >     > Port 1 vmdq pool 6 set mac 52:54:00:12:01:06
>     >     > Port 1 vmdq pool 7 set mac 52:54:00:12:01:07
>     >     > Port 1 vmdq pool 8 set mac 52:54:00:12:01:08
>     >     > Port 1 vmdq pool 9 set mac 52:54:00:12:01:09
>     >     > Port 1 vmdq pool 10 set mac 52:54:00:12:01:0a
>     >     > Port 1 vmdq pool 11 set mac 52:54:00:12:01:0b
>     >     > Port 1 vmdq pool 12 set mac 52:54:00:12:01:0c
>     >     > Port 1 vmdq pool 13 set mac 52:54:00:12:01:0d
>     >     > Port 1 vmdq pool 14 set mac 52:54:00:12:01:0e
>     >     > Port 1 vmdq pool 15 set mac 52:54:00:12:01:0f
>     >     >
>     >     > Skipping disabled port 2
>     >     >
>     >     > Skipping disabled port 3
>     >     > Core 0(lcore 1) reading queues 64-191
>     >     >
>     >     > However, when I issue the SIGHUP I see that the packets
>     >     > are being put into the first queue of Pool 1 as follows:
>     >     >
>     >     > Pool 0: 0 0 0 0 0 0 0 0
>     >     > Pool 1: 10 0 0 0 0 0 0 0
>     >     > Pool 2: 0 0 0 0 0 0 0 0
>     >     > Pool 3: 0 0 0 0 0 0 0 0
>     >     > Pool 4: 0 0 0 0 0 0 0 0
>     >     > Pool 5: 0 0 0 0 0 0 0 0
>     >     > Pool 6: 0 0 0 0 0 0 0 0
>     >     > Pool 7: 0 0 0 0 0 0 0 0
>     >     > Pool 8: 0 0 0 0 0 0 0 0
>     >     > Pool 9: 0 0 0 0 0 0 0 0
>     >     > Pool 10: 0 0 0 0 0 0 0 0
>     >     > Pool 11: 0 0 0 0 0 0 0 0
>     >     > Pool 12: 0 0 0 0 0 0 0 0
>     >     > Pool 13: 0 0 0 0 0 0 0 0
>     >     > Pool 14: 0 0 0 0 0 0 0 0
>     >     > Pool 15: 0 0 0 0 0 0 0 0
>     >     > Finished handling signal 1
>     >     >
>     >     > Since the packets are being tagged with PCP 2, they should be getting
>     >     > mapped to the 3rd queue of Pool 1, right?
>     >     >
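(For reference, the expected mapping works out roughly as follows, assuming the
example's usual VLAN-to-pool and PCP-to-TC layout:)

    /* VLAN 1           -> pool 1
     * PCP 2  -> TC 2   -> 3rd queue within the pool
     * app queue index   = pool * queues_per_pool + tc = 1 * 8 + 2 = 10
     * hardware RX queue = vmdq queue base + 10        = 64 + 10   = 74 */
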
>     >     > As a sanity check, I tried the same test using an 82599ES 2-port 10Gb
>     >     > NIC, and the packets show up in the expected queue. (Note: to get it to
>     >     > work, I had to modify the vmdq_dcb app to set the vmdq pool MACs to all
>     >     > FF’s.)
>     >     >
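(The modification mentioned above might look roughly like this against the
18.08 API; the helper name is hypothetical, and struct ether_addr was renamed
rte_ether_addr in later DPDK releases.)

    #include <string.h>
    #include <rte_ethdev.h>
    #include <rte_ether.h>

    /* Program a broadcast MAC for every pool so the per-pool MAC filter
     * accepts any destination address. */
    static int set_pool_macs_to_broadcast(uint16_t port, uint32_t nb_pools)
    {
        struct ether_addr mac;
        uint32_t pool;

        memset(mac.addr_bytes, 0xff, ETHER_ADDR_LEN);
        for (pool = 0; pool < nb_pools; pool++) {
            int ret = rte_eth_dev_mac_addr_add(port, &mac, pool);
            if (ret != 0)
                return ret;
        }
        return 0;
    }
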
>     >     > Here’s that setup:
>     >     >
>     >     > /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net
>     >     >
>     >     > Network devices using DPDK-compatible driver
>     >     > ============================================
>     >     > 0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=igb_uio unused=ixgbe
>     >     > 0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=igb_uio unused=ixgbe
>     >     >
>     >     > Network devices using kernel driver
>     >     > ===================================
>     >     > 0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0 drv=igb unused=igb_uio *Active*
>     >     > 0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1 drv=igb unused=igb_uio *Active*
>     >     > 0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f0 drv=i40e unused=igb_uio
>     >     > 0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f1 drv=i40e unused=igb_uio
>     >     > 0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f2 drv=i40e unused=igb_uio
>     >     > 0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f3 drv=i40e unused=igb_uio
>     >     >
>     >     > Other Network devices
>     >     > =====================
>     >     > <none>
>     >     >
>     >     > sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
>     >     > EAL: Detected 80 lcore(s)
>     >     > EAL: Detected 2 NUMA nodes
>     >     > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>     >     > EAL: Probing VFIO support...
>     >     > EAL: PCI device 0000:02:00.0 on NUMA socket 0
>     >     > EAL:   probe driver: 8086:1521 net_e1000_igb
>     >     > EAL: PCI device 0000:02:00.1 on NUMA socket 0
>     >     > EAL:   probe driver: 8086:1521 net_e1000_igb
>     >     > EAL: PCI device 0000:3b:00.0 on NUMA socket 0
>     >     > EAL:   probe driver: 8086:1572 net_i40e
>     >     > EAL: PCI device 0000:3b:00.1 on NUMA socket 0
>     >     > EAL:   probe driver: 8086:1572 net_i40e
>     >     > EAL: PCI device 0000:3b:00.2 on NUMA socket 0
>     >     > EAL:   probe driver: 8086:1572 net_i40e
>     >     > EAL: PCI device 0000:3b:00.3 on NUMA socket 0
>     >     > EAL:   probe driver: 8086:1572 net_i40e
>     >     > EAL: PCI device 0000:af:00.0 on NUMA socket 1
>     >     > EAL:   probe driver: 8086:10fb net_ixgbe
>     >     > EAL: PCI device 0000:af:00.1 on NUMA socket 1
>     >     > EAL:   probe driver: 8086:10fb net_ixgbe
>     >     > vmdq queue base: 0 pool base 0
>     >     > Port 0 MAC: 00 1b 21 bf 71 24
>     >     > Port 0 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 0 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff
>     >     > vmdq queue base: 0 pool base 0
>     >     > Port 1 MAC: 00 1b 21 bf 71 26
>     >     > Port 1 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff
>     >     > Port 1 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff
>     >     >
>     >     > Now when I send the SIGHUP, I see the packets being routed to
>     >     > the expected queue:
>     >     >
>     >     > Pool 0: 0 0 0 0 0 0 0 0
>     >     > Pool 1: 0 0 58 0 0 0 0 0
>     >     > Pool 2: 0 0 0 0 0 0 0 0
>     >     > Pool 3: 0 0 0 0 0 0 0 0
>     >     > Pool 4: 0 0 0 0 0 0 0 0
>     >     > Pool 5: 0 0 0 0 0 0 0 0
>     >     > Pool 6: 0 0 0 0 0 0 0 0
>     >     > Pool 7: 0 0 0 0 0 0 0 0
>     >     > Pool 8: 0 0 0 0 0 0 0 0
>     >     > Pool 9: 0 0 0 0 0 0 0 0
>     >     > Pool 10: 0 0 0 0 0 0 0 0
>     >     > Pool 11: 0 0 0 0 0 0 0 0
>     >     > Pool 12: 0 0 0 0 0 0 0 0
>     >     > Pool 13: 0 0 0 0 0 0 0 0
>     >     > Pool 14: 0 0 0 0 0 0 0 0
>     >     > Pool 15: 0 0 0 0 0 0 0 0
>     >     > Finished handling signal 1
>     >     >
>     >     > What am I missing?
>     >     >
>     >     > Thank you in advance,
>     >     > --Mike
>     >     >
>     >     >



Thread overview: 20+ messages
     [not found] <834B2FF6-9FC7-43E4-8CA7-67D861FEE70E@xcom-tech.com>
     [not found] ` <2953945.eKoDkclGR7@xps>
2019-09-17 18:54   ` Mike DeVico
2019-09-18  3:32     ` Zhang, Xiao
2019-09-18  4:20       ` Mike DeVico
2019-09-18  7:02         ` Zhang, Xiao [this message]
2019-09-18  7:03           ` Thomas Monjalon
2019-09-18  7:10             ` Zhang, Xiao
2019-09-18 14:17               ` Mike DeVico
2019-09-18 14:53                 ` Christensen, ChadX M
2019-09-18 20:22                   ` Mike DeVico
2019-09-19  1:23                   ` Mike DeVico
2019-09-19  2:52                     ` Zhang, Xiao
2019-09-19 13:34                       ` Mike DeVico
2019-09-19 14:34                         ` Johnson, Brian
     [not found]   ` <0BD0EAA3-BB16-4B09-BF25-4744C0A879A0@xcom-tech.com>
     [not found]     ` <b9318aa4f0a943958171cc6fc53a010f@sandvine.com>
     [not found]       ` <61798E93-724B-4BE6-A03C-63B274E71AD2@xcom-tech.com>
     [not found]         ` <F35DEAC7BCE34641BA9FAC6BCA4A12E71B4ADE0E@SHSMSX103.ccr.corp.intel.com>
2019-09-26 20:31           ` Mike DeVico
2019-09-30  2:21             ` Zhang, Helin
2019-10-03 23:56               ` Mike DeVico
2019-09-20 21:57 Mike DeVico
2019-10-10 21:23 ` Christensen, ChadX M
2019-10-10 21:25   ` Mike DeVico
2019-10-10 21:12 Mike DeVico
