From: Thomas Monjalon
To: Mike DeVico
Cc: dev@dpdk.org, Beilei Xing, Qi Zhang, Bruce Richardson, Konstantin Ananyev, ferruh.yigit@intel.com
Date: Mon, 09 Sep 2019 22:39:30 +0200
Message-ID: <2953945.eKoDkclGR7@xps>
In-Reply-To: <834B2FF6-9FC7-43E4-8CA7-67D861FEE70E@xcom-tech.com>
Subject: Re: [dpdk-dev] Issue with DCB with X710 Quad 10Gb NIC

Adding i40e maintainers and a few more.

07/09/2019 01:11, Mike DeVico:
> Hello,
>
> I am having an issue getting the DCB feature to work with an Intel
> X710 Quad SFP+ NIC.
>
> Here's my setup:
>
> 1. DPDK 18.08 built with the following I40E configs:
>
> CONFIG_RTE_LIBRTE_I40E_PMD=y
> CONFIG_RTE_LIBRTE_I40E_DEBUG_RX=n
> CONFIG_RTE_LIBRTE_I40E_DEBUG_TX=n
> CONFIG_RTE_LIBRTE_I40E_DEBUG_TX_FREE=n
> CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
> CONFIG_RTE_LIBRTE_I40E_INC_VECTOR=y
> CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
> CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF=64
> CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=8
>
> 2. /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net
>
> Network devices using DPDK-compatible driver
> ============================================
> 0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
> 0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
> 0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
> 0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
>
> Network devices using kernel driver
> ===================================
> 0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0 drv=igb unused=igb_uio *Active*
> 0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1 drv=igb unused=igb_uio *Active*
>
> Other Network devices
> =====================
> <none>
>
> 3. We have a custom FPGA board connected to port 1 of the X710 NIC that's
> broadcasting a packet tagged with VLAN 1 and PCP 2.
>
> 4. I use the vmdq_dcb example app and configure the card with 16 pools / 8 queues
> each as follows:
>
> sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
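For context on what that command asks the PMD to do: the vmdq_dcb example builds its VMDq+DCB Rx configuration from the --nb-pools/--nb-tcs options, mapping each VLAN ID to a pool and each user priority (PCP) to a traffic class. A minimal sketch of that logic, assuming the 18.08 examples/vmdq_dcb/main.c behaviour and standard rte_ethdev fields (the helper name below is made up for illustration, not the example's own function):

    #include <string.h>
    #include <rte_ethdev.h>

    /* Sketch of how the example maps VLAN IDs to pools and user
     * priorities (PCP) to traffic classes; simplified, not the
     * exact 18.08 source. */
    static void
    sketch_vmdq_dcb_conf(struct rte_eth_conf *eth_conf,
                         uint16_t num_pools, uint8_t num_tcs)
    {
        struct rte_eth_vmdq_dcb_conf conf;
        uint16_t i;

        memset(&conf, 0, sizeof(conf));
        conf.nb_queue_pools = (enum rte_eth_nb_pools)num_pools; /* 16 pools here */
        conf.nb_pool_maps = num_pools;

        /* The example keeps a vlan_tags[] table of 0..31, so in effect
         * VLAN i is filtered into pool i: VLAN 1 -> pool 1. */
        for (i = 0; i < num_pools; i++) {
            conf.pool_map[i].vlan_id = i;
            conf.pool_map[i].pools = 1ULL << i;
        }

        /* PCP (user priority) -> traffic class, i.e. the queue offset
         * inside the selected pool: with 8 TCs, PCP 2 -> TC 2. */
        for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
            conf.dcb_tc[i] = i % num_tcs;

        eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB;
        eth_conf->rx_adv_conf.vmdq_dcb_conf = conf;
    }

So a frame tagged VLAN 1 / PCP 2 is expected to land in pool 1 at traffic-class offset 2, i.e. the third queue of that pool, which is what the 82599ES results further down show.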
>
> The app starts up fine and successfully probes the card, as shown below:
>
> sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
> EAL: Detected 80 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Probing VFIO support...
> EAL: PCI device 0000:02:00.0 on NUMA socket 0
> EAL: probe driver: 8086:1521 net_e1000_igb
> EAL: PCI device 0000:02:00.1 on NUMA socket 0
> EAL: probe driver: 8086:1521 net_e1000_igb
> EAL: PCI device 0000:3b:00.0 on NUMA socket 0
> EAL: probe driver: 8086:1572 net_i40e
> EAL: PCI device 0000:3b:00.1 on NUMA socket 0
> EAL: probe driver: 8086:1572 net_i40e
> EAL: PCI device 0000:3b:00.2 on NUMA socket 0
> EAL: probe driver: 8086:1572 net_i40e
> EAL: PCI device 0000:3b:00.3 on NUMA socket 0
> EAL: probe driver: 8086:1572 net_i40e
> vmdq queue base: 64 pool base 1
> Configured vmdq pool num: 16, each vmdq pool has 8 queues
> Port 0 MAC: e8 ea 6a 27 b5 4d
> Port 0 vmdq pool 0 set mac 52:54:00:12:00:00
> Port 0 vmdq pool 1 set mac 52:54:00:12:00:01
> Port 0 vmdq pool 2 set mac 52:54:00:12:00:02
> Port 0 vmdq pool 3 set mac 52:54:00:12:00:03
> Port 0 vmdq pool 4 set mac 52:54:00:12:00:04
> Port 0 vmdq pool 5 set mac 52:54:00:12:00:05
> Port 0 vmdq pool 6 set mac 52:54:00:12:00:06
> Port 0 vmdq pool 7 set mac 52:54:00:12:00:07
> Port 0 vmdq pool 8 set mac 52:54:00:12:00:08
> Port 0 vmdq pool 9 set mac 52:54:00:12:00:09
> Port 0 vmdq pool 10 set mac 52:54:00:12:00:0a
> Port 0 vmdq pool 11 set mac 52:54:00:12:00:0b
> Port 0 vmdq pool 12 set mac 52:54:00:12:00:0c
> Port 0 vmdq pool 13 set mac 52:54:00:12:00:0d
> Port 0 vmdq pool 14 set mac 52:54:00:12:00:0e
> Port 0 vmdq pool 15 set mac 52:54:00:12:00:0f
> vmdq queue base: 64 pool base 1
> Configured vmdq pool num: 16, each vmdq pool has 8 queues
> Port 1 MAC: e8 ea 6a 27 b5 4e
> Port 1 vmdq pool 0 set mac 52:54:00:12:01:00
> Port 1 vmdq pool 1 set mac 52:54:00:12:01:01
> Port 1 vmdq pool 2 set mac 52:54:00:12:01:02
> Port 1 vmdq pool 3 set mac 52:54:00:12:01:03
> Port 1 vmdq pool 4 set mac 52:54:00:12:01:04
> Port 1 vmdq pool 5 set mac 52:54:00:12:01:05
> Port 1 vmdq pool 6 set mac 52:54:00:12:01:06
> Port 1 vmdq pool 7 set mac 52:54:00:12:01:07
> Port 1 vmdq pool 8 set mac 52:54:00:12:01:08
> Port 1 vmdq pool 9 set mac 52:54:00:12:01:09
> Port 1 vmdq pool 10 set mac 52:54:00:12:01:0a
> Port 1 vmdq pool 11 set mac 52:54:00:12:01:0b
> Port 1 vmdq pool 12 set mac 52:54:00:12:01:0c
> Port 1 vmdq pool 13 set mac 52:54:00:12:01:0d
> Port 1 vmdq pool 14 set mac 52:54:00:12:01:0e
> Port 1 vmdq pool 15 set mac 52:54:00:12:01:0f
>
> Skipping disabled port 2
>
> Skipping disabled port 3
> Core 0(lcore 1) reading queues 64-191
>
> However, when I issue the SIGHUP, I see that the packets
> are being put into the first queue of Pool 1, as follows:
>
> Pool 0: 0 0 0 0 0 0 0 0
> Pool 1: 10 0 0 0 0 0 0 0
> Pool 2: 0 0 0 0 0 0 0 0
> Pool 3: 0 0 0 0 0 0 0 0
> Pool 4: 0 0 0 0 0 0 0 0
> Pool 5: 0 0 0 0 0 0 0 0
> Pool 6: 0 0 0 0 0 0 0 0
> Pool 7: 0 0 0 0 0 0 0 0
> Pool 8: 0 0 0 0 0 0 0 0
> Pool 9: 0 0 0 0 0 0 0 0
> Pool 10: 0 0 0 0 0 0 0 0
> Pool 11: 0 0 0 0 0 0 0 0
> Pool 12: 0 0 0 0 0 0 0 0
> Pool 13: 0 0 0 0 0 0 0 0
> Pool 14: 0 0 0 0 0 0 0 0
> Pool 15: 0 0 0 0 0 0 0 0
> Finished handling signal 1
>
> Since the packets are tagged with PCP 2, they should be getting
> mapped to the 3rd queue of Pool 1, right?
>
> As a sanity check, I tried the same test using an 82599ES 2-port 10Gb NIC,
> and the packets show up in the expected queue. (Note: to get it to work I had
> to modify the vmdq_dcb app to set the vmdq pool MACs to all FF's.)
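For completeness, the MAC change mentioned in that note would look roughly like the sketch below. This is only an illustration against the 18.08 ethdev API (rte_eth_dev_mac_addr_add() with a per-pool index), not the actual modification, and the helper name is invented:

    #include <stdio.h>
    #include <string.h>
    #include <rte_ethdev.h>
    #include <rte_ether.h>

    /* Illustration only: program ff:ff:ff:ff:ff:ff as the MAC filter of
     * every VMDq pool instead of the 52:54:00:12:<port>:<pool> template
     * the example normally uses. */
    static int
    set_pool_macs_to_all_ff(uint16_t port, uint32_t num_pools)
    {
        uint32_t pool;

        for (pool = 0; pool < num_pools; pool++) {
            struct ether_addr mac;
            int ret;

            memset(mac.addr_bytes, 0xff, ETHER_ADDR_LEN);
            ret = rte_eth_dev_mac_addr_add(port, &mac, pool);
            if (ret != 0) {
                printf("mac addr add failed on port %u pool %u: %d\n",
                       port, pool, ret);
                return ret;
            }
        }
        return 0;
    }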
>
> Here's that setup:
>
> /opt/dpdk-18.08/usertools/dpdk-devbind.py --status-dev net
>
> Network devices using DPDK-compatible driver
> ============================================
> 0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=igb_uio unused=ixgbe
> 0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=igb_uio unused=ixgbe
>
> Network devices using kernel driver
> ===================================
> 0000:02:00.0 'I350 Gigabit Network Connection 1521' if=enp2s0f0 drv=igb unused=igb_uio *Active*
> 0000:02:00.1 'I350 Gigabit Network Connection 1521' if=enp2s0f1 drv=igb unused=igb_uio *Active*
> 0000:3b:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f0 drv=i40e unused=igb_uio
> 0000:3b:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f1 drv=i40e unused=igb_uio
> 0000:3b:00.2 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f2 drv=i40e unused=igb_uio
> 0000:3b:00.3 'Ethernet Controller X710 for 10GbE SFP+ 1572' if=enp59s0f3 drv=i40e unused=igb_uio
>
> Other Network devices
> =====================
> <none>
>
> sudo ./vmdq_dcb_app -l 1 -- -p3 --nb-pools 16 --nb-tcs 8 -p 3
> EAL: Detected 80 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Probing VFIO support...
> EAL: PCI device 0000:02:00.0 on NUMA socket 0
> EAL: probe driver: 8086:1521 net_e1000_igb
> EAL: PCI device 0000:02:00.1 on NUMA socket 0
> EAL: probe driver: 8086:1521 net_e1000_igb
> EAL: PCI device 0000:3b:00.0 on NUMA socket 0
> EAL: probe driver: 8086:1572 net_i40e
> EAL: PCI device 0000:3b:00.1 on NUMA socket 0
> EAL: probe driver: 8086:1572 net_i40e
> EAL: PCI device 0000:3b:00.2 on NUMA socket 0
> EAL: probe driver: 8086:1572 net_i40e
> EAL: PCI device 0000:3b:00.3 on NUMA socket 0
> EAL: probe driver: 8086:1572 net_i40e
> EAL: PCI device 0000:af:00.0 on NUMA socket 1
> EAL: probe driver: 8086:10fb net_ixgbe
> EAL: PCI device 0000:af:00.1 on NUMA socket 1
> EAL: probe driver: 8086:10fb net_ixgbe
> vmdq queue base: 0 pool base 0
> Port 0 MAC: 00 1b 21 bf 71 24
> Port 0 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff
> Port 0 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff
> Port 0 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff
> Port 0 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff
> Port 0 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff
> Port 0 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff
> Port 0 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff
> Port 0 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff
> Port 0 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff
> Port 0 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff
> Port 0 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff
> Port 0 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff
> Port 0 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff
> Port 0 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff
> Port 0 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff
> Port 0 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff
> vmdq queue base: 0 pool base 0
> Port 1 MAC: 00 1b 21 bf 71 26
> Port 1 vmdq pool 0 set mac ff:ff:ff:ff:ff:ff
> Port 1 vmdq pool 1 set mac ff:ff:ff:ff:ff:ff
> Port 1 vmdq pool 2 set mac ff:ff:ff:ff:ff:ff
> Port 1 vmdq pool 3 set mac ff:ff:ff:ff:ff:ff
> Port 1 vmdq pool 4 set mac ff:ff:ff:ff:ff:ff
> Port 1 vmdq pool 5 set mac ff:ff:ff:ff:ff:ff
> Port 1 vmdq pool 6 set mac ff:ff:ff:ff:ff:ff
> Port 1 vmdq pool 7 set mac ff:ff:ff:ff:ff:ff
> Port 1 vmdq pool 8 set mac ff:ff:ff:ff:ff:ff
> Port 1 vmdq pool 9 set mac ff:ff:ff:ff:ff:ff
> Port 1 vmdq pool 10 set mac ff:ff:ff:ff:ff:ff
> Port 1 vmdq pool 11 set mac ff:ff:ff:ff:ff:ff
> Port 1 vmdq pool 12 set mac ff:ff:ff:ff:ff:ff
> Port 1 vmdq pool 13 set mac ff:ff:ff:ff:ff:ff
> Port 1 vmdq pool 14 set mac ff:ff:ff:ff:ff:ff
> Port 1 vmdq pool 15 set mac ff:ff:ff:ff:ff:ff
>
> Now when I send the SIGHUP, I see the packets being routed to
> the expected queue:
>
> Pool 0: 0 0 0 0 0 0 0 0
> Pool 1: 0 0 58 0 0 0 0 0
> Pool 2: 0 0 0 0 0 0 0 0
> Pool 3: 0 0 0 0 0 0 0 0
> Pool 4: 0 0 0 0 0 0 0 0
> Pool 5: 0 0 0 0 0 0 0 0
> Pool 6: 0 0 0 0 0 0 0 0
> Pool 7: 0 0 0 0 0 0 0 0
> Pool 8: 0 0 0 0 0 0 0 0
> Pool 9: 0 0 0 0 0 0 0 0
> Pool 10: 0 0 0 0 0 0 0 0
> Pool 11: 0 0 0 0 0 0 0 0
> Pool 12: 0 0 0 0 0 0 0 0
> Pool 13: 0 0 0 0 0 0 0 0
> Pool 14: 0 0 0 0 0 0 0 0
> Pool 15: 0 0 0 0 0 0 0 0
> Finished handling signal 1
>
> What am I missing?
>
> Thank you in advance,
> --Mike