From: bugzilla@dpdk.org
To: dev@dpdk.org
Date: Wed, 10 Jul 2019 04:16:21 +0000
Subject: [dpdk-dev] [Bug 312] i40evf could not receive multicast packets
List-Id: DPDK patches and discussions

https://bugs.dpdk.org/show_bug.cgi?id=312

            Bug ID: 312
           Summary: i40evf could not receive multicast packets
           Product: DPDK
           Version: 19.05
          Hardware: x86
                OS: Linux
            Status: CONFIRMED
          Severity: major
          Priority: Normal
         Component: ethdev
          Assignee: dev@dpdk.org
          Reporter: tb.bingel@qq.com
  Target Milestone: ---

Hi all,

I found that with an i40e NIC in SR-IOV mode, a VF driven by DPDK could not
receive multicast packets. Is this a bug?

> Testing Environment:
>> Network card: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+
>> CPU: Intel(R) Xeon(R) Silver 4116 CPU @ 2.10GHz
>> OS: Ubuntu 16.04
>> DPDK: 19.05

> Testing Steps:
=====================================

# Step 1. Start sending multicast continuously from one PC

$ python multicast.py -I 192.168.99.142 "hello world"
Thu, 04 Jul 2019 11:53:23 DEBUG Joined the multicast network: 224.0.0.5 on 192.168.99.142
Thu, 04 Jul 2019 11:53:23 INFO Sent data: '000000001 1562212403.14 hello world'
Thu, 04 Jul 2019 11:53:24 INFO Sent data: '000000002 1562212404.14 hello world'
Thu, 04 Jul 2019 11:53:25 INFO Sent data: '000000003 1562212405.14 hello world'
Thu, 04 Jul 2019 11:53:26 INFO Sent data: '000000004 1562212406.14 hello world'

# Step 2. Test that the VF in non-DPDK (kernel driver) mode can receive multicast

$ sudo ./dpdk-devbind.py -s

Network devices using DPDK-compatible driver
============================================

Network devices using kernel driver
===================================
0000:19:00.0 'I350 Gigabit Network Connection 1521' if=eth0 drv=igb unused=igb_uio *Active*
0000:3b:00.0 'Ethernet Controller XL710 for 40GbE QSFP+ 1583' if=eth4 drv=i40e unused=igb_uio *Active*
0000:3b:00.1 'Ethernet Controller XL710 for 40GbE QSFP+ 1583' if=eth5 drv=i40e unused=igb_uio
0000:3b:0a.0 'Ethernet Virtual Function 700 Series 154c' if=eth6 drv=i40evf unused=igb_uio   <<<< the VF

$ sudo ifconfig eth6 192.168.99.141/24                    # <<< configure an IP on the VF
$ sudo ifconfig eth5 up                                   # <<<< bring the PF interface up
$ sudo tcpdump -i eth6 -n -A -XX host 192.168.99.142 &    # <<<< start capturing on the VF
... <<<<<<<<<< nothing
$ sudo ip maddr add 01:00:5e:00:00:05 dev eth6            # <<<< add the multicast MAC on the VF
$ sudo tcpdump -i eth6 -n -A -XX host 192.168.99.142
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth6, link-type EN10MB (Ethernet), capture size 262144 bytes
12:00:21.075607 IP 192.168.99.142.56250 > 224.0.0.5.5405: UDP, length 34
        0x0000:  0100 5e00 0005 f8f2 1e35 e651 0800 4500  ..^......5.Q..E.
        0x0010:  003e c9c2 4000 0111 abb0 c0a8 638e e000  .>..@.......c...
        0x0020:  0005 dbba 151d 002a f9de 3030 3030 3030  .......*..000000
        0x0030:  3431 3120 3135 3632 3231 3238 3231 2e31  411.1562212821.1
        0x0040:  2068 656c 6c6f 2077 6f72 6c64            .hello.world
12:00:22.077017 IP 192.168.99.142.56250 > 224.0.0.5.5405: UDP, length 35
        0x0000:  0100 5e00 0005 f8f2 1e35 e651 0800 4500  ..^......5.Q..E.
        0x0010:  003f caa4 4000 0111 aacd c0a8 638e e000  .?..@.......c...
        0x0020:  0005 dbba 151d 002b 247f 3030 3030 3030  .......+$.000000
        0x0030:  3431 3220 3135 3632 3231 3238 3232 2e31  412.1562212822.1
        0x0040:  3120 6865 6c6c 6f20 776f 726c 64         1.hello.world
12:00:23.078429 IP 192.168.99.142.56250 > 224.0.0.5.5405: UDP, length 35
<<<< the multicast was captured

# Step 3. Test that the VF in DPDK mode could NOT receive multicast

$ sudo ifconfig eth6 down
$ sudo ./dpdk-devbind.py -b igb_uio eth6                            # <<<<< bind the VF to igb_uio
$ sudo ./build/kni -l 0-5 -n 4 -- -P -p 0x1 -m --config="(0,1,3)"   # <<<< start the KNI example program
EAL: Detected 48 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Probing VFIO support...
EAL: PCI device 0000:19:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:19:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:19:00.2 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:19:00.3 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:3b:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1583 net_i40e
EAL: PCI device 0000:3b:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1583 net_i40e
EAL: PCI device 0000:3b:0a.0 on NUMA socket 0
EAL:   probe driver: 8086:154c net_i40e_vf
APP: Initialising port 0 ...
Checking link status
done
Port0 Link Up - speed 40000Mbps - full-duplex
APP: ========================
APP: KNI Running
APP: kill -SIGUSR1 140041
APP:     Show KNI Statistics.
APP: kill -SIGUSR2 140041
APP:     Zero KNI Statistics.
APP: ========================
APP: Lcore 1 is reading from port 0
APP: Lcore 2 has nothing to do
APP: Lcore 3 is writing to port 0
APP: Lcore 4 has nothing to do
APP: Lcore 0 has nothing to do
APP: Lcore 5 has nothing to do
APP: Configure network interface of 0 up
APP: vEth0 NIC Link is Up 40000 Mbps (AutoNeg) Full Duplex.
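(Aside: the `ip maddr add 01:00:5e:00:00:05` used above works because 01:00:5e:00:00:05 is exactly the MAC that group 224.0.0.5 maps to, and in Step 2 the kernel i40evf driver programmed that MAC into the VF's multicast filter. A normal application would trigger the same thing by joining the group itself. A minimal sketch of the mapping and the join, not part of the original test; only the group/port come from the tcpdump output:)

```python
import socket
import struct

GROUP, PORT = "224.0.0.5", 5405   # group/port seen in the tcpdump output

# An IPv4 multicast group maps to a MAC by placing the low 23 bits of the
# group address into 01:00:5e:00:00:00 -- for 224.0.0.5 that yields the
# 01:00:5e:00:00:05 added with `ip maddr add` above.
low23 = struct.unpack("!I", socket.inet_aton(GROUP))[0] & 0x7FFFFF
mac = "01:00:5e:%02x:%02x:%02x" % (low23 >> 16, (low23 >> 8) & 0xFF, low23 & 0xFF)
print(mac)  # -> 01:00:5e:00:00:05

# A receiving application would join the group, which is what normally makes
# the kernel driver program that MAC into the NIC's multicast filter:
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
try:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
except OSError:
    pass  # may fail on hosts without a multicast-capable interface
# data, addr = sock.recvfrom(2048)  # would then block until a datagram arrives
```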
$ sudo ifconfig vEth0 192.168.99.141/24 up                  # <<< configure the KNI interface
$ sudo ip maddr add 01:00:5e:00:00:05 dev vEth0             # <<<< add the multicast MAC on the KNI interface
$ sudo tcpdump -i vEth0 -n -A -XX -v host 192.168.99.142    # <<< start capturing
... <<< NOTHING
$ sudo ifconfig vEth0 allmulti                              # <<<< enable allmulti on the KNI interface
$ sudo ifconfig vEth0 promisc                               # <<<< enable promisc on the KNI interface
$ sudo tcpdump -i vEth0 -n -A -XX -v host 192.168.99.142
... <<<<<< still NOTHING

-- 
You are receiving this mail because:
You are the assignee for the bug.
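P.S. For anyone reproducing this without the reporter's multicast.py, an equivalent sender is a few lines of stdlib Python. Only the group/port and the "<seq> <timestamp> <message>" payload shape come from the transcript; the loopback source interface is an assumption so the sketch runs anywhere (the reporter sent from 192.168.99.142):

```python
import socket
import time

GROUP, PORT = "224.0.0.5", 5405   # group/port seen in the tcpdump output

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Pick the outgoing interface for multicast; loopback is used here so the
# sketch runs anywhere, where the reporter used 192.168.99.142.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
                socket.inet_aton("127.0.0.1"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)

# Payload in the same "<seq> <timestamp> <message>" shape as the report's log.
seq = 1
payload = "%09d %.2f hello world" % (seq, time.time())
sent = sock.sendto(payload.encode(), (GROUP, PORT))
```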