From: Miroslav Kováč
To: dev@dpdk.org, users@dpdk.org
CC: Juraj Linkeš, Miroslav Mikluš
Date: Tue, 20 Aug 2019 15:36:14 +0000
Message-ID: <51bf2af189834b1d91717a3e8e648885@pantheon.tech>
Subject: [dpdk-users] Intel XXV710 SR-IOV packet loss

Hello,

We are trying a setup with an Intel 25 GbE XXV710 card and SR-IOV. We need SR-IOV to sort packets between the VFs based on VLAN. We are using TRex on one machine to generate packets, and multiple VPP instances (each in a Docker container, each using one VF) on another. The TRex machine contains the exact same hardware.

Each VF is assigned one VLAN and a specific MAC address, with spoof checking off and trust on. For example:

vf 0 MAC ba:dc:0f:fe:ed:00, vlan 1537, spoof checking off, link-state auto, trust on

We are generating packets with the VF destination MACs and the corresponding VLANs. When sending packets to 3 VFs, TRex shows 35 million tx-packets, and the DPDK stats on the TRex machine confirm that 35 million were in fact sent out:

##### DPDK Statistics port0 #####
{
    "tx_good_bytes": 2142835740,
    "tx_good_packets": 35713929,
    "tx_size_64_packets": 35713929,
    "tx_unicast_packets": 35713929
}

rate = '96%'; pktSize = 64; frameLoss% = 51.31%; bytesReceived/s = 1112966528.00; totalReceived = 17390102; totalSent = 35713929; frameLoss = 18323827; bytesReceived = 1112966528; targetDuration = 1.0

However, VPP shows only 33 million rx-packets:

VirtualFunctionEthernet17/a/0   2   up   9000/0/0/0
    rx packets   5718196
    rx bytes   343091760
    rx-miss      5572089
VirtualFunctionEthernet17/a/1   2   up   9000/0/0/0
    rx packets   5831396
    rx bytes   349883760
    rx-miss      5459089
VirtualFunctionEthernet17/a/2   2   up   9000/0/0/0
    rx packets   5840512
    rx bytes   350430720
    rx-miss      5449466

The sum of rx packets and rx-miss is 33,870,748, so about 2 million packets are missing.
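For what it's worth, the arithmetic can be cross-checked directly from the counters above (plain shell, numbers copied from the VPP and TRex output):

```shell
# Sum VPP rx packets and rx-miss across the three VFs and compare
# against the number of packets TRex reports as sent.
rx=$((5718196 + 5831396 + 5840512))       # VPP "rx packets" per VF
miss=$((5572089 + 5459089 + 5449466))     # VPP "rx-miss" per VF
sent=35713929                             # tx_good_packets on the TRex side

echo "received + missed: $((rx + miss))"          # 33870748
echo "unaccounted for:   $((sent - rx - miss))"   # 1843181
```

So roughly 1.84 million packets are neither counted as received nor as missed anywhere VPP can see.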
Even when I check the VF stats, I see only 33 million packets arriving (out of which 9.9 million are rx-missed):

root@protonet:/home/protonet# for f in $(ls /sys/class/net/enp23s0f1/device/sriov/*/stats/rx_packets); do echo "$f: $(cat $f)"; done | grep -v ' 0$'
/sys/class/net/enp23s0f1/device/sriov/0/stats/rx_packets: 11290290
/sys/class/net/enp23s0f1/device/sriov/1/stats/rx_packets: 11290485
/sys/class/net/enp23s0f1/device/sriov/2/stats/rx_packets: 11289978

When increasing the number of VFs, the number of rx-packets in VPP actually decreases. With up to 6 or 7 VFs I still receive somewhere around 28-33 million packets, but with 8 VFs it suddenly drops to 16 million packets (and no rx-miss any more). The same happens in trunk mode:

VirtualFunctionEthernet17/a/0   2   up   9000/0/0/0
    rx packets   1959110
    rx bytes   117546600
VirtualFunctionEthernet17/a/1   2   up   9000/0/0/0
    rx packets   1959181
    rx bytes   117550860
VirtualFunctionEthernet17/a/2   2   up   9000/0/0/0
    rx packets   1956242
    rx bytes   117374520
.
.
.

Each VPP instance receives approximately the same number of packets, about 2 million packets * 8 = 16 million packets out of 35 million sent. Almost 20 million are gone.

We are using the vfio-pci driver.

The strange thing is that when I use only the PF, with no SR-IOV VFs enabled, and try the same VPP setup, I can see all 35 million packets come across.

This leads us to believe that something could be wrong with SR-IOV on the XXV710, but we don't know how to debug this any further. The packets seem to be lost somewhere in the NIC when using SR-IOV, and we don't know of any DPDK or Linux tool that could help us locate the lost packets.

Regards,
Miroslav Kovac
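P.S. In case it helps, this is roughly how we have been dumping counters so far (a sketch: it assumes the i40e PF is enp23s0f1, as in the sysfs paths above, and the exact set of counter names `ethtool -S` prints varies between driver versions, so the grep pattern is only a guess at what is relevant):

```shell
# Port- and PF-level counters; on i40e the "port.*" counters reflect the
# physical port, so they may show drops that per-VF statistics do not.
ethtool -S enp23s0f1 | grep -Ei 'drop|miss|error'

# Per-VF counters exposed by the i40e driver via sysfs.
for f in /sys/class/net/enp23s0f1/device/sriov/*/stats/*; do
    printf '%s: %s\n' "$f" "$(cat "$f")"
done
```

If anyone knows of a lower-level counter (firmware or internal switch) that would account for the missing packets, we would be grateful for a pointer.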