From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Zhang, Helin"
To: Martin Weiser
Cc: "dev@dpdk.org"
Subject: Re: [dpdk-dev] i40e: problem with rx packet drops not accounted in statistics
Date: Fri, 23 Oct 2015 00:42:04 +0000
In-Reply-To: <5628975F.8090906@allegro-packets.com>
References: <55F01EC7.1070909@allegro-packets.com> <56275064.6000107@allegro-packets.com> <5628975F.8090906@allegro-packets.com>

Hi Martin,

Could you try larger RX/TX ring sizes rather than the defaults? For
example, could you try 1024 for the RX ring size, and 512 or 1024 for
the TX ring size?
In addition, please make sure you are using the latest version of the
NIC firmware.

Regards,
Helin
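Something like the following is what I have in mind. This is only an
untested sketch against the ethdev API; the single RX/TX queue, the
zeroed rte_eth_conf and the mempool argument are placeholders for your
application's own setup:

  #include <string.h>
  #include <rte_ethdev.h>

  #define RX_RING_SIZE 1024   /* instead of the default */
  #define TX_RING_SIZE 512    /* or 1024 */

  static int
  setup_port(uint8_t port_id, struct rte_mempool *mbuf_pool)
  {
          struct rte_eth_conf port_conf;
          int ret;

          /* placeholder: use your application's real port config */
          memset(&port_conf, 0, sizeof(port_conf));

          ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
          if (ret < 0)
                  return ret;

          /* nb_rx_desc selects the RX ring size */
          ret = rte_eth_rx_queue_setup(port_id, 0, RX_RING_SIZE,
                          rte_eth_dev_socket_id(port_id),
                          NULL /* default RX conf */, mbuf_pool);
          if (ret < 0)
                  return ret;

          /* nb_tx_desc selects the TX ring size */
          ret = rte_eth_tx_queue_setup(port_id, 0, TX_RING_SIZE,
                          rte_eth_dev_socket_id(port_id),
                          NULL /* default TX conf */);
          if (ret < 0)
                  return ret;

          return rte_eth_dev_start(port_id);
  }

A larger RX ring gives the NIC more descriptors to absorb bursts
before it runs out and has to discard packets on the host interface.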
> -----Original Message-----
> From: Martin Weiser [mailto:martin.weiser@allegro-packets.com]
> Sent: Thursday, October 22, 2015 3:59 PM
> To: Zhang, Helin
> Cc: dev@dpdk.org
> Subject: Re: i40e: problem with rx packet drops not accounted in statistics
>
> Hi Helin,
>
> good to know that there is work being done on that issue.
> By performance problem I mean that these packet discards start to appear
> at low bandwidths where I would not expect any packets to be dropped. On
> the same system we can reach higher bandwidths using ixgbe NICs without
> losing a single packet, so seeing packets being lost at only ~5GBit/s
> and ~1.5Mpps on a 40Gb adapter worries me a bit.
>
> Best regards,
> Martin
>
>
> On 22.10.15 02:16, Zhang, Helin wrote:
> > Hi Martin
> >
> > Yes, we have a developer working on it now, and hopefully he will
> > have something on this fix soon.
> > But what do you mean by the performance problem? Did you mean the
> > performance numbers are not as good as expected, or something else?
> >
> > Regards,
> > Helin
> >
> >> -----Original Message-----
> >> From: Martin Weiser [mailto:martin.weiser@allegro-packets.com]
> >> Sent: Wednesday, October 21, 2015 4:44 PM
> >> To: Zhang, Helin
> >> Cc: dev@dpdk.org
> >> Subject: Re: i40e: problem with rx packet drops not accounted in
> >> statistics
> >>
> >> Hi Helin,
> >>
> >> any news on this issue? By the way, this is not just a statistics
> >> problem for us but also a performance problem, since these packet
> >> discards start appearing at a relatively low bandwidth (~5GBit/s and
> >> ~1.5Mpps).
> >>
> >> Best regards,
> >> Martin
> >>
> >> On 10.09.15 03:09, Zhang, Helin wrote:
> >>> Hi Martin
> >>>
> >>> Yes, the statistics issue has been reported several times recently.
> >>> We will check the issue and try to fix it or find a workaround soon.
> >>> Thank you very much!
> >>>
> >>> Regards,
> >>> Helin
> >>>
> >>>> -----Original Message-----
> >>>> From: Martin Weiser [mailto:martin.weiser@allegro-packets.com]
> >>>> Sent: Wednesday, September 9, 2015 7:58 PM
> >>>> To: Zhang, Helin
> >>>> Cc: dev@dpdk.org
> >>>> Subject: i40e: problem with rx packet drops not accounted in
> >>>> statistics
> >>>>
> >>>> Hi Helin,
> >>>>
> >>>> in one of our test setups involving i40e adapters we are
> >>>> experiencing packet drops which are not reflected in the interface
> >>>> statistics.
> >>>> The call to rte_eth_stats_get suggests that all packets were
> >>>> properly received, but the total number of packets received through
> >>>> rte_eth_rx_burst is less than the ipackets counter.
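> >>>> In simplified form, the accounting on our side looks like the
> >>>> sketch below (the helper name, port/queue ids, burst size and
> >>>> stop flag are illustrative, not our actual code):
> >>>>
> >>>>   #include <stdio.h>
> >>>>   #include <inttypes.h>
> >>>>   #include <rte_ethdev.h>
> >>>>   #include <rte_mbuf.h>
> >>>>
> >>>>   static volatile int running = 1;
> >>>>
> >>>>   static void
> >>>>   check_rx_accounting(uint8_t port_id)
> >>>>   {
> >>>>           struct rte_mbuf *bufs[32];
> >>>>           struct rte_eth_stats stats;
> >>>>           uint64_t rx_total = 0;
> >>>>           uint16_t i, nb;
> >>>>
> >>>>           /* count every packet the PMD actually hands to us */
> >>>>           while (running) {
> >>>>                   nb = rte_eth_rx_burst(port_id, 0, bufs, 32);
> >>>>                   rx_total += nb;
> >>>>                   for (i = 0; i < nb; i++)
> >>>>                           rte_pktmbuf_free(bufs[i]);
> >>>>           }
> >>>>
> >>>>           /* ipackets ends up larger than rx_total, by exactly
> >>>>            * the rx_discards value in the VSI stats below */
> >>>>           rte_eth_stats_get(port_id, &stats);
> >>>>           printf("ipackets=%" PRIu64 " delivered=%" PRIu64 "\n",
> >>>>                  stats.ipackets, rx_total);
> >>>>   }
> >>>>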
> >>>> When running, for example, the l2fwd application (l2fwd -c 0xfe
> >>>> -n 4 -- -p 0x3) with driver debug messages enabled, the following
> >>>> output is generated for the interface in question:
> >>>>
> >>>> ...
> >>>> PMD: i40e_update_vsi_stats(): ***************** VSI[6] stats start *******************
> >>>> PMD: i40e_update_vsi_stats(): rx_bytes: 242624340000
> >>>> PMD: i40e_update_vsi_stats(): rx_unicast: 167790000
> >>>> PMD: i40e_update_vsi_stats(): rx_multicast: 0
> >>>> PMD: i40e_update_vsi_stats(): rx_broadcast: 0
> >>>> PMD: i40e_update_vsi_stats(): rx_discards: 1192557
> >>>> PMD: i40e_update_vsi_stats(): rx_unknown_protocol: 0
> >>>> PMD: i40e_update_vsi_stats(): tx_bytes: 0
> >>>> PMD: i40e_update_vsi_stats(): tx_unicast: 0
> >>>> PMD: i40e_update_vsi_stats(): tx_multicast: 0
> >>>> PMD: i40e_update_vsi_stats(): tx_broadcast: 0
> >>>> PMD: i40e_update_vsi_stats(): tx_discards: 0
> >>>> PMD: i40e_update_vsi_stats(): tx_errors: 0
> >>>> PMD: i40e_update_vsi_stats(): ***************** VSI[6] stats end *******************
> >>>> PMD: i40e_dev_stats_get(): ***************** PF stats start *******************
> >>>> PMD: i40e_dev_stats_get(): rx_bytes: 242624340000
> >>>> PMD: i40e_dev_stats_get(): rx_unicast: 167790000
> >>>> PMD: i40e_dev_stats_get(): rx_multicast: 0
> >>>> PMD: i40e_dev_stats_get(): rx_broadcast: 0
> >>>> PMD: i40e_dev_stats_get(): rx_discards: 0
> >>>> PMD: i40e_dev_stats_get(): rx_unknown_protocol: 167790000
> >>>> PMD: i40e_dev_stats_get(): tx_bytes: 0
> >>>> PMD: i40e_dev_stats_get(): tx_unicast: 0
> >>>> PMD: i40e_dev_stats_get(): tx_multicast: 0
> >>>> PMD: i40e_dev_stats_get(): tx_broadcast: 0
> >>>> PMD: i40e_dev_stats_get(): tx_discards: 0
> >>>> PMD: i40e_dev_stats_get(): tx_errors: 0
> >>>> PMD: i40e_dev_stats_get(): tx_dropped_link_down: 0
> >>>> PMD: i40e_dev_stats_get(): crc_errors: 0
> >>>> PMD: i40e_dev_stats_get(): illegal_bytes: 0
> >>>> PMD: i40e_dev_stats_get(): error_bytes: 0
> >>>> PMD: i40e_dev_stats_get(): mac_local_faults: 1
> >>>> PMD: i40e_dev_stats_get(): mac_remote_faults: 1
> >>>> PMD: i40e_dev_stats_get(): rx_length_errors: 0
> >>>> PMD: i40e_dev_stats_get(): link_xon_rx: 0
> >>>> PMD: i40e_dev_stats_get(): link_xoff_rx: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[0]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[0]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[1]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[1]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[2]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[2]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[3]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[3]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[4]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[4]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[5]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[5]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[6]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[6]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[7]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[7]: 0
> >>>> PMD: i40e_dev_stats_get(): link_xon_tx: 0
> >>>> PMD: i40e_dev_stats_get(): link_xoff_tx: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[0]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[0]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[0]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[1]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[1]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[1]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[2]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[2]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[2]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[3]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[3]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[3]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[4]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[4]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[4]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[5]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[5]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[5]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[6]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[6]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[6]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[7]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[7]: 0
> >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[7]: 0
> >>>> PMD: i40e_dev_stats_get(): rx_size_64: 0
> >>>> PMD: i40e_dev_stats_get(): rx_size_127: 0
> >>>> PMD: i40e_dev_stats_get(): rx_size_255: 0
> >>>> PMD: i40e_dev_stats_get(): rx_size_511: 0
> >>>> PMD: i40e_dev_stats_get(): rx_size_1023: 0
> >>>> PMD: i40e_dev_stats_get(): rx_size_1522: 167790000
> >>>> PMD: i40e_dev_stats_get(): rx_size_big: 0
> >>>> PMD: i40e_dev_stats_get(): rx_undersize: 0
> >>>> PMD: i40e_dev_stats_get(): rx_fragments: 0
> >>>> PMD: i40e_dev_stats_get(): rx_oversize: 0
> >>>> PMD: i40e_dev_stats_get(): rx_jabber: 0
> >>>> PMD: i40e_dev_stats_get(): tx_size_64: 0
> >>>> PMD: i40e_dev_stats_get(): tx_size_127: 0
> >>>> PMD: i40e_dev_stats_get(): tx_size_255: 0
> >>>> PMD: i40e_dev_stats_get(): tx_size_511: 0
> >>>> PMD: i40e_dev_stats_get(): tx_size_1023: 0
> >>>> PMD: i40e_dev_stats_get(): tx_size_1522: 0
> >>>> PMD: i40e_dev_stats_get(): tx_size_big: 0
> >>>> PMD: i40e_dev_stats_get(): mac_short_packet_dropped: 0
> >>>> PMD: i40e_dev_stats_get(): checksum_error: 0
> >>>> PMD: i40e_dev_stats_get(): fdir_match: 0
> >>>> PMD: i40e_dev_stats_get(): ***************** PF stats end ********************
> >>>> ...
> >>>>
> >>>> The count for rx_unicast is exactly the number of packets we would
> >>>> have expected, and the count for rx_discards in the VSI stats is
> >>>> exactly the number of packets we are missing.
> >>>> The question is why this number shows up only in the VSI stats and
> >>>> not in the PF stats, and of course why the packets which were
> >>>> obviously discarded are still counted in the rx_unicast stats.
> >>>> This test was performed using DPDK 2.1, and the firmware of the
> >>>> XL710 is the latest one (FW 4.40 API 1.4 NVM 04.05.03).
> >>>> Do you have an idea what might be going on?
> >>>>
> >>>> Best regards,
> >>>> Martin