From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from smtprelay01.ispgateway.de (smtprelay01.ispgateway.de [80.67.31.35]) by dpdk.org (Postfix) with ESMTP id 88B4D3F9 for ; Thu, 22 Oct 2015 09:59:28 +0200 (CEST)
Received: from [87.172.153.71] (helo=nb-martin.allegro) by smtprelay01.ispgateway.de with esmtpsa (TLSv1.2:DHE-RSA-AES128-SHA:128) (Exim 4.84) (envelope-from ) id 1ZpAmV-0006WG-Q1; Thu, 22 Oct 2015 09:59:28 +0200
To: "Zhang, Helin"
References: <55F01EC7.1070909@allegro-packets.com> <56275064.6000107@allegro-packets.com>
From: Martin Weiser
X-Enigmail-Draft-Status: N1110
Message-ID: <5628975F.8090906@allegro-packets.com>
Date: Thu, 22 Oct 2015 09:59:27 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:38.0) Gecko/20100101 Thunderbird/38.2.0
MIME-Version: 1.0
In-Reply-To:
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: quoted-printable
X-Df-Sender: bWFydGluLndlaXNlckBhbGxlZ3JvLXBhY2tldHMuY29t
Cc: "dev@dpdk.org"
Subject: Re: [dpdk-dev] i40e: problem with rx packet drops not accounted in statistics
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches and discussions about DPDK
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Thu, 22 Oct 2015 07:59:28 -0000

Hi Helin,

good to know that there is work being done on that issue. By performance problem I mean that these packet discards start to appear at low bandwidths where I would not expect any packets to be dropped. On the same system we can reach higher bandwidths using ixgbe NICs without losing a single packet, so seeing packets being lost at only ~5GBit/s and ~1.5Mpps on a 40Gb adapter worries me a bit.

Best regards,
Martin

On 22.10.15 02:16, Zhang, Helin wrote:
> Hi Martin
>
> Yes, we have a developer working on it now, and hopefully he will have something soon on this fix.
> But what do you mean by the performance problem?
> Did you mean the performance number is not as good as expected, or something else?
>
> Regards,
> Helin
>
>> -----Original Message-----
>> From: Martin Weiser [mailto:martin.weiser@allegro-packets.com]
>> Sent: Wednesday, October 21, 2015 4:44 PM
>> To: Zhang, Helin
>> Cc: dev@dpdk.org
>> Subject: Re: i40e: problem with rx packet drops not accounted in statistics
>>
>> Hi Helin,
>>
>> any news on this issue? By the way, this is not just a statistics problem
>> for us but also a performance problem, since these packet discards start
>> appearing at a relatively low bandwidth (~5GBit/s and ~1.5Mpps).
>>
>> Best regards,
>> Martin
>>
>> On 10.09.15 03:09, Zhang, Helin wrote:
>>> Hi Martin
>>>
>>> Yes, the statistics issue has been reported several times recently.
>>> We will check the issue and try to fix it or get a workaround soon.
>>> Thank you very much!
>>>
>>> Regards,
>>> Helin
>>>
>>>> -----Original Message-----
>>>> From: Martin Weiser [mailto:martin.weiser@allegro-packets.com]
>>>> Sent: Wednesday, September 9, 2015 7:58 PM
>>>> To: Zhang, Helin
>>>> Cc: dev@dpdk.org
>>>> Subject: i40e: problem with rx packet drops not accounted in
>>>> statistics
>>>>
>>>> Hi Helin,
>>>>
>>>> in one of our test setups involving i40e adapters we are experiencing
>>>> packet drops which are not reflected in the interface statistics.
>>>> The call to rte_eth_stats_get suggests that all packets were properly
>>>> received, but the total number of packets received through
>>>> rte_eth_rx_burst is less than the ipackets counter.
>>>> When for example running the l2fwd application (l2fwd -c 0xfe -n 4 --
>>>> -p 0x3) with driver debug messages enabled, the following output is
>>>> generated for the interface in question:
>>>>
>>>> ...
>>>> PMD: i40e_update_vsi_stats(): ***************** VSI[6] stats start *******************
>>>> PMD: i40e_update_vsi_stats(): rx_bytes: 242624340000
>>>> PMD: i40e_update_vsi_stats(): rx_unicast: 167790000
>>>> PMD: i40e_update_vsi_stats(): rx_multicast: 0
>>>> PMD: i40e_update_vsi_stats(): rx_broadcast: 0
>>>> PMD: i40e_update_vsi_stats(): rx_discards: 1192557
>>>> PMD: i40e_update_vsi_stats(): rx_unknown_protocol: 0
>>>> PMD: i40e_update_vsi_stats(): tx_bytes: 0
>>>> PMD: i40e_update_vsi_stats(): tx_unicast: 0
>>>> PMD: i40e_update_vsi_stats(): tx_multicast: 0
>>>> PMD: i40e_update_vsi_stats(): tx_broadcast: 0
>>>> PMD: i40e_update_vsi_stats(): tx_discards: 0
>>>> PMD: i40e_update_vsi_stats(): tx_errors: 0
>>>> PMD: i40e_update_vsi_stats(): ***************** VSI[6] stats end *******************
>>>> PMD: i40e_dev_stats_get(): ***************** PF stats start *******************
>>>> PMD: i40e_dev_stats_get(): rx_bytes: 242624340000
>>>> PMD: i40e_dev_stats_get(): rx_unicast: 167790000
>>>> PMD: i40e_dev_stats_get(): rx_multicast: 0
>>>> PMD: i40e_dev_stats_get(): rx_broadcast: 0
>>>> PMD: i40e_dev_stats_get(): rx_discards: 0
>>>> PMD: i40e_dev_stats_get(): rx_unknown_protocol: 167790000
>>>> PMD: i40e_dev_stats_get(): tx_bytes: 0
>>>> PMD: i40e_dev_stats_get(): tx_unicast: 0
>>>> PMD: i40e_dev_stats_get(): tx_multicast: 0
>>>> PMD: i40e_dev_stats_get(): tx_broadcast: 0
>>>> PMD: i40e_dev_stats_get(): tx_discards: 0
>>>> PMD: i40e_dev_stats_get(): tx_errors: 0
>>>> PMD: i40e_dev_stats_get(): tx_dropped_link_down: 0
>>>> PMD: i40e_dev_stats_get(): crc_errors: 0
>>>> PMD: i40e_dev_stats_get(): illegal_bytes: 0
>>>> PMD: i40e_dev_stats_get(): error_bytes: 0
>>>> PMD: i40e_dev_stats_get(): mac_local_faults: 1
>>>> PMD: i40e_dev_stats_get(): mac_remote_faults: 1
>>>> PMD: i40e_dev_stats_get(): rx_length_errors: 0
>>>> PMD: i40e_dev_stats_get(): link_xon_rx: 0
>>>> PMD: i40e_dev_stats_get(): link_xoff_rx: 0
>>>> PMD: i40e_dev_stats_get(): priority_xon_rx[0]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[0]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xon_rx[1]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[1]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xon_rx[2]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[2]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xon_rx[3]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[3]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xon_rx[4]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[4]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xon_rx[5]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[5]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xon_rx[6]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[6]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xon_rx[7]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[7]: 0
>>>> PMD: i40e_dev_stats_get(): link_xon_tx: 0
>>>> PMD: i40e_dev_stats_get(): link_xoff_tx: 0
>>>> PMD: i40e_dev_stats_get(): priority_xon_tx[0]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[0]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[0]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xon_tx[1]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[1]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[1]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xon_tx[2]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[2]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[2]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xon_tx[3]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[3]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[3]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xon_tx[4]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[4]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[4]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xon_tx[5]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[5]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[5]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xon_tx[6]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[6]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[6]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xon_tx[7]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[7]: 0
>>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[7]: 0
>>>> PMD: i40e_dev_stats_get(): rx_size_64: 0
>>>> PMD: i40e_dev_stats_get(): rx_size_127: 0
>>>> PMD: i40e_dev_stats_get(): rx_size_255: 0
>>>> PMD: i40e_dev_stats_get(): rx_size_511: 0
>>>> PMD: i40e_dev_stats_get(): rx_size_1023: 0
>>>> PMD: i40e_dev_stats_get(): rx_size_1522: 167790000
>>>> PMD: i40e_dev_stats_get(): rx_size_big: 0
>>>> PMD: i40e_dev_stats_get(): rx_undersize: 0
>>>> PMD: i40e_dev_stats_get(): rx_fragments: 0
>>>> PMD: i40e_dev_stats_get(): rx_oversize: 0
>>>> PMD: i40e_dev_stats_get(): rx_jabber: 0
>>>> PMD: i40e_dev_stats_get(): tx_size_64: 0
>>>> PMD: i40e_dev_stats_get(): tx_size_127: 0
>>>> PMD: i40e_dev_stats_get(): tx_size_255: 0
>>>> PMD: i40e_dev_stats_get(): tx_size_511: 0
>>>> PMD: i40e_dev_stats_get(): tx_size_1023: 0
>>>> PMD: i40e_dev_stats_get(): tx_size_1522: 0
>>>> PMD: i40e_dev_stats_get(): tx_size_big: 0
>>>> PMD: i40e_dev_stats_get(): mac_short_packet_dropped: 0
>>>> PMD: i40e_dev_stats_get(): checksum_error: 0
>>>> PMD: i40e_dev_stats_get(): fdir_match: 0
>>>> PMD: i40e_dev_stats_get(): ***************** PF stats end ********************
>>>> ...
>>>>
>>>> The count for rx_unicast is exactly the number of packets we would
>>>> have expected, and the count for rx_discards in the VSI stats is
>>>> exactly the number of packets we are missing.
>>>> The question is why this number shows up only in the VSI stats and
>>>> not in the PF stats, and of course why the packets which were
>>>> obviously discarded are still counted in the rx_unicast stats.
>>>> This test was performed using DPDK 2.1 and the firmware of the XL710
>>>> is the latest one (FW 4.40 API 1.4 NVM 04.05.03).
>>>> Do you have an idea what might be going on?
>>>>
>>>> Best regards,
>>>> Martin