From: Arnon Warshavsky
To: "Zhang, Helin"
Cc: "dev@dpdk.org", Eimear Morrissey
Date: Sun, 25 Oct 2015 21:51:00 +0300
Subject: Re: [dpdk-dev] i40e: problem with rx packet drops not accounted in statistics

Hi Helin

I would like to add my input on this as well.
I encountered the same issue, and as you suggested I updated to the latest
firmware and changed the rx and tx ring sizes to 1024. The drop counters
still do not increment as they should.

I inject 10 Mpps into an X710 NIC (a 4-port card, 10 Mpps on each port),
read by a simple rx-only DPDK app. I read 10 Mpps from the in-packets
counter and see no drop counters incrementing, while my application counts
only 8 Mpps per port actually arriving at the app.
Running the same test on an X520, I get 8 Mpps from the in-packets counter
and 2 Mpps from the dropped-packets counter, as expected.

Something seems to be broken in the error/discard accounting.

/Arnon

On Fri, Oct 23, 2015 at 3:42 AM, Zhang, Helin wrote:

> Hi Martin
>
> Could you help to try bigger rx/tx ring sizes rather than the defaults?
> For example, could you try 1024 for the RX ring size, and 512 or 1024
> for the TX ring size?
>
> In addition, please make sure you are using the latest version of the NIC
> firmware.
>
> Regards,
> Helin
>
> > -----Original Message-----
> > From: Martin Weiser [mailto:martin.weiser@allegro-packets.com]
> > Sent: Thursday, October 22, 2015 3:59 PM
> > To: Zhang, Helin
> > Cc: dev@dpdk.org
> > Subject: Re: i40e: problem with rx packet drops not accounted in
> > statistics
> >
> > Hi Helin,
> >
> > good to know that there is work being done on that issue.
> > By performance problem I mean that these packet discards start to
> > appear at low bandwidths where I would not expect any packets to be
> > dropped.
> > On the same system we can reach higher bandwidths using ixgbe NICs
> > without losing a single packet, so seeing packets being lost at only
> > ~5 GBit/s and ~1.5 Mpps on a 40Gb adapter worries me a bit.
> >
> > Best regards,
> > Martin
> >
> >
> > On 22.10.15 02:16, Zhang, Helin wrote:
> > > Hi Martin
> > >
> > > Yes, we have a developer working on it now, and hopefully he will have
> > > something on this fix soon.
> > > But what do you mean by the performance problem? Do you mean the
> > > performance numbers are not as good as expected, or something else?
> > >
> > > Regards,
> > > Helin
> > >
> > >> -----Original Message-----
> > >> From: Martin Weiser [mailto:martin.weiser@allegro-packets.com]
> > >> Sent: Wednesday, October 21, 2015 4:44 PM
> > >> To: Zhang, Helin
> > >> Cc: dev@dpdk.org
> > >> Subject: Re: i40e: problem with rx packet drops not accounted in
> > >> statistics
> > >>
> > >> Hi Helin,
> > >>
> > >> any news on this issue? By the way, this is not just a statistics
> > >> problem for us but also a performance problem, since these packet
> > >> discards start appearing at a relatively low bandwidth (~5 GBit/s and
> > >> ~1.5 Mpps).
> > >>
> > >> Best regards,
> > >> Martin
> > >>
> > >> On 10.09.15 03:09, Zhang, Helin wrote:
> > >>> Hi Martin
> > >>>
> > >>> Yes, the statistics issue has been reported several times recently.
> > >>> We will check the issue and try to fix it or find a workaround soon.
> > >>> Thank you very much!
> > >>>
> > >>> Regards,
> > >>> Helin
> > >>>
> > >>>> -----Original Message-----
> > >>>> From: Martin Weiser [mailto:martin.weiser@allegro-packets.com]
> > >>>> Sent: Wednesday, September 9, 2015 7:58 PM
> > >>>> To: Zhang, Helin
> > >>>> Cc: dev@dpdk.org
> > >>>> Subject: i40e: problem with rx packet drops not accounted in
> > >>>> statistics
> > >>>>
> > >>>> Hi Helin,
> > >>>>
> > >>>> in one of our test setups involving i40e adapters we are
> > >>>> experiencing packet drops which are not reflected in the interface
> > >>>> statistics.
> > >>>> The call to rte_eth_stats_get suggests that all packets were
> > >>>> properly received, but the total number of packets received through
> > >>>> rte_eth_rx_burst is less than the ipackets counter.
> > >>>> When, for example, running the l2fwd application (l2fwd -c 0xfe -n 4
> > >>>> -- -p 0x3) with driver debug messages enabled, the following output
> > >>>> is generated for the interface in question:
> > >>>>
> > >>>> ...
> > >>>> PMD: i40e_update_vsi_stats(): ***************** VSI[6] stats start *******************
> > >>>> PMD: i40e_update_vsi_stats(): rx_bytes: 242624340000
> > >>>> PMD: i40e_update_vsi_stats(): rx_unicast: 167790000
> > >>>> PMD: i40e_update_vsi_stats(): rx_multicast: 0
> > >>>> PMD: i40e_update_vsi_stats(): rx_broadcast: 0
> > >>>> PMD: i40e_update_vsi_stats(): rx_discards: 1192557
> > >>>> PMD: i40e_update_vsi_stats(): rx_unknown_protocol: 0
> > >>>> PMD: i40e_update_vsi_stats(): tx_bytes: 0
> > >>>> PMD: i40e_update_vsi_stats(): tx_unicast: 0
> > >>>> PMD: i40e_update_vsi_stats(): tx_multicast: 0
> > >>>> PMD: i40e_update_vsi_stats(): tx_broadcast: 0
> > >>>> PMD: i40e_update_vsi_stats(): tx_discards: 0
> > >>>> PMD: i40e_update_vsi_stats(): tx_errors: 0
> > >>>> PMD: i40e_update_vsi_stats(): ***************** VSI[6] stats end *******************
> > >>>> PMD: i40e_dev_stats_get(): ***************** PF stats start *******************
> > >>>> PMD: i40e_dev_stats_get(): rx_bytes: 242624340000
> > >>>> PMD: i40e_dev_stats_get(): rx_unicast: 167790000
> > >>>> PMD: i40e_dev_stats_get(): rx_multicast: 0
> > >>>> PMD: i40e_dev_stats_get(): rx_broadcast: 0
> > >>>> PMD: i40e_dev_stats_get(): rx_discards: 0
> > >>>> PMD: i40e_dev_stats_get(): rx_unknown_protocol: 167790000
> > >>>> PMD: i40e_dev_stats_get(): tx_bytes: 0
> > >>>> PMD: i40e_dev_stats_get(): tx_unicast: 0
> > >>>> PMD: i40e_dev_stats_get(): tx_multicast: 0
> > >>>> PMD: i40e_dev_stats_get(): tx_broadcast: 0
> > >>>> PMD: i40e_dev_stats_get(): tx_discards: 0
> > >>>> PMD: i40e_dev_stats_get(): tx_errors: 0
> > >>>> PMD: i40e_dev_stats_get(): tx_dropped_link_down: 0
> > >>>> PMD: i40e_dev_stats_get(): crc_errors: 0
> > >>>> PMD: i40e_dev_stats_get(): illegal_bytes: 0
> > >>>> PMD: i40e_dev_stats_get(): error_bytes: 0
> > >>>> PMD: i40e_dev_stats_get(): mac_local_faults: 1
> > >>>> PMD: i40e_dev_stats_get(): mac_remote_faults: 1
> > >>>> PMD: i40e_dev_stats_get(): rx_length_errors: 0
> > >>>> PMD: i40e_dev_stats_get(): link_xon_rx: 0
> > >>>> PMD: i40e_dev_stats_get(): link_xoff_rx: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[0]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[0]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[1]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[1]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[2]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[2]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[3]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[3]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[4]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[4]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[5]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[5]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[6]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[6]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[7]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[7]: 0
> > >>>> PMD: i40e_dev_stats_get(): link_xon_tx: 0
> > >>>> PMD: i40e_dev_stats_get(): link_xoff_tx: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[0]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[0]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[0]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[1]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[1]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[1]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[2]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[2]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[2]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[3]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[3]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[3]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[4]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[4]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[4]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[5]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[5]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[5]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[6]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[6]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[6]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[7]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[7]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[7]: 0
> > >>>> PMD: i40e_dev_stats_get(): rx_size_64: 0
> > >>>> PMD: i40e_dev_stats_get(): rx_size_127: 0
> > >>>> PMD: i40e_dev_stats_get(): rx_size_255: 0
> > >>>> PMD: i40e_dev_stats_get(): rx_size_511: 0
> > >>>> PMD: i40e_dev_stats_get(): rx_size_1023: 0
> > >>>> PMD: i40e_dev_stats_get(): rx_size_1522: 167790000
> > >>>> PMD: i40e_dev_stats_get(): rx_size_big: 0
> > >>>> PMD: i40e_dev_stats_get(): rx_undersize: 0
> > >>>> PMD: i40e_dev_stats_get(): rx_fragments: 0
> > >>>> PMD: i40e_dev_stats_get(): rx_oversize: 0
> > >>>> PMD: i40e_dev_stats_get(): rx_jabber: 0
> > >>>> PMD: i40e_dev_stats_get(): tx_size_64: 0
> > >>>> PMD: i40e_dev_stats_get(): tx_size_127: 0
> > >>>> PMD: i40e_dev_stats_get(): tx_size_255: 0
> > >>>> PMD: i40e_dev_stats_get(): tx_size_511: 0
> > >>>> PMD: i40e_dev_stats_get(): tx_size_1023: 0
> > >>>> PMD: i40e_dev_stats_get(): tx_size_1522: 0
> > >>>> PMD: i40e_dev_stats_get(): tx_size_big: 0
> > >>>> PMD: i40e_dev_stats_get(): mac_short_packet_dropped: 0
> > >>>> PMD: i40e_dev_stats_get(): checksum_error: 0
> > >>>> PMD: i40e_dev_stats_get(): fdir_match: 0
> > >>>> PMD: i40e_dev_stats_get(): ***************** PF stats end ********************
> > >>>> ...
> > >>>>
> > >>>> The count for rx_unicast is exactly the number of packets we would
> > >>>> have expected, and the count for rx_discards in the VSI stats is
> > >>>> exactly the number of packets we are missing.
> > >>>> The question is why this number shows up only in the VSI stats and
> > >>>> not in the PF stats, and of course why the packets which were
> > >>>> obviously discarded are still counted in the rx_unicast stats.
> > >>>> This test was performed using DPDK 2.1, and the firmware of the
> > >>>> XL710 is the latest one (FW 4.40 API 1.4 NVM 04.05.03).
> > >>>> Do you have an idea what might be going on?
> > >>>>
> > >>>> Best regards,
> > >>>> Martin
> > >>>>

--
Arnon Warshavsky
Qwilt | work: +972-72-2221634 | mobile: +972-50-8583058 | arnon@qwilt.com
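
For reference, below is a minimal sketch of the kind of check discussed in
this thread: setting up queue 0 with the 1024-descriptor rings Helin
suggested, then counting what rte_eth_rx_burst() actually delivers to the
application and comparing it against the ipackets/imissed/ierrors counters
from rte_eth_stats_get(). It is not code from this thread; EAL
initialisation, mempool creation, rte_eth_dev_configure() and
rte_eth_dev_start() are assumed to have been done already (e.g. as in the
l2fwd example), and the port/queue ids, NB_DESC and function names are
placeholders.

    /* Sketch only: assumes EAL init, mempool, rte_eth_dev_configure()
     * and rte_eth_dev_start() are handled elsewhere. */
    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define NB_DESC  1024   /* ring size suggested in this thread */
    #define BURST_SZ 32

    /* Configure queue 0 of 'port' with the larger descriptor rings. */
    static int
    setup_queues(uint16_t port, struct rte_mempool *pool)
    {
            int ret;

            ret = rte_eth_rx_queue_setup(port, 0, NB_DESC,
                            rte_eth_dev_socket_id(port), NULL, pool);
            if (ret < 0)
                    return ret;
            return rte_eth_tx_queue_setup(port, 0, NB_DESC,
                            rte_eth_dev_socket_id(port), NULL);
    }

    /* Poll queue 0 and periodically compare the packets the app actually
     * received with the counters reported by the driver. */
    static void
    rx_vs_stats(uint16_t port)
    {
            struct rte_mbuf *bufs[BURST_SZ];
            struct rte_eth_stats st;
            uint64_t app_rx = 0, iters = 0;
            uint16_t i, n;

            for (;;) {
                    n = rte_eth_rx_burst(port, 0, bufs, BURST_SZ);
                    app_rx += n;
                    for (i = 0; i < n; i++)
                            rte_pktmbuf_free(bufs[i]);

                    if ((++iters & 0xfffff) == 0) {
                            rte_eth_stats_get(port, &st);
                            printf("port %u: app_rx=%" PRIu64
                                   " ipackets=%" PRIu64
                                   " imissed=%" PRIu64
                                   " ierrors=%" PRIu64 "\n",
                                   port, app_rx, st.ipackets,
                                   st.imissed, st.ierrors);
                    }
            }
    }

With the behaviour reported above, ipackets keeps pace with the injected
rate while app_rx lags behind and imissed/ierrors stay at zero; on ixgbe
the difference shows up in the drop counters as expected.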