From: Arnon Warshavsky <arnon@qwilt.com>
To: "Zhang, Helin" <helin.zhang@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
Eimear Morrissey <eimear.morrissey@ie.ibm.com>
Subject: Re: [dpdk-dev] i40e: problem with rx packet drops not accounted in statistics
Date: Sun, 25 Oct 2015 21:51:00 +0300
Message-ID: <CAKy9EB1e2Lz0SEgVQ+Lw4enaT--hCqKVVk6afKDO8gdFE1mROw@mail.gmail.com>
In-Reply-To: <F35DEAC7BCE34641BA9FAC6BCA4A12E70A91C33E@SHSMSX104.ccr.corp.intel.com>
Hi Helin
I would like to add my input on this as well.
I encountered the same issue, and as you suggested I updated to the
latest firmware and changed the RX and TX ring sizes to 1024.
The drop counters still do not increment as they should.
I inject 10 mpps into an X710 NIC (a 4-port card, 10 mpps on each port)
read by a simple rx-only DPDK app.
I read 10 mpps from the in-packets counter and see no drop counters
incrementing, while my application counts only 8 mpps per port actually
arriving at the app.
Running the same test on an X520, I get 8 mpps from the in-packets
counter and 2 mpps from the dropped-packets counter, as it should be.
Something seems to be broken in the error/discard accounting.
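For reference, the app side of my measurement is essentially this (a
rough sketch with hypothetical names; port init, signal handling and
error checks omitted):

#include <inttypes.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

static volatile int stop; /* set by a signal handler, not shown */

/* Count what actually reaches the application on one port. */
static uint64_t rx_only_loop(uint8_t port_id)
{
        struct rte_mbuf *bufs[BURST_SIZE];
        uint64_t total = 0;
        uint16_t i, nb;

        while (!stop) {
                nb = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);
                total += nb;
                for (i = 0; i < nb; i++)
                        rte_pktmbuf_free(bufs[i]);
        }
        return total; /* ~8 mpps/port here, vs 10 mpps in ipackets */
}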
/Arnon
On Fri, Oct 23, 2015 at 3:42 AM, Zhang, Helin <helin.zhang@intel.com> wrote:
> Hi Martin
>
> Could you try bigger RX/TX ring sizes instead of the defaults?
> For example, please try 1024 for the RX ring size and 512 or 1024 for
> the TX ring size.
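> In code, that just means passing the sizes at queue setup time. A rough
> sketch, assuming one queue per port and an existing mempool "mbuf_pool"
> (error handling omitted):
>
> #define RX_RING_SIZE 1024
> #define TX_RING_SIZE 1024
>
> int ret;
>
> /* nb_rx_desc / nb_tx_desc are the ring sizes under test */
> ret = rte_eth_rx_queue_setup(port_id, 0, RX_RING_SIZE,
>                              rte_eth_dev_socket_id(port_id),
>                              NULL /* default rxconf */, mbuf_pool);
> if (ret == 0)
>         ret = rte_eth_tx_queue_setup(port_id, 0, TX_RING_SIZE,
>                                      rte_eth_dev_socket_id(port_id),
>                                      NULL /* default txconf */);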
>
> In addition, please make sure you are using the latest version of NIC
> firmware.
>
> Regards,
> Helin
>
> > -----Original Message-----
> > From: Martin Weiser [mailto:martin.weiser@allegro-packets.com]
> > Sent: Thursday, October 22, 2015 3:59 PM
> > To: Zhang, Helin
> > Cc: dev@dpdk.org
> > Subject: Re: i40e: problem with rx packet drops not accounted in
> > statistics
> >
> > Hi Helin,
> >
> > Good to know that there is work being done on that issue.
> > By "performance problem" I mean that these packet discards start to
> > appear at low bandwidths where I would not expect any packets to be
> > dropped. On the same system we can reach higher bandwidths using ixgbe
> > NICs without losing a single packet, so seeing packets being lost at
> > only ~5 GBit/s and ~1.5 Mpps on a 40Gb adapter worries me a bit.
> >
> > Best regards,
> > Martin
> >
> >
> > On 22.10.15 02:16, Zhang, Helin wrote:
> > > Hi Martin
> > >
> > > Yes, we have a developer working on it now, and hopefully he will
> > > have a fix soon.
> > > But what do you mean by "the performance problem"? Do you mean the
> > > performance numbers are not as good as expected, or something else?
> > >
> > > Regards,
> > > Helin
> > >
> > >> -----Original Message-----
> > >> From: Martin Weiser [mailto:martin.weiser@allegro-packets.com]
> > >> Sent: Wednesday, October 21, 2015 4:44 PM
> > >> To: Zhang, Helin
> > >> Cc: dev@dpdk.org
> > >> Subject: Re: i40e: problem with rx packet drops not accounted in
> > >> statistics
> > >>
> > >> Hi Helin,
> > >>
> > >> Any news on this issue? By the way, this is not just a statistics
> > >> problem for us but also a performance problem, since these packet
> > >> discards start appearing at a relatively low bandwidth (~5 GBit/s
> > >> and ~1.5 Mpps).
> > >>
> > >> Best regards,
> > >> Martin
> > >>
> > >> On 10.09.15 03:09, Zhang, Helin wrote:
> > >>> Hi Martin
> > >>>
> > >>> Yes, the statistics issue has been reported several times recently.
> > >>> We will check the issue and try to fix it or get a workaround soon.
> > >>> Thank you very much!
> > >>> Regards,
> > >>> Helin
> > >>>
> > >>>> -----Original Message-----
> > >>>> From: Martin Weiser [mailto:martin.weiser@allegro-packets.com]
> > >>>> Sent: Wednesday, September 9, 2015 7:58 PM
> > >>>> To: Zhang, Helin
> > >>>> Cc: dev@dpdk.org
> > >>>> Subject: i40e: problem with rx packet drops not accounted in
> > >>>> statistics
> > >>>>
> > >>>> Hi Helin,
> > >>>>
> > >>>> In one of our test setups involving i40e adapters we are
> > >>>> experiencing packet drops which are not reflected in the interface
> > >>>> statistics. The call to rte_eth_stats_get suggests that all packets
> > >>>> were properly received, but the total number of packets received
> > >>>> through rte_eth_rx_burst is less than the ipackets counter.
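> > >>>> In code the mismatch looks like this (a minimal sketch; nb_app is
> > >>>> our accumulated sum of rte_eth_rx_burst() return values):
> > >>>>
> > >>>> struct rte_eth_stats stats;
> > >>>>
> > >>>> rte_eth_stats_get(port_id, &stats);
> > >>>> printf("ipackets=%" PRIu64 " app=%" PRIu64 " missing=%" PRIu64 "\n",
> > >>>>        stats.ipackets, nb_app, stats.ipackets - nb_app);
> > >>>> /* "missing" is nonzero while imissed and ierrors both stay at 0 */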
> > >>>> When running, for example, the l2fwd application (l2fwd -c 0xfe -n 4
> > >>>> -- -p 0x3) with driver debug messages enabled, the following output
> > >>>> is generated for the interface in question:
> > >>>>
> > >>>> ...
> > >>>> PMD: i40e_update_vsi_stats(): ***************** VSI[6] stats start
> > >>>> *******************
> > >>>> PMD: i40e_update_vsi_stats(): rx_bytes: 242624340000
> > >>>> PMD: i40e_update_vsi_stats(): rx_unicast: 167790000
> > >>>> PMD: i40e_update_vsi_stats(): rx_multicast: 0
> > >>>> PMD: i40e_update_vsi_stats(): rx_broadcast: 0
> > >>>> PMD: i40e_update_vsi_stats(): rx_discards: 1192557
> > >>>> PMD: i40e_update_vsi_stats(): rx_unknown_protocol: 0
> > >>>> PMD: i40e_update_vsi_stats(): tx_bytes: 0
> > >>>> PMD: i40e_update_vsi_stats(): tx_unicast: 0
> > >>>> PMD: i40e_update_vsi_stats(): tx_multicast: 0
> > >>>> PMD: i40e_update_vsi_stats(): tx_broadcast: 0
> > >>>> PMD: i40e_update_vsi_stats(): tx_discards: 0
> > >>>> PMD: i40e_update_vsi_stats(): tx_errors: 0
> > >>>> PMD: i40e_update_vsi_stats(): ***************** VSI[6] stats end
> > >>>> *******************
> > >>>> PMD: i40e_dev_stats_get(): ***************** PF stats start
> > >>>> *******************
> > >>>> PMD: i40e_dev_stats_get(): rx_bytes: 242624340000
> > >>>> PMD: i40e_dev_stats_get(): rx_unicast: 167790000
> > >>>> PMD: i40e_dev_stats_get(): rx_multicast: 0
> > >>>> PMD: i40e_dev_stats_get(): rx_broadcast: 0
> > >>>> PMD: i40e_dev_stats_get(): rx_discards: 0
> > >>>> PMD: i40e_dev_stats_get(): rx_unknown_protocol: 167790000
> > >>>> PMD: i40e_dev_stats_get(): tx_bytes: 0
> > >>>> PMD: i40e_dev_stats_get(): tx_unicast: 0
> > >>>> PMD: i40e_dev_stats_get(): tx_multicast: 0
> > >>>> PMD: i40e_dev_stats_get(): tx_broadcast: 0
> > >>>> PMD: i40e_dev_stats_get(): tx_discards: 0
> > >>>> PMD: i40e_dev_stats_get(): tx_errors: 0
> > >>>> PMD: i40e_dev_stats_get(): tx_dropped_link_down: 0
> > >>>> PMD: i40e_dev_stats_get(): crc_errors: 0
> > >>>> PMD: i40e_dev_stats_get(): illegal_bytes: 0
> > >>>> PMD: i40e_dev_stats_get(): error_bytes: 0
> > >>>> PMD: i40e_dev_stats_get(): mac_local_faults: 1
> > >>>> PMD: i40e_dev_stats_get(): mac_remote_faults: 1
> > >>>> PMD: i40e_dev_stats_get(): rx_length_errors: 0
> > >>>> PMD: i40e_dev_stats_get(): link_xon_rx: 0
> > >>>> PMD: i40e_dev_stats_get(): link_xoff_rx: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[0]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[0]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[1]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[1]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[2]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[2]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[3]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[3]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[4]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[4]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[5]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[5]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[6]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[6]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_rx[7]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_rx[7]: 0
> > >>>> PMD: i40e_dev_stats_get(): link_xon_tx: 0
> > >>>> PMD: i40e_dev_stats_get(): link_xoff_tx: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[0]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[0]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[0]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[1]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[1]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[1]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[2]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[2]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[2]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[3]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[3]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[3]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[4]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[4]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[4]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[5]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[5]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[5]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[6]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[6]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[6]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_tx[7]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xoff_tx[7]: 0
> > >>>> PMD: i40e_dev_stats_get(): priority_xon_2_xoff[7]: 0
> > >>>> PMD: i40e_dev_stats_get(): rx_size_64: 0
> > >>>> PMD: i40e_dev_stats_get(): rx_size_127: 0
> > >>>> PMD: i40e_dev_stats_get(): rx_size_255: 0
> > >>>> PMD: i40e_dev_stats_get(): rx_size_511: 0
> > >>>> PMD: i40e_dev_stats_get(): rx_size_1023: 0
> > >>>> PMD: i40e_dev_stats_get(): rx_size_1522: 167790000
> > >>>> PMD: i40e_dev_stats_get(): rx_size_big: 0
> > >>>> PMD: i40e_dev_stats_get(): rx_undersize: 0
> > >>>> PMD: i40e_dev_stats_get(): rx_fragments: 0
> > >>>> PMD: i40e_dev_stats_get(): rx_oversize: 0
> > >>>> PMD: i40e_dev_stats_get(): rx_jabber: 0
> > >>>> PMD: i40e_dev_stats_get(): tx_size_64: 0
> > >>>> PMD: i40e_dev_stats_get(): tx_size_127: 0
> > >>>> PMD: i40e_dev_stats_get(): tx_size_255: 0
> > >>>> PMD: i40e_dev_stats_get(): tx_size_511: 0
> > >>>> PMD: i40e_dev_stats_get(): tx_size_1023: 0
> > >>>> PMD: i40e_dev_stats_get(): tx_size_1522: 0
> > >>>> PMD: i40e_dev_stats_get(): tx_size_big: 0
> > >>>> PMD: i40e_dev_stats_get(): mac_short_packet_dropped: 0
> > >>>> PMD: i40e_dev_stats_get(): checksum_error: 0
> > >>>> PMD: i40e_dev_stats_get(): fdir_match: 0
> > >>>> PMD: i40e_dev_stats_get(): ***************** PF stats end
> > >>>> ********************
> > >>>> ...
> > >>>>
> > >>>> The count for rx_unicast is exactly the number of packets we would
> > >>>> have expected and the count for rx_discards in the VSI stats is
> > >>>> exactly the number of packets we are missing.
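> > >>>> (In numbers: 167790000 rx_unicast minus 1192557 rx_discards leaves
> > >>>> 166597443 packets, which matches what the application received.)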
> > >>>> The question is why this number shows up only in the VSI stats and
> > >>>> not in the PF stats, and of course why the packets which were
> > >>>> obviously discarded are still counted in the rx_unicast stats.
> > >>>> This test was performed using DPDK 2.1 and the firmware of the
> > >>>> XL710 is the latest one (FW 4.40 API 1.4 NVM 04.05.03).
> > >>>> Do you have an idea what might be going on?
> > >>>>
> > >>>> Best regards,
> > >>>> Martin
> > >>>>
> > >>>>
> >
>
>
--
*Arnon Warshavsky*
*Qwilt | work: +972-72-2221634 | mobile: +972-50-8583058 | arnon@qwilt.com*