DPDK patches and discussions
From: bugzilla@dpdk.org
To: dev@dpdk.org
Subject: [dpdk-dev] [Bug 749] mlx5: ConnectX-6 not all missed packets accounted for when using large maximum packet size
Date: Fri, 02 Jul 2021 08:43:15 +0000
Message-ID: <bug-749-3@http.bugs.dpdk.org/>

https://bugs.dpdk.org/show_bug.cgi?id=749

            Bug ID: 749
           Summary: mlx5: ConnectX-6 not all missed packets accounted for
                    when using large maximum packet size
           Product: DPDK
           Version: unspecified
          Hardware: x86
                OS: Linux
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: ethdev
          Assignee: dev@dpdk.org
          Reporter: martin.weiser@allegro-packets.com
  Target Milestone: ---

Version: 21.05 (which is not available in the Bugzilla version field)

When testing with ConnectX-6 we recently became aware that a large number of
missed packets did not show up in the statistics. After some further testing it
looks like this is somehow related to using a large maximum packet size, which
I believe makes the driver use a different scatter-gather receive path.

The following testpmd invocations can be used to demonstrate this behavior.


First an example for a "normal" run without the large maximum packet size:

  ./app/dpdk-testpmd -a c1:00.0 -a c1:00.1 -n 4 --legacy-mem -- \
    --total-num-mbufs=2000000 --rx-offloads=0x2800 --mbuf-size=2331 --rxd=4096
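
For reference, the rx-offloads mask used here enables the scatter and
jumbo-frame Rx offloads. A minimal sketch of how the value decomposes,
assuming the pre-21.11 DEV_RX_OFFLOAD_* macro names as in DPDK 21.05:

  #include <rte_ethdev.h>

  /* 0x800 (JUMBO_FRAME) | 0x2000 (SCATTER) == 0x2800, the value
   * passed via --rx-offloads on the command line. */
  static const uint64_t rx_offloads =
          DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_SCATTER;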

The xstats for one port look like this:

  rx_good_packets: 540138902
  tx_good_packets: 537739805
  rx_good_bytes: 298771912199
  tx_good_bytes: 171950534158
  rx_missed_errors: 572790
  rx_errors: 0
  tx_errors: 0
  rx_mbuf_allocation_errors: 0
  rx_q0_packets: 540138902
  rx_q0_bytes: 298771912199
  rx_q0_errors: 0
  tx_q0_packets: 537739805
  tx_q0_bytes: 171950534158
  rx_wqe_errors: 0
  rx_unicast_packets: 540711692
  rx_unicast_bytes: 301254119575
  tx_unicast_packets: 537739805
  tx_unicast_bytes: 171950534158
  rx_multicast_packets: 0
  rx_multicast_bytes: 0
  tx_multicast_packets: 0
  tx_multicast_bytes: 0
  rx_broadcast_packets: 0
  rx_broadcast_bytes: 0
  tx_broadcast_packets: 0
  tx_broadcast_bytes: 0
  tx_phy_packets: 537739805
  rx_phy_packets: 540719221
  rx_phy_crc_errors: 0
  tx_phy_bytes: 174101493378
  rx_phy_bytes: 301258662491
  rx_phy_in_range_len_errors: 0
  rx_phy_symbol_errors: 0
  rx_phy_discard_packets: 7529
  tx_phy_discard_packets: 0
  tx_phy_errors: 0
  rx_out_of_buffer: 0
  tx_pp_missed_interrupt_errors: 0
  tx_pp_rearm_queue_errors: 0
  tx_pp_clock_queue_errors: 0
  tx_pp_timestamp_past_errors: 0
  tx_pp_timestamp_future_errors: 0
  tx_pp_jitter: 0
  tx_pp_wander: 0
  tx_pp_sync_lost: 0

For this particular testcase the sum of rx_good_packets, rx_missed_errors and
rx_phy_discard_packets always adds up to the expected total packet count of
540719221 (the rx_phy_packets value):
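
  540138902 + 572790 + 7529 = 540719221 (= rx_phy_packets)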


If, however, testpmd is invoked like this:

  ./app/dpdk-testpmd -a c1:00.0 -a c1:00.1 -n 4 --legacy-mem -- \
    --total-num-mbufs=2000000 --max-pkt-len=15360 --rx-offloads=0x2800 \
    --mbuf-size=2331 --rxd=4096
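
With these parameters each received frame can span multiple mbufs. Assuming
the default RTE_PKTMBUF_HEADROOM of 128 bytes, the usable data room per mbuf
and the worst-case segment count work out to:

  2331 - 128 = 2203 bytes of data room per mbuf
  ceil(15360 / 2203) = 7 segments for a maximum-size frame

which is presumably what switches the driver onto the multi-segment (scatter)
receive path mentioned above.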

The xstats after the testcase run look like this:

  rx_good_packets: 521670616
  tx_good_packets: 522641593
  rx_good_bytes: 288980135079
  tx_good_bytes: 167591285708
  rx_missed_errors: 879662
  rx_errors: 0
  tx_errors: 0
  rx_mbuf_allocation_errors: 0
  rx_q0_packets: 521670616
  rx_q0_bytes: 288980135079
  rx_q0_errors: 0
  tx_q0_packets: 522641593
  tx_q0_bytes: 167591285708
  rx_wqe_errors: 0
  rx_unicast_packets: 522550278
  rx_unicast_bytes: 291559156800
  tx_unicast_packets: 522641593
  tx_unicast_bytes: 167591285708
  rx_multicast_packets: 0
  rx_multicast_bytes: 0
  tx_multicast_packets: 0
  tx_multicast_bytes: 0
  rx_broadcast_packets: 0
  rx_broadcast_bytes: 0
  tx_broadcast_packets: 0
  tx_broadcast_bytes: 0
  tx_phy_packets: 522641593
  rx_phy_packets: 540719221
  rx_phy_crc_errors: 0
  tx_phy_bytes: 169681852080
  rx_phy_bytes: 301258662491
  rx_phy_in_range_len_errors: 0
  rx_phy_symbol_errors: 0
  rx_phy_discard_packets: 30665
  tx_phy_discard_packets: 0
  tx_phy_errors: 0
  rx_out_of_buffer: 0
  tx_pp_missed_interrupt_errors: 0
  tx_pp_rearm_queue_errors: 0
  tx_pp_clock_queue_errors: 0
  tx_pp_timestamp_past_errors: 0
  tx_pp_timestamp_future_errors: 0
  tx_pp_jitter: 0
  tx_pp_wander: 0
  tx_pp_sync_lost: 0

The rx_good_packets, rx_missed_errors and rx_phy_discard_packets counters
never sum up to the expected packet count:

  521670616 + 879662 + 30665 = 522580943
  540719221 - 522580943 = 18138278 (packets not accounted for)
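
A minimal sketch (not part of the original report) of how this check could be
done programmatically against the ethdev xstats API; the counter names are the
mlx5 ones shown above, and error handling is omitted for brevity:

  #include <inttypes.h>
  #include <stdio.h>
  #include <string.h>
  #include <rte_ethdev.h>

  /* Look up one extended statistic by name; returns 0 if absent. */
  static uint64_t
  xstat_value(uint16_t port, const char *name)
  {
          int n = rte_eth_xstats_get(port, NULL, 0);
          struct rte_eth_xstat xstats[n];
          struct rte_eth_xstat_name names[n];

          rte_eth_xstats_get_names(port, names, n);
          rte_eth_xstats_get(port, xstats, n);
          for (int i = 0; i < n; i++)
                  if (strcmp(names[i].name, name) == 0)
                          return xstats[i].value;
          return 0;
  }

  /* Compare the wire-level Rx count against everything the PMD
   * accounts for; a non-zero result is the discrepancy shown above. */
  static void
  check_rx_accounting(uint16_t port)
  {
          uint64_t good = xstat_value(port, "rx_good_packets");
          uint64_t missed = xstat_value(port, "rx_missed_errors");
          uint64_t discard = xstat_value(port, "rx_phy_discard_packets");
          uint64_t phy = xstat_value(port, "rx_phy_packets");

          printf("unaccounted: %" PRIu64 "\n",
                 phy - (good + missed + discard));
  }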

-- 
You are receiving this mail because:
You are the assignee for the bug.
