DPDK usage discussions
From: David Christensen <drc@linux.vnet.ibm.com>
To: users@dpdk.org
Subject: Re: [dpdk-users] rte_eth_stats counters
Date: Mon, 21 Sep 2020 10:53:49 -0700
Message-ID: <8ca52ae9-ec04-a12a-82c6-1b61392c1e9a@linux.vnet.ibm.com> (raw)
In-Reply-To: <CAAcwi3-G-hmv6UnpR1=FLKhkJQr2oDH9xw812JAc+A-0hgmR+A@mail.gmail.com>

On 9/16/20 8:42 PM, Gerry Wan wrote:
> Hi,
> I'm testing out the maximum RX throughput my application can handle and am
> using rte_eth_stats_get() to measure the point at which RX packets start
> getting dropped. I have a traffic generator directly connected to my RX
> port.
> I've noticed that imissed counts packets dropped because the hardware
> RX queues are full, while ipackets counts packets successfully
> received and agrees with the total number of packets retrieved from
> calls to rte_eth_rx_burst(). I'm not sure exactly what ierrors is
> supposed to count, but so far I have not seen it go above 0.
> I have been interpreting the sum of (ipackets + imissed + ierrors) = itotal
> as the total number of packets hitting the port. However, I've noticed that
> when throughput gets too high, imissed will remain 0 while itotal is
> smaller than the number of packets sent by the traffic generator. I've
> ruled out connection issues because increasing the number of RSS queues
> seems to fix the problem (up to a certain threshold before itotal again
> becomes smaller than the number sent), but I don't understand why. If it is
> not dropped in HW because the queues are full (since imissed = 0), where
> are the packets being dropped and is there a way I can count these?
> I am using DPDK 20.08 with a Mellanox CX-5, RSS queue size = 4096
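For reference, the accounting described above can be sketched as follows. The struct below is a hypothetical mirror of the three rte_eth_stats fields being discussed; in a real application the values would come from rte_eth_stats_get():

```c
#include <stdint.h>

/* Hypothetical snapshot of the relevant rte_eth_stats fields;
 * in a real app these are filled in by rte_eth_stats_get(). */
struct rx_snapshot {
	uint64_t ipackets; /* packets successfully received */
	uint64_t imissed;  /* packets dropped because HW RX queues were full */
	uint64_t ierrors;  /* erroneous/malformed packets */
};

/* Total packets that reached the port, under the interpretation
 * itotal = ipackets + imissed + ierrors. */
static uint64_t rx_total(const struct rx_snapshot *s)
{
	return s->ipackets + s->imissed + s->ierrors;
}
```

If itotal computed this way is smaller than what the generator sent, the missing packets were dropped before any of these three counters saw them.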

When the application's buffers fill up, the HW buffers start to fill as 
well.  Once the HW buffers are full, the PHY responds either by 
generating flow control (PAUSE) frames or by simply dropping packets.  
You could experiment by enabling/disabling flow control to verify that 
the packet counts are correct when flow control is enabled.
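A minimal sketch of toggling link-level flow control, assuming a port that supports rte_eth_dev_flow_ctrl_get/set (not all PMDs do, and error handling is trimmed):

```c
#include <rte_ethdev.h>

/* Enable or disable link-level flow control on a port.  With
 * RTE_FC_FULL the NIC sends PAUSE frames under pressure instead of
 * silently dropping; with RTE_FC_NONE it drops.  Sketch only. */
static int set_flow_ctrl(uint16_t port_id, int enable)
{
	struct rte_eth_fc_conf fc_conf;
	int ret;

	/* Start from the current configuration so we only change the mode. */
	ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
	if (ret != 0)
		return ret;

	fc_conf.mode = enable ? RTE_FC_FULL : RTE_FC_NONE;
	return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
}
```

If the sent and received counts match only while flow control is enabled, that points at drops happening upstream of the RX queues.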

You could also look at the rx_discards_phy counter and contrast it with 
the rx_out_of_buffer statistic.


My read is that rx_out_of_buffer indicates that the HW doesn't have any 
RX descriptors available, possibly because of PCIe congestion or because 
the app's receive queue is empty.  On the other hand, rx_discards_phy 
indicates that the HW buffers are full.  I don't see rx_discards_phy 
reflected in the basic rte_eth_stats counters; it's only available as an 
xstat.
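Reading an individual xstat by name can be sketched like this (the names "rx_out_of_buffer" and "rx_discards_phy" are mlx5-specific; other PMDs may not expose them, and error handling is trimmed):

```c
#include <rte_ethdev.h>

/* Fetch one extended statistic by name, e.g. "rx_out_of_buffer"
 * or "rx_discards_phy" on mlx5.  Returns 0 on success.  Sketch only. */
static int read_xstat(uint16_t port_id, const char *name, uint64_t *value)
{
	uint64_t id;
	int ret;

	/* Resolve the xstat name to its numeric id for this port. */
	ret = rte_eth_xstats_get_id_by_name(port_id, name, &id);
	if (ret != 0)
		return ret; /* this PMD doesn't expose that xstat */

	/* Read just that one counter. */
	ret = rte_eth_xstats_get_by_id(port_id, &id, value, 1);
	return (ret == 1) ? 0 : -1;
}
```

Polling both counters while ramping the offered load should show which of the two failure modes you're hitting.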


Thread overview: 2+ messages
2020-09-17  3:42 Gerry Wan
2020-09-21 17:53 ` David Christensen [this message]

