DPDK usage discussions
From: "Van Haaren, Harry" <harry.van.haaren@intel.com>
To: Filip Janiszewski <contact@filipjaniszewski.com>,
	"users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] Performance of rte_eth_stats_get
Date: Wed, 19 May 2021 15:14:38 +0000
Message-ID: <BYAPR11MB3143AB2DBB2D49DAB425924CD72B9@BYAPR11MB3143.namprd11.prod.outlook.com> (raw)
In-Reply-To: <01e2ccbe-a2c8-be7c-7f9d-af43f609e75f@filipjaniszewski.com>

> -----Original Message-----
> From: users <users-bounces@dpdk.org> On Behalf Of Filip Janiszewski
> Sent: Wednesday, May 19, 2021 2:10 PM
> To: users@dpdk.org
> Subject: [dpdk-users] Performance of rte_eth_stats_get
> Hi,
> Is it safe to call rte_eth_stats_get while capturing from the port?
> I'm mostly concerned about performance: will rte_eth_stats_get in any
> way impact the port performance? In the application I plan to call the
> function from a thread that is not directly involved in the capture
> (there's another worker responsible for rx bursting), but I wonder if the
> NIC might get upset if I call it too frequently (say 10 times per
> second) and potentially cause some performance issues.
> The question is really NIC-agnostic, but if the NIC vendor is actually
> relevant, then I'm running Intel 700 series NICs and Mellanox ConnectX-4/5.

To understand what really goes on when reading stats, it might help to list the
steps involved in fetching statistics from the NIC HW.

1) The CPU sends an MMIO read (Memory-Mapped I/O, sometimes referred to as
a "PCI read") to the NIC.
2) The PCI bus has to handle extra TLPs (PCI transactions) to satisfy the read.
3) The NIC has to send a reply based on accessing its internal counters.
4) The CPU gets the result of the PCI read.
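The steps above can be sketched in code; a minimal, self-contained illustration of how a PMD's counter read may boil down to volatile MMIO accesses (the register offset and the 32-bit lo/hi split are hypothetical, not any real NIC's layout):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical register offset; real offsets are device-specific. */
#define REG_RX_PKTS 0x100

/* "bar" stands for the NIC's PCI BAR mapped into the process. */
static inline uint64_t read_hw_counter(volatile const void *bar, size_t off)
{
    /* Steps 1-2: each dereference issues an MMIO read, carried to the
     * NIC over the PCI bus as TLPs. */
    uint64_t lo = *(volatile const uint32_t *)((volatile const char *)bar + off);
    /* Steps 3-4: the NIC replies with the counter value; the CPU stalls
     * on each read until the reply arrives (this is the slow part). */
    uint64_t hi = *(volatile const uint32_t *)((volatile const char *)bar + off + 4);
    return (hi << 32) | lo;
}
```

In software it is just a pointer dereference, but each dereference is a full PCI round trip.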

Notice how elegantly this whole process is abstracted from SW? In code, reading
a stat value is just dereferencing a pointer that is mapped to the NIC HW address.
In practice, from a CPU performance point of view, an MMIO read is one of
the slowest things you can do. You say the stats reads occur from a thread
that is not handling the rx/datapath, so perhaps the CPU cycle cost itself isn't a concern.

Do note, however, that reading a full set of extended stats from the NIC can
involve many tens to hundreds of MMIO reads (depending on which statistics are
requested, and on how the PMD itself implements stats updates).
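For reference, the extended-stats API follows a two-call pattern: query how many stats the port exposes, size a buffer, then fetch them in bulk. Sketched below with `fake_*` stand-ins for DPDK's rte_eth_xstats_get_names()/rte_eth_xstats_get() (the stand-ins and their demo values are invented so the sketch runs anywhere; a real fetch needs a bound port, and is where the many MMIO reads happen):

```c
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct xstat { char name[64]; uint64_t value; };

/* Stand-in for querying how many xstats the port exposes. */
static int fake_xstats_count(void) { return 3; }

/* Stand-in for the bulk fetch; on real HW this is where the (possibly
 * many) MMIO reads occur. Returns the number of stats filled in. */
static int fake_xstats_get(struct xstat *out, int n)
{
    static const struct xstat demo[] = {
        { "rx_good_packets",  1000 },
        { "rx_missed_errors",    2 },
        { "tx_good_packets",   998 },
    };
    if (n < 3)
        return -1; /* caller's buffer is too small */
    memcpy(out, demo, sizeof(demo));
    return 3;
}

/* Typical caller: size the buffer from the count, fetch once, print. */
static void dump_xstats(void)
{
    int n = fake_xstats_count();
    struct xstat *stats = malloc((size_t)n * sizeof(*stats));
    if (stats && fake_xstats_get(stats, n) == n)
        for (int i = 0; i < n; i++)
            printf("%-20s %" PRIu64 "\n", stats[i].name, stats[i].value);
    free(stats);
}
```

Fetching everything in one bulk call, rather than one call per counter, keeps the number of API round trips down even when the MMIO count per fetch is large.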

The PCI bus does become busier with reads to the NIC HW when doing lots of
statistics updates, so some extra contention/activity is to be expected there.
The PCM tool can be very useful for observing MMIO traffic; with it you could
measure how many extra PCI transactions occur due to reading stats every X ms.

I can recommend measuring packet latency/jitter as a histogram, as outliers in
performance can then be identified. If you specifically want to know whether any
outliers are due to stats reads, compare against a "no stats reads" latency/jitter
histogram and see the impact graphically.
In the end, if it doesn't affect packet latency/jitter, then it has no impact, right?
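A cheap way to build such a histogram is log2 bucketing: one shift loop per packet, and outliers land visibly in the high buckets. A self-contained sketch (the bucket count and nanosecond units are arbitrary choices, not a DPDK facility):

```c
#include <stdint.h>

#define HIST_BUCKETS 40

struct latency_hist { uint64_t bucket[HIST_BUCKETS]; };

/* Bucket 0 holds samples of 0-1 ns; bucket i (i >= 1) holds samples in
 * [2^i, 2^(i+1)) ns. Cheap enough to call once per packet. */
static void hist_record(struct latency_hist *h, uint64_t latency_ns)
{
    int b = 0;
    while (latency_ns > 1 && b < HIST_BUCKETS - 1) {
        latency_ns >>= 1;
        b++;
    }
    h->bucket[b]++;
}
```

Run once with stats reads enabled and once without, then compare the two sets of bucket counts: extra counts in the high buckets of the "stats reads" run point at the reads as the cause.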

Ultimately, I can't give a generic answer - the best approach is to measure carefully and find out!

> Thanks

Hope the above helps and doesn't add confusion :)  Regards, -Harry

Thread overview: 4+ messages
2021-05-19 13:10 Filip Janiszewski
2021-05-19 15:14 ` Van Haaren, Harry [this message]
2021-05-19 16:06   ` Stephen Hemminger
2021-07-14 10:25     ` Alireza Sanaee
