DPDK patches and discussions
From: Bruce Richardson <bruce.richardson@intel.com>
To: spyroot <spyroot@gmail.com>
Cc: <dev@dpdk.org>
Subject: Re: 810 VFIO, SRIOV, multi-process stats read DPDK testpmd.
Date: Tue, 29 Apr 2025 14:22:30 +0100	[thread overview]
Message-ID: <aBDSliZlrdd1hOst@bricha3-mobl1.ger.corp.intel.com> (raw)
In-Reply-To: <CABsV3rARRuh8Qhm4q+QCDAEmA8WP+6QK241ik4qoLyGJBPFdHg@mail.gmail.com>

On Mon, Apr 14, 2025 at 07:21:57PM +0400, spyroot wrote:
>    Hi Folks,
> 
>    I'm observing some unexpected behavior related to how statistics are
>    retrieved from a Physical Function (PF) on an Intel 810 NIC.
> 
>    Scenario: I have two dpdk-testpmd instances running in separate
>    Kubernetes pods (same worker node). Each instance uses the -a flag to
>    bind to a different VF (i.e., so each has a consistent port ID of 0).
> 
>    Questions:
>     1. PF Statistics and 64B Line Rate:
>        I'm noticing that the RX packet-per-second value reported on the PF
>        side for a given VF is higher than the theoretical maximum for
>        64-byte packets.
>           + Does the Intel 810 PMD apply any kind of optimization,
>             offloading, or fast path processing when two VFs (e.g., A and
>             B) are on the same PF?

This wouldn't be something that the PMD does. The forwarding from VF to VF,
or PF to VF would happen internally in the hardware.
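
For reference on that theoretical maximum: a 64-byte frame occupies 84 bytes on the wire (the 64B frame including FCS, plus 7B preamble, 1B start-of-frame delimiter, and 12B minimum inter-frame gap), so the ceiling is link_speed / 672 bits per packet. A quick back-of-envelope check (plain Python, no DPDK dependency):

```python
# Theoretical maximum packet rate for minimum-size (64B) Ethernet frames.
# Per-frame wire occupancy: 64B frame (incl. 4B FCS) + 7B preamble
# + 1B start-of-frame delimiter + 12B inter-frame gap = 84 bytes.

WIRE_BYTES = 64 + 7 + 1 + 12          # 84 bytes on the wire per frame
BITS_PER_FRAME = WIRE_BYTES * 8       # 672 bits

def max_pps(link_bps: float) -> float:
    """Theoretical packets-per-second ceiling for 64B frames."""
    return link_bps / BITS_PER_FRAME

for name, speed in [("10G", 10e9), ("25G", 25e9), ("100G", 100e9)]:
    print(f"{name}: {max_pps(speed):,.0f} pps")
```

A sustained PF-side figure above this ceiling (about 148.8 Mpps on a 100G link) would suggest the counter is aggregating traffic from more than one VF, or counting internally forwarded VF-to-VF packets that never cross the wire.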

>     2. Concurrent Stats Polling:
>           + When two separate dpdk-testpmd processes are running (in pod A
>             and pod B), does the PMD or driver layer support concurrent
>             reading of PF statistics?

Looking at the iavf PMD, the reading of stats is done by sending an adminq
message to the PF and reading the response. Any serialization of stats
reading would then be done at the PF or adminq management level. The VF
should not need to worry about whether another VF is reading the stats at
the same time. [In fact it would be a serious bug if one VF needed to be
aware of what other VFs were doing, since different VFs could be attached
to different virtual machines which should be isolated from each other]
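
As a toy illustration of that isolation (plain Python threads standing in for two VFs sharing one PF mailbox — this is a sketch of the serialization concept, not driver code):

```python
import threading

# Toy model: two "VF" pollers share one PF "adminq" mailbox. The lock
# stands in for whatever serialization the PF/firmware applies; neither
# poller needs any awareness of the other, and each reads back only its
# own counters.

adminq_lock = threading.Lock()
pf_counters = {"vf0": 0, "vf1": 0}   # per-VF stats held on the PF side
seen = {"vf0": 0, "vf1": 0}          # last value each VF read back

def poll_stats(vf: str, polls: int) -> None:
    for _ in range(polls):
        with adminq_lock:            # one adminq request serviced at a time
            pf_counters[vf] += 1     # PF updates/returns this VF's counters
            seen[vf] = pf_counters[vf]

threads = [threading.Thread(target=poll_stats, args=(vf, 1000))
           for vf in ("vf0", "vf1")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(seen)  # {'vf0': 1000, 'vf1': 1000} regardless of interleaving
```

However the requests interleave, each poller's view stays consistent, which is the property the VF relies on without needing any cross-VF coordination of its own.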

>           + Is there any locking or synchronization mechanism involved
>             when multiple testpmd instances attempt to pull stats from the
>             same PF simultaneously? (In essence, does the firmware
>             support concurrent reads?)
> 

See above.

/Bruce

>    Thank you,
> cd /usr/local/bin && dpdk-testpmd \
>   --main-lcore $main -l $cores -n 4 \
>   --socket-mem 2048 \
>   --proc-type auto --file-prefix testpmd_rx0 \
>   -a $PCIDEVICE_INTEL_COM_DPDK \
>   -- --forward-mode=rxonly --auto-start --stats-period 1
> 
> cd /usr/local/bin && dpdk-testpmd \
>   --main-lcore $main -l $cores -n 4 \
>   --socket-mem 2048 \
>   --proc-type auto --file-prefix testpmd_rx1 \
>   -a $PCIDEVICE_INTEL_COM_DPDK \
>   -- --forward-mode=rxonly --auto-start --stats-period 1

Thread overview: 4+ messages
2025-04-14 15:21 spyroot
2025-04-29 13:22 ` Bruce Richardson [this message]
2025-04-29 16:04   ` spyroot
2025-04-29 16:18     ` Bruce Richardson
