DPDK patches and discussions
From: Edwin Brossette <edwin.brossette@6wind.com>
To: Maxime Coquelin <maxime.coquelin@redhat.com>
Cc: Olivier Matz <olivier.matz@6wind.com>,
	Ferruh Yigit <ferruh.yigit@amd.com>,
	dev@dpdk.org
Subject: Re: net/virtio: duplicated xstats
Date: Fri, 24 Nov 2023 14:13:12 +0100	[thread overview]
Message-ID: <CANDF9xAJLF2HaiFdcrEB-kfYHHwYPhRs=zMJtP8FAq-UUYikAw@mail.gmail.com> (raw)
In-Reply-To: <eee58cee-cb54-4e3d-b714-dce9e0b13d6e@redhat.com>


Hello again,

The flag is already set during device init, so it should be removed,
not added.
I can confirm that removing it fixes my issue.
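For illustration, the shape of the fix can be sketched with a mock init path (the flag and field names mirror DPDK's RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS and dev_flags, but this mock is not the actual virtio code):

```c
#include <assert.h>

/* Mock of the ethdev dev_flags bit; the real flag is
 * RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS in rte_ethdev.h. */
#define MOCK_AUTOFILL_QUEUE_XSTATS (1u << 0)

struct mock_eth_dev_data {
	unsigned int dev_flags;
};

/* Buggy init: sets the compatibility flag even though the PMD
 * reports its own per-queue xstats, so the ethdev layer adds a
 * second copy of those stats. */
static void
mock_virtio_init_buggy(struct mock_eth_dev_data *data)
{
	data->dev_flags |= MOCK_AUTOFILL_QUEUE_XSTATS;
}

/* Fixed init: leaves the flag cleared, so per-queue xstats come
 * from the PMD alone. */
static void
mock_virtio_init_fixed(struct mock_eth_dev_data *data)
{
	data->dev_flags &= ~MOCK_AUTOFILL_QUEUE_XSTATS;
}
```

The actual fix amounts to not setting this flag in the virtio init path, since the PMD already reports its own per-queue stats.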

I will submit a patch for this bug.

Regards,
Edwin Brossette.

On Fri, Nov 24, 2023 at 11:39 AM Maxime Coquelin <maxime.coquelin@redhat.com>
wrote:

> Hi Edwin,
>
> Thanks for reporting the issue.
>
> On 11/24/23 10:26, Olivier Matz wrote:
> > Fix Maxime's mail.
>
> Thanks Olivier.
>
> > On Fri, Nov 24, 2023 at 10:18:27AM +0100, Edwin Brossette wrote:
> >> Hello,
> >>
> >> I noticed a small inconsistency in the virtio pmd's xstats.
> >> The stat "rx_q0_errors" appears twice.
> >> I also think the stats "rx_q0_packets", "rx_q0_bytes", "tx_q0_packets"
> >> and "tx_q0_bytes" are duplicates of "rx_q0_good_packets",
> >> "rx_q0_good_bytes", "tx_q0_good_packets" and "tx_q0_good_bytes".
> >>
> >> I believe this issue probably appeared after this commit:
> >>
> >> f30e69b41f94: ethdev: add device flag to bypass auto-filled queue xstats
> >>
> >> http://scm.6wind.com/vendor/dpdk.org/dpdk/commit/?id=f30e69b41f949cd4a9afb6ff39de196e661708e2
>
> Adding Ferruh as he is the author of this commit.
>
> >> From what I understand, the rx_q0_errors stat was originally reported
> >> by the ethdev library. However, changes were made so it is reported by
> >> the PMD instead. The flag RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS was
> >> temporarily set to keep the old behaviour so that every PMD could have
> >> time to adapt to the change. But it seems the flag was forgotten in
> >> the virtio PMD and, as a result, some stats are fetched twice when
> >> displaying xstats.
>
> Have you tried adding this flag to the virtio PMD? Does it fix the issue?
>
> >> First in librte_ethdev:
> >> https://git.dpdk.org/dpdk/tree/lib/ethdev/rte_ethdev.c#n3266
> >> (you can see the check on the RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS flag
> >> before the snprintf on eth_dev_rxq_stats_strings[])
> >>
> >> And a second time in the virtio PMD:
> >> https://git.dpdk.org/dpdk/tree/drivers/net/virtio/virtio_ethdev.c#n705
> >> (see the snprintf on rte_virtio_rxq_stat_strings[])
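The double fill described in the quoted message can be sketched with a small mock (the function and flag names below are stand-ins for illustration, not the real ethdev or virtio code):

```c
#include <stdio.h>
#include <string.h>

/* Stands in for RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS. */
#define MOCK_AUTOFILL (1u << 0)

/* Mock of the ethdev layer: fills generic per-queue stat names, but
 * only for devices that advertise the autofill flag. */
static int
mock_ethdev_fill_names(unsigned int dev_flags, char names[][32], int n)
{
	if (dev_flags & MOCK_AUTOFILL)
		snprintf(names[n++], 32, "rx_q0_errors");
	return n;
}

/* Mock of the virtio PMD: always fills its own per-queue stat names,
 * regardless of the flag. */
static int
mock_virtio_fill_names(char names[][32], int n)
{
	snprintf(names[n++], 32, "rx_q0_good_packets");
	snprintf(names[n++], 32, "rx_q0_errors");
	return n;
}

/* Count how many times a stat name appears in the final list. */
static int
mock_count(char names[][32], int n, const char *s)
{
	int i, c = 0;

	for (i = 0; i < n; i++)
		c += strcmp(names[i], s) == 0;
	return c;
}
```

With the flag left set, both layers emit "rx_q0_errors" and the name appears twice in the list, matching the duplicated line in the testpmd output; with the flag cleared, only the PMD's copy remains.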
> >>
> >> This problem can be reproduced in testpmd simply by displaying the
> >> xstats on a port with the net_virtio driver:
> >>
> >> Reproduction:
> >> ===========
> >>
> >>      1) start dpdk-testpmd:
> >>
> >> modprobe -a uio_pci_generic
> >> dpdk-devbind -b uio_pci_generic 03:00.0
> >> dpdk-devbind -b uio_pci_generic 04:00.0
> >>
> >> dpdk-devbind -s
> >>
> >> Network devices using DPDK-compatible driver
> >> ============================================
> >> 0000:03:00.0 'Virtio 1.0 network device 1041' drv=uio_pci_generic
> >> unused=vfio-pci
> >> 0000:04:00.0 'Virtio 1.0 network device 1041' drv=uio_pci_generic
> >> unused=vfio-pci
> >> [...]
> >>
> >> dpdk-testpmd -a 0000:03:00.0 -a 0000:04:00.0 -- -i --rxq=1 --txq=1
> >> --coremask=0x4 --total-num-mbufs=250000
> >> EAL: Detected CPU lcores: 3
> >> EAL: Detected NUMA nodes: 1
> >> EAL: Detected static linkage of DPDK
> >> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> >> EAL: Selected IOVA mode 'PA'
> >> EAL: VFIO support initialized
> >> EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:03:00.0 (socket -1)
> >> EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:04:00.0 (socket -1)
> >> Interactive-mode selected
> >> Warning: NUMA should be configured manually by using --port-numa-config
> >> and --ring-numa-config parameters along with --numa.
> >> testpmd: create a new mbuf pool <mb_pool_0>: n=250000, size=2176, socket=0
> >> testpmd: preferred mempool ops selected: ring_mp_mc
> >> Configuring Port 0 (socket 0)
> >> Port 0: 52:54:00:B0:8F:88
> >> Configuring Port 1 (socket 0)
> >> Port 1: 52:54:00:EF:09:1F
> >> Checking link statuses...
> >> Done
> >>
> >>      2) port info:
> >>
> >> show port info 0
> >>
> >> ********************* Infos for port 0  *********************
> >> MAC address: 52:54:00:B0:8F:88
> >> Device name: 0000:03:00.0
> >> Driver name: net_virtio
> >> Firmware-version: not available
> >> Devargs:
> >> Connect to socket: 0
> >> memory allocation on the socket: 0
> >> Link status: up
> >> Link speed: Unknown
> >> Link duplex: full-duplex
> >> Autoneg status: On
> >> MTU: 1500
> >> Promiscuous mode: enabled
> >> Allmulticast mode: disabled
> >> Maximum number of MAC addresses: 64
> >> Maximum number of MAC addresses of hash filtering: 0
> >> VLAN offload:
> >>    strip off, filter off, extend off, qinq strip off
> >> No RSS offload flow type is supported.
> >> Minimum size of RX buffer: 64
> >> Maximum configurable length of RX packet: 9728
> >> Maximum configurable size of LRO aggregated packet: 0
> >> Current number of RX queues: 1
> >> Max possible RX queues: 1
> >> Max possible number of RXDs per queue: 32768
> >> Min possible number of RXDs per queue: 32
> >> RXDs number alignment: 1
> >> Current number of TX queues: 1
> >> Max possible TX queues: 1
> >> Max possible number of TXDs per queue: 32768
> >> Min possible number of TXDs per queue: 32
> >> TXDs number alignment: 1
> >> Max segment number per packet: 65535
> >> Max segment number per MTU/TSO: 65535
> >> Device capabilities: 0x0( )
> >> Device error handling mode: none
> >> Device private info:
> >> guest_features: 0x110af8020
> >> vtnet_hdr_size: 12
> >> use_vec: rx-0 tx-0
> >> use_inorder: rx-0 tx-0
> >> intr_lsc: 1
> >> max_mtu: 9698
> >> max_rx_pkt_len: 1530
> >> max_queue_pairs: 1
> >> req_guest_features: 0x8000005f10ef8028
> >>
> >>      3) show port xstats:
> >>
> >> show port xstats 0
> >> ###### NIC extended statistics for port 0
> >> rx_good_packets: 0
> >> tx_good_packets: 0
> >> rx_good_bytes: 0
> >> tx_good_bytes: 0
> >> rx_missed_errors: 0
> >> rx_errors: 0
> >> tx_errors: 0
> >> rx_mbuf_allocation_errors: 0
> >> rx_q0_packets: 0
> >> rx_q0_bytes: 0
> >> rx_q0_errors: 0      <==================
> >> tx_q0_packets: 0
> >> tx_q0_bytes: 0
> >> rx_q0_good_packets: 0
> >> rx_q0_good_bytes: 0
> >> rx_q0_errors: 0      <==================
> >> rx_q0_multicast_packets: 0
> >> rx_q0_broadcast_packets: 0
> >> rx_q0_undersize_packets: 0
> >> rx_q0_size_64_packets: 0
> >> rx_q0_size_65_127_packets: 0
> >> rx_q0_size_128_255_packets: 0
> >> rx_q0_size_256_511_packets: 0
> >> rx_q0_size_512_1023_packets: 0
> >> rx_q0_size_1024_1518_packets: 0
> >> rx_q0_size_1519_max_packets: 0
> >> tx_q0_good_packets: 0
> >> tx_q0_good_bytes: 0
> >> tx_q0_multicast_packets: 0
> >> tx_q0_broadcast_packets: 0
> >> tx_q0_undersize_packets: 0
> >> tx_q0_size_64_packets: 0
> >> tx_q0_size_65_127_packets: 0
> >> tx_q0_size_128_255_packets: 0
> >> tx_q0_size_256_511_packets: 0
> >> tx_q0_size_512_1023_packets: 0
> >> tx_q0_size_1024_1518_packets: 0
> >> tx_q0_size_1519_max_packets: 0
> >>
> >> You can see the stat "rx_q0_errors" appears twice.
> >
>
> If simply adding the flag solves your issue, do you plan to submit a
> patch?
>
> If not, could you please file a Bugzilla ticket in the upstream bug
> tracker so that we don't lose track of this bug?
>
> Best regards,
> Maxime
>
>


Thread overview: 10+ messages
2023-11-24  9:18 Edwin Brossette
2023-11-24  9:26 ` Olivier Matz
2023-11-24 10:39   ` Maxime Coquelin
2023-11-24 13:13     ` Edwin Brossette [this message]
2023-11-24 13:16       ` Maxime Coquelin
2023-11-24 13:52 ` [PATCH] net/virtio: fix duplicated rxq xstats edwin.brossette
2023-11-27 10:20   ` Ferruh Yigit
2024-02-06 14:56   ` Maxime Coquelin
2023-11-27 10:19 ` net/virtio: duplicated xstats Ferruh Yigit
2023-11-27 10:22   ` Ferruh Yigit
