From: Edwin Brossette
Date: Fri, 24 Nov 2023 10:18:27 +0100
Subject: net/virtio: duplicated xstats
To: dev@dpdk.org
Cc: maxime.cocquelin@dpdk.org, Olivier Matz

Hello,

I noticed a small inconsistency in the virtio pmd's xstats.
The stat "rx_q0_errors" appears twice.
I also think the stats "rx_q0_packets", "rx_q0_bytes", "tx_q0_packets"
and "tx_q0_bytes" are duplicates of "rx_q0_good_packets",
"rx_q0_good_bytes", "tx_q0_good_packets" and "tx_q0_good_bytes".

I believe this issue appeared after this commit:

f30e69b41f94: ethdev: add device flag to bypass auto-filled queue xstats
http://scm.6wind.com/vendor/dpdk.org/dpdk/commit/?id=f30e69b41f949cd4a9afb6ff39de196e661708e2

From what I understand, the rx_q0_errors stat was originally reported by
librte_ethdev. However, changes were made so that it is now reported by the
pmd instead.
The flag RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS was temporarily set to keep the
old behaviour, so that every pmd had time to adapt to the change. But it
seems the flag was forgotten in the virtio pmd, and as a result, some stats
are fetched twice when displaying xstats.

First in lib_rte_ethdev:
https://git.dpdk.org/dpdk/tree/lib/ethdev/rte_ethdev.c#n3266
(you can see the check on the RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS flag before
the snprintf on eth_dev_rxq_stats_strings[])

And a second time in the virtio pmd:
https://git.dpdk.org/dpdk/tree/drivers/net/virtio/virtio_ethdev.c#n705
(see the snprintf on rte_virtio_rxq_stat_strings[])
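To make the mechanism easier to see, here is a condensed, standalone
illustration. This is NOT the actual DPDK code: the real name tables are
eth_dev_rxq_stats_strings[] and rte_virtio_rxq_stat_strings[] linked above,
reduced here to three entries each.

#include <stdio.h>

/* Stand-ins for the two real name tables, reduced for brevity. */
static const char * const ethdev_rxq_stats[] = {
	"packets", "bytes", "errors"
};
static const char * const virtio_rxq_stats[] = {
	"good_packets", "good_bytes", "errors"
};

int main(void)
{
	unsigned int q = 0, i;

	/* 1) ethdev auto-fill path, taken because the virtio pmd still
	 *    has RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS set */
	for (i = 0; i < 3; i++)
		printf("rx_q%u_%s: 0\n", q, ethdev_rxq_stats[i]);

	/* 2) the pmd's own xstats callback, called afterwards */
	for (i = 0; i < 3; i++)
		printf("rx_q%u_%s: 0\n", q, virtio_rxq_stats[i]);

	/* "rx_q0_errors" is printed by both loops */
	return 0;
}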
This problem can be reproduced in testpmd simply by displaying the xstats
of a port using the net_virtio driver:

Reproduction:
===========

    1) start dpdk-testpmd:

modprobe -a uio_pci_generic
dpdk-devbind -b uio_pci_generic 03:00.0
dpdk-devbind -b uio_pci_generic 04:00.0

dpdk-devbind -s

Network devices using DPDK-compatible driver
============================================
0000:03:00.0 'Virtio 1.0 network device 1041' drv=uio_pci_generic unused=vfio-pci
0000:04:00.0 'Virtio 1.0 network device 1041' drv=uio_pci_generic unused=vfio-pci
[...]

dpdk-testpmd -a 0000:03:00.0 -a 0000:04:00.0 -- -i --rxq=1 --txq=1 --coremask=0x4 --total-num-mbufs=250000
EAL: Detected CPU lcores: 3
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:03:00.0 (socket -1)
EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:04:00.0 (socket -1)
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mb_pool_0>: n=250000, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 52:54:00:B0:8F:88
Configuring Port 1 (socket 0)
Port 1: 52:54:00:EF:09:1F
Checking link statuses...
Done

    2) port info:

show port info 0

********************* Infos for port 0  *********************
MAC address: 52:54:00:B0:8F:88
Device name: 0000:03:00.0
Driver name: net_virtio
Firmware-version: not available
Devargs:
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: Unknown
Link duplex: full-duplex
Autoneg status: On
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 64
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
  strip off, filter off, extend off, qinq strip off
No RSS offload flow type is supported.
Minimum size of RX buffer: 64
Maximum configurable length of RX packet: 9728
Maximum configurable size of LRO aggregated packet: 0
Current number of RX queues: 1
Max possible RX queues: 1
Max possible number of RXDs per queue: 32768
Min possible number of RXDs per queue: 32
RXDs number alignment: 1
Current number of TX queues: 1
Max possible TX queues: 1
Max possible number of TXDs per queue: 32768
Min possible number of TXDs per queue: 32
TXDs number alignment: 1
Max segment number per packet: 65535
Max segment number per MTU/TSO: 65535
Device capabilities: 0x0( )
Device error handling mode: none
Device private info:
guest_features: 0x110af8020
vtnet_hdr_size: 12
use_vec: rx-0 tx-0
use_inorder: rx-0 tx-0
intr_lsc: 1
max_mtu: 9698
max_rx_pkt_len: 1530
max_queue_pairs: 1
req_guest_features: 0x8000005f10ef8028

    3) show port xstats:

show port xstats 0
###### NIC extended statistics for port 0
rx_good_packets: 0
tx_good_packets: 0
rx_good_bytes: 0
tx_good_bytes: 0
rx_missed_errors: 0
rx_errors: 0
tx_errors: 0
rx_mbuf_allocation_errors: 0
rx_q0_packets: 0
rx_q0_bytes: 0
rx_q0_errors: 0      <==================
tx_q0_packets: 0
tx_q0_bytes: 0
rx_q0_good_packets: 0
rx_q0_good_bytes: 0
rx_q0_errors: 0      <==================
rx_q0_multicast_packets: 0
rx_q0_broadcast_packets: 0
rx_q0_undersize_packets: 0
rx_q0_size_64_packets: 0
rx_q0_size_65_127_packets: 0
rx_q0_size_128_255_packets: 0
rx_q0_size_256_511_packets: 0
rx_q0_size_512_1023_packets: 0
rx_q0_size_1024_1518_packets: 0
rx_q0_size_1519_max_packets: 0
tx_q0_good_packets: 0
tx_q0_good_bytes: 0
tx_q0_multicast_packets: 0
tx_q0_broadcast_packets: 0
tx_q0_undersize_packets: 0
tx_q0_size_64_packets: 0
tx_q0_size_65_127_packets: 0
tx_q0_size_128_255_packets: 0
tx_q0_size_256_511_packets: 0
tx_q0_size_512_1023_packets: 0
tx_q0_size_1024_1518_packets: 0
tx_q0_size_1519_max_packets: 0

You can see that the stat "rx_q0_errors" appears twice.
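If I understand the commit above correctly, the fix should simply be to
stop setting the flag in the virtio pmd, so that only the pmd's own
per-queue stats remain. Assuming the flag is set in eth_virtio_dev_init()
like in the other pmds touched by that commit (an untested sketch, to be
checked against the sources), the change would be a one-line removal:

--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ eth_virtio_dev_init() @@
-	eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;

With the flag cleared, the check in rte_ethdev.c linked above would be
skipped for virtio ports, and rx_q0_errors would only be reported once, by
the pmd.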