From: Edwin Brossette
Date: Fri, 24 Nov 2023 14:13:12 +0100
Subject: Re: net/virtio: duplicated xstats
To: Maxime Coquelin
Cc: Olivier Matz, Ferruh Yigit, dev@dpdk.org

Hello again,

The flag is already set during device init, so it should be removed, not
added. I can confirm that removing it fixed my issue.

I will submit a patch for this bug.
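For reference, here is a minimal, self-contained sketch of the problem. This
is not the actual DPDK code: struct fake_dev, ethdev_autofill_names() and
pmd_xstats_names() are simplified stand-ins, and AUTOFILL_QUEUE_XSTATS only
mimics the role of RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS. It just illustrates why
leaving the flag set makes "rx_q0_errors" appear twice in the xstats list.

/*
 * Simplified model, not the real DPDK sources: when the auto-fill flag is
 * set, the ethdev layer emits its generic per-queue stat names and the PMD
 * then appends its own, so the error counter is listed twice.
 */
#include <stdio.h>

#define AUTOFILL_QUEUE_XSTATS 0x1   /* stand-in for the real flag bit */

struct fake_dev {
    unsigned int dev_flags;
};

/* Stand-in for the generic per-queue names auto-filled by the ethdev layer. */
static void ethdev_autofill_names(const struct fake_dev *dev)
{
    if (dev->dev_flags & AUTOFILL_QUEUE_XSTATS) {
        puts("rx_q0_packets");
        puts("rx_q0_bytes");
        puts("rx_q0_errors");   /* first occurrence */
        puts("tx_q0_packets");
        puts("tx_q0_bytes");
    }
}

/* Stand-in for the per-queue names the virtio PMD reports on its own. */
static void pmd_xstats_names(void)
{
    puts("rx_q0_good_packets");
    puts("rx_q0_good_bytes");
    puts("rx_q0_errors");       /* second occurrence: the duplicate */
}

int main(void)
{
    /* Today the virtio PMD leaves the flag set at init time... */
    struct fake_dev dev = { .dev_flags = AUTOFILL_QUEUE_XSTATS };

    ethdev_autofill_names(&dev); /* ...so the generic names are added... */
    pmd_xstats_names();          /* ...on top of the PMD's own names. */

    /* Clearing the flag at init (the fix) drops the first block and
     * leaves only the names provided by the PMD. */
    return 0;
}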
> >> The stat "rx_q0_errors" appears twice. > >> I also think the stats "rx_q0_packets", "rx_q0_bytes", "tx_q0_packets" > and > >> "tx_q0_bytes" are duplicates of "rx_q0_good_packets", > "rx_q0_good_bytes", > >> "tx_q0_good_packets" and "tx_q0_good_bytes" > >> > >> I believe this issue probably appeared after this commit: > >> > >> f30e69b41f94: ethdev: add device flag to bypass auto-filled queue xsta= ts > >> > http://scm.6wind.com/vendor/dpdk.org/dpdk/commit/?id=3Df30e69b41f949cd4a9= afb6ff39de196e661708e2 > > Adding Ferruh as he is the author of this commit. > > >> From what I understand, the rxq0_error stat was originally reported b= y > the > >> librte. However, changes were made so it is reported by the pmd instea= d. > >> The flag RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS was temporarily set to keep > the > >> old behaviour so that every pmd could have time to adapt the change. > >> But it seems the flag was forgotten in the virtio pmd and as a result, > some > >> stats are fetched at two different times when displaying xstats. > > Have you tried adding this flag to Virtio PMD? Does it fixes the issue? > > >> First in lib_rte_ethdev: > >> https://git.dpdk.org/dpdk/tree/lib/ethdev/rte_ethdev.c#n3266 > >> (you can see the check on the RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS flag > before > >> the snprintf on eth_dev_rxq_stats_strings[] ) > >> > >> And a second time in the virtio pmd: > >> https://git.dpdk.org/dpdk/tree/drivers/net/virtio/virtio_ethdev.c#n705 > pmd > >> (see the snprintf on rte_virtio_rxq_stat_strings[] ) > >> > >> This problem can be reproduced on testpmd simply by displaying the > xstats > >> on a port with the net_virtio driver: > >> > >> Reproduction: > >> =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D > >> > >> 1) start dpdk-testpmd: > >> > >> modprobe -a uio_pci_generic > >> dpdk-devbind -b uio_pci_generic 03:00.0 > >> dpdk-devbind -b uio_pci_generic 04:00.0 > >> > >> dpdk-devbind -s > >> > >> Network devices using DPDK-compatible driver > >> =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D > >> 0000:03:00.0 'Virtio 1.0 network device 1041' drv=3Duio_pci_generic > >> unused=3Dvfio-pci > >> 0000:04:00.0 'Virtio 1.0 network device 1041' drv=3Duio_pci_generic > >> unused=3Dvfio-pci > >> [...] > >> > >> dpdk-testpmd -a 0000:03:00.0 -a 0000:04:00.0 -- -i --rxq=3D1 --txq=3D1 > >> --coremask=3D0x4 --total-num-mbufs=3D250000 > >> EAL: Detected CPU lcores: 3 > >> EAL: Detected NUMA nodes: 1 > >> EAL: Detected static linkage of DPDK > >> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket > >> EAL: Selected IOVA mode 'PA' > >> EAL: VFIO support initialized > >> EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:03:00.0 > (socket > >> -1) > >> EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:04:00.0 > (socket > >> -1) > >> Interactive-mode selected > >> Warning: NUMA should be configured manually by using --port-numa-confi= g > and > >> --ring-numa-config parameters along with --numa. > >> testpmd: create a new mbuf pool : n=3D250000, size=3D2176, > socket=3D0 > >> testpmd: preferred mempool ops selected: ring_mp_mc > >> Configuring Port 0 (socket 0) > >> Port 0: 52:54:00:B0:8F:88 > >> Configuring Port 1 (socket 0) > >> Port 1: 52:54:00:EF:09:1F > >> Checking link statuses... 
> >> Done > >> > >> 2) port info: > >> > >> show port info 0 > >> > >> ********************* Infos for port 0 ********************* > >> MAC address: 52:54:00:B0:8F:88 > >> Device name: 0000:03:00.0 > >> Driver name: net_virtio > >> Firmware-version: not available > >> Devargs: > >> Connect to socket: 0 > >> memory allocation on the socket: 0 > >> Link status: up > >> Link speed: Unknown > >> Link duplex: full-duplex > >> Autoneg status: On > >> MTU: 1500 > >> Promiscuous mode: enabled > >> Allmulticast mode: disabled > >> Maximum number of MAC addresses: 64 > >> Maximum number of MAC addresses of hash filtering: 0 > >> VLAN offload: > >> strip off, filter off, extend off, qinq strip off > >> No RSS offload flow type is supported. > >> Minimum size of RX buffer: 64 > >> Maximum configurable length of RX packet: 9728 > >> Maximum configurable size of LRO aggregated packet: 0 > >> Current number of RX queues: 1 > >> Max possible RX queues: 1 > >> Max possible number of RXDs per queue: 32768 > >> Min possible number of RXDs per queue: 32 > >> RXDs number alignment: 1 > >> Current number of TX queues: 1 > >> Max possible TX queues: 1 > >> Max possible number of TXDs per queue: 32768 > >> Min possible number of TXDs per queue: 32 > >> TXDs number alignment: 1 > >> Max segment number per packet: 65535 > >> Max segment number per MTU/TSO: 65535 > >> Device capabilities: 0x0( ) > >> Device error handling mode: none > >> Device private info: > >> guest_features: 0x110af8020 > >> vtnet_hdr_size: 12 > >> use_vec: rx-0 tx-0 > >> use_inorder: rx-0 tx-0 > >> intr_lsc: 1 > >> max_mtu: 9698 > >> max_rx_pkt_len: 1530 > >> max_queue_pairs: 1 > >> req_guest_features: 0x8000005f10ef8028 > >> > >> 3) show port xstats: > >> > >> show port xstats 0 > >> ###### NIC extended statistics for port 0 > >> rx_good_packets: 0 > >> tx_good_packets: 0 > >> rx_good_bytes: 0 > >> tx_good_bytes: 0 > >> rx_missed_errors: 0 > >> rx_errors: 0 > >> tx_errors: 0 > >> rx_mbuf_allocation_errors: 0 > >> rx_q0_packets: 0 > >> rx_q0_bytes: 0 > >> rx_q0_errors: 0 <=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D > >> tx_q0_packets: 0 > >> tx_q0_bytes: 0 > >> rx_q0_good_packets: 0 > >> rx_q0_good_bytes: 0 > >> rx_q0_errors: 0 <=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D > >> rx_q0_multicast_packets: 0 > >> rx_q0_broadcast_packets: 0 > >> rx_q0_undersize_packets: 0 > >> rx_q0_size_64_packets: 0 > >> rx_q0_size_65_127_packets: 0 > >> rx_q0_size_128_255_packets: 0 > >> rx_q0_size_256_511_packets: 0 > >> rx_q0_size_512_1023_packets: 0 > >> rx_q0_size_1024_1518_packets: 0 > >> rx_q0_size_1519_max_packets: 0 > >> tx_q0_good_packets: 0 > >> tx_q0_good_bytes: 0 > >> tx_q0_multicast_packets: 0 > >> tx_q0_broadcast_packets: 0 > >> tx_q0_undersize_packets: 0 > >> tx_q0_size_64_packets: 0 > >> tx_q0_size_65_127_packets: 0 > >> tx_q0_size_128_255_packets: 0 > >> tx_q0_size_256_511_packets: 0 > >> tx_q0_size_512_1023_packets: 0 > >> tx_q0_size_1024_1518_packets: 0 > >> tx_q0_size_1519_max_packets: 0 > >> > >> You can see the stat "rx_q0_errors" appeared twice. > > > > If simply adding the flag solves your issue, do you plan to submit a > patch? > > If not, could you please file a Bz in the upstream bug tracker so that > we don't lose track of this bug? > > Best regards, > Maxime > > --0000000000009369e0060ae5b7ff Content-Type: text/html; charset="UTF-8" Content-Transfer-Encoding: quoted-printable